What Is Network Security Monitoring?
Network Security Monitoring is the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions. This chapter examines these aspects in detail.
Now that we’ve forged a common understanding of security and risk and examined principles held by those tasked with identifying
and responding to intrusions, we can fully explore the concept of NSM. In Chapter 1, we defined NSM as the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions. Examining the
components of the definition, which we do in the following sections, will establish the course this book will follow.
Indications and Warnings
It makes sense to understand what we plan to collect, analyze, and escalate before explaining the specific meanings of those
three terms in the NSM definition. Therefore, we first investigate the terms indications and warnings. Appreciation of these ideas helps put the entire concept of NSM in perspective.
The U.S. Department of Defense Dictionary of Military Terms defines an indicator as “an item of information which reflects the intention or capability of a potential enemy to adopt or reject a course of action.”[1]
I prefer the definition in a U.S. Army intelligence training document titled “Indicators in Operations Other Than War.”[2]
The Army manual describes an indicator as “observable or discernible actions that confirm or deny enemy capabilities and intentions.” The document then defines indications and warning (I&W) as “the strategic monitoring of world military, economic and political events to ensure that they are not the precursor to hostile or other activities which are contrary to U.S. interests.” I&W is a process of strategic monitoring that analyzes indicators and produces warnings.[3]
We could easily leave the definition of indicator as stated by the Army manual and define digital I&W as the strategic monitoring of network traffic to assist in the detection and validation of intrusions.
Observe that the I&W process focuses on threats. It is not concerned with vulnerabilities, although the capability of a party to harm an asset is tied to weaknesses in that asset. Therefore, NSM and IDS products focus on threats. In contrast, vulnerability assessment products are concerned with vulnerabilities. While some authors consider vulnerability assessment “a special case of intrusion detection,”[4] logic shows vulnerabilities have nothing to do with threats. Some vulnerability-oriented products and security information management suites incorporate “threat correlation” modules that simply apply known vulnerabilities to assets. Such modules make plenty of references to threats but no mention of parties with the capabilities and intentions to exploit those vulnerabilities.
Building on the Army intelligence manual, we define indications (or indicators) as observable or discernible actions that confirm or deny enemy capabilities and intentions. In the world
of NSM, indicators are outputs from products. They are the conclusions formed by the product, as programmed by its developer. Indicators
generated by IDSs are typically called alerts.
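To make the idea concrete, here is a minimal sketch, in Python, of the kind of record an IDS might emit as an alert. It is not taken from any particular product; the field names and values are assumptions chosen for illustration.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Alert:
        """One indicator: a conclusion produced by a detection product."""
        timestamp: datetime   # when the sensor observed the event
        sensor: str           # which product or sensor formed the conclusion
        signature: str        # the developer-programmed rule that fired
        src_ip: str           # apparent source of the suspicious traffic
        dst_ip: str           # apparent destination of the suspicious traffic
        severity: int         # the product's opinion of importance, not ground truth

    # A hypothetical alert. It is the product's conclusion; a human still decides what it means.
    example = Alert(
        timestamp=datetime.now(timezone.utc),
        sensor="dmz-sensor-1",
        signature="POLICY outbound FTP from mail server",
        src_ip="10.1.1.25",
        dst_ip="198.51.100.7",
        severity=3,
    )
    print(example)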
The Holy Grail for IDS vendors is 100% accurate intrusion detection. In other words, every alert corresponds to an actual intrusion by a malicious
party. Unfortunately, this will never happen. IDS products lack context. Context is the ability to understand the nature of an event with respect to all other aspects of an organization’s environment. As
a simple example, imagine a no-notice penetration test performed by a consulting firm against a client. If the assessment
company successfully compromises a server, an IDS might report the event as an intrusion. For all intents and purposes, it is an intrusion. However, from the perspective of
the manager who hired the consulting firm, the event is not an intrusion.
Consider a second example. The IDS could be configured to detect the use of the PsExec tool and report it as a “hacking incident.”[5]
PsExec allows remote command execution on Windows systems, provided the user has appropriate credentials and access. The
use of such a tool by an unauthorized party could indicate an attack. Simultaneously, authorized system administrators could
use PsExec to gain remote access to their servers. The granularity of policy required to differentiate between illegitimate
and legitimate use of such a tool is beyond the capabilities of most institutions and probably not worth the effort! As a
result, humans must make the call.
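To see why, consider a minimal Python sketch of the sort of policy an organization would have to encode. The allowlist of administrator workstations and the field names are hypothetical; even with them, the rule cannot answer the questions that matter, so the alert still ends up in front of an analyst.

    # Hypothetical allowlist: workstations whose administrators may use PsExec.
    AUTHORIZED_ADMIN_HOSTS = {"10.1.1.5", "10.1.1.6"}

    def triage_psexec_alert(src_ip: str, dst_ip: str) -> str:
        """Crude policy check for a remote-execution alert.

        Even this tiny rule begs questions no product can answer on its own:
        Was the admin workstation itself compromised? Was a change to that
        server authorized at that time? A human must make the call.
        """
        if src_ip in AUTHORIZED_ADMIN_HOSTS:
            return "probable administrator activity; confirm against change records"
        return "escalate to analyst; possible unauthorized remote execution"

    print(triage_psexec_alert("10.1.1.5", "10.2.2.9"))    # looks legitimate
    print(triage_psexec_alert("192.0.2.44", "10.2.2.9"))  # needs a human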
All indicators have value, but some have greater value than others. An alert stating that a mail server has initiated an outbound FTP session to a host in Russia is an indicator. A spike in the amount of Internet Control Message Protocol (ICMP) traffic at 2 A.M. is another indicator. Generally speaking, the first indicator has more value than the second, unless the organization has never used ICMP before.
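As a rough illustration of weighting one indicator above another, the Python fragment below scans hypothetical flow records and scores a mail server initiating an outbound FTP session higher than a bare ICMP spike. The record layout, addresses, and inventory of mail servers are all assumptions; they stand in for whatever a real flow collector and asset list would provide.

    # Hypothetical flow records: (source IP, destination IP, destination port, country).
    # Port 0 stands in for ICMP in this toy layout.
    FLOWS = [
        ("10.1.1.25", "198.51.100.7", 21, "RU"),  # mail server initiating FTP abroad
        ("10.1.1.40", "10.1.1.41", 0, "US"),      # ICMP between internal hosts
    ]

    MAIL_SERVERS = {"10.1.1.25"}  # assumed asset inventory

    def score(flow):
        """Assign a rough value to an indicator; higher means more worthy of attention."""
        src, dst, dport, country = flow
        if src in MAIL_SERVERS and dport == 21:
            # A mail server should rarely initiate FTP at all, let alone to a foreign host.
            return 9 if country != "US" else 7
        if dport == 0:
            # An ICMP spike says little without a baseline to compare against.
            return 2
        return 1

    for flow in FLOWS:
        print(score(flow), flow)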
Warnings are the results of an analyst’s interpretation of indicators. Warnings represent human judgments. Analysts scrutinize the
indicators generated by their products and forward warnings to decision makers. If indicators are similar to information,
warnings are analogous to finished intelligence. Evidence of reconnaissance, exploitation, reinforcement, consolidation, and pillage constitutes indicators. A report to management that states “Our mail server is probably compromised” is a warning.
It’s important to understand that the I&W process focuses on threats and actions that precede compromise, or in the case of
military action, conflict. As a young officer assigned to the Air Intelligence Agency, I attended an I&W course presented
by the Defense Intelligence Agency (DIA). The DIA staff taught us how to conduct threat assessment by reviewing indicators, such as troop movements, signals intelligence (SIGINT)
transcripts, and human intelligence (HUMINT) reports. One of my fellow students asked how to create a formal warning report
once the enemy attacks a U.S. interest. The instructor laughed and replied that at that point, I&W goes out the window. Once
you’ve validated enemy action, there’s no need to assess the enemy’s intentions or capabilities.
Similarly, the concept of I&W within NSM revolves around warnings. It’s rare these days, in a world of encryption and high-speed networks, to be 100% sure that observed indicators reflect a true compromise. It’s more likely the analysts will collect clues that can be understood only after additional collection is performed against a potential victim. Additional collection could be network-based, such as recording all traffic to and from a possibly compromised machine. Alternatively, investigators could follow a host-based approach by performing a live forensic response on a suspected victim server.[6]
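As a sketch of the network-based option, the fragment below shells out to tcpdump and records every packet to or from the suspect machine for later analysis. The interface name, output path, and address are assumptions; in practice the capture would run on a sensor positioned to see the victim’s traffic, with enough privilege to sniff the interface.

    import subprocess

    SUSPECT = "10.2.2.9"             # assumed address of the possibly compromised server
    INTERFACE = "eth0"               # assumed monitoring interface on the sensor
    OUTPUT = "/var/tmp/suspect.pcap"

    # Record full-content traffic to and from the suspect host for later review.
    subprocess.run(
        ["tcpdump", "-i", INTERFACE, "-s", "0", "-w", OUTPUT, "host", SUSPECT],
        check=True,
    )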
This contrast between the military and digital security I&W models is important. The military and intelligence agencies use
I&W to divine future events. They form conclusions based on I&W because they have imperfect information on the capabilities
and intentions of their targets. NSM practitioners use I&W to detect and validate intrusions. They form conclusions based on digital I&W because they have imperfect
perception of the traffic passing through their networks. Both communities make educated assessments because perfect knowledge
of their target domain is nearly impossible.[7]