CHAPTER -1 INTRODUCTION
1.1 Introduction to Security
What is security?
“Security is a process, not an end state” - Dr. Mitch Kabay
Security is required in any organisation or institute to protect its hardware, software and data resources against known, and sometimes unknown, vulnerabilities. Vulnerabilities put a system at risk because attackers can exploit these loopholes; they may be introduced into a system intentionally or unintentionally. Security is the process of maintaining an acceptable level of perceived risk[1]. It is the process of attaining confidentiality, integrity and availability of resources, and it provides a framework to protect against intrusions or attacks. Network security monitoring is defined as the collection, analysis and escalation of indications and warnings to detect and respond to intrusions. The security process revolves around four steps: assessment, protection, detection and response[1], as shown in Figure 1.1.
Fig 1.1 Security Process
Risk is a measure of the danger to an asset. A threat is a party with the capability and intention to exploit a vulnerability. Vulnerabilities are introduced into assets through poor design, implementation or containment. Asset value is a measure of the time and resources needed to replace an asset or restore it to its former state.
The risk equation is: risk = threat * vulnerability * asset value
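Treating each factor as a score, the equation can be applied as in the following minimal sketch; the 1-10 scales and the example scores are illustrative assumptions, not part of the model in [1]:

```python
# Toy risk calculation: risk = threat * vulnerability * asset value.
# The 1-10 scales and the sample scores below are illustrative only.

def risk(threat, vulnerability, asset_value):
    """Multiply the three factors; a higher score means a riskier asset."""
    return threat * vulnerability * asset_value

# Example: an internet-facing web server vs. an isolated test machine.
web_server = risk(threat=8, vulnerability=6, asset_value=9)   # 432
test_box   = risk(threat=3, vulnerability=6, asset_value=2)   # 36

print(web_server, test_box)
```

The same vulnerability score yields very different risk once threat exposure and asset value are factored in, which is the point of the multiplicative form.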
An intrusion is the act of gaining unauthorized access, or access exceeding one's privileges, to computing resources, either directly or through a victim. Any successful attempt to exploit a misconfiguration in system or application software, a protocol, data or a service is an attack. It can lead to system slow-down, application failure, denial of service or a system crash. Attacks can be launched by insiders (LAN users) or outsiders (from the Internet), and can target servers, networking equipment, hosts and applications. Security is therefore not constrained to hosts or networks alone; it extends to almost any hardware or software involved in communication until the destination is reached, and the same applies to the destination system. Intrusion detection systems may be used to provide the required security at the host, network or application level, alongside encryption of information at various layers.
From where do attacks actually originate, and what is the vector to the target? The CSI/FBI study asked respondents to rate “internal systems,” “remote dial-in,” and “Internet” as frequent points of attack. In 2003, 78% cited the Internet, while only 30% cited internal systems and 18% cited dial-in attacks. The growth of the Internet has brought great benefits to modern society; meanwhile, the rapidly increasing connectivity and accessibility of the Internet has posed a tremendous security threat. The growth of attacks has roughly paralleled the growth of the Internet[2]. The most popular operating systems regularly publish updates, but the combination of poorly administered machines, uninformed users, a vast number of targets, and ever-present software bugs has allowed exploits to remain ahead of patches.
There are five phases of compromise of a machine. The probability of detection during these five phases is given in Table 1.1[1].

Table 1.1 Detecting intruders during the five phases of compromise

Phase: Reconnaissance
Description: Enumerate hosts, services and application versions
Probability of detection: Medium to high
Attacker's advantage: Attackers perform host and service discovery over a long time frame using normal traffic patterns
Defender's advantage: Attackers reveal themselves by the differences between their traffic and legitimate user traffic

Phase: Exploitation
Description: Abuse, subvert or breach services
Probability of detection: Medium
Attacker's advantage: Attackers may exploit services offering encryption or obfuscate exploit traffic
Defender's advantage: Exploits do not appear as legitimate traffic, and IDSs will have signatures to detect many attacks

Phase: Reinforcement
Description: Retrieve tools to elevate privileges and/or disguise presence (insiders)
Probability of detection: High
Attacker's advantage: Encryption hides the content of tools
Defender's advantage: Outbound activity from servers can be closely watched and identified

Phase: Consolidation
Description: Communicate via a back door, typically using a covert channel
Probability of detection: Low to medium
Attacker's advantage: With full control over both communication endpoints, the attackers' creativity is limited only by the access and traffic control offered by intervening network devices
Defender's advantage: Traffic profiling may reveal unusual patterns corresponding to the attackers' use of a back door

Phase: Pillage
Description: Steal information, damage the asset, or further compromise the organization
Probability of detection: Low to medium
Attacker's advantage: Once operating from a trusted host, the attackers' activities may be more difficult to notice
Defender's advantage: Smart analysts know the sorts of traffic that internal systems should employ and will notice deviations

In the Indian scenario, with e-commerce becoming popular in the last few years, cybercrime is a term used broadly to describe criminal activity in which computers or computer networks are a tool, a target, or a place of criminal activity; it includes everything from electronic cracking to denial of service attacks [3].
Security Hierarchy
Figure 1.2 shows how the field of security, through its various subfields, protects general assets such as hardware, software and information resources.
Fig.1.2 Hierarchy of the security specializations
Three modes of security can be applied to any situation; the three D's of security are:
• Defence
• Deterrence
• Detection
Multi Level Model of Security
As varied as the security threats are across different environments, so are the best ways to counter them. The following are the different levels of security[4]:
D -- Minimal Protection
C1 -- Discretionary Security Protection
C2 -- Controlled Access Protection
B1 -- Labelled Security Protection
B2 -- Structured Protection
B3 -- Security Domains
A1 -- Verified Design
The OSI Security Architecture: The OSI security architecture focuses on security services, mechanisms and attacks. The ITU-T (International Telecommunication Union Telecommunication Standardization Sector) recommendation X.800, Security Architecture for OSI, defines a systematic approach to security[5].
Security Services (X.800): The various security services for the protection of an organization's assets are as follows:
Authentication: Provides authentication services for
*Peer Entity Authentication
*Data Origin Authentication
Access Control: Grants users permission to access resources.
Data Confidentiality: Protects data from unauthorized disclosure through
*Connection Confidentiality
*Connectionless Confidentiality
*Selective Field Confidentiality
*Traffic Flow Confidentiality
Data Integrity: Ensures that data is not modified. It covers
*Connection Integrity with Recovery
*Connection Integrity without Recovery
*Selective Field Connection Integrity
*Connectionless Integrity
*Selective Field Connectionless Integrity
Non-Repudiation: Ensures that the sender of a message cannot later deny having sent it.
*Non-Repudiation, Origin
*Non-Repudiation, Destination
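As an illustration of the data integrity and data origin authentication services, a minimal sketch using Python's standard hmac module; the key and message are made-up example values:

```python
import hmac
import hashlib

# A shared secret key lets the receiver verify both integrity (the
# message was not modified in transit) and origin (only a key holder
# could have produced the tag). Key and message are illustrative.
key = b"shared-secret-key"
message = b"transfer 100 rupees to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                  # True
print(verify(key, b"transfer 9999 rupees", tag))  # False: tampered
```

Note that an HMAC provides integrity and origin authentication but not non-repudiation, since both parties hold the same key; non-repudiation requires a digital signature.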
Table 1.2 shows the relationship between the various security services and the mechanisms that can provide them[5].

Table 1.2 Relationship between the security services and mechanisms

Peer Entity Authentication -- Encipherment, Digital Signature, Authentication Exchange
Data Origin Authentication -- Encipherment, Digital Signature
Access Control -- Access Control
Confidentiality -- Encipherment, Routing Control
Traffic Flow Confidentiality -- Encipherment, Traffic Padding, Routing Control
Data Integrity -- Encipherment, Digital Signature, Data Integrity
Non-Repudiation -- Digital Signature, Data Integrity, Notarization
Availability -- Data Integrity, Authentication Exchange

Security Mechanisms: The various security mechanisms provided to enhance security are given below.
Specific Security Mechanisms: These include
* Encipherment
* Digital Signature
* Access Control
* Data Integrity
* Authentication Exchange
* Traffic Padding
* Routing Control
* Notarization
Pervasive Security Mechanisms: These deal with trusted functionality, event detection, the security audit trail and security recovery.
Security against Networking Threats/Challenges of Security
The security of systems, information, resources, policies, configuration, etc. is challenged by attacks. The various types of security attacks and vulnerabilities are listed below.
Security Attacks: Security attacks are classified as follows.
Passive Attacks: Passive attacks include the release of message contents and traffic analysis; the emphasis is on prevention (for example by encryption) rather than detection.
Active Attacks: Active attacks include masquerade, replay, denial of service, splicing attacks, timing attacks, spoofing and hijacking, hacker/cracker attacks, session hijacks, L0phtCrack, network denial of service, TCP/IP spoofing, SYN flooding, modification of messages, ping attacks, TCP sequence guessing, IP/UDP fragmentation, ICMP flooding (Smurf), DNS cache poisoning, viruses and worms, and evasion attacks.
Vulnerability Types: Vulnerabilities can arise from software, hardware, configuration, policy or usage.
Analysing Network Security: The main areas are password security, password sniffing, network services, protecting against attacks, access control with TCP Wrappers, the pidentd authentication server, and port scanning.
1.2 Network Security Technologies
Cryptology: Cryptology is the science of hiding information, drawing on number theory, group theory, combinatorics, complexity theory and information theory. It has two faces: cryptography and cryptanalysis[6].
Cryptography: Cryptography is the art of (overt) secret writing. Figure 1.3 shows the classification of secret writing:
Secret writing
  *Steganography (covert secret writing)
    *Technical steganography
    *Linguistic steganography
      *Semagram (visibly concealed secret writing)
      *Open code (invisibly concealed secret writing)
        *Jargon code (masked secret writing): cue
        *Concealment cipher (veiled secret writing): null cipher, grille
  *Cryptography proper (overt secret writing)
Figure 1.3 Classification of the Cryptography
A layered classification of cryptography is as follows:
*L2 Cryptography (WEP, Wired Equivalent Privacy, for wireless LANs)
*Network Layer Cryptography (IPSec)
*L5 to L7 Cryptography (SSL, Secure Socket Layer)
*File System Cryptography (Microsoft's Encrypting File System, EFS)
Cryptanalysis: Cryptanalysis is nurtured to a good part by encryption errors. It proceeds by
*Exhausting combinatorial complexity
*Anatomy of language: patterns and frequencies
*Polyalphabetic case: probable words
*Compromises
*Linear basis analysis
Encryption: Data can be encrypted using various encryption methods:
*Polyalphabetic encryption: keys, families of alphabets
*Composition of classes of methods
*Superencryption
*Confusion and diffusion
*Secret key cryptography: DES, IDEA, AES
*Open encryption key systems
*Symmetric and asymmetric encryption methods
*Hash algorithms
*One-way functions
*Diffie-Hellman
*RSA method
*Digital Signature Standard (DSS)
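To illustrate the Diffie-Hellman entry above, a toy sketch of the key exchange with deliberately tiny, insecure numbers; real deployments use primes of 2048 bits or more:

```python
# Toy Diffie-Hellman key exchange. The prime p and generator g are far
# too small for real use; they serve only to illustrate the arithmetic.
p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private key (kept secret)
b = 15   # Bob's private key (kept secret)

A = pow(g, a, p)   # Alice sends g^a mod p over the open channel
B = pow(g, b, p)   # Bob sends g^b mod p over the open channel

# Each side combines the other's public value with its own secret.
alice_shared = pow(B, a, p)
bob_shared   = pow(A, b, p)

print(alice_shared == bob_shared)  # True: both derive g^(a*b) mod p
```

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the exchange secure at realistic key sizes.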
Encryption Security: This is essentially about framing rules intended to make unauthorized decryption more difficult, and is based on:
*Cryptographic faults
*Maxims of cryptology
*Shannon's yardsticks
Steganography: Steganography is the art of (covert) secret writing[6]. It is of the following types:
*Watermarking and fingerprinting
*Steganography in media
*Steganography in text
*Steganography in images
*Steganography in audio
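A common steganography-in-images technique is least-significant-bit (LSB) embedding. A hedged sketch over a plain byte array standing in for pixel data (a real implementation would read and write an actual image file):

```python
# LSB steganography sketch: hide each bit of a secret in the lowest bit
# of successive "pixel" bytes. The cover bytes are made-up values.

def embed(cover, secret):
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover too small for this secret"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite lowest bit only
    return bytes(out)

def extract(stego, n_bytes):
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytes(range(200))   # stand-in for pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))    # b'hi'
```

Because each cover byte changes by at most 1, the alteration is visually imperceptible in a real image, which is the whole point of hiding in the least significant bits.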
1.3 Attacks
1.3.1 Attack Terminology
Traditionally, attacks on computers have included methods such as viruses, worms, buffer-overflow exploits and denial of service attacks. Network attacks, on the other hand, are mostly attacks on computers that use a network in some way. A network could be used to send the attack (as with a worm), or it could be the means of attack (as with a Distributed Denial of Service attack). In general, network attacks are a subset of computer attacks. However, there are several types of network attacks that do not attack computers but rather the network they are attached to: flooding a network with packets does not attack an individual computer, but clogs up the network. Although a computer may be used to initiate the attack, both the target and the means of attacking it are network related. Many computer system attack classifications and taxonomies are available in the literature.
In Snort, the most widely used open source network intrusion prevention and detection system, attack classification is based on the impact on the computer system. The attacks whose effect is most critical have the highest priority, and the priority levels are divided into high, medium and low. High-priority attacks include an attempted administrator privilege gain, a network Trojan, or a web application attack. Medium-priority attacks include denial of service (DoS) attacks, a nonstandard protocol or event, potentially bad traffic, and an attempted log-in using a suspicious username. Low-priority attacks include an ICMP event, a network scan, a generic protocol command, etc.[7]
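For illustration, a Snort-style rule showing how a signature carries a classification, and hence a priority; the message text, content string, SID and port below are made-up example values:

```
# Hypothetical rule: flag a suspicious admin login attempt over HTTP.
# The classtype maps to a default priority (attempted-admin is high).
alert tcp any any -> any 80 (msg:"EXAMPLE admin login attempt"; \
    content:"/admin/login"; nocase; classtype:attempted-admin; \
    sid:1000001; rev:1;)
```

When this rule fires, Snort reports the alert with the priority attached to its classtype, which is how the high/medium/low ranking described above reaches the analyst.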
Computer and network attacks have evolved greatly over the last few decades. Attacks are increasing in number and also improving in strength and sophistication. Graph 1.1 is the well-celebrated plot by Julia Allen[8] which shows this trend, along with some milestones in the history of attacks, such as the Morris Worm. The first viruses were released in 1981, among them Apple Viruses 1, 2 and 3, which targeted the Apple II operating system. In 1983, Fred Cohen was the first person to formally introduce the term "computer virus" in his thesis[9], which was published in 1985.
Graph 1.1 Plot of Attack sophistication v/s Intruder Knowledge over the years[8]
More recently, new attacks such as denial of service (DoS) attacks (mid 1990s), distributed DoS (DDoS) attacks (1999), botnets and storm botnets have been developed. Two major recent developments in computer and network attacks are blended attacks and information warfare. Blended attacks, which combine two or more attacks to produce a more potent one, first appeared in 2001 with the release of Code Red[10], followed by Nimda[11], Slammer[12] and Blaster[13].
1.3.2 Attack Objectives and Motivations
Attack motivation can be understood by identifying what attackers do and how they can be classified. A simple classification of attackers is into hackers, criminals (spies, terrorists, corporate raiders and professional criminals) and vandals. The main motivation of a hacker is to gain access to a system or data; the main motivation of the criminal is financial or political gain; and the main motivation of the vandal is to cause damage. Howard's thesis[14] highlights the problem with this three-way classification: all three categories describe criminal behaviour. As Denning[15] points out, the serious and harmful incidents of cyber attack can be seen to be motivated by political and social reasons. The potential threat of cyber terrorism is becoming unavoidable because critical infrastructures are potentially vulnerable, and studies show that vulnerabilities are steadily increasing while the cost of attack is decreasing. Statistics of attacks in recent years appear on the Web Server Intrusion Statistics web site[16].
There are various classifications of Internet attacks. These can be:
• By the goal of the attacker
• By the effect on the system
• By the operating system of the target host
• By the attacked service
1.3.3 Commonly Encountered Attacks
The following discussion gives an extensive view of the commonly encountered attacks.
Viruses: Viruses are self-replicating programs that infect and propagate through files. Usually they attach themselves to a file, which causes them to be run when the file is opened. The main types of viruses are identified below.
File infectors: File infector viruses infect files on the victim’s computer by
inserting themselves into a file. Usually the file is an executable file, such as a
.EXE or .COM in Windows. When the infected file is run, the virus executes as
well.
System and boot record infectors: System and boot record infectors were the most
common type of virus until the mid 1990s. These types of viruses infect system
areas of a computer such as the Master Boot Record (MBR) on hard disks and the
DOS boot record on floppy disks. By installing itself into boot records, the virus
can run itself every time the computer is booted up.
Macro viruses: Macro viruses are simply macros for popular programs, such as
Microsoft Word, that are malicious. For example, they may delete information
from a document or insert phrases into it. Propagation is usually through the
infected files. If a user opens a document that is infected, the virus may install
itself so that any subsequent documents are also infected. Often the macro virus
will be attached as an apparently benign file to fool the user into infecting
themselves. The Melissa virus is the best-known macro virus. It worked by emailing a victim a message that appeared to come from an acquaintance. The email contained a Microsoft Word document as an attachment that, if opened, would infect Microsoft Word; if the victim used the Microsoft Outlook 97 or 98 email client, the virus would also forward itself to the first 50 contacts in the victim's address book. Melissa caused a significant amount of damage, as the email it sent flooded email servers.
General properties of Virus: Viruses often have additional properties, beyond
being an infector or macro virus. A virus may also be multi-partite, stealth,
encrypted or polymorphic. Multi-partite viruses are hybrid viruses that infect both files and system and/or boot records, which makes them potentially more damaging and more resistant to removal. A stealth virus is one that attempts to
hide its presence. This may involve attaching itself to files that are not usually
seen by the user. Viruses can use encryption to hide their payload. A virus using
encryption will know how to decrypt itself to run. As the bulk of the virus is
encrypted, it is harder to detect and analyze. Some viruses have the ability to
change themselves as either time goes by, or when they replicate themselves. Such
viruses are called polymorphic viruses. Polymorphic viruses can usually avoid
being eradicated longer than other types of viruses as their signature changes.
Worms: A worm is a self-replicating program that propagates over a network in
some way. Unlike viruses, worms do not require an infected file to propagate.
There are two main types of worms: mass-mailing worms and network-aware worms.
Mass-mailing Worms: A mass-mailing worm is a worm that spreads through email. Once the email has reached its target it may deliver a payload in the form of a virus or Trojan.
Network-aware Worms: Network-aware worms generally follow a four-stage propagation model. The first stage is target selection: a compromised host selects a target host. The second is exploitation: the compromised host attempts to gain access to the target. Once the worm has access, the third stage, infection, begins; infection may include loading Trojans onto the target host, creating back doors or modifying files. Once infection is complete, the target host is itself compromised and, in the fourth stage, can be used by the worm to continue propagation. Examples include Blaster and SQL Slammer.
Trojans: Trojans appear to the user to be benign programs, but actually have some malicious purpose. They usually carry a payload such as remote access methods, viruses or data destruction. Trojans provide a back door for the malicious attacker, giving them abilities such as session logging, keystroke logging, file transfer, program installation, remote rebooting, registry editing and process management.
Logic bombs: Logic bombs are a special form of Trojan that only release their payload once a certain condition is met. If the condition is not met, the logic bomb behaves as the program it is attempting to simulate.
Buffer overflows: Buffer overflows are probably the most widely used means of attacking a computer or network. They are rarely launched on their own, and are usually part of a blended attack. Buffer overflows exploit flawed programming in which buffers are allowed to be overfilled. If a buffer is filled beyond its capacity, the data filling it can overflow into adjacent memory, where it can either corrupt data or be used to change the execution of the program. There are two main types of buffer overflows, described below.
Stack buffer overflow: A stack is an area of memory that a process uses to store
data such as local variables, method parameters and return addresses. Often
buffers are declared at the start of a program and so are stored in the stack. Each
process has its own stack, and its own heap. Overflowing a stack buffer was one of
the first types of buffer overflows and is one that is commonly used to gain control
of a process. In this type of buffer overflow, a buffer is declared with a certain
size. If the process controlling the buffer does not make adequate checks, an
attacker can attempt to put in data that is larger than the size of the buffer. An
attacker may place malicious code in the buffer. Part of the adjacent memory will
often contain the pointer to the next line of code to execute. Thus, the buffer
overflow can overwrite the pointer to point to the beginning of the buffer, and
hence the beginning of the malicious code. Thus, the stack buffer overflow can
give control of a process to an attacker.
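The stack-smashing idea above can be simulated safely by modelling a fragment of memory as a byte array, with a fixed-size buffer sitting just below a saved return address; the layout and the addresses are illustrative assumptions, not real machine addresses:

```python
# Simulated stack frame: an 8-byte buffer followed by a 2-byte saved
# return address. A copy routine with no bounds check lets input spill
# past the buffer and overwrite the return address.
BUF_SIZE = 8

stack = bytearray(10)
stack[8:10] = (0x4000).to_bytes(2, "little")   # saved return address

def unsafe_copy(dest, data):
    """Like C's strcpy: copies everything, checks nothing."""
    for i, byte in enumerate(data):
        dest[i] = byte

# 8 bytes of filler standing in for shellcode, plus a new return
# address (0x0000, the start of the buffer, where the attacker's
# code would sit in a real exploit).
payload = b"A" * BUF_SIZE + (0x0000).to_bytes(2, "little")
unsafe_copy(stack, payload)

ret = int.from_bytes(stack[8:10], "little")
print(hex(ret))  # control flow now points into attacker-supplied data
```

A bounds-checked copy (refusing any input longer than BUF_SIZE) would leave the saved return address intact, which is exactly the "adequate checks" the text says the vulnerable process fails to make.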
Heap overflows: Heap overflows are similar to stack overflows but are generally
more difficult to create. The heap is similar to the stack, but stores dynamically
allocated data. The heap does not usually contain return addresses like the stack,
so it is harder to gain control over a process than if the stack is used. However, the
heap contains pointers to data and to functions. A successful buffer overflow will
allow the attacker to manipulate the process execution. An example would be to
overflow a string buffer containing a filename, so that the filename is now an
important system file. The attacker could then use the process to overwrite the
system file (if the process has the correct privileges).
Denial of Service attacks: Denial of Service (DoS) attacks[17], sometimes known as nuke attacks, are designed to prevent legitimate users of a system from accessing or using it in a satisfactory manner. DoS attacks usually disrupt the service of a network or a computer so that it is either impossible to use or its performance is seriously degraded.
There are three main types of DoS attacks:
• Host based DoS
• Network based DoS
• Distributed DoS
Host-based DoS: Host-based DoS attacks target computers, exploiting vulnerabilities in the operating system, in application software, or in the configuration of the host. Resource hogging is one way of mounting a DoS attack on a host; resources such as CPU time and memory are the most common targets. Crashers are a form of host-based DoS designed simply to crash the host system so that it must be restarted. Crashers usually target a vulnerability in the host's operating system; many work by exploiting an operating system's implementation of network protocols, since some operating systems cannot handle certain packets and hang or crash on receiving them.
Network-based DoS: Network-based DoS attacks target network resources in an attempt to disrupt legitimate use, usually by flooding the network and the target with packets. To succeed, the attacker must send more packets than the target can handle, or, if the network itself is the target, flood enough packets that the bandwidth left for legitimate users is severely reduced. Three main methods of flooding have been identified:
TCP Floods: TCP packets are streamed to the target.
ICMP Echo Request/Reply Floods: ICMP packets are streamed to the target.
UDP Floods: UDP packets are streamed to the target.
In addition to a high volume of packets, the packets often have certain flags set to make them more difficult to process. If the target is the network, the broadcast address of the network is often targeted. One simple way of consuming network bandwidth is a ping flood, created by sending large ICMP request packets to a large number of addresses (perhaps via the broadcast address) at a fast rate.
Distributed DoS: Distributed DoS (DDoS) attacks are a recent development in computer and network attack methodologies. The DDoS attack methodology was first seen in 1999 with the introduction of attack tools such as The DoS Project's Trinoo, The Tribe Flood Network and Stacheldraht. DDoS attacks work by using a large number of attack hosts to direct a simultaneous attack on a target or targets. Figure 1.4 shows the process of a DDoS attack. First, the attacker commands the master nodes to launch the attack. The master nodes then order all daemon nodes under them to launch the attack. Finally, the daemon nodes attack the target simultaneously, causing a denial of service. With enough daemon nodes, even a simple web page request will stop the target from serving legitimate user requests. A DDoS attack takes place when many compromised machines, infected by malicious code and coordinated under the control of a single attacker, act simultaneously to break into the victim's system, exhaust its resources, and force it to deny service to its customers.
Figure 1.4 Typical DDoS attacks
There are mainly two kinds of DDoS attacks:
*Typical DDoS attacks
*Distributed Reflector DoS (DRDoS) attacks
Typical DDoS attacks: In a typical DDoS attack, the attacker's army consists of master zombies and slave zombies. The hosts of both categories are compromised machines, discovered during the scanning process and infected by malicious code. The attacker coordinates and orders the master zombies and they, in turn, coordinate and trigger the slave zombies. More specifically, the attacker sends an attack command to the master zombies, activating the attack processes on those machines, which lie dormant waiting for the appropriate command to wake up and start attacking. The master zombies then, through those processes, send attack commands to the slave zombies, ordering them to mount a DDoS attack against the victim. In that way the agent machines (slave zombies) begin to send a large volume of packets to the victim, flooding its system with useless load and exhausting its resources. Typical DDoS attacks cause either bandwidth depletion or resource depletion. Bandwidth depletion is caused by UDP or ICMP flood attacks and by Smurf or Fraggle amplification attacks; protocol exploit attacks and malformed packet attacks cause resource depletion.
Typical DDoS attacks can be further classified as shown in Figure 1.5.
Figure 1.5 General taxonomy of the DDoS
Distributed Reflector DoS (DRDoS) attacks: In DRDoS attacks the attacker's army consists of master zombies, slave zombies, and reflectors. The scenario is the same as in a typical DDoS attack up to a specific stage: the attacker controls master zombies, which in turn control slave zombies. The difference is that the slave zombies are led by the master zombies to send a stream of packets, with the victim's IP address as the source address, to other uninfected machines (known as reflectors), exhorting them to connect to the victim. The reflectors then send the victim a greater volume of traffic in reply to its apparent requests to open new connections, because they believe the victim was the host that asked for them. In DRDoS attacks, therefore, the attack is mounted by non-compromised machines, which do so without being aware of the action. Comparing the two scenarios, a DRDoS attack is more detrimental than a typical DDoS attack for two reasons: more machines share the attack, so it is more distributed, and this greater distribution generates a greater volume of traffic.
Figure 1.6 graphically depicts a DRDoS attack.
Figure 1.6 Distributed Reflector DoS attack
Network-based attacks: Several kinds of attacks operate on networks and the protocols that run them. Network spoofing is the process in which attackers pass themselves off as someone else. There are several ways of spoofing in the standard TCP/IP network protocol stack, including MAC address spoofing at the data-link layer and IP spoofing at the network layer. By spoofing their identity, an attacker can pretend to be a legitimate user or manipulate existing communications from the victim host.
Session Hijacking: Session hijacking is the process by which an attacker takes
over a session taking place between two victim hosts. The attack essentially cuts in
and takes over the place of one of the hosts. Session hijacking usually takes place
at the TCP layer, and is used to take over sessions of applications such as Telnet
and FTP. TCP session hijacking involves use of IP spoofing, as mentioned above,
and TCP sequence number guessing. To carry out a successful TCP session
hijacking, the attacker will attempt to predict the TCP sequence number that the
session being hijacked is up to. Once the sequence number has been identified, the
attacker can spoof their IP address to match the host they are cutting out and send
a TCP packet with the correct sequence number. The other host will accept the
TCP packet, as the sequence number is correct, and will start sending packets to
the attacker. The cut out host will be ignored by the other host as it will no longer
have the correct sequence number. Sequence number prediction is most easily
done if the attacker has access to the IP packets passing between the two victim
hosts. The attacker simply needs to capture packets and analyze them to determine
the sequence number. If the attacker does not have access to the IP packets, then
the attacker must guess the sequence number.
Password attacks: An attacker wishing to gain control of a computer, or of a user's account, will often use a password attack to obtain the needed password, and many tools exist to help the attacker uncover passwords.
Password Guessing/Dictionary Attacks: Password guessing is the simplest password attack: the attacker simply attempts to guess the password, often using a form of social engineering to gain clues as to what it might be. A dictionary attack is similar but more automated: the attacker uses a dictionary of words containing possible passwords and a tool that tests whether any of them is the required password. Brute force attacks work by generating every possible combination that could make up a password and testing each one.
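To make the dictionary attack concrete, a minimal sketch that tests candidate words against a stolen password hash; the hash, the tiny wordlist and the use of unsalted SHA-256 are illustrative assumptions, and real systems should use salted, deliberately slow hashes precisely to blunt this attack:

```python
import hashlib

# Suppose the attacker has obtained an unsalted SHA-256 password hash.
stolen_hash = hashlib.sha256(b"sunshine").hexdigest()

# A tiny stand-in for a real wordlist of likely passwords.
wordlist = ["password", "123456", "letmein", "sunshine", "dragon"]

def dictionary_attack(target_hash, words):
    """Hash each candidate and compare; return the match, if any."""
    for word in words:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(stolen_hash, wordlist))  # sunshine
```

The attack succeeds only because the password sits in the wordlist and the hash is fast and unsalted; a brute force attack is the same loop run over the whole keyspace instead of a wordlist.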
Information gathering attacks: The attack process usually involves information gathering, the process by which the attacker gains valuable information about potential targets, or gains unauthorized access to some data, without launching an attack. Information gathering is passive in the sense that no attacks are explicitly launched; instead, networks and computers are sniffed, scanned and probed for information.
Sniffing: Packet sniffers are a simple but invaluable tool for anyone wishing to
gather information about a network or computer. For the attacker, packet sniffers
provide a way to glean information about the host or person they wish to attack
and even gain access to unauthorized information. Traditional packet sniffers work
by putting the attacker's Ethernet card into promiscuous mode. An Ethernet card in
promiscuous mode accepts all traffic from the network, even when a packet is not
addressed to it. This means the attacker can gain access to any packet traversing
the network segment they are on. By gathering enough of the right packets
the attacker can gain information such as login names and passwords. Other
information can also be gathered, such as MAC and IP addresses and what
services and operating systems are being run on specific hosts. This form of attack
is very passive. The attacker is not sending any packets out, they are only listening
to packets on the network.
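After capture, a sniffer's first task is to decode the raw bytes of each
frame. The sketch below parses the 14-byte Ethernet header; the capture
step itself (a raw socket in promiscuous mode, e.g. `socket.AF_PACKET` on
Linux with root privileges) is omitted, and the sample frame is a
hand-constructed example.

```python
# Decoding a captured Ethernet frame: destination MAC, source MAC and
# EtherType occupy the first 14 bytes of every frame.
import struct

def parse_ethernet_header(frame: bytes):
    """Return (dst_mac, src_mac, ethertype) from a raw Ethernet frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac: bytes) -> str:
        return ":".join(f"{b:02x}" for b in mac)

    return fmt(dst), fmt(src), ethertype

# Example frame: broadcast destination, hypothetical source, IPv4 (0x0800)
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
```

Repeating this per layer (IP, TCP, then application data) is how a sniffer
eventually recovers login names and passwords sent in cleartext.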
Mapping: Mapping is used to gather information about hosts on a network.
Information such as what hosts are online, what services are running and what
operating system a host is using, can all be gathered via mapping. Thus potential
targets and the layout of the network are identified. Host detection is achieved
through a variety of methods. Simple ICMP queries can be used to determine if a
host is on-line. TCP SYN messages can be used to determine whether or not a port
on a host is open and thus, whether or not the host is on-line. After detecting if a
host is on-line, mapping tools can be used to determine what operating system and
what services are running on the host. Running services are usually identified by
attempting to connect to a host’s ports. Port scanners are programs that an attacker
can use to automate this process. Basic port scanners work by connecting to every
TCP port on a host and reporting back which ports are open. The attacker then
either chooses an attack based on the information gathered, or gathers more
information through security scanning.
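A basic connect scanner of the kind described above can be sketched in a
few lines: it attempts a full TCP connection to each port and reports which
ones accepted. The target host and port range are caller-supplied.

```python
# A basic TCP connect port scanner: a port that accepts a full connection
# is reported as open.
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example (run only against machines one is authorized to scan):
# print(scan_ports("127.0.0.1", range(20, 1025)))
```

Real tools such as Nmap use half-open SYN scans and parallelism to be
faster and stealthier, but the principle is the same.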
Security Scanning: Security scanning is similar to mapping, but is more active
and more information is gathered. Security scanning involves testing a host for
known vulnerabilities or weaknesses that could be exploited by the attacker. For
example, a security scanning tool may be able to tell the attacker that port 80 of
the target is running an HTTP server, with a specific vulnerability.
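One step of such a scanner can be sketched as matching a service banner
against a table of versions with known weaknesses. Both the banner format
and the vulnerability entry below are hypothetical examples, not real
advisories; real scanners consult large, regularly updated databases.

```python
# Toy vulnerability lookup: extract "Name/Version" from a service banner
# and check it against a (hypothetical) table of known-vulnerable versions.
import re

KNOWN_VULNERABLE = {
    ("ExampleHTTPd", "1.0"): "EXAMPLE-2024-0001 remote code execution",
}

def check_banner(banner: str):
    """Return a description of a known issue for this banner, or None."""
    m = re.search(r"([A-Za-z]+)/([\d.]+)", banner)
    if not m:
        return None
    return KNOWN_VULNERABLE.get((m.group(1), m.group(2)))

print(check_banner("Server: ExampleHTTPd/1.0"))  # reports the example issue
print(check_banner("Server: Other/9.9"))         # unknown version -> None
```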
Blended attacks: While blended attacks are not a new development, they have
recently become popular with attacks such as Code Red and Nimda. Blended
attacks are attacks that contain multiple threats, for example multiple means of
propagation or multiple attack payloads. The first instance of a blended attack
occurred in 1988 with the first Internet worm, the Morris worm. The
Internet is especially susceptible to blended threats, as was shown by the recent
SQL Slammer attack, in which the Internet suffered a significant loss of
performance.
Cyber security menaces: A ranked list of the attacks most likely to cause
substantial damage, compiled by experts[18], is provided below:
1. Increasingly sophisticated web site attacks that exploit browser
vulnerabilities, especially on trusted websites. Web site attacks on browsers are
increasingly targeting components, such as Flash and QuickTime, that are not
automatically patched when the browser is patched. Placing better attack tools on trusted sites is
giving attackers a huge advantage over the unwary public.
2. Increasing sophistication and effectiveness in botnets: The Storm worm started
spreading in January, 2007 with an email saying, “230 dead as storm batters
Europe,” and was followed by subsequent variants. Within a week it accounted for
one out of every twelve infections on the Internet, installing rootkits and making
each infected system a member of a new type of botnet. Previous botnets used
centralized command and control; the Storm worm uses peer-to-peer control.
3. Cyber espionage efforts by well resourced organizations looking to extract large
amounts of data - particularly using targeted phishing. Economic espionage will be
increasingly common as nation-states use cyber theft of data to gain economic
advantage in multinational deals. The attack of choice involves targeted spear
phishing with attachments, using well-researched social engineering methods to
make the victim believe that an attachment comes from a trusted source, and using
newly discovered Microsoft Office vulnerabilities and hiding techniques to
circumvent virus checking.
4. Mobile phone threats, especially against iPhones and Android-based phones,
plus VoIP: Mobile phones are general-purpose computers, so worms, viruses, and
other malware increasingly target them. A truly open mobile platform would usher
in completely unforeseen security nightmares. The developer toolkits provide easy
access for hackers. Attacks on VoIP systems are on the horizon and expected to
surge. VoIP phones and the IP PBXs have had numerous published vulnerabilities.
Attack tools exploiting these vulnerabilities have been written and are available on
the Internet.
5. Insider attacks: Insider attacks are initiated by rogue employees, consultants
and/or contractors of an organization. Insider-related risk has long been
exacerbated by the fact that insiders usually have been granted some degree of
physical and logical access to systems, databases, and networks that they attack,
giving them a significant head start in attacks that they launch. More recently,
however, security perimeters have broken down, something that allows insiders to
attack both from the inside and from outside an organization’s network
boundaries.
6. Advanced identity theft from persistent bots: A new generation of identity theft
is being powered by bots that stay on machines for three to five months collecting
passwords, bank account information, surfing history, frequently used email
addresses, and more. They gather enough data to enable extortion attempts and
advanced identity theft attempts where criminals have enough data to pass basic
security checks.
7. Increasingly malicious spyware: These tools increasingly target and dodge
anti-virus, anti-spyware, and anti-rootkit tools to preserve the attacker's
control of a victim machine for as long as possible.
8. Web application security exploits: Large percentages of web sites have cross
site scripting, SQL injection, and other vulnerabilities resulting from programming
errors. Web 2.0 applications are vulnerable because user-supplied data cannot be
trusted; your script running in the users’ browser still constitutes “user supplied
data.”
9. Increasingly sophisticated social engineering including blending phishing with
VOIP and event phishing: Blended approaches will amplify the impact of many
more common attacks. For example, the success of phishing is being radically
increased by first stealing IDs of users of other technologies. Tax filing scams and
scams based on the U.S. Presidential elections were widely used, and many of
them have succeeded. A second area of blended phishing combines email and
VoIP. An inbound email, apparently being sent by a credit card company, asks
recipients to “re-authorize” their credit cards by calling a 1-800 number. The
number leads them (via VoIP) to an automated system in a foreign country that,
quite convincingly, asks that they key in their credit card number, CVV, and
expiration date.
10. Supply chain attacks infecting consumer devices (USB thumb drives, GPS
systems, photo frames, etc.) distributed by trusted organizations: Retail outlets
are increasingly becoming unwitting distributors of malware. Devices with USB
connections, and the CDs packaged with those devices, sometimes contain malware
that infects victims’ computers and connects them into botnets.
Software often contains many vulnerabilities introduced through poor program
design, logical errors, testing-phase errors, maintenance errors, buffer
overflow errors, configuration errors and other human and machine errors that
creep in during its various development phases. These vulnerabilities invite
intruders to write exploitation scripts, searching for locations in the program
where the exploits can be inserted. The program then transfers control to the
exploit via jump instructions inserted by the intruder, who thereby gains
control of the system and can harvest users’ credentials for financial
transactions.
1.4 Intrusion Detection Systems
“If I have been able to see farther than others, it was because I stood on the
shoulders of giants” Sir Isaac Newton
Intrusion detection is a rapidly evolving technology. Although the field first
bloomed in the early 1980s, all the early intrusion detection work was done as
research projects for US government and military organizations. Most of the
major work in intrusion detection happened in the mid and late 1990s, alongside
the explosion of the Internet. The early research often focused on host-based
solutions, but the drastic growth of networking shifted later efforts towards
network-based systems. Several surveys have indeed been published in the past[19, 20, 21, 22,
23, 24], but the growth of IDSs has been such that a lot of IDSs have appeared in
the meantime. This survey hence tries to present an updated view by starting with
the historical developments in the field of intrusion detection from the perspective
of the people who did the initial research and development and their projects,
providing us with a better insight into the motivation behind it.
History of Intrusion Detection Systems: James P Anderson is acknowledged
as the first person to document the need for automated audit trail review to support
security goals for the US Department of Defence in 1978. He published the
Reference Monitor concept in Computer Security Technology Planning Study, a
planning study for US Air Force and this report is considered to be the seminal
work on intrusion detection. Anderson also published a paper “Computer Security
Threat Monitoring and Surveillance”[25] in 1980 and this is widely considered to
be the first real work in the area of intrusion detection. The paper proposes
taxonomy of classifying internal and external threats to computer systems. He
points out that when a violation occurs, in which the attacker attains the highest
level of privilege, such as root or super user in UNIX, there is no reliable remedy.
He also comments on the problems associated with masquerades for which he
suggests that some sort of statistical analysis of user behaviour, capable of
determining unusual patterns of system use, might represent a way of detecting
masquerades. This suggestion was tested in the next milestone in Intrusion
detection, the IDES project.
The US Navy’s Space and Naval Warfare Systems Commands
(SPAWARS) in 1984 funded a project to research and develop a model for a real-
time intrusion detection system and Dorothy Denning and Peter Neumann came
up in 1988 with the Intrusion Detection Expert System (IDES) model. The rare or
unusual traces of traffic were referred to as anomalous and the assumptions made
in this project served as the basis for many intrusion detection research and system
prototypes of the late 1980s. The IDES model is based on the use of statistical
metrics and models to describe the behaviour of benign users. The IDES prototype
used hybrid architecture, comprising an anomaly detector and an expert system.
The anomaly detector used statistical techniques to characterize abnormal
behaviour. The expert system used a rule-based approach to detect known security
violations. The expert system was included to mitigate the risk that a patient
intruder might gradually change his behaviour over a period of time to defeat the
anomaly detector. This situation was possible because the anomaly detector
adapted to gradual changes in behaviour to minimize false alarms.
Denning’s 1986 paper, An Intrusion Detection Model[22], illustrates
the model of a real-time intrusion-detection expert system capable of detecting
break-ins, penetrations, and other forms of computer abuse. The model is based on
the hypothesis that security violations can be detected by monitoring a system’s
audit records for abnormal patterns of system usage. The model includes profiles
for representing the behaviour of subjects with respect to objects in terms of
metrics and statistical models, and rules for acquiring knowledge about this
behaviour from audit records and for detecting anomalous behaviour. The model is
independent of any particular system, application environment, system
vulnerability, or type of intrusion, thereby providing a framework for a general
purpose intrusion-detection expert system. This paper is considered to be the
stepping-stone for all the further works in this field.
The emergence of intrusion detection systems: In 1984, the US Navy’s
SPAWARS funded a research project Audit Analysis at Sytek and the prototype
system utilized data collected at shell level of a UNIX machine running in a
research environment. The data was then analyzed by using database tools. This
research helped in identifying the normal system usage from the abnormal system
usage. The researchers were Lawrence Halme, Teresa Lunt and John Van Horne.
In 1985 an internal research and development project named Discovery started at
TRW and this monitored the TRW’s online credit database application and not the
operating system for intrusions and misuse. Discovery used a statistical engine to
locate patterns in the input data and an expert system detecting and deterring
problems in TRW’s online credit database. The principal investigator was William
Tener. Haystack was developed for the US Air Force in 1988 to help security
officers detect insider abuse of Air Force Standard Base Level Computers.
Haystack was implemented on an Oracle database management system and
performed anomaly detection in batch mode.
Haystack characterized the information from system audit trails as sets of
features like session duration, number of files opened, number of pages printed,
number of CPU resources consumed in the session and the number of sub
processes created in the session. It used a two-stage statistical analysis to detect
anomalies in system activities. The first stage checked each session for unusual
activity and the second stage used a statistical test to detect trends in sessions. The
combination of the two techniques was designed to allow detection of both out-of-
bounds activities as well as activities that gradually deviated from normal over a
period of time. The principal investigator was Stephen Smaha. Almost the
same time, Multi Intrusion Detection and Alerting System (MIDAS) was
developed by the National Computer Security Centre to monitor NCSC’s
Dockmaster system, a highly secure system. MIDAS was
designed to take data from Dockmaster’s answering system audit log and used a
hybrid analysis strategy, combining statistical anomaly detection with expert
system rule-based approaches. In 1989, Wisdom and Sense from Los Alamos
National Laboratory and Information Security Officer’s Assistant (ISOA) from
Planning Research Corporation were developed. In 1990 Susan Kerr reported
all the experimental as well as actually implemented IDSs in the Datamation
report titled Using AI to improve security. In the same year, an audit trail analysis
tool Computer Watch was developed by AT&T and was designed to consume
operating system audit trails generated by UNIX system. An expert system was
used to summarize system security relevant events and a statistical analyzer and
query mechanism allowed statistical characterization of system-wide events.
The Network Security Monitor (NSM) was developed at the University of California
at Davis in 1990, to run on a Sun UNIX workstation. NSM was the first system to
monitor network traffic and use that traffic as the primary data source. NSM
was a significant milestone in intrusion detection research because it was the first
attempt to extend intrusion detection to heterogeneous network environments.
Principal researchers were Levitt, Heberlein and Mukherjee.
Network Audit Director and Intrusion Reporter (NADIR) was developed
by the Computer division of Los Alamos National Laboratory to monitor user
activities on the Integrated Computing Network (ICN) at Los Alamos. NADIR
performs a combination of expert rule-based analysis and statistical profiling.
NADIR being a successful intrusion detection system has been extended to
monitor systems beyond the ICN at Los Alamos. Shieh et al. in 1991 presented a
paper, A Pattern-Oriented Intrusion Detection Model and its Applications, with an
entirely new approach, which shows that a pattern-oriented ID model can analyze object
privilege and data flows in secure computer systems to detect operational security
problems. This model addresses context-dependent intrusion and complements the
then popular statistical approaches to ID. In the same year, Steven Snapp et al.
in their paper A System for Distributed Intrusion Detection presented a proposed architecture
consisting of the following components: a host manager with a collection of
processes running in background, a LAN manager for monitoring each LAN in the
system and a central manager that receives reports from various hosts and LAN
managers and processes these reports, correlates them and detects intrusions.
Intrusion detection is used to detect unauthorized use of resources or any other
malicious activity on the network, host or application. Intrusion detection systems
are employed behind firewalls to detect the successful exploitation attempts when
other security policies fail. Existing security solutions, including firewalls,
were not designed to detect attacks on the internet such as worms, trojans,
viruses, root-access exploits and distributed denial of service.
Figure 1.7 A typical security scenario in any network
Fig 1.7 shows the location of the IDS in a private network scenario. Traffic
approaching the firewall either matches an applied rule and is allowed through,
or it is stopped and the firewall logs the blocked traffic. As a result, IDSs,
as originally introduced by Anderson[26] in 1980 and later formalized by Denning
[27] in 1987, have received increasing attention in the recent years. The IDSs
along with the firewall form the fundamental technologies for network security.
IDSs can be categorized into two classes, anomaly based IDSs and misuse based
IDSs. Anomaly based IDSs look for deviations from normal usage behaviour to
identify abnormal behaviour. Misuse-based IDSs, on the other hand, recognize
patterns of known attacks. Anomaly detection techniques rely on models of the normal behaviour
of a computer system. These models may focus on the users, the applications, or
the network. Behaviour profiles are built by performing statistical analysis on
historical data[28, 29], or by using rule based approaches to specify behaviour
patterns[30, 31, 32]. A basic assumption of anomaly detection is that attacks differ
from normal behaviour in type and amount. By defining what is normal, any
violation can be identified, whether or not it is part of the threat model. However, the
advantage of detecting previously unknown attacks is paid for in terms of high
false-positive rates in anomaly detection systems. It is also difficult to train an
anomaly detection system in highly dynamic environments. The anomaly
detection systems are intrinsically complex and also there is some difficulty in
determining which specific event triggered the alarms. On the other hand, misuse
detection systems essentially contain attack descriptions or signatures and match
them against the audit data stream, looking for evidence of known attacks[33].
1.4.1 Taxonomy of Intrusion Detection Systems: A large number of
concepts have been used to classify IDSs. The classification is presented in
Figure 1.8, with a detailed discussion in this section.
Figure 1.8 Taxonomy of Intrusion Detection
Intrusion detection methods
The basic intrusion detection methods are two complementary approaches for
detecting intrusions, namely:
• Anomaly detection (behaviour-based) approaches
• Knowledge-based approaches (misuse detection)
Both methods have their distinct advantages and disadvantages as well as suitable
application areas of intrusion detection.
Anomaly detection methods: Anomaly detection (also called behaviour-based or
heuristic detection) methods use information about repetitive and usual
behaviour on the systems they monitor, and this approach identifies events that
deviate from expected usage patterns as malicious. Most anomaly detection
approaches attempt to build some kind of a model over the normal data and then
check to see how well new data fits into that model. In other words, anything that
does not correspond to a previously learned behaviour is considered intrusive.
Therefore, the intrusion detection system might not miss any attacks, but its
accuracy is a difficult issue, since it can generate a lot of false alarms. Examples of
anomaly detection systems are IDES, NIDES, EMERALD and Wisdom and
Sense.
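The statistical profiling underlying such systems can be illustrated with
a much-simplified sketch: build a profile (mean and standard deviation) of
a single session feature from historical data, then flag new observations
that deviate by more than a threshold. Real systems such as IDES used many
metrics and far more elaborate models; the feature and data here are
hypothetical.

```python
# Simplified statistical anomaly detection: a profile of one session
# feature, with observations more than k standard deviations from the
# mean flagged as anomalous.
import statistics

def build_profile(history):
    """Return (mean, population standard deviation) of historical values."""
    return statistics.mean(history), statistics.pstdev(history)

def is_anomalous(value, profile, k=3.0):
    mean, stdev = profile
    if stdev == 0:
        return value != mean
    return abs(value - mean) > k * stdev

# Hypothetical history of CPU seconds consumed per session
profile = build_profile([10, 12, 11, 13, 12, 11, 10, 12])
print(is_anomalous(11, profile))   # typical session -> False
print(is_anomalous(500, profile))  # wildly unusual session -> True
```

The high false-alarm rate discussed below corresponds to legitimate but
rare behaviour landing outside the k-sigma band.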
Anomaly detection can be performed either by unsupervised or by supervised
learning techniques.
Unsupervised learning systems
Unsupervised or self-learning systems learn the normal behaviour of the traffic by
observing the traffic for an extended period of time and building some model of
the underlying process. Examples include techniques such as the Hidden Markov
Model (HMM) and Artificial Neural Networks (ANN)[34].
Supervised systems
In the programmed systems or the supervised learning
method, the system has to be taught to detect certain anomalous events. The
supervised anomaly detection approaches build predictive models provided
labelled training data (normal or abnormal users’ or applications’ behaviour) are
available. Thus the user of the system forms an opinion on what is considered
abnormal for the system to signal a security violation.
Advantages of behaviour-based approaches
• Detects new and unforeseen vulnerabilities.
• Less dependent on operating system-specific mechanisms.
• Detects ‘abuse of privileges’ types of attacks that do not actually involve
exploiting any security vulnerability.
Disadvantages of behaviour-based approaches
• The high false alarm rate is generally cited as the main drawback of behaviour-
based techniques.
• The entire scope of the behaviour of an information system may not be covered
during the learning phase.
• Behaviour can change over time, introducing the need for periodic on-line
retraining of the behaviour profile.
• The information system can undergo attacks at the same time the intrusion
detection system is learning the behaviour. As a result, the behaviour profile
contains intrusive behaviour, which is not detected as anomalous.
It must be noted that very few commercial tools today implement such an
approach, leaving anomaly detection to research systems, even if the founding
paper by Denning recognizes this as a requirement for IDS systems.
Knowledge-based detection methods
Knowledge-based detection or misuse detection or signature detection methods
use information about known security policy, known vulnerabilities, and known
attacks on the systems they monitor. This approach compares network activity or
system audit data to a database of known attack signatures or other misuse
indicators, and pattern matches produce alarms of various sorts. All commercial
systems use some form of knowledge-based approach. Thus, the effectiveness of
current commercial IDS is based largely on the validity and expressiveness of their
database of known attacks and misuse, and the efficiency of the matching engine
that is used. It requires frequent updates to keep up with the new stream of
vulnerabilities discovered, this situation being aggravated by the requirement to
represent all possible facets of the attacks as signatures. This leads to an attack
being represented by a number of signatures, at least one for each operating
system to which the intrusion detection system has been ported. Examples of
product prototypes are Discovery, IDES, Haystack and Bro.
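At its core, such a matching engine compares events against stored
signatures, as in the toy sketch below. The signature strings and their
descriptions are hypothetical; real engines use far richer rule languages
(Snort rules, for example) and optimized multi-pattern matching
algorithms.

```python
# A toy misuse (signature) detector: a database of attack signatures,
# here simple substrings, matched against each event in an audit or
# traffic stream.
SIGNATURES = {
    "cmd.exe": "possible directory-traversal command execution",
    "' OR '1'='1": "possible SQL injection",
}

def match_signatures(event: str):
    """Return descriptions of every known signature found in the event."""
    return [desc for sig, desc in SIGNATURES.items() if sig in event]

print(match_signatures("GET /scripts/..%2f../cmd.exe HTTP/1.0"))
print(match_signatures("GET /index.html HTTP/1.0"))  # -> []
```

The sketch also shows the approach's core limitation: an attack with no
entry in SIGNATURES passes silently, which is why the database requires
frequent updates.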
1.4.2 IDS Deployment Techniques
The effectiveness of intrusion detection systems depends on their internal
design and, even more importantly, on their position within the corporate
architecture. Generally, IDS can be classified into different categories depending
on their deployment.
Host-based monitoring
A host-based IDS is deployed on devices that have other primary functions such as
Web servers, database servers and other host devices. Host logs, comprising
audit, system and application logs, offer an easily accessible and
non-intrusive source of information on the behaviour of a system. In addition, logs
generated by high-level entities can often summarize many lower-level events
such as a single HTTP application log entry covering many system calls, in a
context-aware fashion. A host-based IDS provides information such as user
authentication, file modifications/deletions and other host-based information, thus
designated as secondary protection to devices on the network. Examples of HIDS
products are EMERALD, NFR etc.
Advantages of Host-based Intrusion Detection Systems
Although overall host-based IDS is not as robust as network-based IDS, host-
based IDS does offer several advantages over network-based IDS:
• More detailed logging: HIDS can collect much more detailed information
regarding exactly what occurs during the course of an attack.
• Increased recovery: Because of the increased granularity of tracking events in
the monitored system, recovery from a successful incident is usually more
complete.
• Detects unknown attacks: Since the attack directly affects the monitored host,
HIDS can detect unknown attacks better than network-based IDS.
• Fewer false positives: The way HIDS works produces fewer false alerts than
network-based IDS.
Disadvantages of Host-Based IDS
• Indecipherable information: Because of network heterogeneity and the profusion
of operating systems, no single host-based IDS can translate all operating systems,
network applications, and file systems. In addition, in the absence of something
like a corporate key, no IDS can decipher encrypted information.
• Indirect information: Rather than monitor activity directly (as do network-based
IDS), host-based IDS usually rely heavily or completely on an audit record of
activity that is created by a system or application. This audit record varies widely
in quality and quantity between different systems and applications, thus
dramatically affecting IDS effectiveness.
• Complete coverage: Host-based IDS are installed on the system being monitored.
On very large networks this can comprise many thousands of workstations.
Providing IDS on this scale is both very expensive and difficult to manage.
• Outsiders: Host-based IDS can potentially detect an outside intruder only after
the intruder has reached the monitored host system, not before, as can network-
based IDS. To reach a host system, the intruder must have already bypassed
network security measures.
• Host interference: Host-based IDS places such a load on the host CPU as to
interfere with normal host operations. On some systems, just invoking an audit
record sufficient for the IDS can result in unacceptable loading.
Network-based monitoring
The sole function of network-based IDS is to monitor the traffic of that network.
This ensures that the IDS can observe all communication between a network
attacker and the victim system, avoiding many of the problems associated with
log monitoring. Typical network-based IDS are Microsoft Network Monitor, Cisco
Secure IDS (formerly NetRanger), Snort etc.
Advantages of network-based intrusion detection
• Ease of deployment: Passive nature and hence few performance or compatibility
issues in the monitored environment.
• Cost: Strategically placed sensors can be used to monitor a large
organizational environment, whereas a host-based IDS requires software on each
monitored host.
• Range of detection: The variety of malicious activities able to be detected
through the analysis of network traffic is wider than the variety able to be detected
in host-based IDS.
• Forensics integrity: Since the network-based IDS sensors run on a host separate
from the target, they are more impervious to tampering.
• Detects all attempts, even failed ones: Host-based IDS detects only successful
attacks because unsuccessful attacks do not affect the monitored host directly.
Disadvantages of Network-based IDS
• Direct attack susceptibility: A recently released study by Secure Networks, Inc.
of leading network-based IDS products found that network- based IDS are
susceptible to:
i. Packet spoofing, which tricks the IDS into thinking packets have come from an
incorrect location.
ii. Packet fragmentation attacks that manipulate fragment offsets and sequence
numbers so that the IDS sees only what the attacker wants it to see.
• Indecipherable packets: Because of network heterogeneity and the relative
profusion of protocols, network-based IDSs often cannot decipher the packets they
capture. In addition, in the absence of something like a corporate key, no IDS can
decipher encrypted information.
• Failure when loaded: A recent evaluation of leading network-based commercial
products found that products that detect all tested attacks successfully on an
empty or moderately utilized network start missing at least some attacks when
the monitored network is heavily loaded.
• Failure at wire speed: While network-based IDS can process packets on
low-speed networks (10Mbps), few claim to be able to keep up and miss no
information at 100Mbps or higher.
• Complete coverage: Most sensors are designed to be installed on shared access
segments, and can monitor only that traffic running through those segments. To
provide coverage, the IDS user must select key shared-access segments for IDS
sensors. Most frequently they place sensors in the de-militarized zone and, in some
cases, in front of port and server farms. To monitor distributed ports, internal
attack points, distributed Ethernet connections, and desktops, many sensors must
be installed. Even then, elastic or unauthorized connections such as desktop dial-
ins and modems will not be monitored.
• Switched networks: To make matters worse, switching has replaced shared
/routed networks as the architecture of choice. Switching effectively hides traffic
from shared-access network-based IDS products. Switched networks fragment
communication and divide a network into myriad micro segments that make
deploying shared-access IDS prohibitively expensive since to provide coverage,
very many sensors must be deployed. Alternatives could be attaching hubs to
switches wherever switched traffic must be monitored or mirroring selected
information such as that moving to specific critical devices, to a sensor for
processing. None of these are easy or ideal solutions.
• Insiders: Network-based IDS focus is on detecting attacks from outside, rather
than attempting to detect insider abuse and violations of local security policy.
Host network monitoring: Host network monitoring is also called network-node or
hybrid intrusion detection. Used in personal firewalls and some IDS probe
designs, this approach combines network monitoring with host-based probes. By
observing data at all levels of the host’s network protocol stack,
the ambiguities of platform-specific traffic handling and the problems associated
with cryptographic protocols can be resolved. The data and event streams
observed by the probe are those observed by the system itself.
This approach offers advantages and disadvantages similar to both
alternatives listed above. It resolves many of the problems associated with
promiscuous network monitoring, while maintaining the ability to observe the
entire communication between victim and attacker. Like all host-based
approaches, however, this approach implies a performance impact on every
monitored system, requires additional support to correlate events on multiple
hosts, and is subject to subversion when the host is compromised. This hybrid
intrusion detection system is sometimes considered a subtype of network-based
intrusion detection because it relies primarily upon network traffic analysis
for detection. Prelude is an example of a hybrid IDS.
Target based monitoring: An attempt to resolve the ambiguities inherent in
protecting multiple platforms lies in combining network knowledge with traffic
reconstruction. These target-based ID systems typically use scanning techniques
to form an image of what systems exist in the protected network, including such
details as host operating system, active services, and possible vulnerabilities.
Using this knowledge, a probe can reconstruct network traffic in the same
fashion as would be the case on the receiving system, preventing attackers from injecting or
obscuring attacks.
In addition, this approach allows the IDS to automatically differentiate
attacks that are a threat to the targeted system from those that target
vulnerabilities not present, thus refining the generated alerts. Whether
attacks that cannot succeed should be reported is something of a contentious
issue, offering a trade-off between generating fewer alerts and the
possibility of recognizing novel attacks when combined with known sequences.
In addition, the need to maintain an accurate map of the protected network,
including valid points of vulnerability, may reduce the ability of this class
of system to recognize novel attacks.
Information source: The information that an IDS product can access is
determined by where it is deployed. Network-based IDS always capture and
analyze network packets, while host-based IDS products potentially have many
information sources on the hosts where they are installed.
The IDS classification based on the data source is listed below:
Network packets: The IDS includes a network-based sensor designed to capture
and process network packets and decipher at least one network protocol (e.g.
TCP/IP).
Audit trail: The IDS includes a host-based agent designed to process the audit
records of at least one specific operating system (e.g., Solaris, Ultrix, Unicos).
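To illustrate what deciphering a network protocol involves for a packet-based information source, the following Python sketch decodes the fixed 20-byte portion of an IPv4 header from raw bytes. The field layout follows RFC 791; the hand-crafted test packet is an assumption made purely for the example.

```python
import struct
import socket

def parse_ipv4_header(raw: bytes):
    """Decode the fixed 20-byte IPv4 header of a captured packet.

    A network-based sensor must decipher at least this much to learn
    the protocol and endpoints of each packet it captures.
    """
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,  # IHL is counted in 32-bit words
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,                       # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-crafted header: IPv4, TTL 64, TCP, 10.0.0.1 -> 10.0.0.2
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
fields = parse_ipv4_header(hdr)
print(fields["src"], fields["dst"], fields["protocol"])
```

A real sensor would obtain `raw` from a capture interface (e.g. libpcap) and continue decoding the TCP or UDP header that follows.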
1.4.3 IDS Architecture
The IDS should provide a distributed capability, since this component of
scalability is vital for effective deployment of IDS in the vast majority of corporate
networks. A distributed capability means that a central manager or managers and
local collection/processing agents placed as needed throughout the monitored
network provide the IDS functionality. However, some products are available in
both local and distributed versions.
Monolithic systems: The simplest model of IDS is a single application
containing probe, monitor, resolver and controller all in one, called the
monolithic or centralised system. It focuses on a specific host or system,
with no correlation of actions that cross system boundaries. Such systems are
conceptually simple and relatively easy to implement. Their major weakness is
that an attack can be implemented as a sequence of individually innocuous
steps. The alerts generated by such systems may in fact be aggregated
centrally, but this architecture offers no synergy between IDS instances.
Hierarchical systems: If one considers the alerts generated by an IDS instance
to be events in themselves, suitable for feeding into a higher-level IDS
structure, an intrusion detection hierarchy results. At the root of the
hierarchy lie a resolver unit and a controller. Below these lie one or more
monitor components, with subsidiary probes distributed across the protected
systems. Effectively, the whole hierarchy forms a macro-scaled IDS. The use of
a centralized controller unit allows information from different subsystems to
be correlated, potentially identifying transitive or distributed attacks. For
example, a simple address-range probe, while difficult to detect using a
network of monolithic host IDS instances, can be trivial to observe when
connections are correlated using a hierarchical structure.
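The address-range probe example can be sketched in Python as follows. The reporting interface, the event format, and the threshold of five distinct targets are assumptions made for illustration, not the design of any particular product.

```python
from collections import defaultdict

class CentralMonitor:
    """Correlates connection reports from per-host probes.

    A source that touches many distinct hosts looks innocuous to each
    monolithic host IDS, but the hierarchy sees the address-range probe.
    """
    def __init__(self, scan_threshold=5):
        self.scan_threshold = scan_threshold
        self.targets_by_source = defaultdict(set)

    def report(self, source_ip, target_ip):
        """Called by a subsidiary probe for each observed connection."""
        self.targets_by_source[source_ip].add(target_ip)

    def alerts(self):
        """Sources that contacted at least scan_threshold distinct hosts."""
        return [src for src, targets in self.targets_by_source.items()
                if len(targets) >= self.scan_threshold]

monitor = CentralMonitor(scan_threshold=5)
# One connection per protected host: invisible to any single host's IDS.
for i in range(1, 9):
    monitor.report("203.0.113.7", f"10.0.0.{i}")
monitor.report("10.0.0.1", "10.0.0.2")   # ordinary internal traffic
print(monitor.alerts())
```

Only the correlated view flags `203.0.113.7`; each individual host saw a single, unremarkable connection.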
Agent based systems: A more recent model of IDS architecture divides the system
into distinct functional units: probes, monitors, and resolver and controller units.
These may be distributed across multiple systems, with each component receiving
input from a series of subsidiaries, and reporting to one or more higher-level
components. Probes report to monitors, which may report to resolver units or
higher-level monitors, and so forth. This architecture, implemented in systems
such as EMERALD, allows great flexibility in the placement and application of
individual components. In addition, this architecture offers greater survivability in
the face of overload or attack, high extensibility, and multiple levels of reporting
throughout the structure. FIRE is a prototype product that uses an agent-based
approach to intrusion detection.
Distributed systems: All the IDS architectural models described so far consider
attacks in terms of events on individual systems. A recent development, typified by
the GrIDS system, lies in regarding the whole system as a unit. Attacks are
modelled as interconnection patterns between systems, with each link representing
network activity. The graphs that form can be viewed at different scales, ranging
from small systems to the interconnection between large and complex systems
(where sub-networks are collapsed into points). This novel approach promises
high scalability and the potential to recognize widely distributed attack patterns
such as worm behaviour. This architecture is also implemented in DIDS.
Analysis frequency: The classification depending on execution frequency or
periodicity is based on how often IDS analyzes data from its information sources.
Most commercial IDS claim real-time processing capability, and a few provide the
capability for batch processing of historical data.
Dynamic Execution: IDS are designed to perform concurrent and continuous
automated processing and analysis implying real-time operation or on-the-fly
processing. IDS deployable in real-time environments are designed for online
monitoring and analyzing system events and user actions.
Static execution: IDS are designed to perform periodic processing and
analysis, implying batch or other sporadic operation. This can be effective
against low-intensity probes, where the attacker hides his presence by
spreading the attack over a very long period with appreciable gaps between
consecutive attempts. Audit trail analysis is the prevalent method used by
periodically operated systems.
Response: The behaviour on detection of an attack describes the response of the
IDS. IDS may respond to an identified attack, misuse, or anomalous activity in the
following three ways:
Passive: In a passive response, the IDS simply generates alarms to inform
responsible personnel of an event by way of console messages, email, paging,
and report updates. Passive or indirect gathering of information aids in
identifying the source of an attack using techniques such as DNS lookups,
passive fingerprinting, etc.
Reactive: This is an active response to critical events, where the IDS takes
corrective action that stops the attacker from gaining further access to
resources, thus mitigating the effects of the attack. These responses are
executed after the attack has been detected by the IDS. Reactive responses
change the surrounding system environment, either on the host on which the
IDS resides or in the surrounding network. For example, the IDS may
reconfigure another system such as a firewall or router to block out the
attacker, use TCP reset frames to tear down connection attempts, correct a
system vulnerability, log off a user, selectively increase monitoring, or
disconnect a port as specified by the user.
Proactive: This is an active response to critical events, where the IDS
intervenes and actively stops an attack from taking place. The only
difference between proactive and reactive responses is when they are executed. A
proactive response could be to drop a network packet before it has reached its
destination, thereby intervening and stopping the actual attack. A reactive response
would have been able to terminate the ongoing connection, but it would not have
stopped the packet that triggered the IDS from reaching its destination.
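The three response styles can be contrasted in a small Python sketch. The event fields and the concrete actions listed are illustrative assumptions, not the interface of any real IDS product.

```python
def respond(event, mode):
    """Return the actions an IDS/IPS would take for a detected event.

    The three response styles differ mainly in *when* they act:
    passive only notifies, reactive acts after the attack is observed,
    proactive intervenes before the triggering packet is delivered.
    """
    if mode == "passive":
        return [f"alert operator: {event['signature']}"]
    if mode == "reactive":
        return [f"firewall: block {event['src']}",
                f"tcp-reset: tear down {event['src']} -> {event['dst']}"]
    if mode == "proactive":
        return [f"drop packet from {event['src']} before delivery"]
    raise ValueError(f"unknown response mode: {mode}")

event = {"signature": "buffer overflow attempt",
         "src": "198.51.100.9", "dst": "10.0.0.5"}
for mode in ("passive", "reactive", "proactive"):
    print(mode, "->", respond(event, mode))
```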
1.4.4 Latest Intrusion Detection Software
Along with the intrusion detection systems that have made significant
contributions to the ongoing research in the field and mentioned above, there are a
few other products that deserve a special discussion. Most of the currently used
Open-Source and free Software Packages, Commercial Software Packages and
Academic Software Packages are included below:
AirCERT Automated Incident
Reporting (AirCERT) is a scalable distributed system for sharing security event
data among administrative domains. Using AirCERT, organizations can exchange
security data ranging from raw alerts generated automatically by network intrusion
detection systems (and related sensor technology), to incident reports based on the
assessments of human analysts.
ISS Real Secure This IDS works satisfactorily at Gigabit speed. The high speed
is achieved by integrating the IDS into the switch or by using a specific
port, called the span port, which mirrors all the traffic on the switch. The
BlackICE technology of
this sensor includes protocol analysis and anomaly detection combined with the
Real Secure’s library of signature-based detection capabilities.
Real Secure Server Sensor It is a hybrid IDS which resides on one host and
still monitors the network traffic, detecting attacks at the network layer of
the protocol stack. However, the sensor also detects attacks at higher layers
and therefore can detect attacks hidden in encrypted sessions such as IPsec or
SSL. The sensor can also monitor application and operating system logs.
Snort Snort is an Open Source network intrusion detection system that keeps
track of intrusion attempts and signs of possible ’bad’ behaviour or hacking
exploits. It is capable of performing real-time traffic analysis and packet
logging on IP networks. It can perform protocol analysis and content
searching/matching, and can be used to detect a variety of attacks and probes,
such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS
fingerprinting attempts, and much more. It is non-intrusive, easily
configured, utilizes familiar methods for rule development, and currently
includes the ability to detect more than 1200 potential vulnerabilities.
Sourcefire Founded by the creators of Snort, the most widely deployed
intrusion detection technology worldwide, Sourcefire has been recognized
throughout the industry for enabling customers to quickly and effectively
address security risks. Today, Sourcefire is redefining the network security
industry by combining enhanced Snort with sophisticated proprietary
technologies to offer the first ever unified security monitoring
infrastructure, delivering all of the capabilities needed to proactively
identify threats and defend against intruders.
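The content searching/matching described for Snort can be approximated by the following toy sketch. The two rules shown are invented for illustration; a real Snort compiles thousands of rules into an efficient multi-pattern automaton rather than scanning them linearly.

```python
# A toy signature matcher in the spirit of Snort's "content" option:
# each rule pairs a payload substring with an alert message.
RULES = [
    {"content": b"/cgi-bin/phf", "msg": "CGI phf access attempt"},
    {"content": b"\x90" * 8,     "msg": "possible NOP sled (buffer overflow)"},
]

def match_payload(payload: bytes):
    """Return the alert message of every rule whose content appears."""
    return [rule["msg"] for rule in RULES if rule["content"] in payload]

print(match_payload(b"GET /cgi-bin/phf?Qalias=x HTTP/1.0"))
print(match_payload(b"GET /index.html HTTP/1.0"))
```

In practice a rule also constrains protocol, addresses and ports, and the matcher runs over reassembled streams rather than single packets.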
Shadow Shadow is an Intrusion Detection system developed on inexpensive PC
hardware running Open Source, public domain, or freely available software. A
SHADOW system consists of at least two pieces: a sensor located at a point near
an organization’s firewall, and an analyzer inside the firewall. Shadow performs
traffic analysis; the sensor collects packet headers from all IP packets that it sees;
the analyzer examines the collected data and displays user defined interesting
events on a web page.
Entercept Entercept is a HIDS that prevents and detects attacks; uses a
combination of signatures and behavioural rules; safeguards the server,
applications and resources from known and unknown worms and buffer-overflow
attacks; reduces false positives and protects customer data.
McAfee Desktop Firewall It is a HIDS which provides firewall protection and
intrusion detection for the desktop; guards against threats from internal and
external intruders, malicious code and silent attacks.
OKENA StormWatch It is a HIDS that intercepts all system calls to file,
network, COM and registry resources and correlates behaviours of such system
requests to make real-time allow or deny decisions; supports XP, Win2K and
UNIX systems; scalable to 5000 intelligent agents manageable from one console.
Symantec Host IDS It is a HIDS that detects unauthorized and malicious activity
like access to critical files and bad logins, alerts administrators and takes
precautionary action to prevent information theft or loss, without any overhead to
the deployed monitoring machine. It has the advantage that it supports all the
popular operating systems.
SMART Watch It is a HIDS that performs file-change detection; provides a
restoration tool that reacts in near-real time without polling.
GFI LANguard It is a HIDS that monitors the security event logs of all
Windows XP, Windows 2000, and Windows NT servers and workstations on your
network; alerts administrators in real time about possible intrusions and
attacks.
NetRanger NetRanger is a network-based IDS that monitors network traffic with
special hardware devices that can be integrated into Cisco routers and switches or
act as stand-alone boxes. In addition to network packets, router log files can also
be used as additional source of information. The system consists of Sensors,
centralized data processing units called Directors and a proprietary communication
subsystem called Post Office. NetRanger is integrated into the Cisco Secure
Intrusion Detection System.
Network Flight Recorder NFR is a network-based ID system that uses filters
for misuse detection. NFR did not start as an IDS but provides an architecture
to monitor and filter network packets, log results, perform statistical
evaluation and initiate alarms when certain conditions are met, and therefore
can be used to detect intrusions as well. NFR is designed to provide
post-mortem analysis capability for networks after malicious activities have
happened. This can be used to shorten the lifetime of new attacks by quickly
adding their signatures to the detection unit. Additionally, the system
performs statistics gathering and provides information about the usage growth
of applications or traffic peaks of certain protocol types. The architecture
is built in a modular fashion with interfaces between the main components to
easily add new subsystems. NFR Security’s intelligent intrusion management
systems not only detect and deter network attacks, but also integrate with
popular firewall providers to prevent future attacks.
Fuzzy Intrusion Recognition Engine FIRE is a network intrusion detection
system that uses fuzzy systems to assess malicious activity against computer
networks. The system uses an agent-based approach to separate monitoring
tasks. Individual agents perform their own fuzzy processing of input data
sources. All agents communicate with a fuzzy evaluation engine that combines
the results of individual agents using fuzzy rules to produce alerts that are
true to a certain degree. The results show that fuzzy systems can easily
identify port scanning and denial-of-service attacks. The system can be
effective at detecting some types of backdoor and Trojan horse attacks. The
paper [35] gives more details on this product.
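The idea of alerts that are true to a certain degree can be sketched with a single fuzzy rule. The membership functions, the thresholds, and the use of min as the fuzzy AND are conventional choices assumed for the example, not FIRE's actual rule base.

```python
def degree_high(value, low, high):
    """Piecewise-linear membership: 0 below `low`, 1 above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def port_scan_alert(distinct_ports, syn_rate):
    """Fuzzy rule: IF many distinct ports AND high SYN rate
    THEN port-scan alert, true to the degree of the weaker antecedent
    (min is the conventional fuzzy AND)."""
    many_ports = degree_high(distinct_ports, low=5, high=50)
    high_rate = degree_high(syn_rate, low=10, high=100)  # SYNs per second
    return min(many_ports, high_rate)

print(port_scan_alert(distinct_ports=60, syn_rate=120))  # clear scan
print(port_scan_alert(distinct_ports=3, syn_rate=2))     # benign traffic
```

An evaluation engine like FIRE's would combine many such rule outputs, from many agents, into an overall alert degree.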
Intelligent Intrusion Detection System IIDS is being developed to demonstrate
the effectiveness of data mining techniques that utilize fuzzy logic. This system
combines two distinct intrusion detection approaches: Anomaly based intrusion
detection using fuzzy data mining techniques, and Misuse detection using
traditional rule-based expert system techniques. The anomaly-based components
look for deviations from stored patterns of normal behaviour. The misuse detection
components look for previously described patterns of behaviour that are likely to
indicate an intrusion. Both network traffic and system audit data are used as
inputs.
DERBI DERBI is a computer security tool aimed at diagnosing and
recovering from network-based break-ins. The technology adopted has the ability
to handle multiple methods (often with different costs) of obtaining desired
information, and the ability to work around missing information. The prototype
will not be an independent program, but will invoke and coordinate a suite of
third-party computer security programs (COTS or public) and utility programs.
MINDS MINDS (Minnesota Intrusion Detection System) project is developing a
suite of data mining techniques to automatically detect attacks against computer
networks and systems. It uses an unsupervised anomaly detection system that
assigns a score to each network connection that reflects how anomalous that
connection is.
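The notion of assigning each connection a score that reflects how anomalous it is can be sketched as follows. The per-feature z-score used here is a deliberately crude stand-in for the density-based outlier scoring MINDS actually employs, and the feature set is an assumption for the example.

```python
import statistics

def anomaly_scores(training, connections):
    """Score each connection by its distance from the per-feature mean,
    in units of standard deviation, summed over features."""
    features = list(zip(*training))
    means = [statistics.mean(f) for f in features]
    stdevs = [statistics.pstdev(f) or 1.0 for f in features]  # avoid div by 0
    return [sum(abs(x - m) / s for x, m, s in zip(conn, means, stdevs))
            for conn in connections]

# Features per connection: (bytes sent, duration in s, distinct dest ports)
normal = [(500, 2, 1), (520, 3, 1), (480, 2, 1), (510, 4, 1)]
scores = anomaly_scores(normal, [(505, 3, 1),     # ordinary connection
                                 (500, 2, 40)])   # touches 40 ports: scan-like
print(scores)
```

The scan-like connection receives a far higher score than the ordinary one, which is the behaviour an analyst triaging ranked connections relies on.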
NetSTAT NetSTAT is a tool aimed at real-time network-based intrusion detection.
The NetSTAT approach extends the state transition analysis technique (STAT) to
network-based intrusion detection in order to represent attack scenarios in a
networked environment. NetSTAT is oriented towards the detection of attacks in
complex networks composed of several sub-networks.
BlackICE The BlackICE IDS scans network traffic for hostile signatures in much
the same way that virus scanners examine files for virus signatures. BlackICE
runs at 148,000 packets per second, checks all 7 layers of the stack and rates
each attack on a scale of 1 to 100 so that alerts are raised only for attacks
it considers serious. There are two versions: a desktop agent (BlackICE
Defender) and a network agent (BlackICE Sentry). The desktop agent runs on
Win95/WinNT desktops. The network agent runs just like any other sniffer-type
IDS.
Cyclops Snort-based Cyclops IDS provides advanced and flexible intrusion
detection at Gigabit speeds and secures networks by performing high-speed packet
analysis to detect malicious activities in real-time and automatically launch
preventive measures before security can be compromised.
Dragon The Dragon sensor detects suspicious activity with both signature-based
and anomaly-based techniques. Its library of attacks detects thousands of potential network
attacks and probes, and also hundreds of successful system compromises and
backdoors.
E-Trust E-Trust intrusion detection delivers state-of-the-art network
protection, including protection against DDoS attacks. All incoming and outgoing traffic is
categorized list of web sites to ensure compliance. It is then checked for content,
malicious codes and viruses and notifies the administrator of offending payloads.
Symantec ManHunt provides high-speed, network intrusion detection, real-time
analysis and correlation, and proactive prevention and response to protect
enterprise networks against internal and external intrusions and denial-of-service
attacks. The ability to detect unknown threats using protocol anomaly detection
helps in eliminating network exposure and the vulnerability inherent in signature-
based intrusion detection systems. Symantec ManHunt traffic rate monitoring
capability allows for detection of stealth scans and denial-of-service attacks that
can cripple even the most sophisticated networks.
Net Detector Net Detector is a network surveillance system for IP networks that
provides non-intrusive, continuous traffic recording and real-time traffic analysis.
Net Detector records network traffic, analyzes every packet, detects the activities
of intruders, sets alarms for real-time alerting, and gathers evidence for post-event
analysis.
1.4.5 Limitations of IDSs
IDS merely monitor and sniff network packets off the wire to report malicious
activity or alert a network administrator regarding a threat, but they do not
take preventive measures against the attacks, and this is the major limitation
of IDSs. To overcome this limitation, inline IDSs, called Intrusion Prevention
Systems (IPS), are used.
Intrusion Prevention System (IPS): Going beyond mere monitoring and alerting,
second-generation IDS are being called intrusion prevention systems (IPS).
They either stop the attack themselves or interact with an external system to
put down the threat. An inline IDS/IPS performs the following functions to
ensure the safety of the system:
*Drop packets
*Reset connections
*Route suspicious traffic to quarantined areas for inspection.
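The inline actions listed above can be sketched as a verdict function applied to each packet before it is forwarded. The packet representation and the decision rules are invented for illustration; a real IPS consults its full signature and policy engine at this point.

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    DROP = "drop"              # drop the packet
    RESET = "reset"            # reset the connection
    QUARANTINE = "quarantine"  # divert for inspection

def inline_verdict(packet):
    """Decide, before forwarding, what an inline IPS does with a packet."""
    if packet.get("blacklisted_src"):
        return Verdict.DROP
    if packet.get("signature_match") == "exploit":
        return Verdict.RESET
    if packet.get("suspicious"):
        return Verdict.QUARANTINE
    return Verdict.PASS

for pkt in ({"blacklisted_src": True},
            {"signature_match": "exploit"},
            {"suspicious": True},
            {}):
    print(pkt, "->", inline_verdict(pkt))
```

Because every packet passes through this function, its latency and failure behaviour directly shape the implementation challenges discussed next.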
• Categories of IPS: Following are two categories of IPS:
*Host IPS
*Network IPS
• Implementation Challenges: The challenges stem from the fact that an IPS
device is designed to work inline, presenting a potential choke point and a
single point of
failure. It must meet stringent network performance and reliability requirements as
a prerequisite to deployment.
• Limitations of IPSs:
*Increase in network latency.
*Failure of the inline device.
*Exacerbation of the effects of a false positive.
• Emerging Security Technologies:
*Inline NIDS: If the IDS is a mandatory inspection point with the ability to
filter real-time traffic, it is called an inline NIDS. Most NIDS have two
interfaces, one for management and the other for detection.
*Hybrid Host Solutions
*Application firewalls.
Future of IDS: The future of IDS is bright if the following improvements are made:
*Decreasing false-positive detection.
*IDSs using both anomaly detection and signature detection.
*IDSs becoming better expert systems.
*IDSs becoming IPSs.
*Convergence of functionality and vendors.
1.5 Organization of the thesis
The rest of the chapters in this thesis are organized as follows. Chapter 2
contains the review of literature, problem formulation, objectives of the
study and research methodology. Chapter 3 is about the research tool (Snort
IDS) used in the research work; it deals with the basic fundamentals of the
Snort IDS, its configuration, rule options, etc. Chapter 4 deals with smart
Snort, used to intercept packets from the network which match defined
patterns; it uses the Aho-Corasick algorithm and its automaton creation.
Chapter 5 is about key management issues, the Diffie-Hellman key exchange
algorithm, the Digital Signature Scheme (DSS), the DSA (Digital Signature
Algorithm), the RSA algorithm, the Secure Hashing Algorithms (SHA1, SHA256,
SHA512) and MD5, the implementation of IDHDS with RSA, the proposed algorithm
and the proposed hypothetical security model. The conclusion is given at the
end of the thesis.