

Towards Automating Intrusion Alert Analysis

Peng Ning, Yun Cui, Douglas S. Reeves, and Dingbang Xu

Cyber Defense Laboratory

Department of Computer Science

North Carolina State University


Background

• Traditional intrusion detection systems (IDSs)
  – Focus on low-level attacks or anomalies
  – Actual alerts are mixed with false alerts
  – Intensive intrusions lead to an unmanageable number of alerts
• It is necessary to develop automatic tools that construct attack scenarios and facilitate intrusion analysis.


Related Research

• Exploit similarities between alert attributes
  – Ex.: Valdes and Skinner (2001), Staniford et al. (2000)
• Exploit known attack scenarios
  – Ex.: Cuppens and Ortalo (2000), Dain and Cunningham (2001), Debar and Wespi (2001)
• Use pre- and post-conditions of attacks
  – JIGSAW by Templeton and Levitt (2000)
    • Cannot deal with missing detections and failed attacks
    • Our initial work is an extension to JIGSAW
  – MIRADOR approach by Cuppens and Miege (2002)
    • Developed independently and in parallel to our work
  – Our work (2002, 2003)
• Others
  – M2D2 by Morin et al. (2002), Mission-Impact by Porras et al. (2002)


Outline

• Construct attack scenarios from intrusion alerts via correlation
  – Correlation based on prerequisites and consequences of attacks
• Analyze intensive alerts
• Extract attack strategies from correlated alerts


Correlation Based on Prerequisites and Consequences of Attacks

• Goal
  – Construct high-level attack scenarios from low-level alerts


Correlation Based on Prerequisites and Consequences of Attacks (Cont’d)

• Basic Idea
  – Hyper-alert types: encode our knowledge about each type of attack
    • Prerequisites and consequences
  – Reason about hyper-alerts based on this knowledge

Example hyper-alert type: SadmindBufferOverflow
  Prerequisite: ExistHost(VictimIP) ^ VulnerableSadmind(VictimIP)
  Consequence: {GainAccess(VictimIP)}
  Alert attributes: {VictimIP, VictimPort}
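
As a rough illustration of how such a type could be encoded (a minimal sketch, not the authors' implementation; the tuple-based predicate encoding is my own choice):

```python
# Minimal sketch: a hyper-alert type encodes knowledge about one kind of
# attack as predicates over its alert attributes.
# Predicates are written as (name, (attribute names...)) tuples.
SADMIND_BUFFER_OVERFLOW = {
    "name": "SadmindBufferOverflow",
    "attributes": {"VictimIP", "VictimPort"},
    # conjunction of conditions that must hold for the attack to succeed
    "prerequisite": [("ExistHost", ("VictimIP",)),
                     ("VulnerableSadmind", ("VictimIP",))],
    # conditions that may hold after the attack succeeds
    "consequence": [("GainAccess", ("VictimIP",))],
}

def instantiate(predicates, alert):
    """Replace attribute names with the concrete values of one alert."""
    return {(name, tuple(alert[a] for a in args)) for name, args in predicates}

# e.g. one SadmindBufferOverflow alert against 152.1.19.5
print(instantiate(SADMIND_BUFFER_OVERFLOW["prerequisite"],
                  {"VictimIP": "152.1.19.5", "VictimPort": 32773}))
```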


Correlation Based on Prerequisites and Consequences of Attacks (Cont’d)

• Reasoning about alerts
  – An earlier hyper-alert prepares for a later one if the former makes the latter easier to succeed
    • Decompose prerequisites and consequences into pieces of predicates
    • Match the predicates

Example: h1 (SadmindPing) prepares for h2 (SadmindBufferOverflow)
  C(h1) = {VulnerableSadmind(152.1.19.5), VulnerableSadmind(152.1.19.9)}
  P(h2) = {ExistHost(152.1.19.5), VulnerableSadmind(152.1.19.5)}
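
A minimal sketch of the matching step (my own illustration; real hyper-alerts also carry timestamps and other attributes): instantiate the predicates with concrete attribute values and test whether any consequence of the earlier hyper-alert appears among the prerequisites of the later one.

```python
# Sketch: h1 prepares for h2 if some instantiated consequence predicate of h1
# matches an instantiated prerequisite predicate of h2 and h1 ends before h2
# starts. Instantiated predicates are (name, (argument values...)) tuples.
def prepares_for(consequences_h1, end_h1, prerequisites_h2, begin_h2):
    return end_h1 < begin_h2 and bool(set(consequences_h1) & set(prerequisites_h2))

# The example from the slide: VulnerableSadmind(152.1.19.5) appears in both sets.
C_h1 = [("VulnerableSadmind", ("152.1.19.5",)),
        ("VulnerableSadmind", ("152.1.19.9",))]
P_h2 = [("ExistHost", ("152.1.19.5",)),
        ("VulnerableSadmind", ("152.1.19.5",))]
print(prepares_for(C_h1, end_h1=10.0, prerequisites_h2=P_h2, begin_h2=20.0))  # True
```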


Experimental Evaluation

• Purposes of experiments
  – How well can the proposed method construct attack scenarios?
  – Can alert correlation help differentiate between true and false alerts?
    • Conjecture: correlated alerts are more likely to be true alerts.


Experimental Evaluation (Cont’d)

• DARPA 2000 intrusion detection scenario specific datasets
  – A novice attacker installs components for and carries out a DDoS attack
  – LLDOS 1.0 (inside and DMZ)
  – LLDOS 2.0.2 (inside and DMZ)
• Experimental setup
  – NetPoke
  – RealSecure Network Sensor
  – Isolated network


Hyper-Alert Correlation Graph Discovered from the Inside Traffic of LLDOS 1.0


Experimental Evaluation (Cont’d)

• Two measures
  – Completeness: How well can we correlate the related alerts?

      Rc = (# correctly correlated alerts) / (# related alerts)

  – Soundness: How correctly are the alerts correlated?

      Rs = (# correctly correlated alerts) / (# correlated alerts)
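
A worked example of the two ratios (the counts below are made up for illustration, not results from the paper):

```python
# Worked example with made-up counts (not numbers from the paper):
correctly_correlated = 44      # alerts correlated into the right scenario
related_alerts = 45            # alerts that should have been correlated
correlated_alerts = 57         # alerts the method actually correlated

Rc = correctly_correlated / related_alerts      # completeness
Rs = correctly_correlated / correlated_alerts   # soundness
print(f"Rc = {Rc:.2f}, Rs = {Rs:.2f}")          # Rc = 0.98, Rs = 0.77
```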


Experimental Evaluation (Cont’d)

[Bar chart: completeness and soundness (0–100%) for Data Sets 1–4]


Experimental Evaluation (Cont’d)

[Bar charts: number of alerts before vs. after correlation, and false positive rate vs. detection rate, for Data Sets 1–4]


• Additional details can be found in
  – Peng Ning, Yun Cui, Douglas S. Reeves, "Constructing Attack Scenarios through Correlation of Intrusion Alerts," in ACM CCS 2002, pages 245–254, November 2002.


Analyze Intensive Intrusion Alerts

• Limitations of the previous correlation technique
  – Difficult to cope with very large sets of correlated alerts
• Our solution
  – Interactive analysis utilities
    • Independent
    • Complementary
    • Used as building blocks
    • Can be applied iteratively to previous analysis results


Interactive Analysis Utilities

• Hyper-alert generating utilities
  – Aggregation/disaggregation
  – Clustering analysis
    • Graph decomposition: a special case
  – Focused analysis
• Feature extraction utilities
  – Frequency analysis
  – Link analysis
  – Association analysis


Alert Aggregation/Disaggregation

• Aggregation
  – To simplify the correlation graph, hyper-alerts of the same type can be aggregated together.
    • An interval constraint (e.g., 10 seconds) is used to control the aggregation.
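
A minimal sketch of interval-constrained aggregation (my own illustration, assuming each alert is a dict with a type and a timestamp): consecutive alerts of the same type are merged as long as they are no more than the interval constraint apart.

```python
# Sketch: merge same-type alerts whose consecutive timestamps are within the
# interval constraint (e.g. 10 seconds); each group becomes one aggregated
# hyper-alert in the correlation graph.
def aggregate(alerts, interval=10.0):
    groups = []
    for alert in sorted(alerts, key=lambda a: (a["type"], a["time"])):
        last = groups[-1][-1] if groups else None
        if last and last["type"] == alert["type"] and alert["time"] - last["time"] <= interval:
            groups[-1].append(alert)    # extend the current aggregate
        else:
            groups.append([alert])      # start a new aggregate
    return groups

alerts = [{"type": "SadmindBufferOverflow", "time": t} for t in (0, 3, 5, 40)]
print([len(g) for g in aggregate(alerts)])   # [3, 1]
```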


Aggregation/Disaggregation with Abstraction

• Alerts reported by IDSs are usually low-level alerts and can be abstracted into more general alerts.
• Hyper-alerts can be aggregated together to form new hyper-alerts with a more abstract alert type, controlled by:
  – The abstraction level at which to aggregate
  – The interval constraint
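
A hedged sketch of the abstraction step (the mapping below is an invented example, not the authors' abstraction hierarchy): low-level alert types are first mapped to a more abstract type, and aggregation then operates on the abstract type.

```python
# Sketch with an invented abstraction mapping (not the authors' hierarchy):
# different low-level alert types that abstract to the same type can then be
# aggregated into one hyper-alert of the more abstract type.
ABSTRACTION = {
    "Sadmind_Ping": "RPC_Probe",
    "Sadmind_Amslverify_Overflow": "RPC_Buffer_Overflow",
    "Statd_Overflow": "RPC_Buffer_Overflow",
}

def abstract_alerts(alerts):
    # fall back to the raw type when no abstraction is defined
    return [dict(alert, type=ABSTRACTION.get(alert["type"], alert["type"]))
            for alert in alerts]

raw = [{"type": "Sadmind_Amslverify_Overflow", "time": 12.0},
       {"type": "Statd_Overflow", "time": 15.0}]
print({a["type"] for a in abstract_alerts(raw)})   # {'RPC_Buffer_Overflow'}
```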


Alert Aggregation/Disaggregation (cont’d)

• Disaggregation
  – Aggregated hyper-alerts can be disaggregated to show detailed information.


Case Study with DEFCON8 Dataset

• Some common attack strategies were easily identified
  – e.g., Nmap_Scan → PmapDump → ToolTalk_Overflow
  – e.g., HTTP-based attacks from 010.020.011.074 to 010.020.001.014, 010.020.001.015, 010.020.001.019, …
• Observation
  – There were many BackOrifice and NetBus alerts
  – i.e., attackers were coordinating multiple machines during their attacks
  – This makes correlation and attack identification more difficult!
• Selected results follow.


Using Adjustable Graph Reduction

• Most hyper-alerts of the same type are close to each other in time in the DEFCON8 dataset

[Chart: number of nodes and edges in the correlation graph vs. interval constraint (seconds)]


Largest Correlation Graph after Maximum Graph Reduction

• Aggregated from a graph with 2,940 nodes and 25,321 edges


Using Graph Decomposition

• Clustering constraint:
  (A1.srcIP = A2.srcIP) ^ (A1.destIP = A2.destIP)
• Intuition: cluster alerts sharing the same source and destination IP addresses.
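
A minimal sketch of graph decomposition under this clustering constraint (my own illustration): partition the hyper-alerts by (source IP, destination IP) and keep only the prepare-for edges whose endpoints fall into the same partition.

```python
# Sketch: decompose the correlation graph with the clustering constraint
# (A1.srcIP = A2.srcIP) ^ (A1.destIP = A2.destIP): hyper-alerts sharing both
# the source and the destination IP address fall into the same subgraph.
from collections import defaultdict

def decompose(hyper_alerts, prepare_edges):
    clusters = defaultdict(set)
    for alert_id, attrs in hyper_alerts.items():
        clusters[(attrs["srcIP"], attrs["destIP"])].add(alert_id)
    # keep only the prepare-for edges whose endpoints are in the same cluster
    return {key: [(a, b) for a, b in prepare_edges if a in ids and b in ids]
            for key, ids in clusters.items()}

alerts = {1: {"srcIP": "10.1.1.1", "destIP": "10.2.2.2"},
          2: {"srcIP": "10.1.1.1", "destIP": "10.2.2.2"},
          3: {"srcIP": "10.1.1.9", "destIP": "10.2.2.2"}}
print(decompose(alerts, [(1, 2), (1, 3)]))
# {('10.1.1.1', '10.2.2.2'): [(1, 2)], ('10.1.1.9', '10.2.2.2'): []}
```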


• Additional details can be found in
  – Peng Ning, Yun Cui, Douglas S. Reeves, "Analyzing Intensive Intrusion Alerts Via Correlation," in RAID 2002, pages 74–94, October 2002.


Learning Attack Strategies from Correlated Alerts

• It is desirable, and sometimes necessary, to understand attackers’ strategies
  – For intrusion response, incident handling, profiling attackers or attacking tools, etc.
• Static vulnerability analysis
  – Example: attack graphs
  – Requires specifications of security properties
  – Limited to combinations of known attacks
• Learning attack strategies from alerts
  – Complements static vulnerability analysis
  – Allows examination of attack strategies at different granularities


Representation of Attack Strategies

• Attack strategy
  – The intrinsic relationships between steps in a sequence of attacks
  – Intuition: an attack strategy consists of attack steps and the constraints among these steps
• Attack strategy graph
  – A graph representation that captures the intrinsic relationships between steps in an attack strategy.


Equality Constraint

• An equality constraint for hyper-alert types T1 and T2
  – Equality relations between attributes of these two types
  – Given a type T1 alert h1 and a type T2 alert h2, h1 prepares for h2 if they satisfy an equality constraint
  – Can be derived from T1 and T2

Example: T1 = SadmindPing, T2 = SadmindBufferOverflow
  Equality constraint: T1.destIP = T2.VictimIP
  (T1's consequence VulnerableSadmind(destIP) matches T2's prerequisite VulnerableSadmind(VictimIP))
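
A hedged sketch of how such a constraint could be derived from the two types (my own reading of the slide, using the tuple encoding of predicates from the earlier sketches): pair same-named predicates in T1's consequence and T2's prerequisite and equate their corresponding attribute arguments.

```python
# Sketch: derive equality constraints between hyper-alert types T1 and T2 by
# pairing same-named predicates in T1's consequence and T2's prerequisite and
# equating their corresponding attribute arguments.
def equality_constraints(t1_consequence, t2_prerequisite):
    constraints = []
    for name1, args1 in t1_consequence:
        for name2, args2 in t2_prerequisite:
            if name1 == name2 and len(args1) == len(args2):
                constraints.append([f"T1.{a1} = T2.{a2}"
                                    for a1, a2 in zip(args1, args2)])
    return constraints

# SadmindPing vs. SadmindBufferOverflow (from the slide)
t1_cons = [("VulnerableSadmind", ("destIP",))]
t2_pre  = [("ExistHost", ("VictimIP",)), ("VulnerableSadmind", ("VictimIP",))]
print(equality_constraints(t1_cons, t2_pre))   # [['T1.destIP = T2.VictimIP']]
```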


Attack Strategy Graph

• Extracted from LLDOS 1.0 alerts (IDS: RealSecure)



Learning Algorithm

• Two steps
  – Aggregate intrusion alerts that belong to the same step of a sequence of attacks into one hyper-alert
  – Extract the constraints between the attack steps
• The result is represented as an attack strategy graph
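
A high-level sketch of these two steps (my own illustration, not the published algorithm; the toy alert types below just mirror the examples in these slides, with alerts of the same type standing in for the same attack step):

```python
# Sketch: (1) collapse correlated alerts of the same attack step (here: the
# same alert type) into one node, (2) add an edge between two steps whenever
# some alert in the first prepares for some alert in the second.
def attack_strategy_graph(alerts, prepare_edges):
    step_of = {aid: attrs["type"] for aid, attrs in alerts.items()}   # step 1
    edges = {(step_of[a], step_of[b]) for a, b in prepare_edges
             if step_of[a] != step_of[b]}                             # step 2
    return set(step_of.values()), edges

alerts = {1: {"type": "SadmindPing"}, 2: {"type": "SadmindBufferOverflow"},
          3: {"type": "SadmindPing"}, 4: {"type": "Rsh"}}
nodes, edges = attack_strategy_graph(alerts, [(1, 2), (3, 2), (2, 4)])
print(nodes)   # {'SadmindPing', 'SadmindBufferOverflow', 'Rsh'}
print(edges)   # {('SadmindPing', 'SadmindBufferOverflow'),
               #  ('SadmindBufferOverflow', 'Rsh')}
```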


• Additional details can be found in
  – Peng Ning, Dingbang Xu, "Learning Attack Strategies from Intrusion Alerts," to appear in ACM CCS 2003, October 2003.


Future Work

• Intrusion Alert Analysis
  – Integrate intrusion alerts with other information sources
  – Hypothesize and reason about missed attacks


Thank You!
