
Data Mining for Malware Detection

Dr. Mehedy Masud, Dr. Latifur Khan

Dr. Bhavani Thuraisingham

The University of Texas at Dallas

June 7, 2013


Outline

- Data mining overview

- Intrusion detection, malicious code detection, buffer overflow detection, email worm detection (worms and viruses)

- Novel class detection for polymorphic malware

- Reference: Masud, Khan and Thuraisingham, Data Mining Tools for Malware Detection, CRC Press/Taylor and Francis, 2011


What is Data Mining?

Data mining is also known as: knowledge mining, knowledge discovery in databases, data archaeology, data dredging, database mining, knowledge extraction, data pattern processing, information harvesting, "siftware".

"The process of discovering meaningful new correlations, patterns, and trends, often previously unknown, by sifting through large amounts of data using pattern recognition technologies and statistical and mathematical techniques" (Thuraisingham, Data Mining, CRC Press, 1998)


What’s going on in data mining?

- What are the technologies for data mining?
  - Database management, data warehousing, machine learning, statistics, pattern recognition, visualization, parallel processing

- What can data mining do for you?
  - Data mining outcomes: classification, clustering, association, anomaly detection, prediction, estimation, ...

- How do you carry out data mining?
  - Data mining techniques: decision trees, neural networks, market-basket analysis, link analysis, genetic algorithms, ...

- What is the current status?
  - Many commercial products mine relational databases

- What are some of the challenges?
  - Mining unstructured data; extracting useful patterns; web mining; data mining, security, and privacy


Data Mining for Intrusion Detection: Problem

- An intrusion can be defined as "any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource".

- Attacks are:
  - Host-based attacks
  - Network-based attacks

- Intrusion detection systems are split into two groups:
  - Anomaly detection systems
  - Misuse detection systems

- Both use audit logs
  - Capture all activities in networks and hosts.
  - But the amount of data is huge!


Misuse Detection



Problem: Anomaly Detection



Our Approach: Overview

[Diagram: training data, grouped by class, is clustered with hierarchical clustering (DGSOT); the resulting clusters are used for SVM class training, and the trained model is applied to the testing data]

DGSOT: dynamically growing self-organizing tree


Our Approach: Hierarchical Clustering

[Figure: hierarchical clustering with SVM flow chart]


Results

Training Time, FP and FN Rates of Various Methods

 

Method                  Average Accuracy   Total Training Time   Average FP Rate (%)   Average FN Rate (%)
Random Selection        52%                0.44 hours            40                    47
Pure SVM                57.6%              17.34 hours           35.5                  42
SVM + Rocchio Bundling  51.6%              26.7 hours            44.2                  48
SVM + DGSOT             69.8%              13.18 hours           37.8                  29.8


Introduction: Detecting Malicious Executables using Data Mining

- What are malicious executables?
  - Harm computer systems
  - Viruses, exploits, denial of service (DoS), flooders, sniffers, spoofers, Trojans, etc.
  - Exploit software vulnerabilities on a victim
  - May remotely infect other victims
  - Incur great loss. Example: the Code Red epidemic cost $2.6 billion

- Malicious code detection: the traditional approach
  - Signature-based
  - Requires signatures to be generated by human experts
  - So, not effective against "zero-day" attacks


State of the Art in Automated Detection and Our New Ideas

- Automated detection approaches:
  - Behavioural: analyse behaviours such as source and destination addresses, attachment type, statistical anomalies, etc.
  - Content-based: analyse the content of the malicious executable
    - Autograph (H. Ah-Kim, CMU): based on an automated signature generation process
    - N-gram analysis (Maloof, M.A. et al.): based on mining features and using machine learning

- Our approach
  - Content-based approaches consider only machine codes (byte codes)
  - Is it possible to consider higher-level source code for malicious code detection?
  - Yes: disassemble the binary executable and retrieve the assembly program
  - Extract important features from the assembly program
  - Combine with machine-code features


Feature Extraction and Hybrid Model

- Features
  - Binary n-gram features: sequences of n consecutive bytes of the binary executable
  - Assembly n-gram features: sequences of n consecutive assembly instructions
  - System API call features

- Method
  - Collect training samples of normal and malicious executables
  - Extract features
  - Train a classifier and build a model
  - Test the model against test samples


Hybrid Feature Retrieval (HFR): Training and Testing

[Figure: the HFR training and testing flow chart]


Feature Extraction

Binary n-gram features

- Features are extracted from the byte codes in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
- Example: given the 11-byte sequence 0123456789abcdef012345,
  - the 2-grams (2-byte sequences) are: 0123, 2345, 4567, 6789, 89ab, abcd, cdef, ef01, 0123, 2345
  - the 4-grams (4-byte sequences) are: 01234567, 23456789, 456789ab, ..., ef012345, and so on
- Problem: large dataset, too many features (millions!)
- Solution: use secondary memory and efficient data structures; apply feature selection
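The sliding-window extraction above can be sketched in a few lines (an illustrative sketch; the function name and the use of hex strings are ours, not part of the authors' toolchain):

```python
def byte_ngrams(data: bytes, n: int):
    """Return all n-byte sliding-window sequences as hex strings."""
    return [data[i:i + n].hex() for i in range(len(data) - n + 1)]

# The 11-byte example sequence from the slide
sample = bytes.fromhex("0123456789abcdef012345")

print(byte_ngrams(sample, 2))      # the 10 two-grams: ['0123', '2345', ..., '2345']
print(byte_ngrams(sample, 4)[0])   # '01234567'
```

Even for n = 2 there are 65,536 possible byte pairs, which is why the feature-selection step below is needed.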


Feature Extraction

Assembly n-gram features

- Features are extracted from the assembly programs in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
- Example: given the three instructions "push eax"; "mov eax, dword[0f34]"; "add ecx, eax"; the 2-grams are:
  (1) "push eax"; "mov eax, dword[0f34]"
  (2) "mov eax, dword[0f34]"; "add ecx, eax"
- Problem: same as for binary n-grams
- Solution: same as for binary n-grams
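The same windowing applied to instruction sequences reproduces the slide's example (a minimal sketch; instructions are plain strings here rather than parsed opcodes):

```python
def asm_ngrams(instructions, n):
    """Return all sequences of n consecutive assembly instructions."""
    return [tuple(instructions[i:i + n]) for i in range(len(instructions) - n + 1)]

instrs = ["push eax", "mov eax, dword[0f34]", "add ecx, eax"]
for gram in asm_ngrams(instrs, 2):
    print(gram)
# ('push eax', 'mov eax, dword[0f34]')
# ('mov eax, dword[0f34]', 'add ecx, eax')
```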


Feature Selection

- Select the best K features
- Selection criterion: information gain
- The gain of an attribute A on a collection of examples S is given by

  Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)
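The gain formula above can be computed directly (a self-contained sketch; the toy dataset and feature name are hypothetical):

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def info_gain(examples, labels, attribute):
    """Gain(S, A) = Entropy(S) - sum over v in Values(A) of (|S_v|/|S|) * Entropy(S_v)."""
    gain = entropy(labels)
    for v in {ex[attribute] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[attribute] == v]
        gain -= (len(subset) / len(examples)) * entropy(subset)
    return gain

# Hypothetical toy data: does the presence of one n-gram separate the classes?
X = [{"ngram_0123": 1}, {"ngram_0123": 1}, {"ngram_0123": 0}, {"ngram_0123": 0}]
y = ["malicious", "malicious", "benign", "benign"]
print(info_gain(X, y, "ngram_0123"))   # 1.0 -- the feature splits the classes perfectly
```

Selecting the best K features then amounts to ranking all candidate n-grams by this gain and keeping the top K.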


Experiments

- Dataset
  - Dataset1: 838 malicious and 597 benign executables
  - Dataset2: 1082 malicious and 1370 benign executables
  - Malicious code collected from VX Heavens (http://vx.netlux.org)

- Disassembly
  - Pedisassem (http://www.geocities.com/~sangcho/index.html)

- Training and testing
  - Support Vector Machine (SVM)
  - C-Support Vector Classifiers with an RBF kernel


Results

- HFS = Hybrid Feature Set
- BFS = Binary Feature Set
- AFS = Assembly Feature Set

[Charts comparing the performance of HFS, BFS, and AFS]


Data Mining for Buffer Overflow Introduction

- Goal
  - Intrusion detection, e.g. worm attacks and buffer overflow attacks

- Main contributions
  - 'Worm' code detection by data mining coupled with 'reverse engineering'
  - Buffer overflow detection by combining data mining with static analysis of assembly code


Buffer Overflow

- What is a 'buffer overflow'?
  - A situation in which a fixed-size buffer is overflowed by a larger input.

- How does it happen? Example:

    ........
    char buff[100];
    gets(buff);
    ........

[Diagram: the input string is written into buff in stack memory]


Problem with Buffer Overflow

    ........
    char buff[100];
    gets(buff);
    ........

[Diagram: the oversized input overruns buff on the stack, overwriting the return address; the new return address points to the memory location holding the attacker's code]


Handling Buffer Overflow

- Stopping buffer overflow
  - Preventive approaches
  - Detection approaches

- Preventive approaches
  - Finding bugs in source code. Problem: only works when source code is available.
  - Compiler extensions. Same problem.
  - OS/HW modification

- Detection approaches
  - Capturing the symptoms of running code. Problem: may require a long running time.
  - Automatically generating signatures of buffer overflow attacks.


CodeBlocker (Our approach with Penn State)

- Detection based on the observation that attack messages usually contain code while normal messages contain data

- Main idea: check whether a message contains code

- Problem to solve: distinguishing code from data

- Formulate the problem as a classification problem (code vs. data)
  - Collect a set of training examples containing both kinds of instances
  - Train the data with a machine learning algorithm to get a model
  - Test this model against new messages

- Enhances Penn State's earlier model, SigFree


CodeBlocker Model

[Figure: the CodeBlocker model]


Feature extraction

- Features are extracted using
  - n-gram analysis
  - control-flow analysis

- N-gram analysis (an n-gram here is a sequence of n instructions)
  - Traditional approach: the flow of control is ignored
  - Example (assembly program and its corresponding instruction-flow graph, IFG): the 2-grams are 02, 24, 46, ..., CE


Feature extraction (cont...)

- Control-flow-based n-gram analysis
  - Proposed approach: the flow of control is considered
  - Example (same assembly program and corresponding IFG): the 2-grams are 02, 24, 46, ..., CE, E6


Feature extraction (cont...)

- Control-flow analysis generates the features:
  - Invalid Memory Reference (IMR)
  - Undefined Register (UR)
  - Invalid Jump Target (IJT)

- Checking IMR
  - Memory is referenced using register addressing and the register value is undefined
  - e.g.: mov ax, [dx + 5]

- Checking UR
  - Check whether the register value is set properly

- Checking IJT
  - Check whether the jump target violates an instruction boundary
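The UR check can be sketched as a linear scan over simplified instruction strings (a rough illustration only; real detection works on fully disassembled code with complete register and control-flow tracking, and the mini-parser here is ours):

```python
# Flag registers that are used for memory addressing before being assigned.
REGISTERS = ("ax", "bx", "cx", "dx")

def undefined_register_uses(instructions):
    defined = set()
    flagged = []
    for ins in instructions:
        op, _, rest = ins.partition(" ")
        args = [a.strip() for a in rest.split(",")] if rest else []
        if op == "mov" and len(args) == 2:
            dest, src = args
            if src.startswith("["):                  # register-addressed memory reference
                for reg in REGISTERS:
                    if reg in src and reg not in defined:
                        flagged.append((ins, reg))   # the slide's IMR/UR situation
            defined.add(dest)
    return flagged

print(undefined_register_uses(["mov ax, [dx + 5]"]))               # dx used while undefined
print(undefined_register_uses(["mov dx, 5", "mov ax, [dx + 5]"]))  # [] -- dx was set first
```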


Putting it together

- Why n-gram analysis?
  - Intuition: in general, disassembled executables should have a different pattern of instruction usage than disassembled data.

- Why control-flow analysis?
  - Intuition: there should be no invalid memory references or invalid jump targets.

- Approach
  - Compute all possible n-grams
  - Select the best k of them
  - Compute a feature vector (binary vector) for each training example
  - Supply these vectors to the training algorithm
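The "binary vector" step in the approach above is simple to state in code (a minimal sketch; the selected n-grams are hypothetical):

```python
def feature_vector(example_ngrams, selected_ngrams):
    """1 if the selected n-gram occurs in the example, else 0."""
    present = set(example_ngrams)
    return [1 if g in present else 0 for g in selected_ngrams]

selected = ["0123", "89ab", "beef"]            # the best-k n-grams (illustrative)
print(feature_vector(["0123", "4567", "89ab"], selected))   # [1, 1, 0]
```

These fixed-length vectors are what the classifier (here, an SVM) is trained on.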


Experiments

- Dataset
  - Real traces of normal messages
  - Real attack messages
  - Polymorphic shellcodes

- Training and testing
  - Support Vector Machine (SVM)


Results

- CFBn: control-flow-based n-gram feature
- CFF: control-flow feature


Novelty, Advantages, Limitations, Future

- Novelty
  - We introduce the notion of control-flow-based n-grams
  - We combine control-flow analysis with data mining to distinguish code from data
  - Significant improvement over other methods (e.g. SigFree)

- Advantages
  - Fast testing
  - Signature-free operation
  - Low overhead
  - Robust against many obfuscations

- Limitations
  - Needs samples of attack and normal messages
  - May not be able to detect a completely new type of attack

- Future work
  - Find more features
  - Apply dynamic analysis techniques
  - Semantic analysis


Email Worm Detection using Data Mining

[Diagram: outgoing emails go through feature extraction; the training data feeds a machine learning algorithm that produces the classifier model, which labels test data as clean or infected]

Task: given some training instances of both "normal" and "viral" emails, induce a hypothesis to detect "viral" emails.

We used: Naïve Bayes and SVM.


Assumptions

- Features are based on outgoing emails.

- Different users have different "normal" behaviour, so analysis should be on a per-user basis.

- Two groups of features
  - Per email (# of attachments, HTML in body, text/binary attachments)
  - Per window (mean words in body, variable words in subject)

- A total of 24 features were identified

- Goal: identify "normal" and "viral" emails based on these features


Feature sets

- Per-email features
  - Binary-valued features: presence of HTML; script tags/attributes; embedded images; hyperlinks; presence of binary or text attachments; MIME types of file attachments
  - Continuous-valued features: number of attachments; number of words/characters in the subject and body

- Per-window features
  - Number of emails sent; number of unique email recipients; number of unique sender addresses; average number of words/characters per subject and body; average word length; variance in number of words/characters per subject and body; variance in word length
  - Ratio of emails with attachments
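A per-email slice of such a feature extractor might look like this (the feature names and the attachment-extension heuristic are illustrative, not the authors' exact 24-feature set):

```python
def email_features(email):
    """Map one outgoing email to a small per-email feature dictionary."""
    body = email.get("body", "")
    attachments = email.get("attachments", [])
    return {
        "has_html": int("<html" in body.lower()),
        "num_attachments": len(attachments),
        "has_binary_attachment": int(any(a.lower().endswith((".exe", ".scr", ".pif"))
                                         for a in attachments)),
        "num_body_words": len(body.split()),
        "num_subject_chars": len(email.get("subject", "")),
    }

msg = {"subject": "hi", "body": "see attachment", "attachments": ["setup.exe"]}
print(email_features(msg))
```

Per-window features would then be computed by aggregating these dictionaries (means and variances) over a sliding window of recent outgoing emails.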


Data set

- Collected from UC Berkeley.
  - Contains instances of both normal and viral emails.

- Six worm types:
  - bagle.f, bubbleboy, mydoom.m, mydoom.u, netsky.d, sobig.f

- Originally six sets of data:
  - Training instances: normal (400) + five worms (5 x 200)
  - Testing instances: normal (1200) + the sixth worm (200)

- Problem: not balanced, and no cross-validation reported

- Solution: rearrange the data and apply cross-validation


Our Implementation and Analysis

- Implementation
  - Naïve Bayes: assumes a normal distribution of numeric and real data; smoothing applied
  - SVM: one-class SVM with a radial basis function kernel, using gamma = 0.015 and nu = 0.1

- Analysis
  - NB alone performs better than the other techniques
  - SVM alone also performs well if its parameters are set correctly
  - The mydoom.m and VBS.Bubbleboy datasets are not sufficient (very low detection accuracy in all classifiers)
  - The feature-based approach seems to be useful only when we have
    - identified the relevant features
    - gathered enough training data
    - implemented classifiers with the best parameter settings


Directions

- Malware is evolving continuously
  - Example: RAMAL (Reactively Adaptive Malware)

- Solution: novel class detection

- Our tool: Stream-based Novel Class Detection (SNOD)

- Applying it to malware: SNODMAL


The Problem

- Signature-based antivirus protection is increasingly challenged
  - by polymorphic malware
  - by potential self-mutating malware* expected to emerge in the near future

- Antivirus software must adapt itself to the changing environment
  - For example, attackers' strategies change over time
  - Therefore, the characteristics of malware also change continuously

- Signatures must be generated automatically
  - to protect against polymorphic, self-mutating malware

- New types of attack should be detectable by the antivirus
  - to guard against zero-day attacks

* Kevin W. Hamlen, Vishwath Mohan, Mohammad M. Masud, Latifur Khan, Bhavani M. Thuraisingham. "Exploiting an antivirus interface." Computer Standards & Interfaces 31(6), pp. 1182-1189, 2009.


Our Approach (UTD/UIUC, patent pending)

- Data stream classification and novel class detection (SNOD)
  - Addresses the infinite-length, concept-drift, and feature-evolution problems
  - Automatically detects novel classes in the stream

- We are developing SNODMAL, a malware detector using SNOD

Table 1: Differences among malware detectors (X marks a missing capability)

Functionality                                                        Signature-based   Traditional data-mining-based   SNODMAL
Automated signature generation                                       X
Addresses zero-day attacks                                           X
Addresses polymorphism and metamorphism                              X
Addresses the evolution of malware and benign executables over time  X                 X
Designed to detect new kinds of attack                               X                 X


Ensemble Classification of Data Streams

- Divide the data stream into equal-sized chunks
  - Train a classifier from each data chunk
  - Keep an ensemble of the best L such classifiers
  - Example: for L = 3

[Diagram: labeled chunks D1, D2, D3 train classifiers C1, C2, C3, which form the ensemble; the ensemble predicts the unlabeled chunk D4; as chunks D4, D5 arrive labeled, new classifiers C4, C5 are trained and the ensemble is updated]

This addresses infinite length and concept-drift.

Note: Di may contain data points from different classes.
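The chunk-train-and-vote loop can be sketched as follows (a toy illustration: a nearest-centroid learner on 1-D points stands in for the real per-chunk classifier, and keeping the newest L models stands in for keeping the "best" L):

```python
from collections import Counter

def train_centroids(chunk):
    """Train one model from a labeled chunk: the per-class mean of 1-D feature values."""
    totals, counts = {}, Counter()
    for x, label in chunk:
        totals[label] = totals.get(label, 0.0) + x
        counts[label] += 1
    return {label: totals[label] / counts[label] for label in totals}

def predict(model, x):
    return min(model, key=lambda label: abs(model[label] - x))

def ensemble_predict(ensemble, x):
    """Majority vote over the L models in the ensemble."""
    return Counter(predict(m, x) for m in ensemble).most_common(1)[0][0]

L = 3
ensemble = []
labeled_chunks = [
    [(0.10, "benign"), (0.90, "malware")],
    [(0.20, "benign"), (0.80, "malware")],
    [(0.15, "benign"), (0.85, "malware")],
    [(0.25, "benign"), (0.95, "malware")],
]
for chunk in labeled_chunks:
    ensemble.append(train_centroids(chunk))
    ensemble = ensemble[-L:]       # retain only L models (adapts to concept-drift)

print(ensemble_predict(ensemble, 0.92))   # malware
print(ensemble_predict(ensemble, 0.05))   # benign
```

Replacing the oldest model as new chunks arrive is what lets the ensemble track a drifting stream with bounded memory.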


Architecture of the SNODMAL Framework

[Diagram: a stream of benign and malicious executables fills a temporary training buffer; after feature extraction and selection, a new model is trained and added to the ensemble of L models; an unknown executable undergoes feature extraction and is classified by the ensemble as malware, benign, or novel; the ensemble is updated over time]

Mohammad M. Masud, Jing Gao, Latifur Khan, Jiawei Han, and Bhavani Thuraisingham. "Integrating Novel Class Detection with Classification for Concept-Drifting Data Streams." In Proceedings of the 2009 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD '09), Bled, Slovenia, 7-11 September 2009, pp. 79-94 (extended version in IEEE Transactions on Knowledge and Data Engineering (TKDE)).


Usefulness of SNODMAL

- Capable of handling massive volumes of training data
  - Also handles concept-drift

- Capable of detecting novel classes (new types of malware)
  - Existing techniques may fail to detect a new type of malware
  - SNODMAL should be able to detect the new type as a "novel class"
  - SNODMAL will then quarantine the malware and raise an alarm
  - The quarantined binary would be analyzed by human experts
  - The classification model would be updated with the new malware

- Therefore, it reduces the damage caused by zero-day attacks

- Uses cloud computing for feature extraction
  - Makes it more applicable to large volumes of data and optimizes running time