
How Secure are Your Systems? Quantitative

and Qualitative Assessment of Digital Security

Ilir Gashi, Vladimir Stankovic, Cagatay Turkay

Slide 2

Presenters

• All three presenters are from the Department of Computer Science, School of

Mathematics, Computer Science and Engineering (MaCSE) at City University London

• Ilir Gashi

Centre for Software Reliability (CSR)

E: [email protected]

W: http://www.city.ac.uk/people/academics/ilir-gashi

• Vladimir Stankovic

Centre for Software Reliability (CSR),

E: [email protected]

W: http://www.city.ac.uk/people/academics/vladimir-stankovic

• Cagatay Turkay

giCentre

E: [email protected]

W: http://www.city.ac.uk/people/academics/cagatay-turkay

Slide 3

The plan for tonight

• Start between 6.15pm-6.30pm (depending on audience numbers and late

arrivals). Aim to finish by 8.30pm.

• Part I - First 30-45mins

– Introduction to the concepts of Information Security

• Part II - 30mins:

– 10 minute video: TED talk on Stuxnet

– 10 minute group discussion on the video

– 10 minute plenary discussion

• Part III - Final 30-45mins:

– Overview of research on information security at the CSR, City University

London

• Questions are invited throughout. We will also be available at the end for further discussion.

• The timings above (especially for parts II and III) may have to be revised

depending on the level of interaction

Slide 4

Part I

Slide 5

What is Information Security

• Various names:

– Information Security

– Information Assurance

– Cyber-security

• Protecting Information and Information Systems from misuse

– Misuse: Unauthorised access, use, disclosure,

disruption, modification, inspection, recording or

destruction

Slide 6

The CIA Triad

Slide 7

Key Security Concepts

• Confidentiality

– Preserving authorised restrictions on information access and

disclosure, including means for protecting personal privacy and

proprietary information

• Integrity

– Guarding against improper information modification or

destruction, including ensuring information nonrepudiation and

authenticity

• Availability

– Ensuring timely and reliable access to and use of information

Slide 8

Contextualising Information Security

• Information security draws upon the best practices and experiences from multiple domains

Slide 9

Some “truisms” about Information security

• There is No Such Thing as Absolute Security

– Given enough time, tools and skills, hackers can break through

any security measure

• Defense in Depth as Strategy

– There is no single magic bullet; a layered approach is always

necessary

• Security Through Obscurity is Not an Answer

• Security = Risk Management

– E.g. Spending more on securing an asset than the intrinsic value

of the asset is a waste of resources (e.g. buying a £1000 safe for

£200 worth of jewellery)
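
A minimal numeric sketch of this point; every figure below (theft probabilities, effect of the safe) is an assumption for illustration, not data from the talk:

```python
# Illustrative only: all figures below are assumptions, not data from the slides.
asset_value = 200          # the £200 piece of jewellery
control_cost = 1000        # the £1000 safe
p_loss_no_safe = 0.05      # assumed annual probability of losing the asset with no safe
p_loss_with_safe = 0.01    # assumed annual probability with the safe in place

expected_loss_no_safe = p_loss_no_safe * asset_value        # £10 per year
expected_loss_with_safe = p_loss_with_safe * asset_value    # £2 per year
annual_risk_reduction = expected_loss_no_safe - expected_loss_with_safe  # £8 per year

# A £1000 control that buys roughly £8/year of risk reduction costs far more
# than the risk it removes: risk management, not maximal security.
print(f"Annual risk reduction: £{annual_risk_reduction:.2f}, control cost: £{control_cost}")
```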

Slide 10

Some “truisms” about Information security (2)

• The Three Types of Security Control Are

Preventative, Detective and Responsive

– Good security will have a balance of all 3

• Complexity Is the Enemy of Security

– The more “moving parts”, the more likely it is that

something will break

• Fear, Uncertainty and Doubt (FUD) is not the right

way to justify security

– FUD needs to be replaced with a professional,

rational and balanced approach to security

Slide 11

Some “truisms” about Information security (3)

• Security is a Socio-technical problem

– People, Process, and Technology Are All Needed to

Adequately Secure a System or Facility

• Open Disclosure of Vulnerabilities Is Good for

Security!

– Users should know about defects in the products they

purchase, just as they have the right to know about

automobile recalls because of defects

– Bottom line: if you uncover an obvious problem, raise

your hand and let someone who can do something

about it know

Slide 12

Vulnerabilities, Threats, Intrusions, Attacks

• We will go through definitions, concepts, examples, trends and

further reading resources for information security:

– Vulnerabilities

– Threats

– Intrusions

– Attacks

Slide 13

Sources

• We start with some definitions which will help us better frame the

many concepts that can be found in the security literature

• We will stick with the definitions adopted in the EU MAFTIA project:

– http://research.cs.ncl.ac.uk/cabernet/www.laas.research.ec.org/maftia/

• The deliverable D21 of MAFTIA is recommended for further reading:

– http://research.cs.ncl.ac.uk/cabernet/www.laas.research.ec.org/maftia/deliverables/D21.pdf

Slide 14

Some definitions - MAFTIA

• Fault: adjudged or hypothesised cause of an error

• Error: that part of the system state that may lead to failure

• Failure: delivered service deviates from implementing the system function

• Attack: a malicious interaction fault, through which an attacker aims to

deliberately violate one or more security properties; an intrusion attempt.

• Vulnerability: a fault created during development of the system, or during

operation, that could be exploited to create an intrusion

• Intrusion: a malicious, externally-induced fault resulting from an attack that has

been successful in exploiting a vulnerability

Slide 15

Attack-Vulnerability-Intrusion-Error-Failure model (figure 8 of MAFTIA D21 deliverable)

Slide 16

Hierarchical causal chain of AVI

(figure 9 of MAFTIA D21 deliverable)

Slide 17

Summary of MAFTIA terminology

• Very useful terminology to conceptualise the security “threat” cycle

• However, reflecting the purpose of the MAFTIA project, the definitions tend to

come from an intrusion tolerance viewpoint

• Another good source for definitions of security concepts is the

800-series of NIST Special Publications:

– http://csrc.nist.gov/publications/PubsSPs.html

Slide 18

Security Concepts and Relationships

(chapter 6 of Stallings and Brown (2nd Edition))

Slide 19

Vulnerability lifecycle

• Another useful way of conceptualising the security threats is to look at the

lifecycle of a vulnerability

• A very good resource for this is the PhD thesis of Stefan Frei, from ETH

Zurich. A shorter version of this work is given in the following publication:

– http://weis09.infosecon.net/files/103/paper103.pdf

• Looking at the vulnerability lifecycle in this way gives us an insight into the

different types and stages of risks that a system may have

• It also gives us an insight into the economic aspects of vulnerability

disclosure and the different types of incentives that affect the decision

making of both attackers and defenders

Slide 20

Vulnerability lifecycle

Slide 21

Economics of a vulnerability lifecycle(figure 3.2 of Frei thesis)

Slide 22

Economics of security

• Earlier we touched upon an area of research that has received a lot of

attention in the last decade, namely the economics of computer security

• This research was motivated by the realisation that security system failures are

at times caused as much by wrong incentives as by technical

errors

• One of the pioneers of this research is Prof Ross Anderson and a good starting

point on further reading on this topic is from his webpage:

• http://www.cl.cam.ac.uk/~rja14/econsec.html

Slide 23

Types of threats

• The most publicised threats to security are:

• Malicious software – commonly referred to as malware

• Intruders – commonly referred to as hackers or crackers

• In the next few slides we will look into:

• Types and examples of malware;

• Types, examples and techniques used by intruders:

• We will also discuss the possible motives of the attackers, and the capabilities

they may need to launch these attacks successfully

Slide 24

Malware (malicious software)

NIST* defines malware as:

“a program that is inserted into a system, usually

covertly, with the intent of compromising the

confidentiality, integrity, or availability of the

victim’s data, applications, or operating system

or otherwise annoying or disrupting the victim.”

* http://csrc.nist.gov/publications/nistpubs/800-83/SP800-83.pdf

Slide 25

Malware: (some) categories and descriptions

Slide 26

Attacker motivations

• Financial

• Commonly criminals motivated by financial gain

• Political

• Nation state actors (defence or attack); activist/hacktivist groups (e.g. the

Anonymous group)

• Military / Strategy

• Nation state actors (defence or attack (Stuxnet, Flame))

• Psychological

• Nation state attackers seeking to cause confusion and doubt in a population, etc.

Slide 27

Types of attacks

• As mentioned previously, attacks will seek to exploit a vulnerability in a system

• They may use malware as a launching pad or for carrying out an exploit

• They may come from inside or outside the system

• They may be motivated by a variety of reasons

• Other examples of attacks (list is not exhaustive):

• Denial of service

• Distributed denial of service

• Buffer overflow attacks

• Etc.

Slide 28

Other types of threats

• Organisational:

• Wrong or incomplete operating procedures

• Wrong or incomplete policy requirements, design or implementation

• Human error in carrying out a possibly correct policy or correct procedures

• Threats from misconfiguration of protection tools

• Supply chain threats from use of “software of uncertain pedigree” (COTS, OTS etc)

• Many of these threats also apply to other environments such as “cloud” or “mobile”

though with possibly different probabilities of being exploited and possibly different

“risk ownership”

Slide 29

Where to find more information

• Many security vendors publish “threat reports” frequently:

• See example from Symantec or IBM:

• http://www.symantec.com/threatreport/

• http://www-03.ibm.com/security/xforce/threats.html

• One has to be cautious when interpreting the findings of these reports and look

carefully at the data collection methods. Caution is especially necessary in cases where

cost estimates are reported:

• See a recent report co-authored by Prof Ross Anderson on this topic:

• http://weis2012.econinfosec.org/papers/Anderson_WEIS2012.pdf

Slide 30

Where to find more information (2)

• Other, independent, sources of evidence on threats and vulnerabilities:

• http://cve.mitre.org/ (which also contains links to other endeavours within the

Mitre corporation for enumerating attacks, weaknesses, platforms etc.)

• http://nvd.nist.gov/

• The NIST vulnerability database

• http://www.us-cert.gov/

• The US Department of Homeland Security's Computer Emergency Readiness

Team (US-CERT)

• http://www.ukcert.org.uk/

• UK CERT (mainly a portal with links to other worldwide CERTs)

Slide 31

In conclusion

• What advice can we give to an information systems professional?

• Know your system and your environment

• Be clear about which threats are relevant to your environment and which are not:

this helps you make informed decisions on security spending

• Be careful of marketing hype and be clear about the data collection and analysis

methods behind threat reports – and whether the findings apply to your organisation

• Understand what the reports contain and how this should influence the decisions

you make on the security policy and security budget

• Be wary of risks in the supply chain and risks from use and deployment of different

environments (such as cloud and mobile)

Slide 32

References

• See slides for additional references

• Text-1 - Chapters 1 and 2

• Text-2 - Chapters 6, 7 and 8

• Check out TED talks from the series on hackers:

– http://www.ted.com/playlists/10/who_are_the_hackers.html

– These talks will give you a wider perspective of the types of attacks that are

being performed, by whom, and what the motivations may be

• Text-1: Information Security: Principles and Practices by Mark Merkow and James Breithaupt,

Prentice Hall, 2006

• Text-2: Computer Security: Principles and Practice by William Stallings and Lawrie Brown,

Pearson, 2nd Edition (International)

Slide 33

Slide 34

Part II

Slide 35

Activity

• (10 mins) We will have a look at the following TED talk on Stuxnet: http://www.ted.com/talks/ralph_langner_cracking_stuxnet_a_21st_century_cyberweapon.html

• (10 mins) There have been many articles published about Stuxnet

and its authorship has been attributed to many different sources

(though the common consensus now is that it was most likely the work

of US intelligence agencies). In groups, discuss who the attack sources

may be, and in particular for each discuss:

– What would be the motivation (financial, political, military/strategic, psychological)?

– What level of skill would be needed to develop such malware?

– What are the short and long term implications of Stuxnet? Who is more likely to

be at risk in the long term from its disclosure

(think about the targets of Stuxnet)?

• (10 mins) Plenary discussion

Slide 36

Part III

Slide 37

Acknowledgments

The work we will summarise next has been done with many colleagues at CSR:

Lorenzo Strigini, Eugenio Alberdi, Robin Bloomfield, Bev Littlewood, Peter Popov, Peter

Bishop, Andrey Povyakalo, Kizito Salako, Robert Stroud, David Wright, Katerina Netkachova,

Oleksandr Netkachov and several ex-colleagues and collaborators

Slide 38

Outline of presentation

– CSR and the experience we bring to security topics

– Some examples of research and applied work

Slide 39

Background

– Centre for Software Reliability (www.csr.city.ac.uk)

– Founded in 1983 to deal with problems surrounding the [un]reliability of

software

– Quickly expanded into a wider systems viewpoint, dependability/security of

socio-technical systems

– Distinctive features

– Emphasis on rigorous assessment (esp. probabilistic)

– Developed models for empirical assessment as well as for insight

– Reliability growth, very high reliability, fault tolerance

– Interdisciplinary approach

– Extensive work on redundancy, diversity, defence in depth (more on this

later in the presentation)

– Work with industry, e.g. long relationship with nuclear safety research;

collaborations with Adelard, a safety consultancy

– All this affects what we bring to security research

Slide 40

Some projects with security contents

www.csr.city.ac.uk/projects

• EU: PDCS (1989-1995) (Predictably Dependable Computer Systems)

• EPSRC: DIRC (2000-2006) (Interdisciplinary Research Collaboration in Dependability of

Computer-Based Systems)

• EU: ReSIST (2006-2008) (Resilience for Survivability in Information Society

Technologies (IST))

• EU: IRRIIS (2006-2009) (Integrated Risk Reduction of Information-Based Infrastructure

Systems)

• EPSRC: INDEED (2006-2010) (Interdisciplinary Design and Evaluation of Dependability)

• EU: AMBER (2008-2009) (Assessing, Measuring, and Benchmarking Resilience)

• PIA:FARA (2009 - 2010) (Probabilistic Interdependency Analysis: framework, data analysis

and on-line risk assessment)

• EU: AFTER (2012-2014) (A Framework for electrical power systems vulnerability

identification, defence and restoration)

• EU: SeSaMo (2012-2015) (Security and Safety Modelling)

• Leverhulme: UNcertainty and COnfidence in DEcision making

• EPSRC/CPNI/Research Institute in Trustworthy Industrial Control Systems: CEDRICS

(2014-2017)

• EPSRC: D3S (2015-2018) (Diversity and Defence in Depth for Security)

Slide 41

Some themes and topics

• The role of quantitative reasoning in security

– Necessary basis for decision making

• Security as a system issue

– computing as part of a whole with people and physical systems

– socio-technical systems: people’s behaviour in building, operating, working as part of systems

• Empirical study of threats

– e.g., data collection via Honeynets

• Fault tolerance, diversity, defence in depth

– theory and modelling for decision support

– empirical measurement, e.g. antivirus, biometrics

• Convergence of security, safety and reliability concerns

– embedded systems (SeSaMo, CEDRICS)

• Publications at: www.csr.city.ac.uk/security

Slide 42

Fault tolerance and diversity for security

- To improve reliability and safety, it is normal to

– Replicate systems and information

– Use diverse systems to achieve the same purpose

• Less likely to fail in same situation

– The approach is valuable in IT security as well

• Vulnerability of "monoculture" to "infection"

• But curiously controversial

• Need to match techniques to threats, make trade-offs explicit

– Our contributions include modelling, empirical assessment

• How to assess potential benefit / risk of multiple layers of defence

• Experimental measurement

• Further details on CSR’s work in this area: www.csr.city.ac.uk/diversity

Slide 43

Some examples of our research

- In what follows we will give you a few examples of CSR's work on

security (the list is not exhaustive):

- Diversity with AntiVirus products and Operating systems

- Interoperability of fingerprint biometric systems

- Resilience and security/dependability for critical infrastructures

- Security review of European Railway Traffic Management System

Slide 44

Empirical assessment of the effectiveness of

combining antivirus engines

Motivations

• Decision making in security – often driven by subjective anecdotal evidence and

marketing hype

• Objective empirical evidence is needed for sound decision making

– otherwise difficult to evaluate and objectively justify decisions

• A study on the potential benefits of using diversity with an important kind of off-the-

shelf software: AntiVirus products

– used and advertised in commercial products, but without evaluating effectiveness

• We have performed several studies in collaboration with our research partners at

the Eurecom Institute in France, Symantec Research and the University of Maryland.

• Here we describe the first study – conducted in 2008:

– 32 AntiVirus engines hosted by the VirusTotal site

• We concentrated on 1 component of AVs – signature-based engines

• Our goal: evaluation of diversity effectiveness, not ranking of AVs!

– with 1599 malware samples collected from a distributed honeypot platform

– over a six month period between February and August 2008

Slide 45

Experimental setup and architecture

[Architecture diagram: distributed sensors collect malware samples and forward them through a gateway to the Anubis analysis service]

Slide 46

A contour plot of the AV (x-axis) failure rates (z-axis, represented

by the intensity of the colour in the plot) over the malware (y-axis)

• Malware samples submitted to the AVs repeatedly (30+ days)

• Input to the analysis – triplet {Malware_i, AV_j, Day_k}
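
A minimal sketch (not the study's actual analysis code) of how observations of the form {Malware_i, AV_j, Day_k} could be organised and turned into per-AV failure rates; the record layout and the toy values are assumptions:

```python
from collections import defaultdict

# Each observation: (malware_id, av_id, day, detected) -- toy data in an assumed format.
observations = [
    ("mw-001", "AV-8",  "2008-02-01", True),
    ("mw-001", "AV-8",  "2008-02-02", False),   # a "regression": detected, then missed later
    ("mw-001", "AV-20", "2008-02-01", False),
    ("mw-001", "AV-20", "2008-02-02", True),
]

# Failure rate per AV = fraction of (malware, day) demands on which it failed to detect.
demands = defaultdict(int)
failures = defaultdict(int)
for malware, av, day, detected in observations:
    demands[av] += 1
    if not detected:
        failures[av] += 1

for av in sorted(demands):
    print(av, failures[av] / demands[av])
```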

Slide 47

Single AV Product Results - Regression

• A large number of AVs fluctuated

in their detection capability.

• In many cases, the AVs

regressed in their detection

decisions:

• Detected the malware at first, and

then failed to detect the malware

at a later date

• A few of the AVs in the Top 10

for their detection capability are

among the AVs that regressed

Slide 48

Regression example

• Two AVs (AV-20 and AV-8) detect the malware on most days but not

every day.

• Both of the AVs fail to detect the same malware on the same day

only once: on other days when one AV fails, the other does not.

• This would indicate gains from employing diverse AV engines

Slide 49

Main results of initial analysis

– Main findings from this data set:

• Considerable variability in detection capability

• None of the AVs on their own achieved 100% detection rate.

• Significant improvements in detection rates from using:

– 1-out-of-2 diverse pairs of engines

» 91 pairs, 18% “perfect” (zero failure rate)

– high levels of diversity (1oo3, 1oo4 etc)

» 163 pairs, ~33% better than the best single AV

– Further research performed and reported in:

• http://openaccess.city.ac.uk/1526/

• http://openaccess.city.ac.uk/2716/

• http://openaccess.city.ac.uk/2338/
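
A sketch of the 1-out-of-2 reasoning summarised above: a diverse pair of AVs is counted as failing on a demand only when both engines miss the same malware on the same day, and a "perfect" pair never fails on the observed demands. The data layout and toy values are assumptions, not the study's code or data:

```python
from itertools import combinations

# detections[av][(malware, day)] = True if that AV detected that malware on that day (toy data).
detections = {
    "AV-8":  {("mw-001", "d1"): True,  ("mw-001", "d2"): False},
    "AV-20": {("mw-001", "d1"): False, ("mw-001", "d2"): True},
    "AV-3":  {("mw-001", "d1"): False, ("mw-001", "d2"): False},
}

def pair_failure_rate(av_a, av_b):
    """A 1oo2 pair fails only when BOTH AVs miss the same malware on the same day."""
    demands = set(detections[av_a]) & set(detections[av_b])
    joint_failures = sum(
        1 for d in demands if not detections[av_a][d] and not detections[av_b][d]
    )
    return joint_failures / len(demands)

for a, b in combinations(detections, 2):
    rate = pair_failure_rate(a, b)
    print(f"{a}+{b}: " + ("perfect" if rate == 0 else f"failure rate {rate:.2f}"))
```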

Slide 50

Operating systems’ vulnerability diversity

• We have also looked at the extent to which operating systems are

diverse in the vulnerabilities they contain

• We used data from the NIST National Vulnerability Database to

empirically evaluate the vulnerability diversity

• We found a high degree of diversity amongst certain families of

operating systems

• This might have implications for the optimal choice of diverse

operating systems that administrators may wish to use (especially

when deciding on which OSs to use for web server or database

server deployments etc.)

• Details:

– http://openaccess.city.ac.uk/2127/
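
One simple way such diversity could be quantified from NVD-style records is to look, for a pair of operating systems, at the fraction of their vulnerabilities that affect both. A minimal sketch with an assumed, simplified record format (the actual method is described in the linked paper):

```python
# Assumed, simplified format: each CVE identifier maps to the set of OS products it affects.
cve_affects = {
    "CVE-2010-0001": {"os_a", "os_b"},   # shared vulnerability: bad for diversity
    "CVE-2010-0002": {"os_a"},
    "CVE-2010-0003": {"os_b"},
    "CVE-2010-0004": {"os_b"},
}

def shared_fraction(os_x, os_y):
    """Fraction of the pair's vulnerabilities affecting both OSs (lower means more diverse)."""
    either = [cves for cves in cve_affects.values() if os_x in cves or os_y in cves]
    both = [cves for cves in either if os_x in cves and os_y in cves]
    return len(both) / len(either)

print(shared_fraction("os_a", "os_b"))   # 0.25 with the toy data above
```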

Slide 51

Operating systems’ vulnerability diversity

Slide 52

Biometric fingerprint system interoperability

• In collaboration with colleagues from University of West Virginia, we

performed an analysis of a large-scale empirical study on the effects of

interoperability in biometric fingerprint systems.

• In our study, the fingerprint images from:

– 494 participants were collected with

– 4 different fingerprint image scanners, the quality of the images

assessed by

– 2 different image quality algorithms, and matching scores

calculated with

– 3 different matching algorithms

• The results were then further categorised by:

– Age, Gender, Height, Weight, Ethnicity etc.

• To allow us to make empirically-supported recommendations for

biometrics systems deployment

Slide 53

What are Biometrics?

• A means of authentication/identification of an individual based on

unique physical characteristics/traits.

• Static Biometrics

– Something that the individual IS

• Dynamic Biometrics

– Something that the individual DOES

• Passwords are something the individual knows.

Slide 54

Example Deployment

• Consider a Border Agency scenario that requires foreign nationals to

verify their identity before entering the country.

– Enrolment

• As part of the visa application process, the individual must

present their unique biometric information (i.e. fingerprint image).

• This biometric information is stored in the system as a ‘template’ for the individual.

– Verification

• Upon attempting to enter the country, the individual presents their visa as

identification.

• To verify this identification, the fingerprint image is recaptured.

– The feature set is compared with the template to obtain a match similarity

score.

• Based on a set score threshold, the individual is either allowed/denied access to

enter the country.

Slide 55

Biometric System Performance

• Performance is based on the overlap of imposter and genuine scores.

– False Accept Rate = Imposter Accepted

– False Reject Rate = Genuine Rejected

– Equal Error Rate where (FAR == FRR)

• How should the acceptance threshold be set?

– Depends on the operating environment.

• Types of operating environment.

– Low FAR – high security / military.

– Low EER – civilian applications.

– Low FRR – forensic applications, or where a false accept has low cost.
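
A minimal sketch of how FAR, FRR and an approximate EER can be computed from genuine and impostor similarity scores; the scores below are invented for illustration, not from any real matcher:

```python
import numpy as np

genuine = np.array([0.82, 0.91, 0.74, 0.88, 0.69])    # same-person comparison scores (toy)
impostor = np.array([0.31, 0.45, 0.72, 0.28, 0.60])   # different-person comparison scores (toy)

def far_frr(threshold):
    far = np.mean(impostor >= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine < threshold)     # genuine users wrongly rejected
    return far, frr

# Sweep thresholds; the EER is the operating point where FAR and FRR are (roughly) equal.
thresholds = np.linspace(0.0, 1.0, 101)
gaps = [abs(np.subtract(*far_frr(t))) for t in thresholds]
eer_threshold = thresholds[int(np.argmin(gaps))]
print("EER threshold:", eer_threshold, "FAR/FRR:", far_frr(eer_threshold))
```

Raising the threshold trades false accepts for false rejects, which is why the "right" setting depends on the operating environment listed above.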

Slide 56

Visualisation – how can it help a decision maker

• We explore how (some initial) data visualization techniques may help a decision maker with

– deployment of biometric systems which

– balance the trade-offs between cost and failure rates

– when combining the different components (e.g., fingerprint image capture devices, image quality algorithms and matchers)

– for a given environment.

• The dataset that the decision-maker would use in this scenario is multi-dimensional

– (e.g., different components (image quality, matchers, image capture devices) and multiple demographic aspects (age, gender, ethnicity, height, weight etc),

• Hence, a decision maker would benefit from data visualization techniques that allow for identification of trends in the dataset, anomalies and trade-offs in a fast and intuitive manner.

Slide 57

Three tasks to help us with visualisation design

• T1: Visualize the “distinctiveness” for each probe device, gallery

device and algorithm combination: the extent to which a

threshold value clearly separates the genuine scores from the

impostor scores.

• T2: Visualize the overall inter-device, intra-device matching

performance and matching algorithm performance.

• T3: Investigate dynamically through visualization the relation

between soft biometrics (age, gender, height, weight) and the

matching performance.

Slide 59

Slide 60

Short video of the visualisation tool

• Available from here:

– https://www.youtube.com/watch?v=CneKJwp2_n4

• Papers on biometrics:

– http://openaccess.city.ac.uk/3507/

– http://openaccess.city.ac.uk/4091/

Slide 61

Critical Infrastructure Interdependencies

• A key issue for achieving CI resilience and CI

protection

– risk of CI disturbances propagating across ‘dependency’ links

• Complex phenomena, not well understood

Slide 62

Modelling the Resilience and Interdependency of Critical Infrastructures

• PIA:FARA Toolkit Prototype

• PIA Designer – qualitative analysis of possible interdependencies

• PIA Run-time support – quantitative, probabilistic model of interacting CIs

– a simulator of the model behaviour in the presence of failures of the modelled entities for the

chosen model parameterisation

• Supporting various analyses

• e.g. which parameters are most critical (highest impact on likelihood of cascade

failures)

• Implications of short- to mid- term trends, e.g. growing fraction of renewable energy

• Short term predictions, e.g. risk of a service disruption in the next ΔT.

• Test case and scenario generation

• E.g. for training or demonstration purposes

– Recently, we expanded this work towards modelling of security aspects, e.g. modelling

adversaries' behaviour/attacks and their impact on the modelled system.

• Work developed in various projects:

• EU-funded: IRRIIS (EU FP6 - IP), AFTER (EU FP7 - STREP)

• National funding: PIA-FARA (TSB, EPSRC), Cetifs (CPNI, EPSRC, TSB)

– Further details: http://openaccess.city.ac.uk/3091/ , http://openaccess.city.ac.uk/4361/

http://openaccess.city.ac.uk/4363/
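
This is not the PIA:FARA model itself, but a minimal Monte Carlo sketch of the kind of short-term prediction such a simulator can support: given assumed disruption and cascade probabilities between two dependent infrastructures, estimate the probability that both are disrupted in the next interval. All probabilities are invented for illustration:

```python
import random

# Assumed parameters, for illustration only.
p_power_fails = 0.02     # probability the power CI is disrupted in the next interval
p_telecom_fails = 0.01   # independent disruption of the telecom CI
p_cascade = 0.40         # probability a power disruption propagates across the dependency link

def one_trial():
    power_down = random.random() < p_power_fails
    telecom_down = random.random() < p_telecom_fails
    if power_down and random.random() < p_cascade:   # the interdependency
        telecom_down = True
    return power_down, telecom_down

trials = 100_000
joint = sum(1 for _ in range(trials) if all(one_trial()))
print("Estimated P(both CIs disrupted in the next interval):", joint / trials)
```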

Slide 63

Example: Security analysis of ERTMS

– the European Railway Traffic Management System (ERTMS) is a new standard

intended to create a seamless European railway system

– we were asked for a study of its security implications

• Upgrade of the UK railway signalling infrastructure over the next 30 years –

new system will be in operation for at least 50 years.

• Centralised control implies risk of an electronic attack bringing down the

entire network.

• Safety has always been paramount, but there is now a need to consider the security

implications.

• Need to understand how secure the ERTMS/ETCS specifications are and

whether the industry implementations may compromise the security

requirements.

• Investigate whether the technology is secure from external interference,

identify vulnerabilities and provide advice to mitigate any security flaws

– Main results of our study are still restricted, but a high level paper with results is

here: http://openaccess.city.ac.uk/1522/

Slide 64

ERTMS system architecture

[System architecture diagram: trackside components (Balise, Control Centre, Interlocking, Train Detection, RBC, GSM-R network with MSC/BSC and BTS) and on-board components (European Vital Computer (EVC), Balise antenna, DMI, Odometry, GSM-R Radio), exchanging Controls and Indications, Track Occupancy, Movement Authority and Position Information]

Slide 65

Further information and resources

• InfoSec Europe is an annual industry event that takes place in London. This

year it will be in Kensington Olympia:

http://www.infosecurityeurope.com/

• We run two MSc courses on Information Security at City:

– MSc in CyberSecurity

• http://www.city.ac.uk/courses/postgraduate/cyber-security

– MSc in Management of Information Security and Risk (MISR)

• http://www.city.ac.uk/courses/postgraduate/management-

information-security-risk-msc

• All of the MISR modules can be taken individually as CPD modules

• We are also happy to discuss further any possibilities for research

collaborations. For example via Knowledge Transfer Partnerships:

– http://ktp.innovateuk.org/

Slide 66

D3S project – recruiting a research associate

• We are starting a new research project on defence in depth for

security:

– http://www.city.ac.uk/news/2015/march/researchers-at-citys-centre-for-software-reliability-are-the-recipients-of-a-563,890

• We will be recruiting a research associate on a 3-year full-time contract

• There may also be a possibility of studying towards a PhD in parallel to this

employment

• If you are interested, please get in touch:

[email protected]

Slide 67