Software Security Growth Modeling:
Examining Vulnerabilities with Reliability Growth Models
Andy Ozment
Computer Security Group
Computer Laboratory
University of Cambridge
First Workshop on Quality of Protection
Milan, Italy
September 15, 2005
Andy Ozment, University of Cambridge 2
Overview
• Reasons to measure software security
• Security growth modeling: using reliability growth models on a carefully collected data set
• Data collection process
• Data characterization challenges: failure vs. flaw
• The problem of normalization
• Results of the analysis
• Future directions
Motivation
• Reduce the ‘market for lemons’ effect
– Information asymmetry in the market results in universally lower quality
• Security return on investment (ROI)
– E.g. ROI for Microsoft after its 2002 efforts
• Evaluate different software development methodologies
• Metrics needed for risk measurement and insurance
• We need a means of measuring software security
– Ideal measure: $ € £ ¥
– Goal: both absolute & relative measure
Security Growth Modeling
• Utilize software reliability growth modeling to consider security
• Problems
– Data collection for faults is easier and more institutionalized
– Hackers like abnormal data
– Normalizing time data for effort, skill, etc.
• Previous work
– Eric Rescorla: “Is finding security holes a good idea?”
– Andy Ozment: “The Likelihood of Vulnerability Rediscovery and the Social Utility of Vulnerability Hunting”
– 5th Workshop on Economics & Information Security (WEIS 2005)
Data Collection
• OpenBSD 2.2, released December 1997
• Vulnerabilities obtained from ICAT, Bugtraq, OSVDB, and ISS
• Search through source code
– Identify ‘death date,’ when the vulnerability was fixed
– Identify ‘birth date,’ when the vulnerability was first written
• Group vulnerabilities according to the version in which they were introduced
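The collection process above might be represented as records like the following. This is only an illustrative sketch: the identifiers, dates, and version strings are made up, not real OpenBSD advisories.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class Vuln:
    vid: str        # identifier from ICAT, Bugtraq, OSVDB, or ISS
    birth: date     # 'birth date': when the flawed code was first written
    death: date     # 'death date': when the fix was committed
    version: str    # release that introduced the flaw

def group_by_version(vulns):
    """Bucket vulnerabilities by the release that introduced them."""
    groups = defaultdict(list)
    for v in vulns:
        groups[v.version].append(v)
    return dict(groups)

# Hypothetical records for illustration only.
vulns = [
    Vuln("VULN-1", date(1997, 10, 1), date(1999, 1, 5), "2.2"),
    Vuln("VULN-2", date(1998, 6, 2), date(2000, 3, 9), "2.3"),
    Vuln("VULN-3", date(1997, 11, 7), date(1998, 8, 20), "2.2"),
]
by_version = group_by_version(vulns)
print({k: len(vs) for k, vs in by_version.items()})  # {'2.2': 2, '2.3': 1}
```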
Data Characterization Problems
• Inclusion
– Localizations
– Specific hardware
– Default install not vulnerable
– Broad definition of vulnerability
• Uniqueness
– Bundle patch from a third party
– Simultaneous discovery of multiple related flaws
• Decided to try two perspectives
– Failure: bundles & related flaws were consolidated
– Flaw: bundles & related flaws were broken down into individual vulnerabilities
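The two perspectives amount to counting the same records at different granularities. A toy sketch, using hypothetical advisory counts rather than the real data set:

```python
# Hypothetical advisories: each may bundle several related flaws.
advisories = [
    {"id": "A1", "flaws": 3},  # one bundle patch fixing 3 related flaws
    {"id": "A2", "flaws": 1},
    {"id": "A3", "flaws": 2},
]

# Failure perspective: bundles and related flaws are consolidated,
# so each advisory counts once.
failures = len(advisories)

# Flaw perspective: each constituent flaw counts individually.
flaws = sum(a["flaws"] for a in advisories)

print(failures, flaws)  # 3 6
```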
Data Normalization
• Normalize time data for effort, skill, holidays, etc.
• Not possible with this data
• This analysis of non-normalized data: ‘real-world security’
– Small business owner
– Concerned with automated exploits
• An analysis of normalized data: ‘true security’
– Necessary for ROI, assessing development practices, etc.
– Of concern to governments & high-value targets that may be the subject of custom attacks
Applying the Models
• Used the SMERFS reliability modeling tool to test 7 models
• Analyzed both failure- and flaw-perspective data sets
– Failure data points: 68
– Flaw data points: 79
• Models were tested for predictive accuracy
– Bias (u-plots)
– Trend (y-plots)
– Noise
• No models were successful for the flaw-perspective data
• Three models were successful for the failure-perspective data
• Most accurate successful model: Musa’s Logarithmic
– Purification level (% of total vulns that have been found): 58.4%
– After 54 months, the MTTF is 42.5 days
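Musa's logarithmic model (the Musa-Okumoto logarithmic Poisson model) has closed-form mean-value and intensity functions. A minimal sketch, with hypothetical parameter values rather than the ones actually fitted to the OpenBSD data:

```python
import math

def mu(t, lam0, theta):
    """Expected cumulative discoveries by time t under the Musa-Okumoto
    logarithmic Poisson model: mu(t) = ln(lam0*theta*t + 1) / theta."""
    return math.log(lam0 * theta * t + 1.0) / theta

def intensity(t, lam0, theta):
    """Discovery intensity at time t: lam(t) = lam0 / (lam0*theta*t + 1).
    This decays toward zero but never reaches it, so the model assumes an
    unbounded (log-growing) number of eventual discoveries."""
    return lam0 / (lam0 * theta * t + 1.0)

# Hypothetical per-day parameters, NOT the values fitted in the talk.
lam0, theta = 0.06, 0.02
t = 54 * 30  # roughly 54 months, in days
print(mu(t, lam0, theta))         # expected discoveries to date
print(intensity(t, lam0, theta))  # current discovery rate per day
```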
Future Research
• Normalize the data for relative numbers
• Examine the return on investment for a particular situation
• Utilize more sophisticated modeling techniques
– E.g. recalibrating models
• Combine vulnerability analysis with traditional software metrics
• Compare this program with another
Conclusion
• Software engineers need a means of measuring software security
• Security growth modeling provides a useful measure
• However, the data collection process is time-consuming
• Furthermore, characterizing the data is difficult
• Nonetheless, the results shown here are encouraging
• More work is needed!
Questions?
Andy Ozment
Computer Security Group
Computer Laboratory
University of Cambridge
Number of vulnerabilities identified per year:

Perspective            1998  1999  2000  2001  ½ of 2002  Total
Treated as failures      19    17    17    13          2     68
Treated as flaws         24    18    22    13          2     79
Successful applicability results for models applied to the failure-perspective data (model rank in parentheses):

Statistic               Musa’s Logarithmic  Geometric   Littlewood/Verrall Linear
Prequential Likelihood  148.35 (1)          150.23 (2)  150.50 (3)
Bias (u-plot)           0.12 (1)            0.13 (2)    0.18 (3)
Noise                   0.31 (1)            2.39 (2)    2.44 (3)
Trend (y-plot)          0.20 (3)            0.18 (2)    0.14 (1)
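The bias statistic above is, in the standard u-plot method, the Kolmogorov distance between the empirical distribution of the transformed one-step-ahead predictions u_i and the uniform distribution they should follow when predictions are unbiased. A sketch of that summary statistic (the exact SMERFS computation may differ):

```python
def u_plot_bias(us):
    """Kolmogorov distance between the empirical CDF of the u_i
    (each observation transformed through its predicted CDF) and
    the uniform CDF -- the 'bias' summary of a u-plot."""
    us = sorted(us)
    n = len(us)
    return max(
        max((i + 1) / n - u, u - i / n)
        for i, u in enumerate(us)
    )

# Well-calibrated predictions sit near the uniform line -> small bias:
print(u_plot_bias([0.1, 0.3, 0.5, 0.7, 0.9]))  # 0.1
# Predictions piled near 1 reveal systematic optimism -> large bias:
print(u_plot_bias([0.9, 0.92, 0.95]))
```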
Estimates Made by Successful Models
Statistic                   Musa’s Logarithmic  Geometric  Littlewood/Verrall Linear
Initial Intensity Function  0.059               0.062      0.066
Current Intensity Function  0.031               0.030      0.030
Purification Level          0.584               0.505      N/A
Current MTTF (days)         42.5                33.1       33.8