Vulnerability management at the crossroads, part 2




Network Security, June 2008

Ivan Arce, Core Security Technologies

In the previous edition of Network Security, I explored the history of vulnerability management and concluded that conventional approaches to fighting attacks were destined for failure in the light of dynamic and aggressive threat evolution in the modern blackhat community. The bottom line is that vulnerability scanning tools are gradually losing the battle against an attacker community that is becoming faster and more adept at developing zero-day exploits. This second and final part of the article provides a brief description of how we might reinvent the vulnerability management process.

Let’s look at the three different strategies that the security industry is adopting these days.

Kill them before they grow

Arguably the best way to mitigate the effects of security vulnerabilities is to remediate them as early in their lifecycle as possible. This not only removes vulnerabilities as quickly as possible, minimising the window of potential exploitation, but also optimises security expenditure. It is a widely known and accepted software development axiom that it is less expensive to detect and fix bugs in the initial phases of the software development process than in the later stages.

Consequently, many vulnerability management practitioners have adopted the strategy of moving the process closer to the development stages, integrating it with the initial phases of the software development lifecycle (SDLC). This strategy seems a particularly good fit for




organisations that have well-established software development practices and teams (for example, software vendors) or those that have embarked on in-house development of web applications.

Not surprisingly, many security vendors that provide vulnerability assessment products have moved to aim their solutions at software development teams and their processes. Commercial web application vulnerability scanners are the most visible example of this trend in vulnerability management.

Although web application vulnerability scanners (WAVS) are relative newcomers to the vulnerability management space, the rapid growth of in-house development of web applications at many organisations and the increased focus on security by vendors that cater to software development organisations quickly moved startup companies in this segment into the spotlight during 2007. Last year, both IBM and Hewlett Packard acquired two of the leading web application security companies – Watchfire and SPI Dynamics, respectively – to incorporate their products into their offerings.

A plethora of software security vendors have also adopted the same stance from the opposite end. Software development, testing and quality assurance tools with an increased focus on security are now part of the vulnerability management arsenal, including automated source code analysis, runtime instrumentation and fault injection tools that integrate with defect tracking systems. Finally, a multitude of security-oriented features have been incorporated into development tools (such as the source code compilers, linkers, development frameworks, IDEs, and testing and quality assurance programs provided by Microsoft) and are widely used by software developers around the globe.
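To make the idea of SDLC-integrated tooling concrete, here is a deliberately minimal sketch of the kind of automated source-code analysis mentioned above. It is a toy checker, not any vendor's actual engine: it walks a Python module's syntax tree and flags calls on an illustrative deny-list.

```python
# Toy illustration of SDLC-integrated source-code analysis: walk a Python
# module's AST and flag calls that commonly indicate injection risk.
# The deny-list is illustrative only, not a real product's rule set.
import ast

RISKY_CALLS = {"eval", "exec"}  # illustrative deny-list

def find_risky_calls(source: str):
    """Return (line, name) pairs for calls to functions on the deny-list."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # → [(1, 'eval')]
```

A real static analyser tracks data flow across many languages; this sketch only shows the shape of the technique: parse, inspect, and report findings in a form a defect tracking system could ingest.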

This approach may very well be the most effective way for many organisations to reinvent vulnerability management and achieve a more efficient process that will show the best results in the medium to long term. However, at least two important issues must be addressed for this to happen.

Focusing the vulnerability management process on the initial stages of the software development lifecycle requires substantial investments in security training and security awareness by a traditionally averse stakeholder: the software development organisation. Additionally, changing software development processes to incorporate security best practices may be time-consuming and cumbersome, at the cost of losing the flexibility needed to cope with a rapidly evolving threat landscape.

Increasing vulnerability management efforts in operational areas cannot be avoided anyway. Thousands of vulnerable applications are already deployed and running in production environments. In-house development teams may actively maintain many such applications. Their source code may not be available to implement fixes, and operational requirements may prevent deployment of bug fixes and patches.

One security policy to rule them all

The emergence of new types of vulnerabilities and attack trends is forcing network vulnerability scanners to evolve beyond the basic design principles used to guide their original roadmaps 10 years ago. Today, it is increasingly difficult to test for vulnerabilities accurately and unintrusively in applications such as web browsers, email readers, media players, file viewers, office productivity software suites, and multi-tier web applications and their respective back ends. In addition, new potential attack targets such as web applications, portable devices, wireless networks and virtualisation technologies require security testing strategies that can’t be implemented easily by either active or passive network vulnerability assessment tools.

Compliance with security regulations such as PCI, SOX and HIPAA also triggers more demanding requirements for vulnerability management vendors, and simply scanning for known vulnerabilities isn’t enough to meet them.

Thus, a second strategy for reinventing vulnerability management, followed by some practitioners and security vendors, revolves around embracing solutions that check for and verify security policy compliance on servers and endpoint systems. By integrating vulnerability assessment and patch management systems with endpoint and network security policy and asset management solutions, the vulnerability management process retains its operational focus while gaining a more comprehensive security testing scope and a tighter coupling with the regulatory requirements and business needs of organisations.
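As a rough illustration of the compliance-checking solutions described here – the policy fields, thresholds and service names below are invented for the example, not taken from any product – an endpoint check reduces to comparing gathered host facts against a declarative policy:

```python
# Minimal sketch of endpoint policy-compliance checking: evaluate facts
# gathered from a host against a declarative policy and list violations.
# All field names and policy values here are hypothetical.
REQUIRED_MIN_PATCH = 42          # hypothetical minimum patch level
FORBIDDEN_SERVICES = {"telnet"}  # hypothetical deny-list of services

def check_compliance(host_facts: dict) -> list:
    """Return a list of human-readable policy violations for one host."""
    violations = []
    if host_facts.get("patch_level", 0) < REQUIRED_MIN_PATCH:
        violations.append("patch level below required minimum")
    running = set(host_facts.get("services", []))
    for svc in sorted(running & FORBIDDEN_SERVICES):
        violations.append(f"forbidden service running: {svc}")
    if not host_facts.get("av_enabled", False):
        violations.append("antivirus disabled")
    return violations

host = {"patch_level": 40, "services": ["ssh", "telnet"], "av_enabled": True}
print(check_compliance(host))
# → ['patch level below required minimum', 'forbidden service running: telnet']
```

The point of the sketch is the coupling the text describes: the same per-host report can drive both remediation (patching) and policy enforcement (quarantining a non-compliant endpoint).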

New (or slightly modified existing) vulnerability management solutions that integrate with Cisco’s NAC and Microsoft’s NAP initiatives have appeared since early 2001. They require the deployment of agents, or credentialed (or otherwise authorised) access to the managed systems. These solutions are mostly based on the same scan-and-patch principles from over a decade ago, but have been extended to accommodate existing technology and fulfil more comprehensive security requirements.

From a technology perspective, this approach is hardly an innovative attempt to reinvent the vulnerability management process. To begin with, it does not address the current issues and design flaws of vulnerability management. Yet by coupling it with ‘softer’ security compliance requirements, security practitioners can quickly gain alignment with business needs, and a higher degree of visibility that could help them in the catch-up game against the adversary on the offensive side. The premise is that, over time, all assets at risk will end up covered by a unified security policy, access control and identity management umbrella that will blanket the entire organisation with an integrated and comprehensive security solution run by either IT/network operations or the information security office team.

In theory, for large organisations, this may work out in the long term and provide a robust framework for information security throughout the corporate network. Unfortunately, the approach suffers from a few drawbacks that impose severe restrictions on what can be effectively achieved.

In principle, it perpetuates the quest for complete coverage of vulnerability assessment and remediation throughout the organisation. The ultimate goal of this approach is a comprehensive, enterprise-wide integrated security solution – a ‘security ERP’ of sorts – that would cover all the weaknesses and keep the ship watertight and afloat. Organisations with heterogeneous networks and diverse systems are less likely to succeed with this approach. More importantly, it leaves little margin for failure.

Under an overarching security policy management framework, it is not easy to adopt emerging technologies that may still be immature in terms of security, even though they are highly desirable for improving business processes. Custom web applications, mobile devices, software-as-a-service (SaaS) offerings and service-oriented architecture (SOA) security are examples of technologies that lack well-defined security policy management solutions with integrated access control frameworks.

Go hack yourself!

The third approach is to change the vulnerability management process to cope better with the new attack vectors and trends that became prevalent in the second half of the past decade. By assessing actual threats and attack paths and deploying ad hoc mitigation measures, rather than merely focusing on vulnerabilities and patches, organisations can incorporate effective risk management practices that cope with today’s rapidly evolving threats, and provide tangible (albeit partial) results that show incremental security improvements and a more measurable return on security investment over time.

This strategy borrows its foundations from the penetration testing and threat management practices born in the early 1970s in the context of military-grade security assurance and combines them with the same ideas from Farmer and Venema that facilitated the growth of a substantial portion of the information security industry during the 1990s.

The rationale is that you can improve your security posture by systematically attacking your own systems to identify real attack paths in your organisation, assess the impact of those attacks, and deploy the most appropriate and cost-effective countermeasures to stop or contain attacks, as opposed to just trying to fix an arbitrarily prioritised long list of uncorrelated vulnerabilities.
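The attack-path idea can be sketched in a few lines. Assuming a hypothetical network modelled as a ‘can reach/compromise’ graph (the hosts and edges below are invented for illustration), enumerating the simple paths from an attacker’s entry point to a critical asset shows exactly which chains of weaknesses matter:

```python
# Sketch of the attack-path view: model hosts as a directed graph of
# "attacker on A can compromise B" edges and enumerate simple paths
# from an entry point to a critical asset. Hosts/edges are hypothetical.
def attack_paths(graph: dict, src: str, dst: str, path=None):
    """Yield every simple path from src to dst in a reachability graph."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:  # simple paths only: no revisiting hosts
            yield from attack_paths(graph, nxt, dst, path)

network = {
    "internet": ["webserver"],
    "webserver": ["appserver", "workstation"],
    "workstation": ["appserver"],
    "appserver": ["database"],
}
for p in attack_paths(network, "internet", "database"):
    print(" -> ".join(p))
# internet -> webserver -> appserver -> database
# internet -> webserver -> workstation -> appserver -> database
```

Each printed path is a candidate attack to assess and break, rather than an undifferentiated list of findings to patch.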

In the past few years, this strategy has been adopted by an increasing number of organisations worldwide and a handful of security vendors that provide penetration testing and risk management software and services [i]. This approach has several advantages over the other two strategies, and a few disadvantages.


By focusing on real and present threats, the organisation’s security assurance or risk management function can understand and demonstrate more clearly what the potential impact of a successful attack is. This, in turn, facilitates alignment with business processes and the rational deployment of cost-effective risk mitigation mechanisms. It is no longer necessary to attempt to fix the complete universe of vulnerabilities in some arbitrary order; instead, it is enough to deploy countermeasures that prevent or contain a number of very specific attacks. By implementing a consistent and repeatable methodology for assessing specific (but real) threats, deploying corresponding countermeasures and auditing their effectiveness, the organisation can incrementally improve its security posture after each iteration of the process in a very pragmatic manner.

This also brings the perspective of potential attackers into the security equation. Organisations adopting this strategy will look not only at what needs to be defended and why, but will also seek to understand which attacks and attackers they should be protecting themselves against. Moving from a vulnerability-centric viewpoint to this attack- or threat-centric perspective of vulnerability management provides a more realistic and pragmatic scenario for improving security in today’s rapidly changing landscape.

The deployment of patches for an enumeration of vulnerabilities with a corresponding risk metric does not necessarily correlate with a plausible attack scenario that the organisation must prevent from happening. It may be much more effective and less expensive to implement very precise countermeasures (patches or otherwise) that will effectively stop or contain specific attacks at a faster pace.
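One way to make ‘very precise countermeasures’ operational is a greedy selection over the attack paths actually observed. The sketch below uses invented paths and countermeasure names, and is one plausible heuristic rather than a prescribed algorithm: at each step, pick the fix that blocks the most remaining paths.

```python
# Sketch of countermeasure selection as greedy set cover: instead of
# patching a long CVSS-ordered list, choose the smallest set of fixes
# that blocks all observed attack paths. All data here is hypothetical.
def choose_countermeasures(paths, blocks):
    """blocks maps countermeasure name -> set of path indices it stops."""
    remaining = set(range(len(paths)))
    chosen = []
    while remaining:
        best = max(blocks, key=lambda c: len(blocks[c] & remaining))
        if not blocks[best] & remaining:
            break  # nothing left can block the remaining paths
        chosen.append(best)
        remaining -= blocks[best]
    return chosen

paths = ["net->web->db", "net->web->ws->db", "net->vpn->db"]
blocks = {
    "patch webserver": {0, 1},          # blocks both web-borne paths
    "segment workstation VLAN": {1},
    "harden vpn gateway": {2},
}
print(choose_countermeasures(paths, blocks))
# → ['patch webserver', 'harden vpn gateway']
```

Two targeted actions cover all three paths here; a severity-ordered patch list might have spent the same effort without severing any complete path.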

Likewise, today’s attackers do not limit themselves to a number of pre-defined vulnerabilities, target technologies or attack vectors. Instead, they combine multiple techniques, publicly known and unknown vulnerabilities, security configuration flaws and lax policies to build attacks that leverage the organisation’s weaknesses in the most cost-effective manner. The financially motivated adversaries of today may not devise the most ingenious and technically elegant attacks, but will certainly find the one that is most effective and requires the least investment to carry out. Attackers have been quick to align their technologies and techniques with their business processes. Organisations that seek to match emerging threats and the increasing sophistication of attacks should understand, and perhaps adopt, a similar viewpoint.

A more comprehensive view of an organisation’s security posture – one that includes qualification of potential attackers and real attacks, and the resulting impact on the organisation – aligns vulnerability management and business risk management practices. It may prove to be a much more agile and flexible way of coping with the increased sophistication of attacks, the dynamic evolution of the threat landscape, and the rapid emergence and adoption of new technologies (and corresponding security issues) that organisations seek to deploy at a faster pace than ever before.

On the other hand, there are a handful of drawbacks associated with this ‘hack-yourself’ strategy. The iterative and incremental security improvement it proposes should be backed by solid risk management and information security foundations in the organisation. Incremental improvements aren’t possible if the process ends up as a random walk driven by self-inflicted attacks or real security incidents. Security awareness, incident response and forensics, and sound secure software development and IT security operations practices are complementary components of this strategy.

Finally, adopting this approach requires a substantial shift in the way information security strategy is thought of within an organisation. Here are some of the implicit decisions associated with adopting it.

The organisation will employ offensive technologies – not just defensive ones – to improve its security posture. The organisation is willing to test its own defences in a systematic manner using some form of realistic modelling of attacks and attackers (either real or simulated attacks). The implications of potential service disruption or self-compromise of internal systems should be understood and accepted as a necessary evil of this strategy.

The organisation will explicitly abandon the search for perfect security and 100% vulnerability assessment and patch coverage. The decision to attack itself in order to identify the specific attacks that must be mitigated is an implicit acknowledgement that an organisation can’t fix every possible security vulnerability, and a further admission that it has decided to focus on the specific weaknesses that are currently known to be exploitable in an attack.
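That triage decision can be expressed in a few lines – a sketch with hypothetical vulnerability IDs, not a real exploit feed: split the finding list by membership in a known-exploited set rather than working through it in arbitrary severity order.

```python
# Sketch of exploitability-driven triage: keep only the vulnerabilities
# currently known to be usable in an attack; defer the rest.
# The IDs and the known-exploited set are hypothetical.
def triage(findings, known_exploited):
    """Split findings into (fix_now, defer) by known exploitability."""
    fix_now = [f for f in findings if f in known_exploited]
    defer = [f for f in findings if f not in known_exploited]
    return fix_now, defer

findings = ["VULN-001", "VULN-007", "VULN-042", "VULN-105"]
known_exploited = {"VULN-007", "VULN-105"}
fix_now, defer = triage(findings, known_exploited)
print(fix_now)  # → ['VULN-007', 'VULN-105']
```

The deferred items are not ignored; they are simply acknowledged as lower priority than weaknesses with a demonstrated attack path.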

Operational security is still a necessary component of the information security strategy. Although addressing vulnerabilities early in the development or deployment stages is preferable in order to minimise the costs of mitigation, security assurance processes must also consider the operational conditions and security incident response capabilities of the organisation.

The summer is ended and we are not saved

In just over a decade, the information security landscape changed substantially, and vulnerability management products and services became widely adopted throughout the information security practices of organisations worldwide. As information security practitioners increasingly relied on vulnerability scanners and patch management systems to address the mounting number of vulnerabilities in their networks, attackers improved at combining multiple techniques and technologies to identify and exploit the weaknesses that would yield the best returns with the least possible investment. Traditional vulnerability management has failed to cope with the new breed of financially motivated attackers, and with the explosion in the number of vulnerabilities disclosed every year.

As a result, the vulnerability management process is at a crossroads. Either it reinvents itself to provide cost-effective ways to improve security for organisations, or it becomes an increasingly meaningless part of the security process that can’t address the modern threat landscape.

In this article, I’ve presented three current strategies that many organisations are exploring to improve their security. Many security vendors, industry analysts and information security experts advocate the implementation of each of these strategies as the perfect means to achieve the elusive goal of perfect security. However, the smartest organisations understand that effective security can only be achieved through the combination of imperfect means to obtain the best possible results in alignment with the business.

Figuring out which combination of approaches best fits the reader’s organisation is a necessary exercise, because today’s attackers have no confusion about their aims. They also have a clear understanding of the imperfection of their means, which gives them a decisive strategic advantage over their adversaries in the security game.

About the author

Ivan Arce, who is a co-founder of Core Security Technologies, sets the technical direction for the company and is responsible for overseeing the development, testing and deployment of all Core products. Arce also writes for numerous technical publications, speaks frequently at industry events and is commonly quoted in industry publications. He currently serves as Associate Editor of IEEE Security & Privacy magazine. He is a member of the Association for Computing Machinery (ACM) and the IEEE Computer Society, and a former project advisor to the Open Web Application Testing Project.

Prior to co-founding Core, Arce served as vice president of research and development at VirtualFon, a computer telephony-integration company, where he was responsible for the development, testing and deployment of mission-critical computer telephony applications. Previously, Arce spent eight years as an information security consultant and software developer for banks, government agencies, and financial and telecommunications corporations.

Footnotes

i. Full disclosure: the author works for one such vendor. Although security services vendors have followed this strategy for many years, or even decades, Core Security Technologies pioneered this area with the introduction of commercial automated penetration testing software (CORE IMPACT) in 2002, making sophisticated penetration testing and risk management practices available to a wider range of organisations. For more information, visit www.coresecurity.com.
