
ISeCure: The ISC Int'l Journal of Information Security

    January 2011, Volume 3, Number 1 (pp. 3–27)

    http://www.isecure-journal.org

    Invited Paper

    Computer Security in the Future

Matt Bishop, Department of Computer Science, University of California at Davis, 1 Shields Ave., Davis, CA 95616-8562, USA

ARTICLE INFO

    Article history:

    Received: 2 December 2010

    Revised: 11 January 2011

    Accepted: 12 January 2011

    Published Online: 16 January 2011

    Keywords:

Computer Security, Computer Systems, Networking Infrastructure, Internet

ABSTRACT

Until recently, computer security was an obscure discipline that seemed to have little relevance to everyday life. With the rapid growth of the Internet, e-commerce, and the widespread use of computers, computer security touches almost all aspects of daily life and all parts of society. Even those who do not use computers have information about them stored on computers. This paper reviews some aspects of the past and current state of computer security, and speculates about what the future of the field will bring.

© 2011 ISC. All rights reserved.

    1 Introduction

For many years, computer security was an orphan of computer science. It did not fit readily into any single discipline, because it cut across the realms of theory, systems development, software engineering, programming languages, networking, and other disciplines. Further, it was considered a strictly applied matter, with little theory that was useful in practice.

Perhaps this attitude originated from the nature of "computer security" before computer use became widespread. Securing computers involved controlling physical access to the systems, because users were generally trusted. As networks began to connect systems, the user community was still trusted, so network protocols were designed to provide robustness by handling failures and to provide only very basic security services. But the rapid growth of networking, combined with the increasing availability of computers, changed this environment. Now, people from different institutions, with different criteria for access, had the ability to connect to systems throughout the network.

    Email address: [email protected] (M. Bishop).

ISSN: 2008-2045 © 2011 ISC. All rights reserved.

Indeed, the first RFC addressing security was RFC 602, which recommended taking precautions against unauthorized remote access (by choosing passwords that are difficult to guess and not posting remote access telephone numbers), and noted that there was a "lingering affection for the challenge of breaking someone's system . . . despite the fact that everyone knows that it's easy to break systems" [1]. As the number of systems connected to the networks grew, the number of institutions housing those computers grew, and the number of interconnected networks grew, so did the security problems.

In the mid-1980s, the consequences of neglecting security became clear. Computer viruses, first described by Fred Cohen [2], proliferated. The Internet worm of 1988 [3] disrupted the Internet by overloading systems within hours, and very quickly re-infecting those from which it was purged. Studies showed that it had spread rapidly through the network, infecting several thousand systems. 1 Other worms, such as the Decnet worm in 1988, attacked specific networks (in this case, the NASA SPAN network).

1 The actual number of systems infected is not known. A good statistical estimate is approximately 2600 systems [4].


The number of users and systems connected to the networks grew dramatically as connecting networks became simpler, and the development of web browsers and servers dramatically accelerated this process. The resulting interconnected global networks became the Internet, and average people—untrained in any realm of computer science or computer use—began to use it for everyday chores such as paying bills and banking. Similarly, organizations began to place more and more material online. This enabled people to correlate information to draw (sometimes incorrect) conclusions. As an example, employers often do web searches on prospective employees, and in some cases have declined to hire them based on the information they find [5, 6].

Thus, security problems arising from the correlation of information grew as the interconnectivity and user population of the Internet grew. Perhaps a more pernicious threat arose from the lack of security of the systems connected to the Internet, and indeed the security weaknesses within the Internet infrastructure and protocols themselves. Botnets exploit the inability of governments, commercial and non-commercial organizations, and home users to secure their systems. Computer worms, viruses, and other forms of malware attack systems through vulnerabilities in their software and configuration. Phishing, spearphishing, and other forms of social engineering trick people into bypassing controls or into taking actions that open their systems to attack. Thus, in general, neither the Internet nor individual systems are secure.

This state of affairs has several consequences, among them the following:

• People use the Internet as a resource, but have no way to determine the accuracy of the information. For example, Wikipedia is an online encyclopedia written by contributors. The consequence of this is best demonstrated by the "Seigenthaler incident," in which someone posted a blatantly false (and libelous) biography of the well-respected newsman John Seigenthaler [7].

• When people provide services with sensitive data, that data is usually stored on systems connected to the Internet. If those systems are compromised, the sensitive data can enable the attacker to access the original provider's account, possibly accounts on other servers, and even impersonate the original provider of the data, enabling identity theft.

• It is unnecessary to compromise the service provider's systems to obtain the data mentioned earlier. Compromising the user's system and installing malware such as key loggers and memory monitors enables the attacker to obtain the data before it ever leaves the victim's system.

• Worse, the Internet infrastructure itself can be compromised. In 1997, an organization used a DNS cache poisoning attack to route traffic to certain top-level domains through an alternate domain name registry [8]. The Border Gateway Protocol, the Internet's inter-domain routing protocol, has many known security problems, but effective solutions have yet to be deployed [9].

• When different institutions, with different security policies, share data over the Internet, the policies that one organization uses to protect its data may not apply when the data is resident on the other organization's systems. For example, "hate speech," which is protected in the United States, is illegal in France. If an international corporation stores data in the United States that constitutes "hate speech" in France, can a French court order that the data be removed from the United States servers? [10]

• Desktop and home computers come with security settings that seem appropriate to the vendor. Further, patches distributed from the vendor may change security settings without the user's consent. This can cause unexpected security problems. For example, Microsoft's Service Pack 2 for Windows XP "locked down" Windows XP systems by activating the host-based firewall to block various network ports. Many of these ports are used by popular games. The effect was to make these games unplayable [11].

The practice and theory of security will need to evolve as technology evolves. Indeed, the requirements that define security change over time. Privacy, for example, is a relatively new concept in history, and its definition varies from place to place and over generations. Those who grow up in a world where people tweet their thoughts and feelings have a very different view of privacy than those who grew up before the World Wide Web. This paper examines how these changes may be reflected in the practice and study of security.

The next section examines changes to computer systems in the recent past, and suggests what may happen in the near future. We then look at the Internet and computing infrastructure to the same end. Finally, we conclude with some thoughts about societal changes that may occur, or that are occurring, and suggest how those will affect our view of, and practice of, security.

    2 Changes in Systems

In the past few years, numerous constraints and events have affected computer systems. How they are used, and in what environments, dictates what security considerations affect their design, implementation, configuration, and use. This section explores several areas: standards; compliance with standards and other requirements; the increasing connections among systems and the convergence of different media and types of systems such as cellular telephones, personal digital assistants, laptops, and other equipment; and the aggregation of data, which has been exacerbated by the great increase in connectivity—which promises to increase even more in the future. We also look at what these changes imply for the average user who is not knowledgeable about computers. We begin with standards.

    2.1 Standards and Compliance

Standards describe requirements against which a system is to be compared. Thus, standards describe some aspects of the system: its required functionality, the level of assurance required, details of the implementation, or some other aspect that affects the development or use of the system. The nature of standards in computer security has evolved greatly. In particular, standards have become more specialized, applying to different types of systems (such as firewalls, general-purpose systems, and cryptographic software and specialized hardware) and different environments (such as business, education, military, and civilian government).

One of the earliest, and certainly most influential, standards was the U.S. Department of Defense Trusted Computer System Evaluation Criteria [12] (known as the TCSEC or, more colloquially, the "Orange Book" 2 ). This standard defined 7 levels of systems, ranging from A1 (formally verified design, rigorous implementation) to D (for systems that did not meet the criteria of any other levels). The classes combined specific functional requirements with evidence of assurance, with both the nature and number of functional requirements and the strength of the assurance evidence growing throughout the categories. The process of certification took considerable time, because the analysts examined design documentation and source code as well as the system itself, and the system was certified as a whole. Thus, any change to any part of the system required the system to be recertified. The Ratings Maintenance Program (RAMP) later allowed the vendor to gather much of the assurance evidence for new versions of a certified system under certain specific conditions, rather than having to undergo the full certification process.

The TCSEC influenced future standards in the field. In 1991, the European Union adopted a similar set of standards, called the Information Technology Security Evaluation Criteria (ITSEC) [13], which differed from the TCSEC in several ways. The ITSEC levels included assessing the security measures protecting the development environment; the TCSEC had no such requirement. Further, the ITSEC required the system documentation to be analyzed to determine how the system could be misused; the TCSEC did not require this. Certified, licensed evaluation facilities evaluated systems under the ITSEC for a fee, whereas for the TCSEC, the U.S. government performed the evaluation without fee to the vendor. Perhaps most interesting was that the vendor stated the functional requirements of the system, whereas the TCSEC stated the requirements that the vendor had to meet. Thus, the ITSEC provided 7 levels of assurance (from not meeting other levels to formal methods and a partial mapping of executable code to source code) for whatever functional requirements the vendor supplied.

2 The cover of the TCSEC was orange.

This separation of functionality from assurance was a key step to the next set of standards, called the Common Criteria (CC) [14–16]. The CC adopted the idea of separating functional requirements from assurance requirements. Protection profiles (PP) embody functional requirements for specific purposes; for example, different protection profiles exist for client VPN applications, cryptographic modules, operating systems, and so forth. The PPs are composed of security functional requirements; the CC defines 11 classes of these. Orthogonal to the PPs are the Evaluation Assurance Levels (EALs), of which there are seven. Each is also composed of specific assurance requirements, selected from 10 classes. The lowest level, EAL1, applies to systems for which no serious security threats exist, but which require some (minimal) assurance of correct operation. The highest, EAL7, applies when the target of the evaluation is to be used in very high-risk environments, and requires substantial security engineering. Like the ITSEC, the CC is international, with each member country controlling the evaluation process; for example, in the United States, the National Institute of Standards and Technology accredits commercial laboratories to perform the evaluation. Interestingly, a nation is under no obligation to recognize another's evaluations. In practice, many countries do have such agreements in place.

Another type of standard focuses on systems designed for a specific task. Two good examples of this are the FIPS 140-2 standard used by the United States and Canada, and the Voluntary Voting System Guidelines developed by the U.S. Election Assistance Commission.

FIPS 140-2 [17] describes requirements for cryptographic modules. The lowest level simply requires the use of a FIPS-approved algorithm; it is intended for general-purpose computers. The highest level requires the use of a protected cryptographic module that is tamperproof and immune to compromise by environmental changes, such as fluctuations in voltage. Certification laboratories in both the United States and Canada do the evaluations. It is a well-regarded standard, with an effective validation process.

Voting systems in the United States depend heavily on computers, and the Voluntary Voting System Guidelines (VVSG) [18], promulgated by the U.S. Election Assistance Commission, provide requirements that such systems should meet. They are "voluntary" because states (which run elections in the United States) may use systems not certified to meet those requirements. In practice, those states that do not require certification to the VVSG have their own (often much stronger) standards. The standards have been criticized as not being based on clearly articulated threat and process models, and as a result many of the requirements appear arbitrary [20]. Further, systems certified under these standards have been compromised in several studies [21–26]. New standards are currently in draft form.

Efforts to require systems to meet specific security and assurance standards have had mixed success. Requiring the use of such systems for specific tasks (such as voting) does work, but raises questions about the effectiveness of such certification. In some cases, the effectiveness is apparent. In others, the lack of effectiveness is equally apparent. Clear standards, with a firm basis in the problem being solved and the threats which the system and environment face, and that provide realistic remediation, are key.

Standards will continue to be developed and refined as the environments in which computers are used, and the technology itself, change. Extrapolating from the past, the groups providing the certification will include commercial firms certified by the management body associated with the standard. Efforts to standardize the testing certifiers will grow in importance, as will the quality and methodology of the testing.

The key to these standards will be realism and applicability to the area or system for which they are developed.

A key element in any testing is compliance: how do the testing labs show that they implement the testing methodology correctly whenever a test is performed? Do they use a checklist, or some other method? This raises the general issue of compliance.

Compliance is a demonstration that a system, procedures, or environment meet a stated set of conditions. An example of a compliance tool is a checklist identifying specific properties that a computer system must enforce (such as "no passwords of less than 8 characters"), and evidence of compliance would entail going through the checklist to be sure the properties hold.
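To make the checklist idea concrete, the sketch below encodes a few such properties and evaluates a system description against them. It is not taken from any particular compliance tool; the property names, thresholds, and the facts dictionary are invented for illustration, and a real audit would gather the facts from the system itself.

```python
# Hypothetical checklist-based compliance check. Property names, thresholds,
# and the example facts are invented; a real audit would collect facts from
# password databases, firewall state, key logs, and so on.

CHECKLIST = {
    "min_password_length_8": lambda facts: facts["shortest_password"] >= 8,
    "host_firewall_enabled": lambda facts: facts["firewall_enabled"],
    "server_room_keys_accounted_for": lambda facts: facts["missing_keys"] == 0,
}

def check_compliance(facts):
    """Return (property, passed) pairs for every item on the checklist."""
    return [(name, bool(test(facts))) for name, test in CHECKLIST.items()]

system_facts = {"shortest_password": 6, "firewall_enabled": True, "missing_keys": 0}
for prop, ok in check_compliance(system_facts):
    print(f"{prop}: {'PASS' if ok else 'FAIL'}")
```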

Two techniques are used to demonstrate compliance: paperwork and examination. Some institutions and rules require the use of both methods to validate compliance with policies, procedures, rules, regulations, or laws.

When auditors examine a system, site, or artifact ("system" for convenience) to determine whether it complies with standards or regulations, perhaps the most common approach is to use a checklist that enumerates what the system's characteristics are to be. Does the system require a passphrase with entropy above a certain value? When an external client tries to connect using a high-numbered port, is it blocked? Are keys to the room with the supercomputer numbered and accounted for? Users and system administrators are interviewed, and procurement paperwork and written policies and procedures checked, to ensure the items on the checklist are satisfied. Note that the interviewer might not check the answers for accuracy, accepting instead that the interviewees all gave accurate answers.

Examination is a second approach rapidly gaining in popularity. It requires the auditors to study documents and requirements, as in the first technique, but then to go further and ensure that the material in those documents is accurate, and that the system does in fact meet the stated requirements. This type of testing may involve requirements tracing through the design to the implementation of the system, executing commands on the system, looking at configuration files, and observing the actual execution of procedures. It allows the auditors direct contact with the system rather than contact filtered through those who implement, maintain, and use the system.

Penetration testing is one of the methods used to examine the system [27–29]. It can be expensive, as expertise is relatively rare, but it is also very effective and can uncover problems that other methods do not find. In this form of testing, the auditors assume the role of attackers, and attempt to compromise the system. The tests are conducted in such a way that they do not interfere with production use of the system. A specific system may be designated as the one the attackers are to use, and this system is then treated as a production system. 3 This enables the auditors to test the system in the environment in which it is used, and to evaluate the system and the operational policies and procedures as practiced. An alternative is to audit the system alone; this is common when the production system cannot be analyzed while in use (for example, electronic voting systems cannot be attacked when being used for an election, as that could corrupt the results of the election). In this case, the analysts typically state under what conditions the system will fail to comply with the regulations and standards, and then determine whether those conditions are met in practice.

3 The staff may not be told that a test is under way, so they will not be more careful than usual to follow procedures for the system.

A simple way to ensure compliance seems to be to mandate a particular configuration and set of procedures that have been approved as meeting the relevant standards, rules, and regulations. In environments where the local system administrators can change the system configuration, compliance checking is still necessary. However, starting from a configuration known to comply with rules and regulations allows a quick compliance check: just compare the systems and eliminate the local data. Another approach is to deny the local system administrators the power to change those parts of the system relevant to compliance by assigning roles and privileges appropriately.
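A quick check of this kind might, under the assumption that an approved baseline exists, look like the following sketch, which compares a deployed system's settings against the baseline while ignoring keys designated as local data. The setting names are illustrative only.

```python
# Sketch of comparing a deployed configuration against an approved baseline.
# Setting names and the notion of "local" keys are invented for illustration.

APPROVED_BASELINE = {"ssh_root_login": "no", "firewall": "on", "auto_updates": "on"}
LOCAL_KEYS = {"hostname", "timezone"}   # local data, excluded from the comparison

def compliance_deviations(deployed):
    """Return baseline settings whose deployed values differ (or are missing)."""
    relevant = {k: v for k, v in deployed.items() if k not in LOCAL_KEYS}
    return {k: (relevant.get(k), expected)
            for k, expected in APPROVED_BASELINE.items()
            if relevant.get(k) != expected}

print(compliance_deviations(
    {"ssh_root_login": "yes", "firewall": "on", "auto_updates": "on",
     "hostname": "desk-42", "timezone": "UTC"}))
# -> {'ssh_root_login': ('yes', 'no')}
```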

In the future, compliance testing and measurement will shift from paper-based evaluation to examination. For mission-critical systems, penetration testing will be a key component of the compliance evaluation. This is already occurring, for example, in many states in the United States for electronic voting systems, which are key to accurate and valid elections [21, 23, 26, 30], and in other organizations. This is a recognition that attackers may find ways to compromise systems not covered by the checklists.

Organizations are also creating standardized distributions so that configuration and updating is under central administrative control. This ensures that local system administrators cannot accidentally misconfigure systems, causing problems. It also ensures that the central administrators can deal with problems quickly, and provide expertise to help local sites with any problems that do arise. Also, the central administrative control can test patches from the vendor or new software to determine whether those would interfere with the organization's mission. This is critical for organizations like financial firms, where unexpected down time can cost millions of dollars, and military or emergency response units, where quick response is vital to the success of the organization.

In the future, vendors may assume much of the burden of standardizing configurations. An organization may either supply the configuration, or ask the vendor to create one. Indeed, something similar will undoubtedly happen for home and small business computers.

Currently, some vendors distribute patches to their systems automatically. Others require users (administrators) to request the patch be downloaded and installed. As computers for home and small business environments evolve (see Section 2.5 for one possible evolution), vendors will create policies that these systems implement. Then they can test their patches before distribution to determine the effects of installing the patch, and ensure the results are consistent with the desired standards. This will avoid problems like the Windows XP Service Pack 2 issues described above.

Standards, and compliance with those standards, are necessary to connect different systems to the same network. For the Internet, of course, this standard is the TCP/IP suite. We now turn to connectivity to examine the state of the art, and how it may evolve.

    2.2 Connectivity and Convergence

A basic rule of computer security is that those who cannot access your resources cannot compromise them. In the early days of computing, the security perimeter was small: the users and administrators, and possibly people who knew a telephone number that they could call to connect to the computer. 4 As connectivity increased, so did the number of people with access to the system. Note here that "access" does not mean "authorized user." Those who can simply connect to the system can reach the security perimeter. They can then try to break through the protections at that perimeter.

Further, the very definition of "security perimeter" changed as technology evolved. The introduction of virtual private networks (VPNs) extended the security perimeter beyond that part of the site under the physical control of the administration. Now, employees could take their portable computing devices (such as laptops) to geographically distant places, connect to their home site using a VPN, and thus the laptop moves behind the security perimeter. This means that a device (the laptop) can sometimes be inside the perimeter, and sometimes outside. When it is outside, the administration does not control the protections for the device, and those that are active may be inconsistent [31]. For example, the site may enforce with filters a policy forbidding users to browse web sites known to infect systems with malware. But if the laptop user does so when the laptop is not connected to the site, the filters will not be applied and the laptop may become infected. When the user then connects to the site with the VPN, an infected system is now behind the perimeter and the malware may compromise the site.

4 As the number of people who knew the phone numbers grew, so did the perimeter; see [1] for an early warning about this.

This illustrates a security problem growing in magnitude as connectivity increases. The security policy implemented on the laptop conflicts with the security policy of the site to which it connects via the VPN. In this case, the laptop's policy—more precisely, the policy resulting from the configuration of the laptop—allows connections that the site policy forbids. Handling this requires detecting non-compliant systems within the perimeter, which many sites can do. Far more complex is when differing sites with differing security policies interoperate.

Consider a military organization, whose policies emphasize confidentiality. There, soldiers who post information about their location (even if only indirectly) reveal information that could endanger their fellow soldiers, themselves, and their mission. Social networking organizations like Facebook and Twitter exist to disseminate information. The two have conflicting policies. Thus, either or both must change their policy, or take into account the effects of the other's policy. Failing to do so, or having soldiers who ignore the policy conflicts, can interfere with military actions [32]. In practice, many military organizations will allow access but restrict the information that their members can post. For example, the U.S. military allows access to social networks, but gives commanders the option of blocking them if necessary to protect a mission [33].

Conflicts can arise in more subtle ways, especially when public access is an ancillary part of the policy, rather than the primary purpose. The U.S. courts allow a company to request a filing be sealed when it contains a trade secret; if the judge agrees, the document is not available to the public. 5 Unless such a request is made, the filing is available to the public. But the purpose of the courts is litigation; making filings available to the public is an ancillary effect of how U.S. law operates. In 2001, the DVD Copyright Control Association filed suit to block publication of a program that would decipher the contents of a DVD, enabling anyone to copy and play the movie. To demonstrate the code in question would work, they filed their implementation of the algorithm. One day later, they realized they had not asked the court to seal the filing, and did so—after the court had posted the declaration to its web site, and the document had been copied to several other Internet web sites, including one from which it had been downloaded over 21,000 times! [34]

5 It is of course available to others involved in the litigation.

The DVD escapade demonstrates another aspect of increasing connectivity, namely the widespread dissemination of data. In the past, data was essentially localized, and would be disseminated through letters, publications, and (if important enough) through news media. Now, a simple posting to a web page makes the data available to anyone in the world with a web browser. In some cases, this is advantageous to the posters, as when repressive regimes take actions that the posters wish to publicize. In other cases, it is disadvantageous, particularly when the information is embarrassing, incorrect, or libelous.

    This suggests three trends for the future.

The first is an increasing interest in the composition of security policies. This problem, first studied in detail by McCullough [35], presents deep theoretical questions involving restrictiveness [36]. But the bulk of the effort will be in the practice of policy composition, and will examine the use of procedural as well as technological controls.

This leads to the question of the actual policy as opposed to the implemented policy. That the two differ is widely known; the question is how to determine the implemented policy, and then express it in a way that is useful for compositional analysis. One method analyzes configurations; current methods focus on firewall rule sets [37, 38]. A second method analyzes log files to see what queries (or processes) are executed [39]. Future work on policy discovery, which will extend beyond firewalls to include systems and sites, must consider not just the actual configurations of the computers and infrastructure systems, but also the actual procedures (as opposed to the ones written down).
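As a toy illustration of configuration-based policy discovery, the sketch below reduces a small firewall rule list to the implemented policy of which ports an external source can reach. The rule format and first-match semantics are assumptions made for the sketch; real tools parse vendor-specific rule sets and handle address ranges properly.

```python
# Toy policy discovery: derive the implemented external-access policy from an
# ordered, first-match-wins firewall rule list. The rule format is invented.

RULES = [
    {"action": "allow", "src": "any",        "dport": 443},
    {"action": "allow", "src": "10.0.0.0/8", "dport": 22},   # internal-only rule
    {"action": "deny",  "src": "any",        "dport": "any"},
]

def externally_reachable(rules, candidate_ports=(22, 80, 443, 8080)):
    """For each candidate port, apply the rules as an external source would hit them."""
    def allowed(port):
        for rule in rules:
            if rule["src"] == "any" and rule["dport"] in (port, "any"):
                return rule["action"] == "allow"   # first matching rule decides
        return False                               # implicit default deny
    return {p for p in candidate_ports if allowed(p)}

print(externally_reachable(RULES))   # -> {443}: the policy actually implemented
```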

The second trend draws upon the interconnection of societal infrastructure with the Internet. As power, water, and other distribution network controllers connect with the Internet, the vulnerabilities of those controllers expose to remote attack the distribution mechanisms for basic needs [40]. But the ability to administer these distribution grids remotely is also critical, so balancing the two is emerging as a central theme. The controllers and protocols need to be made more secure, but in such a way that upgrading or replacing existing controllers does not disrupt the distribution. Both the question of what "security" means in this context, and how to make the changes with minimal disruption, raise issues of security and security management.

The third is the increased flow of information alluded to earlier. Insiders, or people trusted with access to information critical to the operation of an organization, can use the increased connectivity to send information to competitors [31, 41]. This could harm the organization financially, through loss of revenue. It could also embarrass the organization or its members, or hinder the work of the organization. The greater the connectivity, the more exposure this information has, and the more people it reaches. Determining where information flows, who has had access to it, and in some cases how it left the organization, is an area of both theoretical and applied research that will grow in importance.

The increase in connectivity makes convergence, or the provision of multiple services over the same network, attractive because the infrastructure needed for all those services is the same. Currently, many cell phone manufacturers enable their phones to use either the cellular telephone network or a TCP/IP-based network. Then when the cell phone moves into an area lacking cellular coverage but having wireless coverage, the cell phone shifts into "voice over IP" (VoIP) mode and uses the wireless network instead of the cellular network. Similarly, mechanisms like Google Voice enable someone to provide a single telephone number to everyone, and arrange that calls to that number be forwarded to whatever device (office phone, home phone, cell phone, or computer) is closest without the caller being told.

As data flows are switched to various devices and networks, the originator and sender of the data has no idea over which networks, or through which devices, the data flows. The sender cannot rely on anything along the path; thus, link protection mechanisms are useless here. End-to-end security mechanisms seem appropriate, provided the receiving system is trusted. But with true convergence, the sender may not know the relevant properties of the final (receiving) system—not even whether the (human) recipient can trust the end system! Thus, something beyond end-to-end mechanisms is needed to ensure a rogue receiving system cannot interfere with the presentation of the message. Whether such a mechanism can exist is an open question.
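One common building block for end-to-end protection that does not depend on the intervening networks is a message authentication code computed with a key shared only by the endpoints. The minimal sketch below assumes the two endpoints already share a key out of band; it shows tampering along any path being detected, but it does not address the harder problem raised above of an untrustworthy receiving system.

```python
# Minimal end-to-end integrity sketch using an HMAC over the message.
# Assumes sender and receiver share SECRET_KEY out of band; key distribution
# and confidentiality are outside the scope of this sketch.
import hashlib
import hmac

SECRET_KEY = b"shared-only-by-the-endpoints"

def protect(message: bytes) -> tuple[bytes, bytes]:
    """Sender side: attach a MAC that intermediate hops cannot forge."""
    tag = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes) -> bool:
    """Receiver side: reject the message if it was altered in transit."""
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"meet at 0900")
print(verify(msg, tag))                 # True: message arrived unmodified
print(verify(b"meet at 0930", tag))     # False: modification detected
```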

Many of the other security issues are similar to those that increased connectivity raises. Convergence moves data and instructions over devices and networks that are available to people who may not have access to the original communications medium, and therefore changes the risk assessment. The use of other communications media means that the data may pass through organizations with security policies incompatible with the original media. For example, in some places, the rules for monitoring wireless communications are different than those for monitoring wired communications, because monitoring wired communications requires a physical tap into the wire (which may require the wiretappers to enter a house or building), whereas a passive radio can monitor wireless communications. Also, the organizations controlling the devices through which the messages are routed can have their own rules for managing traffic. Unless the sender is aware of the rules for all organizations whose communication media the messages may transit (and possibly go to), the sender may find the traffic interfered with or monitored in unexpected ways and places. Again, this is a problem with composition of security policies. But in this case, the policies associated with two messages sent from the same source to the same destination may vary wildly.

    2.3 Data Aggregation

Data aggregation is the assembling and correlating of information to draw inferences about something or someone. Marketers use this to determine shopping patterns of people in a geographical area. Medical epidemiologists use this to examine the spread of diseases. Law enforcement authorities aggregate reports of crime to compile statistics as well as identify patterns that may lead them to the perpetrators. Companies such as Amazon and Netflix aggregate browsing and purchase data to suggest movies, books, and other products that people might want to purchase. Finally, the sale of information to credit bureaus and other financial institutions enables them to aggregate information to assess the creditworthiness and financial stability of the subjects of the data. So data aggregation has become a mainstay of our world. With its benefits, though, come problems.

In 1974, the U.S. American Civil Liberties Union (ACLU) surveyed the use of computers. After identifying and discussing several systems called ALERT, CLEAN, CONNECT, GIPSY, LEAPS, MULES, MUMPS, and ORACLE, each of which allowed users to manipulate information about people in a narrow domain, the report states [42, p. 162]:

The great worry for citizens is the ability of all these machines to get together. If MULES gets MUMPS and GIPSY LEAPS to the ALERT and CONNECTS with CLEAN ORACLE, we are doomed.

In 1974, networking was in its infancy, and communication between organizations relied on the physical transportation of some medium (such as pen and paper or magnetic tapes). Thus, aggregating information about an individual took time, and required the aggregator to know whom to contact. Figuring out where to look was also a time-consuming task.

Widespread networking of systems, and in particular the Internet, changed all this. With search engines such as Google and Bing, and the ubiquity of networking, obtaining information about individuals is much simpler than before. As noted in the Introduction, many employers do this on a small scale when considering whom to hire. On a much larger scale, one can build a fairly complete picture of people from data not only in social networks but also in government repositories, web pages, and pages of social and political organizations (especially in societies where political donations are required to be disclosed).

When an "adversary" finds information about a "victim" and assembles it, the adversary can draw certain inferences about that victim. In some cases, these inferences are correct. In others, they are not. The saga of the New York Times' investigation of the AOL data release illustrates both points.

On August 3, 2006, AOL posted 21,011,340 search queries from March to May 2006. The data set had anonymized user identifiers, the query, the time of the query, and whether the user clicked on any responses (and if so, the rank and URL of the item followed). The data was taken down on August 7, 2006.

On August 9, 2006, the New York Times published a story inferring the identity of anonymized user 4417749 from the published data [43]. The reporters noticed that anonymized user 4417749 made several queries about landscapers in the city of Lilburn in the state of Georgia (GA). Other queries from that same user looked up several people with the last name of "Arnold", and about homes sold in the Shadow Lake subdivision of Gwinnett County (which contains Lilburn). With these leads, the reporters quickly identified user 4417749 as Thelma Arnold of Lilburn, GA.
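A rough sketch of the kind of correlation the reporters performed appears below: group queries by the "anonymized" identifier and look for location-revealing and name-revealing terms. The sample log lines and keyword heuristics are invented for illustration; the real data set was far larger and messier.

```python
# Illustration of how grouping queries by an "anonymized" identifier supports
# re-identification. The sample log and the hint keywords are invented.
from collections import defaultdict

LOG = [
    ("4417749", "landscapers in lilburn ga"),
    ("4417749", "homes sold in shadow lake subdivision gwinnett county"),
    ("4417749", "arnold family reunion"),
    ("8812003", "weather in oslo"),
]
LOCATION_HINTS = {"lilburn", "gwinnett"}
NAME_HINTS = {"arnold"}

profiles = defaultdict(lambda: {"locations": set(), "names": set()})
for user, query in LOG:
    words = set(query.split())
    profiles[user]["locations"] |= words & LOCATION_HINTS
    profiles[user]["names"] |= words & NAME_HINTS

for user, hints in profiles.items():
    if hints["locations"] and hints["names"]:
        print(f"user {user}: surname hints {hints['names']}, location hints {hints['locations']}")
```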

Some of Ms. Arnold's queries presented a misleading picture, however. The queries "nicotine effects on the body" and "bipolar" led to an inference that she was looking for information about her own medical conditions. In fact, she searched for information to help friends who needed help or were anxious about their conditions; for example, she said she wanted to help one of her friends quit smoking, leading to the search about nicotine.

In this context, beyond the invasion of privacy, 6 the erroneous inferences were harmless. In other contexts, they can be very harmful. Consider a search for "how to grow marijuana," "where to buy marijuana," and "marijuana types." These could be from someone who wants to buy and use marijuana, an illegal drug, or someone who is researching its cultivation and use for a high school report on the dangers of using drugs. Were authorities to assume the first, and act on it, the searcher could be trapped in a Kafkaesque nightmare trying to clear himself of something he never even contemplated.

6 Ms. Arnold gave permission for the New York Times reporters to name her in the story. She stated that she planned to cancel her subscription to AOL.

The persistence of information can aggravate this. Past indiscretions, which previously would never have come to light, return to haunt people. For example, in 2010, Christine O'Donnell, a candidate for the U.S. Senate from the state of Delaware, spent much of her campaign trying to counter statements she made in the 1990s that were distributed on YouTube and on television [44]. And removing data once posted to the Internet is not feasible in practice. Even though the data that AOL had posted was quickly taken down, it had already been copied and remains available on the web [45].

One area of active research is to develop faster and more effective data aggregation algorithms, and to build better data aggregation tools. This will enable better marketing of products; it will also allow political candidates to target potential voters more effectively. Undoubtedly, it will be used as a tool in the intelligence community to develop information on adversaries (and potential adversaries) and identify emerging threats.

Countering the effectiveness of these algorithms and tools will also be a research area of some importance. Preventing any information from being available is simply impossible in our world, because basic information about shopping, travel, and other ordinary aspects of daily life involves interaction with groups that disseminate information about those interactions. One technique is fuzzing, in which data belonging to multiple entities is conflated to limit the ability of the adversary to draw accurate conclusions. A second technique is deception, in which one provides deliberately misleading information in order to mislead the adversary. The trick with data aggregation is to ensure that the data sources are (somewhat) aligned with the deception. The history of secret operations provides many examples of this (see [46–48] for examples).
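A minimal sketch of the fuzzing idea, assuming the defender controls the data before release: report only noisy aggregates so that the answers remain roughly useful while individual-level inferences become unreliable. The noise scale below is an arbitrary illustration value and is not a substitute for a formal privacy model such as differential privacy.

```python
# Sketch of fuzzing released data: answer aggregate queries with random noise
# added so an adversary cannot reliably single out one person's record.
# The noise scale is an arbitrary illustration value.
import random

purchases = {"alice": 3, "bob": 17, "carol": 5, "dave": 4}

def fuzzed_count(records, predicate, noise_scale=2.0):
    """Return a count matching the predicate, perturbed before release."""
    true_count = sum(1 for v in records.values() if predicate(v))
    return max(0, round(true_count + random.uniform(-noise_scale, noise_scale)))

# The released answer is near the true value (3) but not exact, limiting what
# can be inferred about any single individual in the data set.
print(fuzzed_count(purchases, lambda n: n < 10))
```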

    2.4 Users and Human Factors

The number of people who use computers has grown greatly in the past 20 years. One reason is the increasing availability and affordability of the technology, and its packaging in a form that anyone can use without extensive set-up. Another reason is the wide range of applications that perform tasks the average person needs done, such as balancing checkbooks, writing letters, and sending and receiving mail. A third reason is the new tasks that the computer makes possible through the World Wide Web, which exploded in popularity about 15 years ago: now people can shop, read news, and do research from their home or office, rather than having to go to a library or travel elsewhere.

The majority of computer users are not experts, or even particularly knowledgeable, about how computers work, how to configure them, and how to maintain them. Nor do they want to be. They view their computer as an appliance that performs certain tasks, and want it to function as reliably as a television set or telephone or automobile—and be as simple to use. Their goal, after all, is to get their particular tasks done, and not figure out how the underlying technology actually performs that task.

Security is a supporting service, not an end in and of itself. So people expect the computer to provide any necessary security for their work, and for their environment. To them, "security" is an amorphous concept that simply means they can do their work without someone stealing their personal information (such as credit card numbers, social security or other personal identification numbers, or other data that could be used to steal identity) or interfering with what they are doing (for example, decreasing the usable capacity of their network connection). If pressed, most people will also want to be sure that someone else does not do anything illegal on their computer, such as install a zombie used by a botnet to steal others' information (but most people will not think of this by themselves, as they assume the controls on their computer will prevent this).

Although there has been much discussion of how to educate this type of computer user about security and securing their system, ultimately such efforts will fail. The primary reason will not be a lack of resources or effort (although these may be contributing factors). It is simply that some people are not capable of learning the technological underpinnings necessary to determine how to configure a system to be secure—and even if they can, it is unclear if they will succeed. Government agencies and commercial firms are defended by experts using the most advanced security tools available—and yet intrusions still occur at those sites. This suggests that not even the experts can adequately defend computer systems. If experts cannot do so, it is unreasonable to expect non-experts to be able to do so.

But home and small business computers are targets for attackers looking for resources to use. The typical form of compromise is placing a bot on the system, thus making the computer one of thousands available for the attacker's use. So in the near future, the need to protect these systems will be recognized as critical. As the purchasers will be unable to do so, the onus will fall on the vendor.

Now the different uses for home and small business computers (called "small computers" for brevity) come into play. Vendors will not be able to design a single "secure" configuration, because the needs of the consumers will vary. But it is very likely that large groups of consumers will have the same security needs, so vendors can provide a selection of small computers designed for specific uses. Of course, rather than describe the security settings ("this system does not block outgoing connections over high-numbered ports"), they will describe the effects of those settings ("this system supports games that communicate with web servers") so the consumers can understand what the vendor is providing.
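One way a vendor might bridge this gap is to ship named profiles whose consumer-facing description maps onto concrete settings the buyer never sees. The profile names, descriptions, and settings below are invented for illustration; they are not drawn from any actual product.

```python
# Hypothetical mapping from consumer-facing descriptions to the security
# settings they imply. Profile names and settings are invented.

PROFILES = {
    "Home office": {
        "description": "Supports web browsing, email, and printing on your home network.",
        "settings": {"block_high_ports_outbound": True, "file_sharing": "local-only"},
    },
    "Gaming": {
        "description": "Supports games that communicate with game servers on the Internet.",
        "settings": {"block_high_ports_outbound": False, "file_sharing": "off"},
    },
}

def choose_profile(profile_name):
    """Show the buyer only the effect; the settings are applied behind the scenes."""
    profile = PROFILES[profile_name]
    print(f"{profile_name}: {profile['description']}")
    return profile["settings"]          # what the vendor actually configures

settings = choose_profile("Gaming")
```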

This approach is very similar to the centralized system configurations mentioned earlier, but key differences will make the task of supplying these configurations much harder. First, the ways in which consumers use small computers vary much more widely than the way a single organization's members use its computers. Thus, a setting that secures some systems will break others, as happened with Microsoft's Windows XP Service Pack 2 [11]. Second, the environments are much more varied, so the vendor cannot expect the system to be connected to the Internet, or even turned on, during the day. Third, the vendor cannot expect the user to be able to articulate what he or she wants, nor to be able to understand any of the technical details that a vendor would normally use to describe its products or settings.

This recognizes that many people are not technologically savvy. Many people simply do not care about how technology works; they only want to know how to use it. As an example, consider an author who writes fiction. He is skilled with words, ideas, and the expression of those ideas. His writings can make people weep, laugh, think, and act. But he does not know the correct technological model to describe his security needs, and so cannot construct a security policy for the vendor (or someone else) to implement. Thus, vendors must find a way to communicate with the writer that the writer can understand, so he can make informed choices. How to do so is an area of research in communications and psychology that will increase in importance. Of equal importance will be integrating existing mechanisms, and possibly developing new ones, to protect such users.

    2.5 Computers as Appliances

One approach is to treat computers as appliances. When a consumer goes to purchase a computer, the consumer looks for a system that will perform the desired functions. Upon purchase, the user simply turns on the system and calls up the program they wish to use. The user never sees anything else; the system insulates them completely from everything except the programs they want to run. Further, software and hardware are sold as "plug-ins"; one simply connects the module with the system (possibly through a USB plug, or some other connector) and the contents of that module may now be used.

The "appliance" computer will require a description of what it does and what plug-ins are compatible with it. In particular, if the base system does little (perhaps only provide a web browser) and modules add the ability to type business letters, use a spreadsheet, and so forth, the vendor must ensure that the plug-ins do not interfere with the purpose of the base system. How to express these attributes in a way that a non-computer-savvy consumer can understand is a problem requiring research, and one that will become more important in the future.

The self-contained modules will require considerable sophistication to handle errors. Currently, error recovery is poorly implemented (and poorly taught in schools). As technology and the integration of technology matures, vendors will improve reliability in order to minimize the costs of assisting customers as well as to attract new ones.

Vendors will take over the maintenance of the systems they sell. This is already being done to some degree with automatic patching of systems, where the system contacts vendors to download the latest patches, and then installs those patches. But vendors in the future will have to go farther: they will have to be able to restore systems that have been successfully attacked, or enable the customer's work to continue while the system is compromised so that the vendor, or other authorities (such as law enforcement), can investigate.

We see this to some extent in the rise of "security as a service." That phrase means that one contracts with an external service to provide security, much as one contracts with an Internet service provider (ISP) to provide a network connection. When one does the latter, one need not understand how to install the physical network, set up the routers, DNS, and other infrastructure services. The ISP provides that for the customer. Similarly, a company offering "security as a service" provides anti-virus mechanisms, firewall mechanisms, and other security mechanisms in such a way that the customer need not monitor or maintain them; the service provider does so.

These changes reflect an approaching paradigm shift: computing is moving from a technologically oriented discipline to a human-oriented discipline.

Some aspects of this paradigm are emerging. In addition to the earlier observations, the rise of social networking is changing how people communicate. This has inspired several areas of research. A new method of routing is based on social connections rather than traditional metrics such as hop count or minimum delay time. Recommendation systems and other systems grounded in people underlie many trust models, and in fact are themselves subject to essentially social attacks such as the Sybil attack. Information systems can also monitor people closely, and provide this data to caregivers—or others.

This last point bears amplifying. Pervasive computing requires placing sensors in an environment so that a person can be continuously monitored. This might be used, for example, to enable an elderly or sick person to live as an outpatient but, in case of a problem, receive immediate care. It can also be used in less beneficial ways, leading to a society such as in George Orwell's novel 1984 [49], because it exposes extremely personal information to observers.

    2.6 Summary

As technology and the use of computers evolve, ordinary users will become more insulated from the internals of the computer. Vendors will assume the burden of managing and securing the system. As users' needs grow, the systems will move to providing basic services and mechanisms only, and both vendors and users will augment these with plug-ins that are designed to work with these appliance computers without compromising the security of those systems or other applications.

Two concepts provide the basis for this view of computing. The first is the increase in connectivity and the convergence of different computing devices. In order for devices to transition from one network to another, they must be able to switch from one type of network to another without user intervention. Cloud computing is another example of this trend, because the services provided by the proprietor(s) of the cloud must be those needed by the customers of the cloud—that is, the customers must be able to connect to the cloud service provider. This raises numerous security issues such as security policy composition, system vulnerabilities, and information flow.

Interconnection and convergence require adherence to standards. This is the second concept. The standards have many parts, and a critical part would be the security-related components of the standards. These components must take the technology, the environment, and procedures into account. Further, standards of secure operation and maintenance give assurance that the services provided have the proper protections. Finally, compliance evidence shows that the procedures supporting proper implementation of the standard have been instituted.

The element of privacy will continue to grow in importance, both for individuals and for organizations. As noted earlier, data aggregation methods will help observers infer information about the entities. Various techniques to disrupt this aggregation will improve, but so will the inference techniques. Direct monitoring may be simpler conceptually, but legal as well as practical limitations may hinder such monitoring. Governments and law enforcement agencies also will want the ability to bypass security controls when they deem it necessary. The events in Greece are a cautionary tale; there, attackers used the built-in wiretap features to monitor calls between government officials [50]. Undoubtedly more such unauthorized uses of bypass features will occur.

    3 Changes in Infrastructure

An "infrastructure" is "a collective term for the subordinate parts of an undertaking; substructure; foundation" [51]. In the field of information technology, "infrastructure" refers to the networks, servers, and associated protocols and devices that support computing and networks. We use the term in a slightly broader sense. In addition to the ordinary meaning, we include non-technical resources that support computing and networks, such as human resources, management procedures and policies, and other resources used to ensure the infrastructure and the computers that use it function properly.

    We look at the future of the security of this infrastructure by examining several components: the Internet protocols, associating attributes such as origin with messages, testing, societal impacts of the infrastructure, and security problems attendant on experimenting with the next generation of infrastructure.

    3.1 Internet Protocols

    The ARPANET protocols were not designed to provide secure networks. When they were originally developed, the main concern was with network robustness and reliability rather than thwarting attackers who tried to subvert the network. Thus, the foundational protocols (specifically, IPv4, TCP, and UDP) emphasized reliability and continued communication in the face of catastrophic failure of a large part of the network rather than protection of data or authentication of sources.

    As the ARPANET, and other networks, evolved into the Internet, security became a more important consideration. Many mechanisms were suggested to provide the necessary protection. For example, the foundational protocols do not provide end-to-end security or authentication. In the 1990s, Netscape developed the Secure Sockets Layer (SSL) protocol to provide confidentiality and integrity at the transport layer [52]. The Internet Engineering Task Force (IETF) used the experience gained from SSL's deployment to develop a successor, the Transport Layer Security (TLS) protocol [53]. As Internet commerce grew, support for these protocols was added to web browsers and servers, and they are now an integral part of Internet commerce and security.

    In 1998, IPv6, the successor to IPv4, was released [54]. IPv6 provides many security enhancements, including end-to-end host authentication and packet-level data encryption [55, 56]. This end-to-end security differs from that provided by SSL and TLS, which authenticate based on the entities (users) rather than the host.

    Key to the integration of the transport and network layers is the Domain Name Service (DNS) [57, 58] that binds network-layer (IP) addresses to transport-layer addresses (host names). The DNS, developed in the mid-1980s, is a distributed database wherein each domain has a DNS server that answers requests for the IP address associated with a host name, and vice versa. Various optimizations make the DNS very efficient. For example, a response to a DNS request may include multiple records, and the querier caches them to speed future lookups. However, various attacks take advantage of some of these optimizations to provide bogus mappings. For example, in a DNS cache poisoning attack, an attacker appends a bogus record to the DNS response, and this record will be cached along with the legitimate ones. Then, when the victim sends a message to the host named in the bogus record, the victim sends messages to the site the attacker has selected rather than the intended site.
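
    To make the caching behavior concrete, the sketch below models a resolver cache that accepts extra records bundled with a response. The `Resolver` class, record format, and names are hypothetical, used only to illustrate why an appended bogus record redirects later lookups; real resolvers and the attack details are considerably more involved.

```python
# A minimal sketch (hypothetical names) of how an appended record
# poisons a resolver cache that trusts bundled answers.

class Resolver:
    def __init__(self):
        self.cache = {}  # host name -> IP address

    def process_response(self, records):
        # Each record is a (host name, IP address) pair. A naive resolver
        # caches every record in the response, legitimate or not.
        for name, addr in records:
            self.cache[name] = addr

    def lookup(self, name):
        # Cached answers are returned without contacting the DNS again.
        return self.cache.get(name)

resolver = Resolver()

# Legitimate answer for the name actually queried, plus a bogus
# "additional" record appended by the attacker.
response = [
    ("www.example.org", "192.0.2.10"),     # legitimate
    ("bank.example.com", "203.0.113.66"),  # bogus, attacker-controlled
]
resolver.process_response(response)

# The victim now resolves bank.example.com to the attacker's address.
print(resolver.lookup("bank.example.com"))  # 203.0.113.66
```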

    In response, a new protocol called DNS Security Extensions (DNSSEC) was developed [59–61]. This protocol provides digitally signed DNS records. Then, in the above attack, the bogus record would not validate properly: either it would be unsigned or signed by an unknown key. So the intended victim would reject it as untrustworthy. Unfortunately, the complexity of the protocol and the overhead induced by early implementations have slowed its adoption, and DNSSEC has yet to be widely deployed.

    Like DNSSEC, IPv6 is in use but has not achieved widespread popularity; IPv4 is still the dominant network layer protocol. Perhaps this is in part due to the increased size of IPv6 packets (which use, for example, 128-bit addresses as opposed to the 32-bit IPv4 addresses). Further, many management, analysis, and security tools exist for IPv4; few such tools exist for IPv6. It is unclear whether this is a result of IPv6's lack of widespread use, or a cause delaying its adoption.

    The security enhancements of IPv6, collectively called IPsec [62], have been implemented for IPv4, thereby giving sites that use IPv4 the benefits of those mechanisms. One issue is that both IPv4 endpoints must use IPsec for those benefits to be realized.

    Many protocols, like IPsec, SSL, and TLS, are grounded in cryptography. As that field evolved, so did the algorithms used. Flaws were found in cryptographic hash functions, the venerable Data Encryption Standard (DES) [63] is being supplanted by the Advanced Encryption Standard (AES) [64], and practical identity-based encryption schemes were developed. The length of cryptographic keys in public key systems increased as computational power increased. These advances provide a basis for improving the strength of the cryptography supporting Internet protocols.

    In the future, the use of security-related protocols will increase as the number of attacks against the infrastructure increases. Use of IPv6 will continue to expand, but slowly; DNSSEC will likewise spread slowly. But the use of TLS and other transport-level protocols will continue to increase, as will the development and deployment of other security-related protocols.

    The greatest barrier to the adoption of new protocols is inertia. Introducing new protocols, and new implementations of old protocols, risks introducing flaws into systems that currently work. With organizations, and indeed much of society, so dependent on the Internet and other infrastructures working correctly, the old adage "if it isn't broken, don't fix it" applies here. In the future, though, vulnerability to attacks, and the success of some attacks, may make clear that the existing infrastructure, in some sense, "is broken," and so the price of not "fixing it" exceeds the risks of doing so.

    3.2 Public Key Infrastructures

    Cryptography supports most security protocols. For example, IPsec uses cryptography to provide confidentiality. SSL and TLS use public key cryptography for both confidentiality and integrity.

    Central to public key cryptography is the idea that a public-private key pair is bound uniquely to an identity. The identity may be an organization, like Amazon or a bank; it may be an individual, such as the author; or it may be a system, such as a home computer. The public keys are used to encrypt secret keys that are then used to encipher the message. Private keys are used to digitally sign messages; the signatures can then be verified using the corresponding public key.
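
    The toy sketch below illustrates the two uses just described, key wrapping and signing, with a deliberately insecure textbook RSA key (tiny modulus, no padding). It is not any protocol's actual construction; real systems use vetted libraries, large keys, and padding schemes.

```python
# A toy illustration (deliberately insecure, tiny numbers) of the two uses
# of a public/private key pair: enciphering a session key and signing a
# message digest. Not suitable for real use.
from hashlib import sha256

# Textbook RSA parameters: n = 3233 = 61 * 53, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

def rsa_public(m):   # encrypt with the public key, or verify a signature
    return pow(m, e, n)

def rsa_private(m):  # decrypt with the private key, or create a signature
    return pow(m, d, n)

# 1. Confidentiality: the sender enciphers a (toy) session key with the
#    recipient's public key; only the private key recovers it.
session_key = 42
wrapped = rsa_public(session_key)
assert rsa_private(wrapped) == session_key

# 2. Integrity and origin: the sender signs a digest of the message with
#    the private key; anyone holding the public key can verify it.
message = b"transfer 100 units to account 7"
digest = int.from_bytes(sha256(message).digest(), "big") % n
signature = rsa_private(digest)
assert rsa_public(signature) == digest
print("session key recovered and signature verified")
```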

    Certificates bind a public key to an identity (the subject). An issuer then signs the certificate. To validate the certificate, one obtains the public key of the issuer, which itself is in a certificate. The infrastructure for managing certificates is called a Public Key Infrastructure (PKI).

    Two different models of PKIs emerged. The first is the hierarchical model [65]. It views the PKI as a tree, with interior nodes being the issuers or certification authorities (CAs). The root node issues certificates for its children, who in turn issue certificates for their children, and so forth. This model tends to be used for business-oriented matters, because the issuing of a certificate may require a contract between the issuer and the subject. Each CA can publicize the requirements that someone must meet to obtain a certificate from the CA. Thus, the recipient of a certificate can assess the degree of trust it wants to place in the public key-subject binding, and in the accuracy of the subject identification.
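
    The following structural sketch shows how validation proceeds in the hierarchical model: walk from the end-entity certificate up through its issuers until a trusted root is reached, checking each link. The data layout and the `verify_signature` stand-in are assumptions for illustration, not a real certificate format or cryptographic check.

```python
# A structural sketch (hypothetical data layout) of hierarchical chain
# validation: each certificate must be vouched for by the next one up,
# and the chain must end at a locally trusted, self-signed root.

TRUSTED_ROOTS = {"Example Root CA"}

def verify_signature(cert, issuer_cert):
    # Placeholder: a real implementation verifies the signature on `cert`
    # with issuer_cert's public key. Here we only check the recorded issuer.
    return cert["issuer"] == issuer_cert["subject"]

def validate_chain(chain):
    """chain[0] is the end-entity certificate; chain[-1] is the root."""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if not verify_signature(cert, issuer_cert):
            return False
    root = chain[-1]
    # The root must be self-signed and in the local set of trust anchors.
    return root["issuer"] == root["subject"] and root["subject"] in TRUSTED_ROOTS

chain = [
    {"subject": "www.example.org", "issuer": "Example Issuing CA"},
    {"subject": "Example Issuing CA", "issuer": "Example Root CA"},
    {"subject": "Example Root CA", "issuer": "Example Root CA"},
]
print(validate_chain(chain))  # True
```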

    The Web of Trust model takes a very different approach. Rather than a hierarchy, it is modeled by a directed graph. As implemented in PGP [66], anyone can sign anyone else's certificate. Signing is distinguished from issuing. Typically, someone creates a certificate and signs it (this is referred to as "self-signing"). Others can also sign the certificate, and along with the signature enter a level of trust in the validation of identity (ranging, for example, from "untrusted" to "ultimate trust"). One effect of the lack of a centralized certification authority is that the definition of each level of trust lies with the signer. So "ultimate trust" for one signer may mean that the subject was physically present and verified that the certificate is his; for another, it may mean that the subject emailed the signer from a known mailbox. Thus, the recipient has no way of assessing trust unless she knows one of the signers, and the criteria that signer uses for assigning the trust level.
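
    A greatly simplified sketch of that last point follows: a recipient accepts a key-identity binding only if it carries a signature from a signer whose criteria she already knows and trusts. The structures and names are invented, and real PGP trust computation is richer (marginal signatures, trust depth, and so on).

```python
# A minimal sketch (hypothetical structures) of Web of Trust evaluation:
# the binding is accepted only if at least one signature comes from a
# signer whose judgment this particular recipient already trusts.

signatures = {
    # certificate subject -> list of (signer, trust level the signer asserted)
    "anna@example.org": [("bob@example.org", "ultimate"),
                         ("carol@example.org", "marginal")],
}

# Signers this recipient knows personally and whose criteria she understands.
recipients_trusted_signers = {"bob@example.org"}

def accept_binding(subject):
    for signer, level in signatures.get(subject, []):
        if signer in recipients_trusted_signers and level == "ultimate":
            return True
    return False

print(accept_binding("anna@example.org"))  # True for this recipient
```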

    In the past, people believed that a single, cohesive PKI structured using the hierarchical model could provide for most needs. A unified structure makes managing public keys straightforward and, perhaps more importantly, provides a single framework in which certificate recipients could assess the degree of trust they can place in the binding between the key and the subject in the certificate.

    But non-technical barriers blocked such a single PKI. For example, which organization would be trusted to be the root? In practical terms, no such node exists for the world; as an example, there is no organization that both North Korea and South Korea would trust. Thus, a set of distinct PKIs grew. Rather than a single hierarchy, a forest of hierarchies existed, with root nodes cross-certifying one another when appropriate.

    A key question is how the PKIs support anonymity. The Web of Trust supports it directly: one can simply create a certificate issued to "anonymous" (or some other suitable pseudonym), and self-sign it. But the hierarchy model poses a problem: as the CA is vouching for the identity of the subject in some way, a special type of CA must be created. This CA's policy for issuing certificates makes no claim that the identity in the certificate is verified; thus the subject identifier can be any name. The CA issues persona (or anonymous) certificates [65].

    The usefulness of anonymous certificates is questioned periodically. A good example of their utility is verifying that a sequence of messages is received as signed (integrity verification) and that those messages came from the same source (origin authentication). A whistleblower, for example, might need to respond to claims made by the company involved after the first set of documents is released. By signing her response with the same private key, so it can be verified using the same certificate, the whistleblower establishes the connection between the first and second messages.

    The future will not bring a single PKI. Given the failure of attempts over the past 30 years to create one, there is little reason to believe future attempts will succeed. Far more likely are many PKIs, each serving a particular constituency such as an organization or a collection of organizations with a common purpose. Government regulation may also require the use of PKIs for signing messages for legal or administrative purposes, so that the messages can be attributed to particular individuals or organizations.

    This raises an area in which some work has been done, but much more remains to be done: attribution.

    3.3 Attribution and Forensics

    Attribution is the association of a characteristic with data. Perhaps the most common instance, authentication, attributes an origin or identity to a process or message. Much of the technical work on attribution focuses on IP traceback [67–70] to determine the originating IP address of a packet (regardless of the source field in the header); this addresses source spoofing in flooding attacks. Other papers extend this work to determine accountability of attackers [71] and creators of network traffic (not necessarily attackers) [72].

    These works build on the lack of attribution capabilities within the existing Internet infrastructure. The value in the source field of the IP packet header, for example, can be easily forged, so IP traceback must rely on routers and other intermediate systems for information. Due to the large numbers of packets that infrastructure systems handle, many IP traceback schemes are probabilistic. Thus, a flood of packets may be traced to their origin, but a single transmission with few packets may not have any packets marked for tracing.
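
    The sketch below illustrates one family of probabilistic traceback schemes, per-router probabilistic packet marking, and why it favors floods over short transmissions. The marking probability, router names, and packet representation are illustrative assumptions, not taken from the cited work.

```python
# A minimal sketch of probabilistic packet marking: each router overwrites
# a mark field with some small probability, so floods carry many marks but
# short transmissions often carry none. Parameters are illustrative.
import random

MARK_PROBABILITY = 0.04  # illustrative per-router marking probability

def forward(packet, router_id):
    # A router occasionally records its identity in the packet's mark field.
    if random.random() < MARK_PROBABILITY:
        packet["mark"] = router_id
    return packet

def send_flow(num_packets, path):
    packets = []
    for _ in range(num_packets):
        pkt = {"mark": None}
        for router in path:
            pkt = forward(pkt, router)
        packets.append(pkt)
    return packets

path = ["R1", "R2", "R3"]  # hypothetical route from source to victim

flood = send_flow(10_000, path)  # a flooding attack
single = send_flow(3, path)      # a short, few-packet transmission

print(sum(p["mark"] is not None for p in flood))   # many marked packets
print(sum(p["mark"] is not None for p in single))  # often zero
```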

    But characteristics other than identity are associated with an entity. For example, a message sent through a network has an associated transit time, a route taken, and other characteristics, all of which are attributes as well. Beyond that, there is ambiguity in many characteristics. For example, "origin" is usually interpreted as "IP address" or "network address." Many contexts require origin to be attributed to a person or organization. As of now, work in computer security has focused only on the technical aspects of attribution, assuming that others will translate it to external entities.

    As attribution on the Internet becomes more important in non-technical areas such as law, technology will be improved to provide the necessary information. Several different types of attribution may be desirable [73].

    • When anyone can determine the values of the characteristics under consideration, perfect attribution has occurred. The legal community will find this useful to track court and other legal documents. Law enforcement will also use this to track messages or packets involved in criminal activities.

    • When no one can determine the values of the characteristics under consideration, perfect non-attribution has occurred. Dissidents in a country with a repressive government who wish to communicate will want this form of attribution.

    • When only some entities can determine the values of the characteristics under consideration, perfect selective attribution has occurred. For example, Anna may want the tax bureau to know her salary, but not anyone else.

    • When anyone can determine values of the attributes in question, but those values are incorrect, then false attribution has occurred. Suppose an intelligence agency wants to access a terrorist web site, but not let the terrorists know who is doing so. The agency would find this type of attribution useful.

    The last type brings up an interesting point. Law enforcement considers attribution a crucial tool in tracking down criminals, because it enables the officers to trace the activities of the criminals as well as provide evidence that can be used in court. If attribution is built into the network, though, the criminals can also track the law enforcement investigators as they use the network to carry out their investigation. Thus, the implementation of attribution must take societal constraints into account.


    This is especially true for forensics, which combines elements of technology and technical expertise with law, communications, and psychology. Forensics is the ability to analyze an event or a state, to determine as many of the traditional characteristics of who, what, where, when, why, and how as possible.

    Forensics has two aspects. When a system event such as an attack is discovered, the technical analysis will provide details that enable the system administrators, auditors, and others to figure out who (user ID or other entity identifier) was involved in the attack, what happened, where the attack came from (that is, what network addresses were involved), why the attack was launched (that is, what the goal of the attack was), when the attack occurred (as contrasted with when it was detected), and how the attack was carried out. For the purposes of the technical personnel, these questions need to be answered to their satisfaction. They can make inferences and draw conclusions based on their technical expertise and knowledge, and need only be able to convince themselves, and the other technical personnel they must contact, of the answers to these questions.

    In practice, these inferences are necessary because most computer systems are not designed for forensic analysis. The analyst examines the contents of logs and the current system state, and possibly portions of earlier states of the system as obtained from backups, to discover what has changed and what activity has occurred that might explain the change. But programs and operating systems usually do not record all the information needed to analyze the attack, unless the systems have been designed with security in mind.

    Explorations of how to design new systems, and augment existing systems, to provide the data needed for a complete forensic analysis will expand in the future. Further, the infrastructure itself (networks and the devices on them) will need to support forensic data collection. With the advent of cheap, plentiful storage, one can record huge amounts of data for later analysis. The key to forensic analysis is determining what the data means. This requires imposing a structure on the data. The structure can be imposed either as the data is gathered, or after it has been gathered. Discerning what structure to impose is far more difficult, and is a reason that existing forensic practice is generally ad hoc: one examines logs looking for unusual events, and then traces forwards and backwards to reconstruct the event.

    Much research and practice in the future will be devoted to making forensics more rigorous. One promising approach is to begin with the goals of an attack, and use the requires/provides model to determine what capabilities the attacker must have obtained. One can then work backwards to build an attack tree to see what capabilities the attacker needs to initiate the attack. The next step is to examine the logs to determine events corresponding to obtaining those capabilities. The use of a formal model to derive the types of data to look for provides a rigorous basis for asserting that the forensic reconstruction of the attack is correct, and for allowing others to reproduce the analysis.
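
    A small sketch in the spirit of this backward chaining follows: starting from the attack goal, collect the capabilities the attacker must have obtained, then look for log events that could have provided each one. The capability names, step table, and log entries are hypothetical, not drawn from the requires/provides literature.

```python
# A sketch of backward chaining over capabilities: walk from the goal back
# through required capabilities, then match each against log evidence.

# Each capability lists the capabilities that must be obtained first.
STEPS = {
    "exfiltrate_database": {"requires": ["db_credentials", "outbound_channel"]},
    "db_credentials":      {"requires": ["shell_on_web_server"]},
    "outbound_channel":    {"requires": ["shell_on_web_server"]},
    "shell_on_web_server": {"requires": []},
}

# Events observed in the logs, mapped to the capability each could provide.
LOG_EVIDENCE = {
    "shell_on_web_server": "12:01 web server spawned /bin/sh from httpd",
    "db_credentials":      "12:04 read of /etc/app/db.conf by httpd user",
    "outbound_channel":    "12:06 new TLS connection to unknown host",
}

def required_capabilities(goal, seen=None):
    """Walk backwards from the goal, accumulating every required capability."""
    seen = set() if seen is None else seen
    for cap in STEPS.get(goal, {}).get("requires", []):
        if cap not in seen:
            seen.add(cap)
            required_capabilities(cap, seen)
    return seen

for cap in sorted(required_capabilities("exfiltrate_database")):
    evidence = LOG_EVIDENCE.get(cap, "no supporting log entry found")
    print(f"{cap}: {evidence}")
```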

    In addition to the technological reconstruction, the site may need to involve lawyers or law enforcement authorities. Here, the rules change because the technical information must be presented in court. So the data must be gathered, and the analysis performed, in a way that will stand up in a court of law.

    A court requires that evidence be gathered and preserved according to specific legal rules. 7 For example, in the United States, evidence requires a "chain of custody" showing who has handled the evidence and what he has done with it. This allows the court to evaluate whether the evidence has been tampered with. Such rules apply only if the evidence is to be used in court, so they are unnecessary for the technical reconstruction in most cases.
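
    One way such a record of handling can be made tamper-evident in software (an illustrative technique, not something the text prescribes) is a hash-chained custody log, sketched below; altering any earlier entry breaks the chain.

```python
# An illustrative sketch of a chain-of-custody log kept as a hash chain,
# so later alteration of an entry can be detected.
from hashlib import sha256

def add_entry(log, handler, action):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = f"{prev_hash}|{handler}|{action}"
    log.append({"handler": handler, "action": action,
                "hash": sha256(record.encode()).hexdigest()})

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        record = f"{prev_hash}|{entry['handler']}|{entry['action']}"
        if sha256(record.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

custody_log = []
add_entry(custody_log, "Officer A", "seized disk image, serial XYZ")
add_entry(custody_log, "Analyst B", "computed image checksum, began analysis")
print(verify(custody_log))   # True

custody_log[0]["action"] = "altered description"
print(verify(custody_log))   # False: tampering is detected
```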

    The situation is different when law enforcement looks for evidence of a crime. The police are either monitoring a network or analyzing a system looking for evidence of a crime. As with analyzing attacks, the police must properly interpret the evidence to be sure that what they find is a crime, and that they do not accuse the wrong people. Two examples will show why law enforcement and other legal authorities need technical expertise and must understand how computers work.

    The first case involves pornography. The U.S. White House web site is www.whitehouse.gov. At one time, the web site www.whitehouse.com referred to a pornographic site. 8 If a user simply entered "whitehouse" as the address to browse to, most browsers automatically supplied a ".com" ending, taking the user to the wrong web site. Even if the user immediately navigated away, the pornographic web page would be in the web browser's cache. If a police officer did not know how the cache worked, he might assume the user deliberately downloaded the page, when in fact the user did not.

    7 What follows applies specifically to criminal trials in the USA. Rules for other types of trials, and for courts and laws in other countries, will vary.
    8 It no longer does so.

    The second case involves movie piracy, a serious crime in many countries. In the United States, the organization that protects movies from being pirated uses undisclosed techniques to find networks on which movies are being shared, and sends "take down" notices to the owners of servers with pirated copies of movies. It also pursues legal action against them. Researchers have shown that, under some circumstances, the identification mechanisms used to find unauthorized movie sharers may identify the wrong systems; indeed, they managed to have their network printer be the target of a take down notice! [74] By not understanding how these mechanisms work, the enforcement authorities will not understand why innocent people can be accused of the crime.

    In the future, these problems will be aggravated. Because of the complexity of law, police science, and digital forensics, it is likely that experts will do much of the interpretation of evidence for legal authorities and lay people. Experts, however, make mistakes, and may present incomplete or incorrect evidence as fact. The standards for treating evidence as scientific vary from jurisdiction to jurisdiction. But in general, the "triers of fact" (judges and, where present, juries) determine what weight, and how much credibility, to give the testimony.

    3.4 Testing and Experimentation

    As new protocols, infrastructure architectures, and defenses against attacks are developed, they must be tested before being deployed. The complexity of the infrastructure no longer allows us to predict, with great accuracy, all the effects of changes, so the protocols must be tested to uncover emergent properties. The problem is finding a test bed of sufficient size to test the protocols and architectures in a realistic environment.

    Simulating the environment requires that we understand the environment completely. Often we do not. As a historical example, the first high-altitude flights veered off course due to unexplained high-speed winds: the jet stream, unknown until those flights. So simulations of the flights would have failed to match the actual flight paths, because the simulation would not have taken the (then unknown) jet stream into account.

    Deploying the developed mechanisms over a limited area provides some measure of testing, but not enough, especially for security mechanisms. When one tests security mechanisms, one must attack them (either in simulation or in reality). Doing so on a production network risks interfering with others' work, or damaging their systems, neither of which is acceptable.

    The solution has been to build large test beds, consisting of thousands of systems. The two most widely used are the DETER/EMIST test bed [75] and the PlanetLab test bed [76]. These networks contain thousands of nodes. The controllers can be reached over the Internet, so programs can be set up, broadcast to the nodes that the experiment is using, and then run. But the nodes themselves are not directly connected to the Internet, so (for example) if a malicious program is executed to test a defense or measure the speed of its spread, the experimenters need not worry about the malware escaping to the Internet.

    The U.S. National Science Foundation (NSF) funded a project to create the Global Environment for Network Innovation (GENI) [77, 78]. This virtual laboratory provides an Internet-scale test bed for experimentation. The GENI infrastructure is designed to be shared, heterogeneous, and highly instrumented to enable experimenters to run experiments and monitor them. Individual nodes can be programmed, just as in PlanetLab, so experimenters can control (or monitor) their behavior.

    GENI is in its infancy. Two issues that arise in the existing Internet, and will continue to pose problems in its successors, have already raised challenges for GENI. They grow out of GENI's idea of sharing resources: federation and isolation.

    As GENI is intended to be global, it will have nodes throughout the world. Different organizations own and operate the nodes making up GENI. Those organizations have their own rules for managing their nodes and for making resources available to GENI. Further, laws in the jurisdictions in which each organization resides may affect what the organization can and cannot do. For example, a node in a jurisdiction that does not protect privacy may require that data in an experiment be available for others to review. Such visibility may be unacceptable to the scientists running the experiment. To resolve this problem, GENI is developing a database of resources, where they are available, and under what conditions they are offered to the GENI community.

    GENI nodes and resources are to be shared among the users of GENI. This raises the question of interference. Suppose two experiments are being run, and one causes the nodes on which it is being run to fail. Then all experiments using those nodes will also terminate. Such unreliability is unacceptable. So GENI must provide a way to isolate experiments using the same node from one another. The solution is to virtualize the GENI network and resources whenever possible. Each experiment gets a slice of each node and resource. The set of slices for an experiment makes up the experiment's view of the GENI network: in essence, a virtual network. Then, if two experiments are running on the same set of nodes, and one causes the network infrastructure to crash, only that experiment's virtual network fails; the other experiment's virtual network is unaffected. Similarly, an experimenter can run security experiments in his slice without putting other experiments at risk.
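
    The toy sketch below illustrates the slicing idea in miniature: each experiment sees only its own slice of shared nodes, so a failure injected by one experiment leaves another's view untouched. The classes and experiment names are hypothetical and do not reflect GENI's actual interfaces.

```python
# A toy sketch (hypothetical classes, not GENI's API) of per-experiment
# slices: crashing one experiment's slice does not disturb another's.

class Node:
    def __init__(self, name):
        self.name = name
        self.slices = {}          # experiment name -> slice state

    def allocate_slice(self, experiment):
        self.slices[experiment] = {"running": True}

    def crash_slice(self, experiment):
        # Only the named experiment's slice on this node is affected.
        self.slices[experiment]["running"] = False

nodes = [Node("node-1"), Node("node-2")]
for node in nodes:
    node.allocate_slice("worm-propagation-study")
    node.allocate_slice("routing-protocol-test")

# The malware experiment crashes its own slice on every node it uses.
for node in nodes:
    node.crash_slice("worm-propagation-study")

for node in nodes:
    print(node.name, node.slices)
# The routing experiment's slices are still running on both nodes.
```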

    The idea of virtualizing infrastructure will be applied much more in the future. Consider cloud computing. A program uses clouds to store data or perform computation, basically in the same way that it would invoke remote procedure calls, except the calls go to servers and invoke resources on other systems ("the cloud"). These other systems may belong to the organization running the program, or to other organizations. Indeed, the program may not know, or be able to control, which organizations' resources are used. Thus, security becomes an important consideration for the program and the organization.

    It is also an important consideration for the cloud providers. They need to keep their resources available to cloud customers. They need to protect their customers' data. Virtualization may provide a (partial) solution, because it would provide the isolation needed to prevent two customers from interfering with one another, or reading one another's data.

    The mutually suspicious environment, in which the cloud customers want to confine the providers' access to data, and the providers want to limit the customers' access to their resources, is a form of the confinement problem [79], which has been (and will continue to be) an area of active research.

    GENI is currently moving into its third phase of development (called "Spiral 3"). It has formed partnerships with other large networking communities (such as Internet2). These will provide services, additional infrastructure, and expertise to accelerate GENI's growth. In the future, GENI and test beds like it will provide the experience needed for designing and implementing effective security mechanisms. Also, those test beds can be used to analyze attacks, especially those involving the spread of malware.

    This will lead to improved experimental techniques for computer security. In the past, many computer security experiments were flawed. They lacked a control case, or generalized results without providing evidence that the generalization was valid.

    In 1998 and 1999, MIT Lincoln Laboratories ran a series of tests on intrusion detection systems [80]. They measured data from a network with both classified and unclassified traffic on an Air Force base, and then created data that simulated the actual traffic. They then embedded various attacks, and modified some of the traffic to be anomalous. The testers provided synthesized training data to the research community, which used it to train their intrusion detection systems. Finally, the intrusion detection systems analyzed the simulated data to see which attacks they could detect. The testers then published their experiment and their results.

    Subsequently, their experimental techniques were reviewed and challenged [81]. The paper found several problems in the underlying assumptions. For example, the testers did not explain why the number of false alarms on the synthetic data would be the same as for the real data; this was important because one of the measures involved the percentage of false alarms. The distribution of the injected attacks was not compared to the distribution of attacks in the real data. Other points raised awareness of the problems of running effective experiments.

    Many problems hinder reproducibility, a cornerstone of scientific experimentation. Testing procedures are often not documented in enough detail for others to reproduce the analysis. Perhaps more importantly, raw data is rarely made available; this prevents others from repeating the work exactly. Two common reasons for withholding the data are that the data may contain private information (for example, user names and passwords) that could compromise users and systems, and that an attacker may be able to mine the data for information that could be used to compromise the business practices of an institution, for example by revealing information about the protection mechanisms in use. Unfortunately, this often makes proper interpretation of the results difficult, because the specific parameters that affect the results may not be fully understood at the time the paper is published. Making the data available allows other researchers to explain the reasons underlying the results, as was done in the wonderful paper [82] that provided a theoretical and analytical reason for an observed result: that the Stide system required data sequences of length 6 or above to detect intrusions effectively.

    3.5 Security Management

    Management of any sort is a complex, often daunting, task. Managing security is doubly so, as security is often seen as a hindrance and non-productive. It brings in no revenue; indeed, sometimes it interferes with revenue-producing activities. In the non-commercial world, it is also seen as a burden because it may interfere with the organization's work.

    In the past, security personnel have often treated security as the goal, rather than as a means to the goal (the organization's mission). In this sense, security management epitomized the Institutional Imperative: "every action or decision of an institution must be intended to keep the institutional machinery working" [83, p. 49]. Here, the "institutional machinery" was the protection of the institution, rather than the institution successfully fulfilling its goals.


    People responded by questioning the need for security. As security incidents rarely affected any particular individual, those individuals wondered why they should be concerned. This created friction with security personnel, and communications deteriorated within the organization.

    As information about system vulnerabilities and attacks, and the consequences of those exploits, became public, the need for security became clearer and more immediate. The rise of people-oriented attacks such as phishing, spearphishing, and other methods of social engineering brought home the risks, especially since these types of attacks often focused on the individuals rather than the company, and the individuals bore the burden of recovering from the attacks. As any victim of identity theft will attest, recovery may take a long time, cost much money, and require much work.

    From the technical view, managing security poses administrative problems. Configuring and maintaining systems was discussed earlier; its importance here is the role it plays in keeping systems consistent with the security policy. Tools designed for this purpose allow administrators to modify system configurations from a remote host or site. These tools are often tailored for specific configurations or systems.

    In the future, tools intended for security management and configuration will be architected modularly, and the user and system interfaces will be key parts of these tools. Both are, and will continue to be, complex because of the flexibility required to manage the systems. Considerable experimentation will be necessary to develop an intuitive interface, but such an interface will be critical to minimizing user mistakes.

    Organizations today use tools to evaluate security, but it is unclear whether the metrics these tools produce are helpful. Further, the tools are run infrequently. This will change. Tools will provide metrics that the site finds useful, and they will be run far more frequently: daily, if not continuously. In this way, the organization will be able to evaluate its security posture, and be able to respond quickly to threats and attacks.

    One important question is how well security policies are implemented. That is, are the systems properly configured to enforce the security policy? The obvious way to check is to examine the system configuration files, and the configuration of the network infrastructure. There are two problems with this approach.

    The first is the complexity of combining the two configurations to figure out what is allowed. Ideally, one could take the configurations and generate the policy that is enforced. This "reverse engineering" of policy may soon be possible at the technical level, but it will not bridge the gap between the technical statement of the policy and the higher-level, natural language statement of the policy.
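
    The sketch below illustrates the combination problem in miniature: a host-level access control list and a network-level rule set must be composed to see what is actually allowed. The configuration formats, subjects, and objects are invented for illustration; real configurations are far richer and their composition correspondingly harder.

```python
# An illustrative sketch (invented configuration formats) of deriving the
# enforced policy from two configurations: access is allowed only if both
# the host ACL and the firewall permit it.

host_acl = {
    # object -> subjects allowed by the host configuration
    "payroll_db": {"hr_staff", "dba"},
    "web_content": {"webmaster", "www"},
}

firewall_rules = {
    # subject -> hosts the network configuration lets the subject reach
    "hr_staff": {"db-server"},
    "dba": {"db-server", "web-server"},
    "webmaster": {"web-server"},
}

object_location = {"payroll_db": "db-server", "web_content": "web-server"}

def enforced_policy():
    allowed = []
    for obj, subjects in host_acl.items():
        for subject in subjects:
            if object_location[obj] in firewall_rules.get(subject, set()):
                allowed.append((subject, obj))
    return sorted(allowed)

for subject, obj in enforced_policy():
    print(f"{subject} may access {obj}")
# Note that "www" never appears: the firewall has no rule for it, so the
# configured host permission is not actually reachable in practice.
```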

    Further, it will miss problems. Policy enforcement is more than proper configuration. Software vulnerabilities enable evasion of the stated policy, even when the systems are configured properly. Thus, the enforced policy differs from the configured one [31]. The best way to detect this is through penetration testing, which currently is still something of an art. Although various methodologies such as the Flaw Hypothesis Methodology [84] exist to guide the testing, ultimately the success of the test depends on the skill of the testers. The future will bring efforts to systematize how these tests are conducted, so that testers will need less experience than they do now. How successful those efforts will be is unknown.

    3.6 Summary

    Part of wisdom, it is said, is in knowing what will work, knowing what will not work, and being able to tell the difference between working and not working. In the future, the infrastructure will test our wisdom in this sense.

    Among the success stories will be the hardening of the infrastructure so that it can better withstand attacks against the network and transport layers. The test environments for these changes will develop slowly, but as they become easier to join and use, researchers and experimenters will test new protocols and changes to existing protocols on them. As the protocols and changes prove themselves, they will slowly migrate out of the test bed into the real environment, where they will be evaluated again. Their benefits will either become apparent, leading to their adoption, or they will co-exist with existing protocols and systems.

    The notion of virtualization, which already exists in test environments, will expand to include networks and clouds because of the isolation and reliability it provides. Attacks against the infrastructure, and systems, can be controlled within these environments so that they do not affect othe