
Cybersecurity Standards: Managing Risk and Creating Resilience

Revision 1, Submitted to IEEE Computer

Risk-Based Cybersecurity Standards: Policy Challenges and Opportunities

Zachary A. Collier, United States Army Engineer Research and Development Center, [email protected]

Daniel DiMase, Society of Automotive Engineers G-19A Test Laboratory Standards Development Committee, [email protected]

Steve Walters, Society of Automotive Engineers G-19A Test Laboratory Standards Development Committee, [email protected]

Mark (Mohammad) Tehranipoor, Center for Hardware Assurance Security and Engineering, University of Connecticut, [email protected]

James H. Lambert, Center for Risk Management of Engineering Systems, University of Virginia, [email protected]

Igor Linkov*, United States Army Engineer Research and Development Center, [email protected]

*Corresponding Author. 696 Virginia Road, Concord, MA 01742. 978-318-8197. [email protected]

Abstract

Pervasive and potentially catastrophic threats to critical infrastructure have prompted many in government to call for risk-based standards to aid in protecting against cyber threats. However, the cyber domain is characterized by high situational uncertainty and rapidly adaptive threats. These unique challenges make traditional risk assessment difficult to implement, and quantification of the associated threats, vulnerabilities, and consequences requires a large volume of information. Based on our collective experience developing standards for hardware security and counterfeit detection through the Society of Automotive Engineers (SAE) G-19 Committee, we argue that a Cybersecurity Framework should not only identify the risks and implement controls based on technical data and quantitative risk assessment, but also integrate technical judgment and the risk tradeoffs that decision makers may be willing to accept. Moreover, continuous assimilation of new information and tracking of changing stakeholder priorities and adversarial capabilities through adaptive management are required for successful implementation of such a Cybersecurity Framework. Finally, integration of risk- and resilience-based management is necessary for enhancing cybersecurity capabilities.

Keywords: Security, Risk Management, Standards, Infrastructure Protection, Reliability and Testing

Digital Object Identifier 10.1109/MC.2013.448 0018-9162/$26.00 2013 IEEE

This article has been accepted for publication in Computer but has not yet been fully edited. Some content may change prior to final publication.


Introduction

Communications networks and information systems, collectively referred to as the cyber domain, are increasingly threatened by non-state actors, foreign states, and counterfeiters of electronic components. With the cyber domain integrating and supporting critical infrastructure, global economic prosperity, public health and safety, and national security, President Obama in 2013 signed Executive Order 13636, "Improving Critical Infrastructure Cybersecurity" (EO 13636, 2013). It calls for the development of a Cybersecurity Framework (NIST, 2013), charged with adopting and implementing risk-based standards to identify high-risk infrastructure and select alternatives for risk mitigation. The Executive Order tasks the National Institute of Standards and Technology (NIST) with leading the development of the Cybersecurity Framework to reduce and manage cyber risks to critical infrastructure.

While cybersecurity is a vast topic area representing a number of threats, one aspect of cybersecurity gaining national attention is that of hardware security. While many cybersecurity studies focus mainly on computer networks, the internet, and other related issues, hardware security focuses specifically on the physical infrastructure, such as circuit boards and integrated circuits, that supports these information systems. Especially as the global supply chain becomes more complex, coupled cyber-physical systems are increasingly at risk from counterfeit electronics (Figure 1) – parts that are relabeled, refurbished, or repackaged to misrepresent their authenticity (Sood et al., 2011). The consequences of not catching these counterfeits before they travel downstream in the supply chain are numerous. Economic consequences for manufacturers include lost profits and increased costs from additional testing and replacement, as well as exposure to legal liabilities. However, risks from counterfeits are not confined to the private sector. For instance, approximately 8,000 incidents of counterfeit electronics have been reported in military supply chains as well, and these counterfeits have the potential to compromise mission effectiveness and threaten national security (US Department of Commerce, 2010). Further, malicious hardware Trojans may be inserted into counterfeit electronics, causing security concerns ranging from compromised sensitive information to degradation of the system itself. Thus, counterfeit electronics pose numerous risks, many potentially critical in terms of economic stability and national defense.

Risk analysis, including risk assessment, risk management, and risk communication, is not new to governance, homeland security, or Executive Orders. So what does it mean for a standard to be "risk-based"? First, one must explore the notion of risk in large-scale systems. Risk is traditionally defined as a triplet consisting of what can go wrong, how likely it is to happen, and the consequences of it happening (Kaplan & Garrick, 1981). This is often simplified into an index such as "Threat x Vulnerability x Consequence" or "Hazard x Exposure x Effect". Second, risk assessment requires quantification of each component of the triplet and of the associated uncertainties. The contribution to the overall risk from individual system components is identified, and if one component poses substantially more risk than the others, it is usually used to develop a quantitative benchmark as a basis for comparison, which de facto constitutes a risk-based standard. This quantitative risk assessment approach has been the basis for regulatory risk assessment in many agencies, for example, the US Nuclear Regulatory Commission.
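The simplified index above can be sketched in a few lines of code. The component names and the 1-5 rating scale below are illustrative assumptions, not part of any standard:

```python
# Hypothetical sketch of the simplified "Threat x Vulnerability x Consequence"
# risk index; component names and the 1-5 scale are invented for illustration.

def risk_index(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine the triplet components into a single multiplicative score.

    Each input is a judgmental rating on a 1 (negligible) to 5 (severe)
    scale, so the index ranges from 1 to 125.
    """
    for value in (threat, vulnerability, consequence):
        if not 1 <= value <= 5:
            raise ValueError("ratings must lie on the 1-5 scale")
    return threat * vulnerability * consequence

# Rank hypothetical system components by their contribution to overall risk.
components = {
    "power-grid SCADA": risk_index(4, 3, 5),   # 60
    "billing database": risk_index(3, 4, 2),   # 24
    "public web site":  risk_index(5, 2, 1),   # 10
}
ranked = sorted(components.items(), key=lambda kv: kv[1], reverse=True)
```

Under such an index, the top-ranked component typically becomes the de facto benchmark against which the others are compared.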

While establishing the aforementioned risk-based standards presents regulators with a host of difficult challenges, the theme that emerges from recent accidents is a failure to frame and evaluate multiple threat scenarios at various scales (e.g., the Fukushima nuclear power plant incident). It may be impossible to have a complete inventory of threats, especially in the domain of cybersecurity, because of their rapid evolution. The challenge of quantifying vulnerability against intelligent and adaptive adversaries is arguably far greater than that for well-controlled nuclear plants against natural and accidental phenomena, including the case of the Fukushima accident. Finally, a full understanding of the consequences of threats affecting the system is thus fundamentally impossible. We should therefore expect to fail at quantifying risk in units of probability of failure across the cyber domain and its interdependent infrastructure systems.

From Risks to Decisions

This brings back the initial question: what does it mean for a standard to be risk-based? It is clear that the quantitative risk assessment and characterization methods currently in use are insufficient to cope with problems of the scope, complexity, and dynamism of cybersecurity, though that is not to say that they are not an integral part of the solution. In fact, the President's Executive Order reinforces the position that good governance should incorporate considerations of risk. The key, however, is that risk-based standards must ultimately guide risk-informed decisions. This is exemplified in the term risk-based, with emphasis on "based" – there must be some framework above and beyond traditional risk analysis, with the capability to adaptively guide actionable mitigation efforts grounded not only in the physical state of the world, but also in the preferences and value judgments of stakeholders.

Balancing a variety of trade-offs, combining different performance metrics, and supporting a participatory, multi-stakeholder approach lie within the purview of a related, but distinct, field of study known as decision analysis. The insight of this field is that while risk analysis is generally concerned with describing the physical world (e.g., failure modes, consequences, tangible future events), it does not easily factor in the human perceptions and values necessary to arrive at actionable decisions. Decision analysis incorporates human, value-centric factors by framing the problem and providing decision makers with insights into the performance of different courses of action (i.e., mitigation strategies). These courses of action are assessed with respect to common objectives, which are in turn based on a set of criteria representing the values and preferences of one or more stakeholders. Even though a semi-quantitative risk assessment approach was specifically mentioned as one of the top ten achievements in risk analysis by the Society for Risk Analysis (Greenberg et al., 2012), there have been very few methodological and regulatory developments in the field as it relates to specific methods and applications.
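As a minimal sketch of how decision analysis combines such criteria, consider a weighted-sum model; the criteria, weights, scores, and alternative names below are all hypothetical:

```python
# Illustrative multi-criteria weighted-sum model for comparing mitigation
# strategies; criteria, weights, and alternatives are invented examples.

def weighted_score(scores: dict, weights: dict) -> float:
    """Aggregate per-criterion scores (0-1, higher is better) using
    stakeholder-elicited weights that sum to one."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Stakeholder preferences: risk reduction matters most in this example.
weights = {"cost": 0.3, "risk_reduction": 0.5, "ease_of_implementation": 0.2}

alternatives = {
    "patch and monitor": {"cost": 0.8, "risk_reduction": 0.6,
                          "ease_of_implementation": 0.9},
    "full redesign":     {"cost": 0.2, "risk_reduction": 0.9,
                          "ease_of_implementation": 0.3},
}

best = max(alternatives, key=lambda a: weighted_score(alternatives[a], weights))
```

Changing the weights (i.e., the stakeholders' values) can change which course of action ranks first, even though the underlying technical scores are unchanged.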

Currently, the gold standard for cyber risk guidance within the United States Federal Government is NIST Special Publication 800-39, which other agencies such as the Department of Energy and the Nuclear Regulatory Commission have leveraged to model their own internal standards (NIST, 2011). As it relates to risk assessment, NIST Special Publication 800-30 (NIST, 2012) supports Special Publication 800-39 by detailing the procedure for conducting risk assessments, and comes close to institutionalizing standards based on semi-quantitative risk assessment in situations where uncertainty is high. In it, the identification, aggregation, and relative weighting of "risk factors" (defined as threats, vulnerabilities, impacts, likelihoods, and predisposing conditions) is mentioned as one way of conducting a risk assessment; however, little specific guidance is given on the methods, algorithms, or rules by which to combine and weight these factors. NIST does note that the weighting and aggregation of these factors should be done consistent with organizational risk tolerance, implying that risk assessments are incomplete without some framing of the problem within a larger organizational decision context.

Another example of a standardized assessment methodology is the Common Vulnerability Scoring System, or CVSS (Mell et al., 2007). The CVSS consists of a number of criteria related to the vulnerability of cyber systems, as well as several impact-related criteria. These criteria are assessed, and the final scores characterized, semi-quantitatively. And while the CVSS provides a common language for communicating and comparing vulnerabilities, a disconnect between risk assessment and risk management leaves the user with no instructions on how best to go about reducing system vulnerabilities. Moreover, since there are no criteria related to particular threats, only vulnerabilities and impacts, the final score and attack vector are not truly measures of risk, which must by definition include all aspects of the triplet.
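The CVSS v2 base-score equations are published in Mell et al. (2007) and can be reproduced directly; the snippet below transcribes them, with metric values as given in the v2 guide:

```python
# CVSS v2 base-score computation, following the equations and metric
# values published in the v2 guide (Mell et al., 2007).

AV = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}    # Conf./Integ./Avail. impact

def base_score(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C -- remotely exploitable, total compromise
print(base_score("N", "L", "N", "C", "C", "C"))  # 10.0
```

Note that every input to the formula describes vulnerability or impact; no term captures the threat, which is precisely the disconnect from the risk triplet described above.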

Internationally, ISO/IEC 31010 (ISO, 2009) specifically mentions multi-criteria decision analysis as one of many potential ways in which to conduct a risk assessment. However, this standard gives scant explanation of the mechanics of performing the assessment. This leaves government and industry without explicit guidance on how to weigh the relative importance of diverse risk drivers, how to aggregate across these drivers to formulate a repeatable assessment of risk, how to integrate the organizational preferences and values of the stakeholders, and how to interpret the results for purposes of governance and industry action.

Finally, what guidance can practitioners glean from the Cybersecurity Framework called for in Executive Order 13636? While currently released only in preliminary form, what has been published appears to mirror the aforementioned standards and guidance documents in its enumeration of criteria but lack of a framework for making decisions. The heart of the Cybersecurity Framework is a table comprised of five primary cybersecurity functions, namely Identify, Protect, Detect, Respond, and Recover (collectively called the "Framework Core"). Within each of these critical functions are multiple nested categories and subcategories, which represent specific outcomes or means by which to achieve the corresponding critical function. For instance, under the function of Respond, there are five categories (Response Planning, Communications, Analysis, Mitigation, and Improvement), and within Mitigation exist the subcategories "Incidents are contained" and "Incidents are eradicated". In total, there are twenty categories and nearly one hundred subcategories contained within the Framework Core, making it comprehensive but potentially overwhelming.

The Framework Core is used as a scorecard of progress – the current guidance calls for first developing an organization’s “Current Profile”, which consists of assigned scores based on the organization’s performance in each of the categories and subcategories. This Current Profile is then compared to a “Target Profile”, representing the desired state of the organization in each of the same categories and subcategories, and the shortfalls between these profiles can be viewed as gaps in an organization’s risk management capabilities.

However, similar to the other standards, the difficulty emerges in linking the organizational assessment of gaps with specific decisions. For instance, of the nearly one hundred identified subcategories, certainly not all will be of equal importance to the overall cybersecurity of the organization. So how is a group of senior leaders supposed to identify which gaps are the most critical, and prioritize corrective action based on which gaps deliver the most value compared to the cost? Tools and methods must be put forth to address these questions.
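One simple, purely illustrative answer is to rank gaps by importance-weighted shortfall per unit closure cost. Apart from "Incidents are contained", which appears in the Framework Core, the subcategory names, weights, shortfalls, and costs below are invented:

```python
# Hypothetical prioritization of Framework Core gaps by value delivered
# per dollar of closure cost; all numbers are illustrative placeholders.

gaps = [
    # (subcategory, importance weight, gap = target - current, closure cost $)
    ("Incidents are contained",    0.30, 2, 50_000),
    ("Asset inventory maintained", 0.15, 3, 20_000),
    ("Recovery plan tested",       0.10, 1, 80_000),
]

def value_per_cost(gap):
    """Importance-weighted shortfall divided by the cost of closing it."""
    name, weight, shortfall, cost = gap
    return weight * shortfall / cost

# Highest value-per-cost first: a defensible order for corrective action.
priorities = sorted(gaps, key=value_per_cost, reverse=True)
```

Even this toy ranking shows why raw gap size is a poor prioritization signal: the largest weighted gap is not necessarily the cheapest one to close.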


Example: Supply Chains and Hardware Security

Current research and development initiatives with respect to the detection of counterfeit electronics within the supply chain have tended to focus on either technological solutions or improved supply chain management practices. Technological advances include safeguards such as parts marked with botanical DNA or authentication through physical unclonable functions (PUFs). Various supply chain management best practices have also been put forward, including guidance on supplier selection, disposal of electronic waste, and improved management of obsolescence. However, while these efforts may mitigate some of the risk posed by counterfeit electronics, the essential process of risk assessment is still generally ad hoc. Risk management cannot be successful in the long term without being informed by effective, deliberate risk assessment.

This is especially important for test facilities that must authenticate incoming electronic parts. While test labs currently run various tests on sample parts using advanced technologies, counterfeits still enter the supply chain. One reason is that test labs must strike the right balance between under-testing, which may increase the probability that counterfeits will travel downstream in the supply chain, and over-testing, which will result in lengthened schedules and higher costs. Moreover, hardware security risks are difficult to assess in an ad hoc manner since they are difficult to detect and unpredictable, thus opening the door for inconsistencies across different labs. The process is also inherently subjective – subject matter experts must weigh multiple lines of evidence in assessing whether a part is authentic or counterfeit. As expected, there is often wide disagreement among subject matter experts on the correct test sequence and methods to employ to ensure counterfeits have been detected. Thus, risk assessment standards are crucially needed that provide consistency, incorporate subjective evaluations of perceived risks, and balance a variety of cost and risk concerns with confidence of detection.
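The under- versus over-testing tradeoff can be framed as minimizing expected total cost. The dollar figures below, and the assumption that each added test halves the residual escape probability, are invented for illustration:

```python
# Toy expected-cost model of the under- vs. over-testing balance point;
# all cost figures and the escape-probability curve are illustrative.

def expected_cost(n_tests: int, test_cost: float = 500.0,
                  escape_loss: float = 1_000_000.0,
                  base_escape_prob: float = 0.05) -> float:
    """Testing outlay plus expected downstream loss from an escaped
    counterfeit.  Each added test is assumed to halve the residual
    escape probability (a strong simplifying assumption)."""
    escape_prob = base_escape_prob * 0.5 ** n_tests
    return n_tests * test_cost + escape_prob * escape_loss

# Sweep the test count to find the cost-minimizing number of tests.
best_n = min(range(15), key=expected_cost)
```

Below the minimizer, escapes dominate the cost (under-testing); above it, test time and expense dominate (over-testing).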

An exemplary effort is currently being led by the SAE G-19 Committee to address these concerns. The open, industry-led group is developing a linked collection of voluntary standards based on risk assessment and decision analysis tools, aimed at eliminating counterfeit electronics from the supply chain. The documents within the G-19 family will explicitly outline risk assessment methods and best practices for ensuring that counterfeit electronics do not enter the supply chain and compromise critical infrastructure and applications. Pivotal to the forthcoming standards are two models (Figure 2).

First is a risk assessment model identifying and assessing risk factors associated with suspect counterfeit parts. In it, known risk factors are assessed semi-quantitatively: the probability of a counterfeit threat is based on the source of supply, and the vulnerability and consequence are based on the component and product configuration and criticality. For instance, very low product risk effects would include negligible degradation in functionality or reliability of the product that is not serious enough to cause injury, property damage, or system damage, but which may result in unscheduled maintenance or repair. Critical product risk consequences, on the other hand, involve component failures that may render the product non-functional, where a failure of the product may cause death or a major system loss in a mission-critical application. The model also considers other factors such as redundancy in product design, obsolescence of components, and industry advisories and alerts on parts and suppliers. By assigning quantitative scores to each of the identified risk factors, a composite risk score is calculated, and an electronic part is placed into one of five tiers, ranging from very low risk, where the threat is small and the consequences would only result in a minor inconvenience, to critical risk, where the consequences would be catastrophic. Thus, the model standardizes the normally ad hoc risk assessment process and translates subjective judgments into distinct, intuitive tiers, without requiring the decision maker to make difficult probabilistic assessments, which, as mentioned above, are problematic in dynamic, uncertain environments.

The second model (based on the work of Guin & Tehranipoor, 2013) utilizes the output from the first model to direct the laboratory quality assurance testing of the parts being used in the application. Recognizing that one can never eliminate 100% of risk, and that the cost to test for all of the known counterfeit-related defects could exceed the value of performing the tests, this second model balances the residual risks with the tradeoffs of cost and confidence of detecting counterfeits. This allows the organizational risk tolerance, referenced in NIST standards, to come into play. For example, in critical risk applications, the prime objective is to achieve the maximum test coverage irrespective of the test cost and time, as failures there could impact human life and safety or mission-critical systems. On the other hand, low and very low risk applications have a different overall objective; for these situations, test time and cost are more important than achieving maximum test coverage. For medium and high risk applications, a balance is struck, wherein a higher confidence threshold is achieved by requiring additional tests and raising the cost limits, though not to the extent called for in critical applications. The algorithm identifies known counterfeit-related defects that are not covered by the recommended test sequence or do not provide the desired confidence level for detection of counterfeits, and recommends specific testing with associated counterfeit defect coverage. If a new counterfeit method emerges, or advances are made in detection technology, these parameters can be dynamically updated – pointing to the necessarily adaptive approach that must be undertaken.
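In the same spirit, the test-selection step can be sketched as a greedy loop over candidate tests. The tests, coverage figures, independence assumption, and per-tier policies below are invented; the actual selection algorithm is defined by Guin and Tehranipoor (2013):

```python
# Illustrative test-selection loop driven by the risk tier; candidate
# tests, coverages, costs, and tier policies are invented examples.

# Candidate tests: (name, counterfeit-defect coverage gained, cost in $)
CANDIDATE_TESTS = [
    ("visual inspection", 0.30, 100),
    ("X-ray imaging",     0.25, 400),
    ("decapsulation",     0.20, 900),
    ("electrical test",   0.35, 600),
]

# Per-tier policy: (required coverage, cost ceiling; None = unlimited).
POLICY = {
    "critical": (0.99, None),   # maximize coverage regardless of cost
    "high":     (0.80, 1500),
    "low":      (0.40, 300),
}

def select_tests(tier: str):
    target, budget = POLICY[tier]
    chosen, coverage, spent = [], 0.0, 0
    # Greedily take the most coverage per dollar until target or budget hit.
    for name, cov, cost in sorted(CANDIDATE_TESTS,
                                  key=lambda t: t[1] / t[2], reverse=True):
        if budget is not None and spent + cost > budget:
            continue
        chosen.append(name)
        # Assume independent tests: residual escapes shrink multiplicatively.
        coverage = 1 - (1 - coverage) * (1 - cov)
        spent += cost
        if coverage >= target:
            break
    return chosen, round(coverage, 3), spent
```

Updating CANDIDATE_TESTS when a new counterfeiting method or detection technology appears is exactly the kind of dynamic parameter update the model anticipates.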

These two models therefore work in tandem to bridge the gap between strictly empirically derived technical data and qualitative subjective judgments, combining engineering data with human values (e.g., risk tolerance) into a semi-quantitative, anticipatory framework. To illustrate this, one can imagine a scenario in which the standard consisted of only the first model. This would represent the status quo – risks could be assessed and ranked against one another, but that would be the limit of the standard's usefulness. Decision makers would still be left to make ad hoc risk management decisions without a structured means by which to select mitigation plans (i.e., testing strategies). Conversely, what if the standard only consisted of the second model? In this case, decision makers would possess a decision support tool to aid in the design of testing strategies but would be left without a standardized and repeatable risk assessment methodology. One cannot effectively mitigate a risk that isn't first identified and measured. This would also open the door to inconsistency in assessments over time. Without being assessed on an equal footing, the risk calculated for one batch of parts may not be comparable to another risk assessment, making the effective allocation of scarce resources (e.g., labor, equipment usage, etc.) impossible. Both situations are far from ideal; in either case the decision maker would lack the necessary inputs and tools to fully make a risk-informed decision. Thus, this framework integrates the best of both approaches, allowing users to identify the risks based on what they know, and implement controls balanced with what is technically feasible and what they are willing to tolerate.


A Path Forward: Cyber Resilience

The cyber domain is continually reinvented and is uniquely future-oriented. Cyber threats are adaptive and dynamic in that adversaries continuously develop increasingly sophisticated attacks to overcome increasingly sophisticated defenses. Traditional static assessments will quickly become obsolete as new technologies and methods become available. Unlike retrospective domains such as insurance, which has relevant and readily available historical data on a wide variety of potential harms, no such data exist for cyber threats. The rapid pace of evolution and the unprecedented nature and extent of cyber threats make it infeasible to enumerate the potential hazards, much less estimate reliable probabilities of occurrence and magnitudes of consequence. Given the irregular nature of cyber threats, even the best available data are subject to considerable disagreement and uncertainty. Thus, a comprehensive approach to protecting the nation's critical infrastructure, economy, and well-being must be risk-based – not risk-exclusive. The framework must provide robust tools to assess risks, and also guide policy actions and adaptively manage the process through monitoring and feedback. Guidance in response to the Cybersecurity Framework called for in the President's Executive Order must therefore transcend traditional risk analysis and tap into both technical and behavioral disciplines and tools to ultimately guide repeatable and accountable risk-informed decisions.

A challenge to successful implementation is to view the cybersecurity problem at a systems level. All aspects of cybersecurity, including hardware, software, and firmware, must be coordinated since the cyber domain represents coupled cyber-physical systems and assets. Moreover, the system boundary can be expanded to include the users of cyber-physical systems and the societies and economies in which these systems exist and on which they depend. Within an organization, cybersecurity is the concern of everyone, including Information Technology, Engineering, Quality Control, Management, and other personnel. At a broader social scale, coordination between government, industry, and academia is also crucial so that the full range of stakeholders is engaged and their respective capabilities to address the challenges are leveraged toward a common goal.

This multi-domain approach ultimately points towards an emerging concept of cyber resilience. In contrast to the definition of risk put forth by Kaplan and Garrick, i.e., the likelihood of an adverse event and the magnitude of the resulting damage, resilience is focused on the ability to withstand and recover quickly from threats, which may be known or unknown. Thus, whereas risk involves the identification and assessment of threats acting on or within a system, resilience can be thought of as a property of the system itself. Managing for resilience requires ensuring a system's ability to plan and prepare for a hazard, and then absorb, recover, and adapt to the hazard. This, coupled with the systems view, in which cyber systems are defined as containing components across the physical, information, cognitive, and social environments in which the system exists, is the basis for cyber resilience. Recent efforts to create metrics for cyber resilience (Linkov et al., 2013) have used this approach to frame the cybersecurity problem with the recognition that cybersecurity is inextricably woven into all facets of modern life. The resulting set of cyber resilience metrics can be integrated with decision analytic frameworks to compare potential cyber system designs or to prioritize upgrades and maintenance for existing cyber systems.
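A toy version of such a metric might score a stage-by-domain matrix (the four event-management stages crossed with the four system domains) and average the cells; the designs and cell scores below are invented:

```python
# Toy scoring over the stage-by-domain resilience matrix of Linkov et al.
# (2013); the two "designs" and their 0-1 cell scores are invented.

STAGES = ["plan/prepare", "absorb", "recover", "adapt"]
DOMAINS = ["physical", "information", "cognitive", "social"]

def resilience_score(matrix: dict) -> float:
    """Unweighted mean over all sixteen stage-by-domain cells."""
    return sum(matrix.values()) / len(matrix)

# Two hypothetical system designs, each scored 0-1 in every cell.
design_a = {(s, d): 0.7 for s in STAGES for d in DOMAINS}
design_b = {(s, d): 0.5 for s in STAGES for d in DOMAINS}
design_b[("recover", "information")] = 0.9   # e.g., strong data backups

better = max(("design_a", "design_b"),
             key={"design_a": resilience_score(design_a),
                  "design_b": resilience_score(design_b)}.get)
```

In practice the cells would be weighted by stakeholder priorities rather than averaged uniformly, which is where the decision analytic integration described above comes in.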


References:

(1) Executive Order 13636 (2013) Improving Critical Infrastructure Cybersecurity. http://www.gpo.gov/fdsys/pkg/FR-2013-02-19/pdf/2013-03915.pdf

(2) Kaplan S, Garrick BJ (1981) On the Quantitative Definition of Risk. Risk Analysis 1(1): 11-27.

(3) Greenberg M, Haas C, Cox Jr. A, Lowrie K, McComas K, North W (2012) Ten Most Important Accomplishments in Risk Analysis, 1980-2010. Risk Analysis 32(5): 771-781.

(4) National Institute of Standards and Technology (2011) Managing Information Security Risk: Organization, Mission and Information System View. NIST Special Publication 800-39.

(5) National Institute of Standards and Technology (2012) Guide for Conducting Risk Assessments. NIST Special Publication 800-30, Revision 1.

(6) Mell P, Scarfone K, Romanosky S (2007) A Complete Guide to the Common Vulnerability Scoring System Version 2.0. http://www.first.org/cvss/cvss-guide.pdf

(7) ISO (2009) Risk Management – Risk Assessment Techniques. IEC/FDIS 31010.

(8) National Institute of Standards and Technology (2013) Preliminary Cybersecurity Framework. http://www.nist.gov/itl/upload/preliminary-cybersecurity-framework.pdf

(9) Sood B, Das D, Pecht M (2011) Screening for counterfeit electronic parts. Journal of Materials Science: Materials in Electronics 22(10): 1511-1522.

(10) US Department of Commerce (2010) Defense Industrial Base Assessment: Counterfeit Electronics.

(11) Guin U, Tehranipoor M (2013) On Selection of Counterfeit IC Detection Methods. IEEE North Atlantic Test Workshop (NATW), Wakefield, MA, USA.

(12) Linkov I, Eisenberg DA, Plourde K, Seager TP, Allen J, Kott A (2013) Resilience Metrics for Cyber Systems. Environment Systems & Decisions 33(4). DOI: 10.1007/s10669-013-9485-y.


Author Biographies:

Zachary A. Collier is a Decision Analyst with the US Army Engineer Research and Development Center, where his research focuses on building risk-based decision models. He holds a Master of Engineering Management from Duke University and is a member of the Society for Risk Analysis. [email protected]

Dan DiMase is chairman of the SAE G-19 Test Laboratory Standards Development committee and is a recognized expert in counterfeit parts prevention, compliance and quality management, and strategic planning. He holds an MBA from Northeastern University. [email protected]

Steve Walters is chairman of the Risk Characterization Subgroup, part of the SAE G-19 Test Laboratory Standards Development committee. His research interests include reliability engineering and risk analysis. He holds a Bachelor of Science degree in Electrical Engineering from the University of South Florida. [email protected]

Mark (Mohammad) Tehranipoor is the F. L. Castleman Associate Professor in Engineering Innovation at the University of Connecticut and director of the Center for Hardware Assurance Security and Engineering (CHASE). He received his Ph.D. in Electrical Engineering from the University of Texas. His research includes counterfeit electronics detection and prevention, hardware security and trust, and VLSI design and testing. He is a Senior Member of the IEEE and ACM. [email protected]

James H. Lambert is a Research Professor in the Department of Systems and Information Engineering at the University of Virginia. He received his Ph.D. in Civil Engineering from the University of Virginia and studies risk and reliability in engineering systems. He is a senior member of the IEEE and a Fellow of the Society for Risk Analysis. [email protected]

Igor Linkov is the Risk and Decision Science Focus Area Lead with the US Army Engineer Research and Development Center. He received his Ph.D. from the University of Pittsburgh, and his research includes risk analysis, decision analysis, and resilience. He is a Fellow of the Society for Risk Analysis. [email protected]

Acknowledgments: The authors would like to thank Celia Merzbacher, Christine Hines, and Jeffrey Keisler for their helpful comments on the manuscript. Permission was granted by the USACE Chief of Engineers to publish this material. The views and opinions expressed in this paper are those of the individual authors and not those of the US Army or other sponsor organizations.


Fig. 1. Comparison of authentic and suspect counterfeit inductor.


Fig. 2. Overview of SAE G-19 Hardware Security Risk Assessment Methodology.

[Figure 2 shows incoming electronic parts of unknown authenticity entering a risk assessment that combines component risk, product risk, supplier risk, and other factors (test cost, desired confidence level, observed defect type, part type), informed by technical data and subjective judgment. Decision making then produces a testing strategy: a set of tests (Ti, Tj, Tk, Tp, …) evaluated for counterfeit defect coverage and detectability.]
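Purely as an illustration of the kind of logic Fig. 2 depicts (not the committee's actual methodology), the following Python sketch turns hypothetical component, product, and supplier risk factors into a risk score, then greedily selects tests to reach a risk-dependent coverage target while accounting for test cost. All weights, test names, coverages, and costs are invented.

```python
# Hypothetical sketch of a risk-driven testing strategy selection.
# Factor weights, tests, coverages, and costs are illustrative only.

# Risk factors for one incoming part, scored 0-1 (assumed inputs).
part_risk = {"component": 0.7, "product": 0.5, "supplier": 0.9}
FACTOR_WEIGHTS = {"component": 0.4, "product": 0.2, "supplier": 0.4}

# Candidate tests: (counterfeit defect coverage gained, relative cost).
TESTS = {
    "visual_inspection": (0.30, 1.0),
    "xray": (0.20, 3.0),
    "electrical_test": (0.35, 5.0),
    "decapsulation": (0.25, 8.0),
}

def risk_score(factors: dict) -> float:
    """Weighted sum of the risk factors."""
    return sum(FACTOR_WEIGHTS[k] * factors[k] for k in factors)

def select_tests(desired_coverage: float, tests: dict):
    """Greedily add tests by best coverage-per-cost until the target is met.
    (Real coverages are not simply additive; this is a simplification.)"""
    chosen, coverage = [], 0.0
    for name, (cov, cost) in sorted(tests.items(), key=lambda t: t[1][1] / t[1][0]):
        if coverage >= desired_coverage:
            break
        chosen.append(name)
        coverage += cov
    return chosen, coverage

# Higher part risk demands a higher desired confidence (coverage) level.
desired = 0.5 + 0.4 * risk_score(part_risk)
strategy, achieved = select_tests(desired, TESTS)
print(strategy, round(achieved, 2))
```

A real implementation would draw the coverage and detectability figures from test laboratory data and would treat the "other factors" in Fig. 2 (observed defect type, part type) as inputs to both the risk score and the candidate test set.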


Author Contact Information

Zachary A. Collier. Phone: 601-634-7570. Address: ERDC EP-R, 3909 Halls Ferry Road, Vicksburg, MS 39180. Email: [email protected]

Daniel DiMase. Phone: 401-398-2343. Address: 45 Balsam Drive, East Greenwich, RI 02878. Email: [email protected]

Steve Walters. Phone: 727-539-2730. Address: Clearwater Failure Analysis Lab, 13350 US Highway 19 N., Clearwater, FL 33764. Email: [email protected]

Mohammad Tehranipoor. Phone: 860-486-3471. Address: Electrical and Computer Engineering, University of Connecticut, 371 Fairfield Way, Unit 4157, Storrs, CT 06269-4157. Email: [email protected]

James H. Lambert. Phone: 434-982-2072. Address: University of Virginia, 112 Olsson Hall, Charlottesville, VA 22904. Email: [email protected]

Igor Linkov. Phone: 978-318-8197. Address: ERDC EP-R, 696 Virginia Road, Concord, MA 01742. Email: [email protected]
