
The Pennsylvania State University

The Graduate School

College of Information Sciences and Technology

TOWARDS AN ENTERPRISE LEVEL MEASURE OF SECURITY

A Dissertation in

Information Science and Technology

by

Robert L. Marchant

Submitted in Partial Fulfillment

of the Requirements

for the Degree of

Doctor of Philosophy

December 2013

The dissertation of Robert L. Marchant was reviewed and approved* by the following:

William McGill

Assistant Professor of Information Sciences and Technology

Dissertation Advisor

Chair of Committee

Frederico Fonseca

Associate Professor of Information Sciences and Technology

John Bagby

Professor of Information Sciences and Technology

Chia-Jung Chang

Assistant Professor of Industrial and Manufacturing Engineering

Peter Forster

Department Graduate Program Chair

*Signatures are on file in the Graduate School


ABSTRACT

Vulnerabilities of Information Technology (IT) infrastructure have grown at least as fast as the sophistication and complexity of the technology that is the cornerstone of our IT enterprises. Despite massively increased funding for research, development, and deployment of Information Assurance (IA) defenses, the damage caused by malicious attackers to the IT infrastructure is growing at an accelerating rate. An entire industry, the Information Assurance (IA) industry, has grown up and exists solely to provide methods and products for protecting IT assets and the data and services those assets provide and maintain. For the manager, the task of evaluating the security of the IA industry's products and services is complicated by the almost complete lack of universally recognized, reliable, and scalable methods to measure the "security" of those assets at the "enterprise" level and to determine the cost of protecting them. What information technology enterprise leadership needs is a quantitative security risk management methodology that is as effective for the enterprise manager as the traditional quantitative risk management methodologies of non-IT program management. This dissertation describes, from the perspective of IA professionals, how probable it is that the issue stated above is perceived as real and what (if any) obstacles stand in the way of achieving this goal.


TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS

Chapter 1 INTRODUCTION
    Problem Statement
    Research Approach
    Thesis Structure

Chapter 2 BACKGROUND
    What is Information Systems Security (INFOSEC)?
    The Hard Problem
    Is the problem real? And do I really need metrics?

Chapter 3 TECHNOLOGY
    The Systems Engineering Life Cycles
    Security Engineering
    Risk Management
    IT-Related Risk

Chapter 4 INFORMATION
    Systems Security Plan
    Risk Assessment Information
    Security Controls
    FISMA Metrics

Chapter 5 PEOPLE

Chapter 6 RESEARCH
    Observations
    Collaborations
    Questionnaire
    Classified Advertisement Survey

Chapter 7 CONCLUSIONS

Chapter 8 BIBLIOGRAPHY


LIST OF FIGURES

Figure 3.1: Life Cycle Models – from INCOSE Systems Engineering Handbook.
Figure 3.2: Six Systems Life Cycle Model Steps.
Figure 3.3: Typical gating reviews.
Figure 3.4: Overlapping phases showing feedback.
Figure 3.5: A Typical Risk Management Process Framework and Cycle.
Figure 3.6: The Risk Management Framework.
Figure 4.1: CVSS Metrics Groups (from FIRST.org website).
Figure 4.2: Example of typical NIST SP 800-53 security control.
Figure 6.1: Tiers.
Figure 6.2: My Categorization of RMF Stakeholders.
Figure 6.3: Comparison of Models.
Figure 6.4: Sample Scenario.
Figure 6.5: Comparison of the SDLC and RMF [Marchant 2013].
Figure 6.6: Contents of questionnaire.
Figure 6.7: Example of Talent Acquisition Posting.
Figure 6.8: Talent Acquisition Review Results.


LIST OF TABLES

Table 2.1: Partial list of NIST Special Publications Series.
Table 2.2: ISO/IEC 27000 family of standards [ISO/IEC 27000 Series].
Table 2.3: IA related directives.
Table 3.1: Criteria for determining probability of occurrence, Pf.
Table 3.2: Criteria for determining consequence of failure, Cf.
Table 3.3: Sample arbitrary risk level assignment.
Table 3.4: Assessment tasks from NIST SP 800-30 (Chapter 3).
Table 3.5: FAIR basic risk assessment stages.
Table 3.6: Assessment Phases from OCTAVE.
Table 3.7: Comparison of NIST, FAIR, and OCTAVE assessments.
Table 4.1: SSP Template extracted from NIST SP 800-18.
Table 4.2: Typical vulnerability database fields.
Table 4.3: Typical vulnerability database fields.
Table 4.4: SCAP specification suites.
Table 4.5: FIPS 199 Security Objectives.
Table 4.6: NIST SP 800-53 control families.
Table 4.7: The AU family from NIST SP 800-53.
Table 6.1: Categories of IA professionals.
Table 6.2: Job answers from questionnaire response.
Table 6.3: Years of experience and Certifications.
Table 6.4: Response to questions.


ACKNOWLEDGEMENTS

I come to the close of my association with the College of Information Sciences

and Technology and Pennsylvania State University with years of wonderful memories.

The faculty and the staff at IST have been family; people who I respected immediately

and grew to love. I will always look back on my time at PSU and smile. IST is an

incredible school with incredible people. I thank you all for being the wonderful people

you are.

I unquestionably want to acknowledge the support I received from my dissertation

committee. Although all four members helped me more than I deserved, I most

remember each for somewhat unique support. Dr. Chang provided, through her graduate

work and her references, guidance that helped me formulate my categorizations of the

professionals involved in IT risk analysis and provided the basis for many of my

arguments related to the need for engineers to assist in quantitative analysis. Dr. Bagby

always brought and continues to bring enriching and enlightening perspectives that I

frequently missed or unjustifiably discounted. Dr. Fonseca always provided just the right

“gentle nudge” to adjust my thinking towards more constructive and productive avenues

of research. Dr. McGill’s supportive attitude and wealth of relevant experience helped

me through periods of the long process towards defense when I was ready to give up. I

do not believe any graduate student anywhere could have had a better committee. I thank

you all and I will miss you.

No one finishes a dissertation without owing thanks to a virtual army of

colleagues, friends and family. I have had more than my fair share of support from all of

my close associates; thank you all.

But no one was more supportive, more understanding, and more patient than my wife. Thank you, Wanda, for always being there and always being supportive.

Chapter 1

INTRODUCTION

Sometimes the biggest problem with working a problem is knowing what the

problem is – Unknown

I'm not sure exactly when, but about 25 years ago I was the team lead of a group of requirements engineers working on a suite of software requirements documents. It was a small group (7 or 8 engineers at that time). We had just endured a 30-minute argument between two of our team members that was finally resolved when they realized they had each interpreted the "problem" differently. I learned a very valuable lesson that day, one that I had been taught in a leadership training session but, until this argument, never truly understood the significance of. Without really thinking about it, I blurted out, "sometimes the biggest problem with working a problem is knowing what the problem is". I don't know where I had heard this concept; I certainly never heard it stated in this form. It became a motto for our small group, though, and two years later, when the group had grown to over 20 engineers, this motto appeared at the entrance to our offices.

I don't know whom to credit for this intuitively obvious management principle, but it certainly is a good one, and following this principle I will begin this thesis by defining the problem. I will then discuss the research approach and end with an overview of the dissertation content.


Problem Statement

For many reasons, today's information technology executives (government and industry) appear unable to answer the most fundamental and most disturbing of all security questions any IT executive could be asked: "how secure is your information technology enterprise?" Most executives can give a weak, qualitative answer, "We're pretty secure" or "We're ok for now", but putting a more quantitative, "dollarized" value on security is most often out of reach. What the executive would like is the ability to make a statement similar to what a program manager might state about the risk of completing a development program, for example: "my program has a remaining budget of 10 million dollars; we have recognized the potential for 1.5 million dollars in risk; I have budgeted 0.5 million dollars to mitigate this risk to an overall programmatic risk of 0.3 million dollars". One reason for this difficulty is how hard it is to empirically measure IT enterprise security. Although much progress has been made in the empirical measurement of discrete parts of IT systems during development, these measurements use conceptual schemas developed specifically to aid development during the applicable development phase. The models used are often development models that, for example, aid in the selection of an architecture or help determine a specific hardware or network device component. When a new system or component is added to an enterprise, adding the new system's metrics to the enterprise metrics will be difficult unless the statistics are normalized.
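To make the contrast concrete, the arithmetic behind the program manager's "dollarized" statement can be sketched in a few lines of Python. This is a minimal illustration only; the risk items, probabilities, and dollar figures are hypothetical values chosen to reproduce the numbers in the example above, not a method proposed by this thesis.

    from dataclasses import dataclass

    @dataclass
    class RiskItem:
        name: str
        probability: float            # estimated likelihood the risk occurs
        consequence: float            # dollar impact if it does
        mitigated_probability: float  # likelihood after the funded mitigation

        def exposure(self, mitigated: bool = False) -> float:
            # Expected loss: probability times dollar consequence.
            p = self.mitigated_probability if mitigated else self.probability
            return p * self.consequence

    # Hypothetical risk register for a program with a $10M remaining budget.
    risks = [
        RiskItem("schedule slip",       0.50, 2_000_000, 0.10),
        RiskItem("integration failure", 0.25, 2_000_000, 0.05),
    ]
    mitigation_budget = 500_000

    identified = sum(r.exposure() for r in risks)              # $1,500,000
    residual = sum(r.exposure(mitigated=True) for r in risks)  # $300,000
    print(f"identified ${identified:,.0f}, mitigation budget "
          f"${mitigation_budget:,.0f}, residual ${residual:,.0f}")

The point of the sketch is not the arithmetic, which is trivial, but the inputs: every figure is a defensible, auditable estimate, and it is precisely these inputs that the IT security executive usually cannot produce.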

In November 2005, the INFOSEC Research Council published its 2005 Hard Problems List [INFOSEC 2005]. Number eight in the list of the eight hardest problems that IT has to solve is "Enterprise Level Security Metrics". Since the publication of this list, it appears that little has been accomplished towards the establishment of enterprise-level security metrics. True end-to-end metrics have been elusive, and little progress has been made to define a methodology for the maintenance of a life-cycle-long ontology.

Even were the executive able to quantify the security of an enterprise, the

dilemma is exacerbated by the difficulty of determining cost/benefit ratios (for the federal

systems executive, determining cost/benefit is a legal responsibility). As stated in An

Introduction to Computer Security: The NIST Handbook [NIST SP 800-12] “The costs

and benefits of security should be carefully examined in both monetary and nonmonetary

terms to ensure that the cost of controls does not exceed expected benefits. Security

should be appropriate and proportionate to the value of and degree of reliance on the

computer systems and to the severity, probability and extent of potential harm.”
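One common way to operationalize this guidance is an annualized loss expectancy (ALE) calculation: the expected yearly loss from a threat is the single loss expectancy multiplied by the annualized rate of occurrence, and a control is justified only if the loss it avoids exceeds its cost. The sketch below uses hypothetical rates and dollar values; it illustrates the cost/benefit test described in SP 800-12, not a method the handbook mandates.

    def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
        # Annualized loss expectancy: expected dollar loss per year.
        return single_loss_expectancy * annual_rate_of_occurrence

    ale_before = ale(200_000, 0.30)  # expected yearly loss without the control
    ale_after = ale(200_000, 0.05)   # expected yearly loss with the control
    annual_control_cost = 25_000     # hypothetical yearly cost of the control

    net_benefit = (ale_before - ale_after) - annual_control_cost
    print(f"ALE before: ${ale_before:,.0f}  ALE after: ${ale_after:,.0f}  "
          f"net benefit: ${net_benefit:,.0f}")
    # The control is worthwhile only when net_benefit is positive, i.e. when
    # the cost of the control does not exceed its expected benefit.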

This apparent lack of progress is one of the motivations for this thesis. I wanted to know whether so little progress has been made in defining meaningful enterprise IT security metrics because little effort has been expended in defining the process necessary for the creation of meaningful methodologies to capture and maintain an ontology for enterprise-level security metrics.

I was also intrigued by the hard problem statement itself. It is indeed a hard problem, but so what? Managers need metrics to determine how to best allocate their limited resources, but must this be a definitive, quantitative, overall measure? Aren't these really two different problems? I wanted to know more about what is meant by an "enterprise level" security metric and why an "enterprise level" security metric is needed. I was not sure if determining a security cost/benefit ratio requires an overall quantitative measure of security.

I was interested in knowing what the difference is (to this community) between qualitative and quantitative. Does qualitative mean the metric itself is qualitative (e.g. low, medium, or high), or is it the method used to create a quantitative measure (e.g. dollar based) that is qualitative?

Research Approach

I have 35 years of complex systems engineering experience (e.g. satellite systems, weapons control systems, command and control systems, large IT networks), with at least 20 of those years directly involved in engineering secure solutions using information technology. Since August of 2009, I have worked for Sotera Defense Solutions (see http://www.soteradefense.com), a mid-sized government contractor (approximately 1500 employees). Sotera's areas of expertise include data fusion and analytics, cyber network operations, mission IT solutions, tactical ISR, and intelligence analysis and operations; its customers include organizations in the Intelligence Community, the Department of Defense, the Department of Homeland Security, and federal law enforcement. During this time, I have worked as a consultant to several Sotera customers as an Information Systems Security Engineer. I am a Certified Information Systems Security Professional (CISSP) and an Information Systems Security Engineering Professional (ISSEP).

I am one of approximately 12 Technical Fellows at Sotera and, as such, am a member of the Sotera Technical Council, the technical advisory board for the corporation. The Sotera Technical Council is charged with the following responsibilities:

- Serving as the technical advisory board for the company, helping to identify capabilities within and outside of Sotera to support our customers' missions and needs
- Reviewing and supporting Internal Research and Development (IRaD) initiatives to support development of Intellectual Property (IP) for key proposals
- Serving as domain experts in their fields of expertise for the broader company
- Representing Sotera in professional outreach initiatives
- Attending and actively participating in conferences, seminars, and government exchanges
- Supporting corporate key bids through identification and capture of differentiated solutions
- Representing Sotera in community outreach activities that promote science and engineering

My responsibilities to the technical council are to provide subject matter expertise

in the areas of Systems Engineering and Cyber Security.

I am a member of the International Council on Systems Engineering (INCOSE –

see http://www.incose.org), and am an active member of the security engineering

working group of that organization.

As a benefit of the above, I have access to large groups of experienced IT users, professionals, and security professionals. In deciding on a dissertation thesis, I was interested in finding a research effort where I could use an ethnographic approach to take advantage of this large group. My first step, though, was (of course) to conduct a short (six-month-long) review of the literature and an assessment of the field. This quick review of the state of the art, conducted prior to starting this thesis, indicated that although ontologies have been proposed and explored, the focus of these efforts is on validation and enhancement of a proposed ontology, not on determining a meaningful methodology (process) for creating and using these ontologies to build an effective model. I also wondered whether the apparent lack of progress on ontologies was real, or whether the people involved in this domain simply don't use the term ontology to describe what they build (after all, whatever form is used to capture and maintain knowledge is sufficient, regardless of what it is called). For example, the definition of ontology to the computer scientist is different from the definition of ontology to the philosopher. The definition of ontology adopted for this thesis is based on Gruber's [Gruber 1993] definition that "An ontology is an explicit specification of a conceptualization", where a conceptualization is an abstract representation of the real world. As an example, in Computer Science, an ontology is a data model that represents a set of concepts within a domain and the relationships between those concepts. To avoid confusion, I will attempt to refrain from using the term ontology.
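To make the computer-science sense of the term concrete before setting it aside, the sketch below expresses a tiny security-domain conceptualization as a data model: concepts (assets, vulnerabilities, controls) and the typed relationships between them. The classes, the relations, and the placeholder CVE identifier are illustrative assumptions, not a proposed ontology.

    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        name: str
        value: float  # dollar value, where one can be assigned

    @dataclass
    class Vulnerability:
        cve_id: str  # placeholder identifier, not a real CVE entry
        affects: list[Asset] = field(default_factory=list)

    @dataclass
    class Control:
        name: str
        mitigates: list[Vulnerability] = field(default_factory=list)

    # One concept of each kind, wired together by typed relationships.
    web_server = Asset("public web server", 150_000)
    vuln = Vulnerability("CVE-XXXX-YYYY", affects=[web_server])
    segmentation = Control("network segmentation", mitigates=[vuln])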

My initial review indicated a hot debate in progress. As part of my review, I discussed the topics of several conferences that I and my peers attended (for example, the Information Assurance Symposium, BlackHat, DEFCON, and INFOSEC). I was intrigued by the number of times topics related to problems with the security risk management process arose, particularly risk assessment. The gist of these sessions indicated that there was an apparent problem with whether and how security metrics were captured and maintained, mostly surrounding the issue of "value of assets", and perhaps related to the skill set of the professionals assisting in risk management or to the definition of the metrics to be collected.

At the end of my initial review, I was convinced that somehow, somewhere, a technically competent set of people, tools, and information was being misused or going unused. I wanted to conduct a non-biased inquiry to find the cause of the breakdown. Social informatics methods make no initial assumptions and are intentionally non-biased. I believed a social informatics approach was best suited to researching this issue because:

- There is a known problem, and social informatics is effective when "a problem" is known.
- This is clearly a socio-technical ensemble.
- It involves hardware, software, systems, techniques, and information.
- It involves people, organizations, and institutions.
- The problem area shows evidence of misunderstanding (failure to communicate).

I decided to use a triangle technique, first defining the applicable processes (technology), information, and people as the vertices of the triangle and then observing the interactions between these vertices. Although my intent was to be non-biased (to observe), as I am immersed in this field, I entered the research with some hypotheses. They were:

H1: Holistic quantitative information systems enterprise security metrics are not available because executives' and managers' perception is that they are not needed. Essentially, managers believe qualitative measures are sufficient.

H2: Information Systems Security Engineers (ISSEs) use methods that can produce reasonable estimates of federal information systems enterprise security. Further, ISSEs create reasonable models during the execution of their functions that, if maintained, could provide these quantitative metrics.

H3: The existing Risk Management Framework used by Information Assurance professionals may result in a lack of the skills and motivation needed to maintain an ontology (or whatever IA professionals call an ontology) that could be used to provide reasonable quantitative estimates of federal information systems enterprise security metrics.

Although a large portion of this research is based on observation, a part of this

research was conducted using human subjects responding to an e-mail based

questionnaire and by evaluation of talent acquisition solicitations. The questions used for

the human subjects have introductory paragraphs that provide enough information to

allow a more focused answer to each question. The questions specifically address

federal systems, but as discussed in chapter 2, are applicable to all enterprises (federal,

academic, and commercial).


Question 1: Do you agree with the statement that government decision makers

are required to perform cost/benefit analysis?

Question 2: What methods are you aware of that can be used to evaluate the costs

and benefits associated with mitigating systems vulnerabilities (e.g. attack tree,

OCTAVE)? What skills are needed to perform this type of analysis? If you have used

any of these methods, during what phase of the systems development (or C&A process)

did you use the methods?

Question 3: What methods do you recommend for generating cost/benefits

metrics at the enterprise level?

In addition to seeking answers to the above, I asked the participants to provide demographic information. In reality, it is the demographics that are more interesting to my research, as the demographics indicate the skill set and temperament of the professionals being employed during each phase of a system's life cycle.


Thesis Structure

This thesis is arranged into three major sections (the Introduction provided in Chapter 1, the Background provided in Chapters 2-5, and the Research and Conclusions provided in Chapters 6 and 7) as follows:

Chapter 1: Discusses the problem, the research approach, and the structure of the thesis.

Chapter 2: Provides a general background and covers some general definitions and discussion of relevant frameworks and regulations.

Chapter 3: Discusses the technology used in risk assessment.

Chapter 4: Provides an overview of the information used and created in risk assessment.

Chapter 5: Provides a short review of the people involved in risk assessment.

Chapter 6: Reviews and discusses the research conducted.

Chapter 7: Presents conclusions and potential future research.

Chapter 8: Bibliography.


Chapter 2

BACKGROUND

Security is, I would say, our top priority because for all the exciting things you

will be able to do with computers - organizing your lives, staying in touch with people,

being creative - if we don't solve these security problems, then people will hold back. -

Bill Gates

As its name implies, this brief chapter is intended to provide some basic

background that “sets the stage” for the chapters that follow. My intent with this chapter

is to provide the reader with:

- A definition of the domain of information security, and the observation that information security is focused on managing risk.
- An appreciation that the problem of enterprise metrics is hard and is based on a "weak hypothesis".
- A discussion of risk management frameworks and why I am using the National Institute of Standards and Technology's Special Publication 800 series as my reference framework.
- A recap of the U.S. regulations that motivate federal managers to comply with the NIST SP 800 series.


What is Information Systems Security (INFOSEC)?

As defined in the National Institute of Standards and Technology (NIST)

Glossary of Key Information Security Terms [NIST IR-7298] information systems

security (often referred to as INFOSEC) is “Protection of information systems against

unauthorized access to or modification of information, whether in storage, processing, or

transit, and against the denial of service to authorized users, including those measures

necessary to detect, document, and counter such threats.”

Often confused with INFOSEC, information security (sometimes called IT security or computer security) is "protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide—

- integrity, which means guarding against improper information modification or destruction, and includes ensuring information nonrepudiation and authenticity;
- confidentiality, which means preserving authorized restrictions on access and disclosure, including means for protecting personal privacy and proprietary information;
- and availability, which means ensuring timely and reliable access to and use of information."

Information Assurance is "Measures that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation. These measures include providing for restoration of information systems."

INFOSEC, then, comprises information security (the protection) and information assurance (the methodology). In this thesis, I will attempt to use Information Assurance when describing the holistic practices (the actions people perform) related to INFOSEC, and computer security or information security when describing the specific actions taken to provide INFOSEC (e.g. application of specific security controls). To those involved in INFOSEC, all three terms are often used interchangeably.

Information Assurance (IA) is a field of practice focused on managing the risks associated with storing, processing, and transmitting information. Risks to our information's integrity, confidentiality, and availability arise from inadequate business processes, lack of policy and regulatory compliance, personnel misconduct, espionage, and natural or man-made disasters. IA is also concerned with governance (e.g. compliance, privacy, audits, business continuity, and disaster management and recovery). An interdisciplinary field, IA involves expertise in computer science, criminal justice, management science, systems engineering, security engineering, psychology, and forensics. IA procedures are proactive, intended to provide continuous protection by constantly evaluating the risk environment and updating the protection mechanisms as close to real time as possible.

Information Assurance addresses challenges that are ever present and organizationally agnostic. IA professionals often have both diverse and specialized skills. Ranging from technician to manager, the job skills of IA professionals are often nebulous and confusing to business managers and senior leadership. Both corporate executives and government officials around the world have identified countering information security threats as one of their highest priorities, yet they are forced to depend on a small pool of information assurance (IA) professionals who have both the specific technical skills and the domain-specific business knowledge needed to address what are increasingly pervasive and complex threats. The data used by IA professionals ranges from highly specific (quantifiable) vulnerability entries (most often presented with specific details on the impact of the vulnerability and specific guidance on how to counter or mitigate it) to qualitative data captured by interviewing business leaders and subject matter experts in order to capture their "estimates".


The Hard Problem

Vilhelm Verendel [Verendel 2009] defines the term "weak hypothesis". A weak hypothesis is one that "lacks clear tests of its descriptive correctness", where descriptive correctness is as defined by Karl Popper [Popper 1959]. For example, a weak hypothesis may result from a lack of quantified data, insufficient empirical testing, unclear evidence, or insufficient knowledge. Regardless of the cause, a weak hypothesis results when there is difficulty in quantifying data or performing empirical tests, or when we lack the knowledge to clarify or understand the evidence captured. In [Verendel 2009], Verendel determines, as a result of surveying 140 documents (peer-reviewed research, technical reports, and extracts from standards), that quantified security is a weak hypothesis. Although this determination is problematic for a scientist or technician, to the engineer or manager it simply means they must rely on more qualitative methods.

The INFOSEC Research Council (IRC) is a government-sponsored voluntary organization (see www.infosec-research.org). Informally chartered, the IRC focuses on increasing the efficiency and effectiveness of U.S. Government Information Security (INFOSEC) research. To assist in this effort, the IRC maintains and periodically publishes a list of high-value research targets called the IRC "Hard Problems List". The latest version is the 2005 IRC Hard Problems List (with 2009 revisions). Number 8 in the 2005 list is titled Enterprise-Level Security Metrics.

The IRC states that organizations must make cost/benefit decisions based on data that is (at best) poorly quantified. These decisions are often based on short-term, poorly correlated metrics, which leads to poor long-term decisions. According to the IRC, "One of the most insidious threats to security metrics lies in the metrics themselves. The mere existence of a metric may encourage its purveyors to over endow the significance of the metric. A common risk is that analyses may be based on spurious assumptions, inadequate models, and flawed tools, and that the metrics themselves are inherently incomplete --- often a one-dimensional projection of a multidimensional situation. Furthermore, a combination of metrics in the small (e.g., regarding specific attributes of specific components) typically do not compose into metrics in the large (e.g., regarding the enterprise as a whole)."
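The IRC's composition point can be illustrated with a toy calculation. In the sketch below, two hypothetical enterprises contain components with identical per-component compromise probabilities (invented numbers), yet their enterprise-level exposure differs by more than two orders of magnitude depending on whether an attacker must traverse the components in a chain or can breach the enterprise through any one of them.

    from math import prod

    component_p = [0.10, 0.10, 0.10]  # per-component compromise probability

    # Topology 1: the attacker must compromise every component in a chain
    # (e.g., firewall, then host, then database) to reach the asset.
    p_chain = prod(component_p)                   # 0.001

    # Topology 2: compromising any one of three independently exposed
    # components breaches the enterprise.
    p_any = 1 - prod(1 - p for p in component_p)  # ~0.271

    print(f"chained path: {p_chain:.3f}, any-one path: {p_any:.3f}")
    # The average per-component score is 0.10 in both cases; a metric
    # "in the small" cannot distinguish these enterprise-level situations.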

I believe that many enterprise-level metrics programs are based on large-scale metrics programs created out of a desire to base budgetary decisions on a perception of quantified data, or out of the need to be compliant with regulations and guidance similar to the Federal Information Security Management Act (FISMA) [PL 107-347]. Regardless of organization (commercial, academic, or federal), the focus of metrics programs being managed by large enterprises is typically on easily quantified parameters (e.g. status of patch compliance) or on metrics that can span the multiple enclaves and special-purpose environments that are supported, at least in part, by the enterprise. Metrics programs are typically based on automatic collection and audit reduction tools. Run by technicians, these metrics programs focus on the present, yet decision makers must look to the future. As the IRC states, "in a world where technology, threats, and users change so quickly, tomorrow's risks may be quite different from yesterday's risks, and historical data is not a sufficiently reliable predictor of the future". The problem of translating the empirical data used by the technicians and managers who operate and maintain enterprise-level organizations into data that can be used by decision makers who must determine the most cost-effective allocation of security dollars is indeed based on a weak hypothesis and may indeed be a hard problem.


Is the problem real? And do I really need metrics?

The case no longer needs to be made that our information systems require INFOSEC; we all know that everything from our utilities infrastructure to our personal computers comes under persistent and sophisticated cyber-attack. Virtually any organization that uses information technology, from public utilities to the Department of Defense, has an implicit or explicit need to provide INFOSEC. The case for why we measure security, though, needs to be understood.

Program managers involved in providing information technology products in the commercial world will implement security in products only to the extent that their customers require it (e.g., "I have to prove that my products are built in a secure environment, so I achieve ISO certification"). When their customers require evidence that security is "built in", security measurement becomes an issue. Business managers must balance their need to protect their organization's information technology infrastructure and information with the need to make a profit. They need metrics, or at least some methodology, to help make these cost/benefit decisions.

Public utilities and services must both defend their infrastructure from cyber-

attack (e.g. power grid or water supply control systems) and be in compliance with

federal requirements to protect financial (and personal) data obtained from their

consumers. Hospitals and other health care providers are under even greater scrutiny to

ensure patient data remains confidential. On the government side (almost all

governments), managers must implement only what is required by policy, legislation, or

standard. These decision makers need some form of metrics to provide evidence of

legislative compliance.


Christine Kuligowski [Kuligowski 2009], in her Master's thesis, uses the term "security framework" to define the collection of various documents that "give advice on topics related to information systems security, predominately regarding the planning, implementing, managing and auditing of overall security practices." Kuligowski conducted a detailed comparison of the two largest frameworks: the security framework provided by the National Institute of Standards and Technology (NIST) through the NIST Special Publications series documents available on the NIST.gov website, and the ISO/IEC 27000 series available from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). A partial list of these standards is shown in Table 2.1 for the NIST series and Table 2.2 for the ISO/IEC series [ISO/IEC 27000 Series].

ISO/IEC 27001 formally specifies a management system that is intended to bring information security under explicit management control. Being a formal specification means that it mandates specific requirements. Organizations that claim to have adopted ISO/IEC 27001 can therefore be formally audited and certified compliant with the standard. The NIST SP series provides guidance to organizations on how to implement and use a risk management framework, including how the organization can implement a formal method for certifying and authorizing that systems provide adequate security.

Table 2.1: Partial list of NIST Special Publications Series.

NIST SP Number      Title
SP 800-12           An Introduction to Computer Security: The NIST Handbook
SP 800-18 Rev. 1    Guide for Developing Security Plans for Federal Information Systems
SP 800-23           Guidelines to Federal Organizations on Security Assurance and Acquisition/Use of Tested/Evaluated Products
SP 800-27 Rev. A    Engineering Principles for Information Technology Security (A Baseline for Achieving Security)
SP 800-30 Rev. 1    Guide for Conducting Risk Assessments
SP 800-34 Rev. 1    Contingency Planning Guide for Federal Information Systems
SP 800-35           Guide to Information Technology Security Services
SP 800-36           Guide to Selecting Information Technology Security Products
SP 800-37 Rev. 1    Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach
SP 800-39           Managing Information Security Risk: Organization, Mission, and Information System View
SP 800-51 Rev. 1    Guide to Using Vulnerability Naming Schemes
SP 800-53A Rev. 1   Guide for Assessing the Security Controls in Federal Information Systems and Organizations, Building Effective Security Assessment Plans
SP 800-53 Rev. 3    Recommended Security Controls for Federal Information Systems and Organizations
SP 800-53 Rev. 4    Security and Privacy Controls for Federal Information Systems and Organizations
SP 800-55 Rev. 1    Performance Measurement Guide for Information Security
SP 800-59           Guideline for Identifying an Information System as a National Security System
SP 800-60 Rev. 1    Guide for Mapping Types of Information and Information Systems to Security Categories
SP 800-64 Rev. 2    Security Considerations in the System Development Life Cycle
SP 800-65           Integrating IT Security into the Capital Planning and Investment Control Process
SP 800-100          Information Security Handbook: A Guide for Managers
SP 800-115          Technical Guide to Information Security Testing and Assessment
SP 800-117          Guide to Adopting and Using the Security Content Automation Protocol (SCAP) Version 1.0
SP 800-126          The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.0
SP 800-128          Guide for Security-Focused Configuration Management of Information Systems
SP 800-137          Information Security Continuous Monitoring for Federal Information Systems and Organizations

Table 2.2: ISO/IEC 27000 family of standards [ISO/IEC 27000 Series].

Number                  Description
ISO/IEC 27000           Information security management systems overview and vocabulary
ISO/IEC 27001           Information security management systems requirements
ISO/IEC 27002           Code of practice for information security management
ISO/IEC 27003           Information security management system implementation guidance
ISO/IEC 27004           Information security management measurement
ISO/IEC 27005           Information security risk management
ISO/IEC 27006           Requirements for bodies providing audit and certification of information security management systems
ISO/IEC 27007           Guidelines for information security management systems auditing (focused on the management system)
ISO/IEC TR 27008        Guidance for auditors on ISMS controls (focused on the information security controls)
ISO/IEC 27010           Information technology—Security techniques—Information security management for inter-sector and inter-organizational communications
ISO/IEC 27011           Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
ISO/IEC 27013           Guideline on the integrated implementation of ISO/IEC 20000-1 and ISO/IEC 27001
ISO/IEC 27014           Information security governance
ISO/IEC TR 27015        Information security management guidelines for financial services
ISO/IEC 27031           Guidelines for information and communications technology readiness for business continuity
ISO/IEC 27032           Guideline for cybersecurity (essentially, 'being a good neighbor' on the Internet)
ISO/IEC 27033-1         Network security overview and concepts
ISO/IEC 27033-2         Guidelines for the design and implementation of network security
ISO/IEC 27033-3:2010    Reference networking scenarios - Threats, design techniques and control issues
ISO/IEC 27034           Guideline for application security
ISO/IEC 27035           Security incident management
ISO/IEC 27037           Guidelines for identification, collection and/or acquisition and preservation of digital evidence
ISO 27799               Information security management in health using ISO/IEC 27002

Kuligowski concluded that collectively these two frameworks provide the preponderance of guidance used worldwide on how to assess and manage risk and that, although there are differences in the frameworks' implementations, the consequences of these differences are "inconclusive" and essentially inconsequential. As is the case with ISO/IEC standards, all NIST Special Publications are open for public review and comment prior to publication. Indeed, NIST bases the vast majority of its documentation on the best practices of industry, academia, and government. It should not be surprising, then, that the vast majority of international guidance comes from these two frameworks. Based on this analysis, this thesis will use NIST for the majority of discussion (primarily because the NIST standards are free and openly available). As added background relating to federal systems, this chapter will highlight some of the directives (policies, laws, and standards) that require U.S. federal government programs to manage and measure security. Specifically, the chapter will describe the general intent and purpose of select directives and their impact on security measurement. We will discuss the details of the NIST framework (and some commercial equivalents) later in this thesis. The directives discussed are shown in Table 2.3:

Table 2.3: IA related directives.

The Paperwork Reduction Act (PRA) of 1980: Stipulates that government agencies must ensure that information technology is acquired, used, and managed to improve performance of agency missions, including the reduction of information collection burdens on the public.

The Federal Managers Financial Integrity Act (FMFIA) of 1982: Requires ongoing evaluations and reports from each executive on the adequacy of administrative control for internal accounting systems.

The Government Performance and Results Act (GPRA) of 1993: Establishes the foundation for budget decision making to achieve strategic goals.

Clinger-Cohen Act (CCA): Establishes a comprehensive approach for executive agencies to improve the acquisition and management of their information resources.

OMB Circular A-130: Provides a policy framework for information resources management (IRM) across the Federal government.

The Federal Information Security Management Act of 2002 (FISMA): Established the National Institute of Standards and Technology (NIST) as the responsible agency for developing standards and mandates use of the Federal Enterprise Architecture.

The Paperwork Reduction Act (U.S. Code, Title 44, Chapter 35, sections 3501-3520) [PL 104-013]: Although primarily focused on reducing unnecessary paperwork burden, this act also stressed increased use of information technology. As stated in section 3501 of the PRA, the "purposes of this subchapter are to—

(1) minimize the paperwork burden for individuals, small businesses, educational and nonprofit institutions, Federal contractors, State, local and tribal governments, and other persons resulting from the collection of information by or for the Federal Government;

(2) ensure the greatest possible public benefit from and maximize the utility of information created, collected, maintained, used, shared and disseminated by or for the Federal Government;

(3) coordinate, integrate, and to the extent practicable and appropriate, make uniform Federal information resources management policies and practices as a means to improve the productivity, efficiency, and effectiveness of Government programs, including the reduction of information collection burdens on the public and the improvement of service delivery to the public;

(4) improve the quality and use of Federal information to strengthen decisionmaking, accountability, and openness in Government and society;

(5) minimize the cost to the Federal Government of the creation, collection, maintenance, use, dissemination, and disposition of information;

(6) strengthen the partnership between the Federal Government and State, local, and tribal governments by minimizing the burden and maximizing the utility of information created, collected, maintained, used, disseminated, and retained by or for the Federal Government;

(7) provide for the dissemination of public information on a timely basis, on equitable terms, and in a manner that promotes the utility of the information to the public and makes effective use of information technology;

(8) ensure that the creation, collection, maintenance, use, dissemination, and disposition of information by or for the Federal Government is consistent with applicable laws, including laws relating to—

(A) privacy and confidentiality, including section 552a of title 5;

(B) security of information, including the Computer Security Act of 1987 (Public Law 100-235); and

(C) access to information, including section 552 of title 5;

(9) ensure the integrity, quality, and utility of the Federal statistical system;

(10) ensure that information technology is acquired, used, and managed to improve performance of agency missions, including the reduction of information collection burdens on the public; and

(11) improve the responsibility and accountability of the Office of Management and Budget and all other Federal agencies to Congress and to the public for implementing the information collection review process, information resources management, and related policies and guidelines established under this subchapter."

This act also established the office of the Chief Information Officer and levied the requirement to coordinate with the director of the National Institute of Standards and Technology (NIST) the "implementation of policies, principles, standards, and guidelines for information technology functions and activities of the Federal Government, including periodic evaluations of major information systems". Its effect on security measurement is that, to be compliant with the act, security measurement must be conducted to the "greatest possible public benefit". The government program manager's most effective method to "prove compliance" is to follow the guidance provided by NIST. In short, the act requires program managers to be cost effective in applying and measuring security and implies that they should follow the recommendations of NIST.

The Federal Managers Financial Integrity Act (FMFIA) of 1982 [PL 97-225]: The Act requires each government agency to establish controls that reasonably ensure:

"(1) obligations and costs comply with applicable law, (2) assets are safeguarded against waste, loss, unauthorized use or misappropriation, and (3) revenues and expenditures are properly recorded and accounted for. In addition, the agency head must annually evaluate and report on the systems of internal accounting and administrative control."

Although the Act does not at first appear to directly affect security measurement, it is the law behind OMB Circular A-123, which mandates that "Internal control also needs to be in place over information systems – general and application control. General control applies to all information systems such as the mainframe, network and end-user environments, and includes agency-wide security program planning, management, control over data center operations, system software acquisition and maintenance. Application control should be designed to ensure that transactions are properly authorized and processed accurately and that the data is valid and complete. Controls should be established at an application's interfaces to verify inputs and outputs, such as edit checks. General and application control over information systems are interrelated, both are needed to ensure complete and accurate information processing. Due to the rapid changes in information technology, controls must also adjust to remain effective." The end result of OMB Circular A-123 is that federal managers must not only ensure that federal systems implement security controls but also ensure that these controls are measured.

The Government Performance and Results Act (GPRA) [PL 103-62]: Enacted in 1993, this act at first appears to have no effect on security measurement. However, the GPRA requires agencies to engage in project management tasks such as setting goals, measuring results, and reporting their progress. To comply with the GPRA, agencies must produce strategic plans and performance plans and must conduct gap analysis on projects. It requires agencies to develop five-year plans, establish long-term goals (with annual performance goals), and prepare annual performance reports. Its impact on security measurement is that security measurement is one of the annual performance measurements.

Clinger-Cohen Act (CCA) [PL 104-106]: This Act is a companion (supplement) to the Paperwork Reduction Act (PRA) [PL 104-013]. It is intended to establish a more defined and comprehensive approach for executive agencies to improve the acquisition and management of their information resources. The act recognized that it is essential to establish effective IT leadership within each government agency and reinforced the requirement to establish a Chief Information Officer. The Act requires that government IT enterprises be operated as "an efficient and profitable business would be operated". The law is complex but clearly established the OMB as primarily responsible for providing guidance on information technology acquisition and management. It also encouraged (short of a mandate) the creation of reference architectures, an encouragement that led to the creation of the Federal Enterprise Architecture (FEA) and the Federal Enterprise Architecture Framework (FEAF).

Office of Management and Budget (OMB) Circular A-130 [OMB A-130]: Management of Federal Information Resources. The circular has been updated many times but always maintains its association with the Paperwork Reduction Act and the Clinger-Cohen Act. The policy requires that:

- All federal agencies provide life-cycle-based plans for their information systems, and all systems have security plans.
- All agencies designate a single individual who has responsibility for operational security.
- All agencies' annual and fiscal responsibility reports include security.
- All agencies provide annual security awareness training to all government users and administrators of the system.
- All agencies perform regular review and improvement of contingency plans.

The end result of this circular on security measurement is that program managers must measure the security of their systems annually and report those measurements with their annual management and fiscal responsibility reports.

The Federal Information Security Management Act of 2002 (FISMA): Title

III of the E-Government Act of 2002 (P.L. 107-347, 116 Stat. 2899) [PL 107-347]:

The act requires that each federal agency develop, document, and implement an agency-

wide program to provide information security for the information and information

systems that support the operations and assets of the agency. The act emphasizes a "risk-

based policy for cost-effective security". FISMA specifically requires that agency

program officials, chief information officers, or inspectors general (IGs) conduct annual

reviews of the agency’s information security program and report the results to the Office

of Management and Budget (OMB). FISMA explicitly assigns specific responsibilities to

federal agencies, the National Institute of Standards and Technology (NIST) and the

Office of Management and Budget (OMB) in order to strengthen information system security.

NIST is responsible for developing standards, guidelines, methods and techniques

for providing information security for all agencies (with some national security

exceptions). FISMA mandates that all Information Systems be categorized using

Federal Information Processing Standard (FIPS) 199 "Standards for Security

Categorization of Federal Information and Information Systems" and that all systems

comply with FIPS 200 "Minimum Security Requirements for Federal Information and

Information Systems". To provide additional guidance, NIST maintains a series of

special publications (the NIST SP 800 series). Examples of some of the SP 800 series

documents NIST provides to help agencies comply with FISMA, (some of these,

depending on the agency and the category of the information system are mandatory):

- NIST SP 800-60, "Guide for Mapping Types of Information and Information Systems to Security Categories"
- NIST SP 800-53, "Recommended Security Controls for Federal Information Systems and Organizations"
- NIST SP 800-18, "Guide for Developing Security Plans for Federal Information Systems"
- NIST SP 800-37, "Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach"
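As a brief illustration of the categorization rule referenced above: FIPS 199 rates each of the three security objectives (confidentiality, integrity, availability) as LOW, MODERATE, or HIGH, and, as applied under FIPS 200, the system's overall impact level is the high-water mark across the three. The example system and its ratings below are hypothetical.

    LEVELS = ["LOW", "MODERATE", "HIGH"]

    def high_water_mark(confidentiality: str, integrity: str,
                        availability: str) -> str:
        # The overall impact level is the highest rating among the three
        # FIPS 199 security objectives.
        return max((confidentiality, integrity, availability),
                   key=LEVELS.index)

    # SC(example system) = {(C, MODERATE), (I, HIGH), (A, LOW)} -> HIGH
    print(high_water_mark("MODERATE", "HIGH", "LOW"))  # prints HIGH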

FISMA’s impact on security measurement includes the requirement to have full

system life cycle security measurement. This measurement spans the entire life cycle

from feasibility study to retirement. It includes activities ranging from initial risk and

vulnerability analysis to operational continuous monitoring. Further, the act provides the foundation establishing NIST as the agency responsible for providing standards and guidance for security measurement for all Federal agencies. Current directives require the Department of Defense (DoD) and national security organizations to comply

with the guidance provided by NIST. The end result of these directives and other similar

directives is that government program managers must provide an annual security

measurement plan that establishes and maintains an assessment of the level of risks

associated with their programs.

NIST has the established mission to “promote U.S. innovation and industrial

competitiveness by advancing measurement science, standards, and technology in ways

that enhance economic security and improve our quality of life.” NIST is a branch of the

Department of Commerce and as such is involved with evaluating industry best practices

and consistently involves industry and academia in collaboratively evaluating all of its

products. The end result is a set of standards that, although created specifically for the

federal agencies, are applicable to all organizations (e.g. industry, academia, and federal).


Chapter 3

TECHNOLOGY

Karen Evans, OMB Administrator for Electronic Government and Information

Technology, in testimony before the Committee on Government Reform, Subcommittee on

Technology, Information Policy, Intergovernmental Relations, and the Census, stated—

“There continues to be a failure to adequately prioritize IT function

decisions to ensure that remediation of significant security

weaknesses are funded prior to proceeding with new development…

Agencies must—

1. Report security costs for IT investments;

2. Document in their business cases that adequate security

controls have been incorporated into the lifecycle planning for

each IT investment;

3. Reflect the agency’s security priorities as reported separately in

their plans of action and milestones for fixing programs and

systems found to have security vulnerabilities;

4. Tie those plans of action and milestones for an IT investment

directly to the business case for that investment” [Evans 2004]

The previous Chapter presented the argument that any prudent organization

involved with information technology will establish in some manner an information

assurance program and also presented the argument that federal systems in the United

States must establish and follow one. Furthermore, it showed that the two most

prevalent frameworks used by IA professionals provide essentially the same quality of

guidance. In this chapter, I will discuss the processes (technology) Information Assurance professionals use in creating and supporting IT risk assessments. I will also

attempt to relate the activities of systems engineering with the activity of security

engineering in order to demonstrate how security engineering is conducted throughout a

system's life cycle and when security measurement is conducted.


The Systems Engineering Life Cycles

The system life cycle is the realm of the Systems Engineer. Systems engineering

is an interdisciplinary field of engineering focusing on how complex projects should be

managed over their life cycles. It is an engineering discipline that uses interdisciplinary

processes and interdisciplinary integrated product teams to control development through

a system's entire life cycle. The International Council on Systems Engineering

(INCOSE) [INCOSE SEH], IEEE [IEEE 1220], and ISO [ISO/IEC 15288] all provide high-level guidance on this well-documented discipline. As ISO/IEC 15288 states, “Every system has a

life cycle. A life cycle can be described using an abstract functional model that represents

the conceptualization of a need for the system, its realization, utilization, evolution and

disposal. A system progresses through its life cycle as the result of actions, performed

and managed by people in organizations, using processes for execution of these actions.

The detail in the life cycle model is expressed in terms of these processes, their outcomes,

relationships and sequence.”

As shown in Figure 3.1, the INCOSE Systems Engineering Handbook provides a

comparison of potential systems engineering life cycles.


Figure 3.1: Life Cycle Models – from INCOSE Systems Engineering Handbook.

Although slightly different in their representation of a systems life cycle, all the

models above express the concept of evolving a system from discovery to retirement. All

of the models start with an understanding of the users’ and stakeholders’ needs, the

environment the system will operate in, and what enablers and obstacles the system may

encounter. Subsequent stages show the progression through system maturation, from development to operations, and finally to retirement. Systems engineers refer to this

process as evolving a system from cradle to grave. It usually involves starting the

engagement with an organization during the development of the original definition of


mission needs and concept of operation then shepherding the program through design,

development, deployment, into operations and then, possibly, retirement. All the models

above can be roughly approximated by describing the evolution of a system through its life cycle in the following six phases.

1: Definition: This phase encompasses learning the operational environment (the

mission area), capturing the mission essentials and mission needs in some form of

document (e.g. an initial capabilities document or mission needs statement), and defining

the desired outcome of the object of engineering (for systems, this usually is

accomplished by creation of a high level requirement specification and a concept of

operations document). It is during this phase that the ontology (explicit or implicit) of the

system is created. Most models show this as the first part of the Concept phase.

2: Design: In “design” the concepts from definition are modeled and translated

into artifacts that enable and constrain the creation (engineering) of the object. This

stage includes the creation of the conceptual models (e.g. architectural views) and the

conceptual schema (e.g. requirements) for the object of engineering.

3: Development. During this stage, the object of engineering is created. It is the

stage that is the instantiation of the conceptual schema.

4: Deployment. In some systems engineering life cycle models, this stage is

called integration, verification, and validation. It is the phase where the engineering

objects are integrated into larger systems, the conceptual schemas (requirements) are tested to ensure the objects comply with the schemas, and the completed system is validated to

ensure it fulfills the need captured during the definition phase.

5: Operations. The developed object of engineering is operated and sustained.

This phase consists of engineering activities to provide maintenance, periodic upgrade,

and (potentially anticipated) repair of latent defects.


6: Retirement. This phase includes migration of the activities supported by the

existing system to new systems, disposal of the existing system, and capture of

intellectual property (e.g. lessons learned).

Figure 3.2: Six Systems Life Cycle Model Steps.

All of the models referenced in Figure 3.2 and the Systems Engineering

Handbook refer to a constant feedback mechanism, sometimes called gating (see figure

3.3), that ensures that the ontology created in the definition phase and all of the models

and scripts created based on this ontology are validated periodically throughout the life

cycle. These gates used in gating typically occur at the transition between phases of the

life cycle. For example, the transition gate between definition phase and the design phase

(labeled System’s Requirements Review below) would evaluate the completeness of the

program’s requirements definition and the program’s readiness to efficiently perform

design, as well as review the tangible products that were used to define the systems to

be designed.

Figure 3.3: Typical gating reviews.


Any inadequacy discovered during a phase (perhaps as a result of changes in

environment or in technology) can result in a program regressing to an earlier phase.

Also, the phases are shown in a way that leaves the impression that a phase must

complete before the next phase can start. Perhaps a better representation of the phases

would show the phases overlapping with “feedback” loops that show that phases may not

always progress smoothly (figure 3.4).

Figure 3.4: Overlapping phases showing feedback.
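To make the gating and feedback mechanism concrete, the following minimal Python sketch models a program that advances through the six phases above only when each transition gate passes, and regresses to an earlier phase when a review discovers an inadequacy. The phase names come from the six-phase model; the gate logic and function names are my own illustrative assumptions, not part of any cited standard.

    # A minimal sketch of phase gating. Assumption: a gate review either
    # passes (returns None) or names the phase to regress to; reviews are
    # assumed to eventually pass, so the loop terminates.
    PHASES = ["Definition", "Design", "Development",
              "Deployment", "Operations", "Retirement"]

    def run_life_cycle(gate_review):
        """Advance through PHASES; regress when a gate review fails."""
        i = 0
        while i < len(PHASES) - 1:
            regress_to = gate_review(PHASES[i])   # e.g. a Requirements Review
            if regress_to is None:
                i += 1                            # gate passed: advance
            else:
                i = PHASES.index(regress_to)      # inadequacy found: regress
        return PHASES[i]

A real program would, of course, attach tangible review products and exit criteria to each gate; the point here is only the regression structure shown in Figure 3.4.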


Security Engineering

Systems security engineering is a specialized subfield of systems engineering that

focuses on the security aspects in the engineering of systems (primarily information

technology systems). As is the case in systems engineering, many organizations provide

frameworks, standards, and guidance for security engineering. For federal systems, the

framework established by the NIST to provide guidance for development and

maintenance of the risk management program is the Risk Management Framework

contained in NIST Special Publication 800-37 – Guide for Applying the Risk

Management Framework to Federal Information Systems [NIST SP800-37]. Before we

discuss the Risk Management Framework (RMF), it is worthwhile to briefly describe the

processes defined in the National Security Agency’s (NSA) Information Assurance

Technology Framework (IATF) [NSA IATF].

As stated in the Information Assurance Technology Framework (IATF)

“The Framework has several objectives:

o Raise the awareness among users of information-dependent systems of

information assurance technologies.

o Identify technical solutions to IA needs in accordance with national policies.

o Employ the technology focus areas of a Defense-in-Depth strategy to define approaches to information assurance.

o Define the security functions and protection levels needed for different

situations or mission scenarios (referred to as “cases”).

o Present the IA needs of users of information-based systems.

o Highlight the need to engage a team of IA or information systems security


experts to resolve pressing security needs.

o Aid the development of IA solutions that satisfy IA needs by highlighting

gaps in the currently available commercial and government protection

technologies.

o Provide guidance for solving IA issues by offering tutorials on available

technologies, tradeoffs among available solutions (at a technology versus

product level), and descriptions of desirable solutions characteristics.

o Assist purchasers of IA products by identifying important security-related

features that should be sought.”

With so many lofty goals, the framework is a huge document, difficult to produce

and difficult to maintain. Now superseded by several NIST publications, the IATF still

provides an excellent reference. Within the IATF, the security engineering process steps

are:

Discover Information Protection Needs: The security engineer evaluates the

mission needs, relevant policies, regulations, and standards in the user

environment. All of the information systems stakeholders are evaluated to

determine the nature of their interaction with the system and their roles,

responsibilities, and authorities. The information protection needs are defined

from the perspective of the stakeholders.

Define Information Protection System: During this activity, the user’s description

of information protection needs and information system environment are

translated into objectives, requirements, and functions. The goal of this activity is

to define what the information protection system is going to do, how well the


information protection system must perform its functions, and the internal and

external interfaces for the information protection system.

Design the Information Protection System: In this activity, working with the

systems engineers, the security engineer builds the system security architecture

and specifies the design solution for the information protection system. The IATF

identifies the following activities to be performed by the security engineer

during this phase:

- Refine, validate, and examine technical rationale for requirements and threat

assessments,

- Ensure that the set of lower-level requirements satisfy system-level

requirements,

- Support system-level architecture, CI, and interface definition,

- Support long lead-time and early procurement decisions,

- Define information protection verification and validation procedures and

strategies,

- Consider information protection operations and life-cycle support issues,

- Continue tracking and refining information protection relevant acquisition and

engineering management plans and strategies,

- Continue system-specific information protection risk reviews and

assessments,

- Support the certification and accreditation processes, and

- Participate in the systems engineering process.

Implement the Information Protection System: The objective of this activity is to

build, buy, integrate, verify, and validate the information protection subsystem

against the full set of information protection requirements.


Assess Effectiveness: The effectiveness of the information protection system is

assessed with emphasis on how well the system provides the necessary level of

confidentiality, integrity, availability, and non-repudiation to the information

being processed by the system and required for mission success.


Risk Management

Program risk managers often state that “risk is not a problem. It is an understanding of the level of threat due to potential problems. A problem is a consequence that has already occurred.” ISO/IEC 15288 [ISO/IEC 15288] is a Systems

Engineering standard covering systems engineering processes and systems engineering

life cycle stages. As stated in ISO/IEC 15288, “The purpose of the Risk Management

Process is to identify, analyze, treat and monitor the risks continuously”. According to

ISO/IEC 15288, the Risk Management Process is a continuous process for

systematically addressing risk throughout the life cycle of a system. Risk management

processes are intended to be applied to risks related to the acquisition, development,

maintenance or operation, and retirement of a system.

Many organizations, companies, and governments provide risk management

frameworks that are intended to provide the basis for projects to tailor their risk

management needs to match the risk control goals of the project. Most of these

frameworks include some form of the following five phases:

1. Risk management planning: The initial phase of the risk management process; in

this phase, the remainder of the risk processes are tailored to best meet program,

project or system needs. During this phase, the organization’s risk profile is

defined, its tolerance for risk is established, its organization for managing risk is

defined, and initial budgetary guidance is established.

2. Risk Identification: The risk identification step captures program uncertainties,

issues, and threats as distinct, measurable, describable risk entries in some form


of risk list. The goal of risk identification is to identify any risk, regardless of

perceived impact.

3. Risk Assessment and Prioritization: This phase requires a consistent and

repeatable assessment of all risk items to rank their criticality. During this phase,

an assessment (often qualitative) is made to assign a numerical value to the probability that a risk event will occur, and a subjective evaluation is made of the consequence criteria, evaluated from the perspective of cost, performance, and

schedule.

4. Risk Handling: Although sometimes called risk mitigation, risk handling

involves several methods of responding to risk, including mitigation. Typical

methods include:

Avoid – Avoid the risk by taking an alternate approach or changing the

requirements to acceptable levels.

Transfer – Transfer the risk to a party who has more control over the risk area

such as a parallel development effort or a different agency or contractor.

Reserves – Use reserves (funding, schedule slack, or design margins) to

reduce the risk.

Accept (Assume) – Very low risk, or the cost of reduction/mitigation outweighs the possible effect.

Mitigate – Reduce the effect of the risk by planning and implementing one or

more tasks to accomplish that end.


5. Risk Tracking and Reporting: Sometimes called risk monitoring or continuous

monitoring, this phase involves some methodology for keeping track of risk and

for reporting the status of risk items.

These phases are shown graphically in Figure 3.5. Note that Phases two through

five are cyclical. Most programs instantiate this cycle on a periodic basis and convene

some form of board (e.g. a risk review board) somewhere in the cycle. There are

variations of this framework, but I have found that most variations can be recast to fit

this model.

Figure 3.5: A Typical Risk Management Process Framework and Cycle.


Risk management planning: In the initial phase of the risk management framework, the organization’s risk profile must be defined, its tolerance for risk established, its organization for managing risk defined, and initial budgetary guidance established.

Programs need to respond to program risk differently based on the program’s risk profile. For example, a program that must provide essential updates to a military naval vessel during a critical time period when the vessel is in dry-dock will be very averse to any potential risk associated with schedule slip. Another example might be a program that provides a satellite payload; this type of program will be very averse to any risks that

could cause the payload to prematurely fail (e.g. through a technology failure or a

reliability failure). Depending on the risk then, the organization’s risk control team will

be defined and staffed with the skill set needed to define risk handling appropriately.

Procedures for cyclical risk management are tailored and an initial budget is allocated.

It is vital to the success of risk management planning that those involved in the planning activity recognize that risk management is an organized method

for identifying and measuring risk and for selecting, developing, and implementing

options for the handling of risk. It is a living process that requires management to inject

the process(es) into the life cycle development processes of the program and to maintain

risk management process currency and relevance. The process depends on consistent and

persistent execution of early identification activities integrated into the program's or activity's event planning, and on an assessment process that allows for inclusion, as needed, of

the appropriate subject matter experts to assess and prioritize risks; these same subject

matter experts must be available to help define the appropriate risk handling methods.

Finally, risk planning must recognize that the risk management cycle requires a dedicated

resource to manage, track, and report the status of the program’s risks.

Risk Identification: The risk identification phase captures program

uncertainties, issues, and threats in distinct, measurable, describable risk entries in the

risk list. The goal of this phase is to reveal any risks before they become problems. Most

risk management frameworks expect all program personnel to be active participants in


the risk identification effort enabled by some suite of proactive risk identification

activities such as analysis of requirements versus capabilities, risk workshops, periodic

risk training, and use of project risk analysis tools.

Capturing the context of risk involves recording additional information about

circumstances, events, and interrelationships within the program (project) that may affect

the risk. The objective is to provide enough risk information to ensure that the intent of

the original risk statement can be understood, particularly after time has passed. The

risks should be captured and documented in a Risk List/Register that at a minimum

includes the following for each risk identified (a minimal sketch follows the list):

Risk ID

Risk Name

Risk Statement
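As a minimal sketch of such an entry, a register row might be represented as below. The first three fields are the minimum listed above; the context and related-risk fields, and all example values, are illustrative assumptions rather than part of any framework.

    from dataclasses import dataclass, field

    @dataclass
    class RiskEntry:
        """One row of a risk list/register."""
        risk_id: str                      # Risk ID
        name: str                         # Risk Name
        statement: str                    # Risk Statement
        context: str = ""                 # circumstances, events, interrelationships
        related_risks: list = field(default_factory=list)

    # Hypothetical example entry:
    entry = RiskEntry(
        risk_id="R-001",
        name="Unvalidated technology",
        statement="If the new crypto module fails validation, "
                  "then deployment will slip by one quarter.",
        context="Vendor has no prior validations of this module.",
    )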

Risk Assessment and Prioritization: In order to prioritize risk, the program

must perform a credible, consistent, and repeatable assessment of all risk items.

Typically this involves determining a numerical assessment value (see Table 3.1) for the

probability that a risk event will occur (Pf), using subjective evaluation of the probability

criteria. Pf ranges between 0.0 and 1.0 inclusive, with some predetermined and consistent

granularity (e.g. 0.1). The next step of the phase is then to assign numerical values (See

Table 3.2) to the severity of consequence (Cf) from subjective evaluation of the

consequence criteria, usually evaluated from the perspective of cost, performance, and/or

schedule. The assessment team will then select worst case Cf to represent that risk. Cf

ranges between 0 and 10 inclusive, with a granularity of 1.0.

Risks are then ranked according to a Risk Factor (Rf) that is the product of Pf and

Cf (i.e. Rf = Pf * Cf). Since Pf is always less than 1 (if it is 1, then it is no longer a risk, it

is a problem), and Cf is always less than or equal to 10 (see Table 3.2), Rf has a range

between 0 and 10. Most project tracking and status procedures prefer that the risk


managers define thresholds for Rf to indicate the more traditional ranking of high,

medium and low. This classification is usually subjective and often arbitrary (e.g. Risks

with Rf > 5.5 are noted as high, risks with Rf < 2.5 as low, and the remainder as

medium). Table 3.3 illustrates how risks can be reported based on Rf; a minimal computational sketch follows the tables.

Table 3.1: Criteria for determining probability of occurrence, Pf

Criteria for Probability of Risk:

0.9 (90%): Negative outcome is almost certain. Indicators: Current approach/processes cannot mitigate this risk; state-of-the-art technology; system is very complex; success highly dependent upon developmental activity beyond program span of control; issue not well understood.

0.7 (70%): Negative outcome is highly likely. Indicators: Current approach/processes not well documented; technology available but not validated; significant design, SW coding, and/or validation efforts required; complexity above normal; success dependent upon developmental activity beyond program span of control.

0.5 (50%): Negative outcome is likely. Indicators: Current approach/processes are partially documented; un-validated technology has been shown to be feasible by analogy, test, or analysis, but requires moderate redesign or validation efforts; moderate complexity; moderately dependent upon activity beyond program span of control.

0.3 (30%): Negative outcome is slightly likely. Indicators: Current approach/processes well understood and documented; most of system technology has been validated; some components require minor redesign/modification or validation efforts; minor complexity; some dependency upon activity beyond program span of control.

0.1 (10%): Negative outcome is not likely. Indicators: Current approach/processes well understood and documented; insignificant alterations, or off-the-shelf HW, SW, and test equipment; independent of separate programs, subcontractors, or customer; assessment relies on evidence or previous experience to bolster confidence.


Table 3.2: Criteria for determining consequence of failure, Cf

High (Cf = 9), program success jeopardized:
  Performance: Program performance requirements cannot be achieved; performance unacceptable; no alternatives or solutions exist.
  Cost: Program budget impacted by ≥10%; Non-Recurring Effort (NRE), production unit cost, or Operations and Support (O&S) cost exceeded by ≥25%.
  Schedule: Key program milestone would be late by >2 weeks; development schedule exceeded by ≥25%.

Significant (Cf = 7), program success in doubt:
  Performance: Performance unacceptable; significant changes required.
  Cost: Program budget impacted by 5-10%; program reserves (performance, cost, schedule) must be used to implement workarounds; NRE or O&S cost exceeded by 15-25%.
  Schedule: Critical path activities ≥1 month late; workarounds would not meet program milestones; development schedule exceeded by 15-25%.

Moderate (Cf = 5), limited impact on program success:
  Performance: Performance below requirements; moderate changes required; workarounds would result in acceptable system performance.
  Cost: Program budget impacted by 1-5%; program reserves (performance, cost, schedule) do not need to be used to implement workarounds; NRE, production unit cost, or O&S cost exceeded by 5-15%.
  Schedule: Non-critical path activities ≥1 month late; workarounds would avoid impact on critical path; development schedule exceeded by 5-15%.

Minor (Cf = 3), minor impact on program success:
  Performance: Performance below goal but within acceptable limits; minor changes required.
  Cost: Program budget impacted by ≥1%; program reserves (performance, schedule, cost) do not need to be used to implement workarounds; NRE or O&S cost exceeded by 1-5%.
  Schedule: Non-critical path activities late; workarounds would avoid impact on key and non-key program milestones; development schedule exceeded by 1-5%.

Low (Cf = 1), no impact on program success:
  Performance: Performance goals met; no changes required.
  Cost: Program budget not dependent on issue; NRE, production unit cost, or O&S cost not exceeded or not dependent on issue.
  Schedule: Schedule not dependent on issue; development schedule not exceeded or not dependent on issue.


Table 3.3: Sample arbitrary risk level assignment

Risk Factor | Risk Level | Expected Management Action
< 0.1      | –          | No monitoring required
1.0 – 2.4  | Low        | Normal monitoring required
2.5 – 5.4  | Medium     | Management attention required
5.5 – 10.0 | High       | Management intervention required
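A minimal sketch of this ranking computation in Python, assuming the Pf scale of Table 3.1, the Cf scale of Table 3.2, and the arbitrary thresholds of Table 3.3 (the table leaves values between 0.1 and 1.0 unassigned; the sketch treats everything below 1.0 as requiring no monitoring):

    def risk_factor(pf, cf):
        """Rf = Pf * Cf, with Pf in [0.0, 1.0) and Cf in {1, 3, 5, 7, 9}."""
        return pf * cf

    def risk_level(rf):
        """Map Rf onto the arbitrary thresholds of Table 3.3."""
        if rf >= 5.5:
            return "High"      # management intervention required
        if rf >= 2.5:
            return "Medium"    # management attention required
        if rf >= 1.0:
            return "Low"       # normal monitoring required
        return "None"          # no monitoring required

    # A likely risk (Pf = 0.5) with significant consequence (Cf = 7):
    print(risk_level(risk_factor(0.5, 7)))   # Rf = 3.5 -> "Medium"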

Risk Handling: It is customary to equate risk “handling” with risk mitigation.

However, mitigation is only one of several approaches to handling risks. Because

mitigation requires resources (e.g. money, time, people) and because a complex risk

reduction plan may introduce new risks, the following alternate approaches are often

encouraged.

Avoid – Avoid the risk by taking an alternate approach or changing the

requirements to acceptable levels.

Transfer – Transfer the risk to a party who has more control over the risk

area such as a parallel development effort, a subcontractor, or a prime

contractor.

Accept (Assume) – Very low risk, or the cost of reduction/mitigation outweighs the possible effect (usually involves budgeting some form of reserve).

When these are not feasible, it is necessary to define some activities to mitigate,

i.e., reduce, the effect of the risk (Pf, Cf, or both) by planning and implementing one or

more tasks to accomplish that end. Most risk management frameworks define a risk

owner who develops the mitigation plan(s) for risks requiring mitigation.

Tracking and Reporting Risk and Risk Mitigation Status: To be most

responsive, an individual or office should closely monitor progress of risk mitigation

plans, re-assess old and new risks, adjust mitigation plans as necessary, and generate

comprehensive reports for program management. Most organizations assign this role to a

risk manager. As many risks and risk handling plans involve schedule risk, access to and

a method of updating program schedules (e.g. the program’s integrated master schedule

(IMS)) is generally required. In most large programs, a risk-tracking tool provides a

standard method for tracking and reporting risks and risk mitigation activities from

inception to closure.


IT-Related Risk

Now that we have a general description of a generic risk management framework,

we can look at the specific risks associated with information systems security and the associated frameworks for managing those risks. Hereinafter, I will use “risk” as defined above and “IT-related risk” when referring to information systems security specific risks.

NIST Guide for Conducting Risk Assessments SP800-30 [NIST SP 800-30]

defines IT-Related Risk as “The net mission impact considering (1) the probability that a

particular threat-source will exercise (accidentally trigger or intentionally exploit)

a particular information system vulnerability and (2) the resulting impact if this should

occur. IT-related risks arise from legal liability or mission loss due to—

1. Unauthorized (malicious or accidental) disclosure, modification, or destruction of

information

2. Unintentional errors and omissions

3. IT disruptions due to natural or man-made disasters

4. Failure to exercise due care and diligence in the implementation and operation of

the IT system.”

The Risk Management Framework (RMF) is defined in the National Institute of

Standards and Technology, Special Publication 800-37 – Guide for Applying the Risk

Management Framework to Federal Information Systems [NIST SP 800-37] and builds

on the principals defined in the IATF [NSA IATF]. Following this standard, security

engineering is performed through the following six phases.


1: Categorize Information System. This step is intended to define the criticality

and the sensitivity of the information system according to potential worst-case adverse

impact to mission or business. The system is analyzed to determine what the new system

will be or what impact the system under development will have on an existing enterprise.

In this phase, security categorization is conducted in accordance with the Federal

Information Processing Standard (FIPS) 199: Standards for Security Categorization of Federal Information and Information Systems [NIST FIPS 199] and SP 800-60 [NIST

SP 800-60] on the entire enterprise (including both new objects of engineering and

existing systems). The results of the security categorization are used to guide selection of

security controls for the information system(s). Categorization of systems and

subsystems enables the allocation of security controls from NIST Special Publication

800-53 Recommended Security Controls for Federal Information Systems and

Organizations [NIST SP 800-53]. The security categorization and any supporting information used in developing the categorization are captured (according to the process)

in the system security plan or included as an attachment to the plan (e.g. as a security

concept of operations).

2: Select Security Controls. Common controls are security controls that are

inherited by the system under development from the organization the system will be a

part of. The organization identifies these common controls using FIPS 200 Minimum

Security Requirements for Federal Information and Information Systems and NIST SP

800-30 to ensure that the security capability provided by the inherited controls is

consistent with other existing or planned components. These “common controls” are then

used to help select the security controls (using NIST SP 800-53) applicable to the system

under development. If deemed necessary after a security risk assessment, additional

requirements may be added to supplement this set of common controls. The security

controls selected in this step are then captured following the guidance of the NIST Guide

for Developing Security Plans for Federal Information Systems [NIST SP 800-18].


3: Implement Security Controls. As the system matures, the security architecture

is developed to support the allocation of the security controls from phase 2. For systems

decomposed into subsystems, this allocation is performed on each subsystem (not all

security controls need to be allocated to every subsystem). The NIST guiding document

for this step is NIST's National Checklist Program for IT Products--Guidelines for

Checklist Users and Developers [NIST SP 800-70].

4: Assess security controls: Security control assessments are evaluations

conducted to identify potential weaknesses (deficiencies) in a system early in an effort to

provide the most cost-effective method for initiating corrective actions. During the

assess security controls phase, assessments are conducted to determine the level of

confidence decision makers should have that the security controls are adequate and

correctly implemented. Security assessment is normally conducted using NIST Guide for

Assessing the Security Controls in Federal Information Systems and Organizations

[NIST SP 800-53A] for guidance.

5: Authorize information system: Although the ultimate goal of this phase is the

authorization for the organization to operate a system, this phase also includes the steps

that lead up to this “authorization decision” and, if needed, may include any liens (usually

requiring a get-well plan) that must be addressed for an approved system to continue to

be allowed to operate. Guidance is provided by NIST SP 800-37.

6: Monitor security controls: Often called the continuous monitoring phase, this

phase coincides with the operations of the system. During this phase, the security

controls, and any other metrics determined to be of interest, are monitored with the intent

of identifying any security relevant changes that may require the re-evaluation of the

authorization to operate the systems. This phase uses both SP 800-53A and NIST SP

800-37. Figure 3.6 graphically represents the Risk Management Framework.


Figure 3.6: The Risk Management Framework.

Following the RMF, IT security risk assessment is a continuous process that

should be initiated as soon as possible and conducted throughout the life of a system or

program (similar to when program risk management would be implemented and

conducted). The RMF recommends the use of NIST SP 800-30, Guide for Conducting

Risk Assessments. As a guide, SP 800-30 provides recommendations for processes to be

followed. As there are many processes available for risk assessment, I will briefly

describe four methodologies: SP 800-30, FAIR, OCTAVE, and TARA.

The NIST Guide for Conducting Risk Assessments [NIST SP 800-30] is, as its name suggests, a

guide for how to conduct risk assessments. The guide establishes four steps for risk

assessment:

1. Prepare for the assessment

2. Conduct assessment


3. Communicate results

4. Maintain assessment

Step 2 (Chapter 3) of the guide defines the tasks necessary to conduct risk

assessments as shown in table 3.4.

Table 3.4: Assessment tasks from NIST SP 800-30 (Chapter 3).

Task 1: Identify Threat Sources. Identify and characterize threat sources of concern, including capability, intent, and targeting characteristics for adversarial threats and range of effects for non-adversarial threats.

Task 2: Identify Threat Events. Identify potential threat events, relevance of the events, and the threat sources that could initiate the events. Threat events are characterized by the threat sources that could initiate the events.

Task 3: Identify Vulnerabilities and Predisposing Conditions. Identify vulnerabilities and predisposing conditions that affect the likelihood that threat events of concern result in adverse impacts. The primary purpose of vulnerability assessments is to understand the nature and degree to which organizations, mission/business processes, and information systems are vulnerable to threat sources (identified in Task 1) and the threat events (identified in Task 2) that can be initiated by those threat sources.

Task 4: Determine Likelihood. Determine the likelihood that threat events of concern result in adverse impacts, considering: (i) the characteristics of the threat sources that could initiate the events; (ii) the vulnerabilities/predisposing conditions identified; and (iii) the organizational susceptibility reflecting the safeguards/countermeasures planned or implemented to impede such events.

Task 5: Determine Impact. Determine the adverse impacts from threat events of concern, considering: (i) the characteristics of the threat sources that could initiate the events; (ii) the vulnerabilities/predisposing conditions identified; and (iii) the susceptibility reflecting the safeguards/countermeasures planned or implemented to impede such events.

Task 6: Determine Risk. Determine the risk to the organization from threat events of concern, considering: (i) the impact that would result from the events; and (ii) the likelihood of the events occurring.


FAIR (Content Copyright by the Risk Management Insight, LLC – see

http://www.cxoware.com/what-is-fair/) is an organizational-level framework designed to

address security practice weaknesses. The framework provides procedures that enable

organizations to create a common ontology for risk. The FAIR process can be applied at

multiple levels within an enterprise or organization, but its goal is to provide an

organizational-level view of risk. FAIR is a ten-step process organized into four stages

as shown in table 3.5.

Table 3.5: FAIR basic risk assessment stages.

Stage one: Identify scenario components
  Step 1: Identify the asset at risk
  Step 2: Identify the threat community under consideration

Stage two: Evaluate loss event frequency
  Step 3: Estimate the probable threat event frequency
  Step 4: Estimate the threat capability
  Step 5: Estimate control strength
  Step 6: Define vulnerability
  Step 7: Derive loss event frequency

Stage three: Evaluate probable loss magnitude
  Step 8: Estimate worst case loss
  Step 9: Estimate probable loss

Stage four: Derive and articulate risk
  Step 10: Derive and articulate risk

OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation),

developed at the CERT Coordination Center at Carnegie Mellon University (see

http://www.cert.org/octave/), is a suite of tools, techniques and methods for risk-based

information security strategic assessment and planning. By its own admission, CERT

defines OCTAVE as a workshop-centric analysis that is heavily dependent on subject

matter experts (or at least, informed stakeholders). The framework consists of eight processes organized into three phases as shown in Table 3.6.


Table 3.6: Assessment phases from OCTAVE.

Phase one: Organizational view
  Process 1: Identify senior management knowledge
  Process 2: Identify operational area management knowledge
  Process 3: Identify staff knowledge
  Process 4: Create threat profiles

Phase two: Technological view
  Process 5: Identify key components
  Process 6: Evaluate selected components

Phase three: Strategy and plan development
  Process 7: Conduct risk analysis
  Process 8: Develop protection strategy

TARA (Threat Agent Risk Assessment) is a proprietary framework supported by

Intel that emphasizes a predictive approach to prioritizing areas of concern so that organizations

can proactively target the most critical exposures and apply resources efficiently to

achieve maximum results. The concept at the heart of TARA is that attempting to

counter all possible threats is prohibitively expensive; focus must be placed on those

events (risks) that are most likely to occur. TARA is a six-step process that culminates in a prioritized list of recommendations that help decision makers determine

where to most cost effectively invest their security budget. Essentially, TARA is focused

on prioritizing threats in order to focus security budget expenditures on the most

important threats. It is more of a process for risk management than for risk assessment.

It is mentioned here because TARA, like many risk management frameworks,

emphasizes that risk assessment is primarily a qualitative process that relies heavily on

“subject matter experts”. Risk assessment is essentially complete in step 1 of this

framework.

A paraphrase of the six steps of the TARA is:

1. Measure current threat agent risks. Use subject matter experts to review and

rank the current threat levels to the project (system). This is a qualitative to

quantitative exercise necessary to establish a general understanding of current


risks.

2. Distinguish threat agents that exceed baseline acceptable risks. Again, use

subject matter experts (the same experts may be used to first create an acceptable

risk baseline if one does not already exist).

3. Derive primary objectives of those threat agents. These objectives are a

combination of threat agent motivations and threat agent capabilities.

4. Identify methods likely to manifest.

5. Determine the most important collective exposures. In this step, you must first

find attack vectors, which are vulnerabilities without controls. Then, the

intersection of the methods determined in step 4 and the attack vectors define

likely exposures. The goal of the step is to rank likely exposures according to

their severity of consequence. The end result of step 5 is a list of the most

important collective exposures (a brief sketch of this intersection follows the list).

6. Align strategy to target the most significant exposures.
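A minimal sketch of the step 5 intersection, treating methods and uncontrolled vulnerabilities as Python sets; all identifiers and severity values below are illustrative assumptions, not TARA data.

    # Attack vectors are vulnerabilities with no controls; exposures are
    # the vectors reachable by the methods derived in step 4.
    methods = {"phishing", "sql_injection", "usb_drop"}        # from step 4
    attack_vectors = {"sql_injection", "default_password"}     # uncontrolled vulns

    exposures = methods & attack_vectors                       # {"sql_injection"}

    # Rank exposures by an assumed severity-of-consequence score.
    severity = {"sql_injection": 8, "default_password": 6}
    ranked = sorted(exposures, key=severity.get, reverse=True)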

Table 3.7 shows a comparison of three of these assessment methods. Regardless

of which method is used, the resulting equation that defines the IT risk level is a function

of the vulnerabilities and the threats to those vulnerabilities.

Table 3.7: Comparison of NIST, FAIR, and OCTAVE assessments.

NIST | FAIR | OCTAVE

(In NIST and FAIR, SME identification is conducted either prior to an assessment activity or as part of that activity.) | (see NIST column) | Process 1: Identify senior management knowledge; Process 2: Identify operational area management knowledge; Process 3: Identify staff knowledge

Task 1: Identify Threat Sources | Step 1: Identify the asset at risk | Process 4: Create threat profiles

– | Step 2: Identify the threat community under consideration | Process 5: Identify key components

Task 2: Identify Threat Events | Step 3: Estimate the probable threat event frequency | Process 6: Evaluate selected components

Task 3: Identify Vulnerabilities and Predisposing Conditions | Step 4: Estimate the threat capability | –

Task 4: Determine Likelihood | Step 5: Estimate control strength; Step 6: Define vulnerability; Step 7: Derive loss event frequency | –

Task 5: Determine Impact | Step 8: Estimate worst case loss; Step 9: Estimate probable loss | Process 7: Conduct risk analysis

Task 6: Determine Risk | Step 10: Derive and articulate risk | –

(In NIST and FAIR, developing protection strategies, e.g. risk handling, is not part of the assessment activity.) | (see NIST column) | Process 8: Develop protection strategy


Chapter 4

INFORMATION

The previous chapter demonstrated that IT risk is a function of vulnerabilities,

threats, and asset values. The information needed in a risk assessment is therefore vulnerability, threat, and asset value related. This chapter will start with a discussion of all

three of these categories of information used for risk assessment. Since, in addition to the

information needed for risk assessment, information related to risk handling (e.g.

mitigation) is relevant to the risk assessment process, the controls used to address IT risk

will be described (with examples). The chapter will end with a discussion of metrics. But first, a quick discussion of the system security plan document, which, for most organizations, is the most reliable instrument for maintaining knowledge of the security planning for a system.


Systems Security Plan

A System Security Plan (SSP) describes the approach to ensuring that the system

meets the security standards. According to the NIST Guide for Developing Security

Plans for Federal Information Systems, NIST SP 800-18 Revision 1 [NIST SP 800-18],

the purpose of systems security planning is to “improve protection of information

systems resources”. The objectives of an SSP are to:

Capture and maintain the results of systems security planning.

Provide an overview of the systems security requirements.

Describe the protections provided for the system.

Describe the controls planned and implemented for the system.

Provide definitions of the roles and responsibilities of all the organizations

and individual officials responsible for implementing the security

planning.

The Security plan is a living document, intended to be maintained throughout the

life cycle of the system. Most organizations provide a common template for SSPs (a

simple internet search will provide ample samples) that range in size from one-page bulletized lists to extremely detailed and very organization-specific templates (often using automated tools) that may be several hundred pages in length. Table 4.1 is the SSP

template provided with NIST SP 800-18. Most SSPs consistently require the following

information:

Identifying information (e.g. system name, organization, key personnel

roles and responsibilities).

Description of the purpose and function of the system.


Descriptions of the information contained within the system.

Description of the environment of the system (e.g. systems external

boundaries, internal interfaces, hardware, software, network and

communications equipment, governance and applicable legal

requirements).

Description of information interfaces and requirements.

Delineation of all of the security controls (requirements), how they are

implemented, and how they are monitored.


Table 4.1: SSP Template extracted from NIST SP 800-18

Information System Security Plan Template

1. Information System Name/Title: Unique identifier and name given to the system.

2. Information System Categorization: Identify the appropriate FIPS 199 categorization (LOW, MODERATE, HIGH).

3. Information System Owner: Name, title, agency, address, email address, and phone number of the person who owns the system.

4. Authorizing Official: Name, title, agency, address, email address, and phone number of the senior management official designated as the authorizing official.

5. Other Designated Contacts: List other key personnel, if applicable; include their title, address, email address, and phone number.

6. Assignment of Security Responsibility: Name, title, address, email address, and phone number of the person who is responsible for the security of the system.

7. Information System Operational Status: Indicate the operational status of the system (Operational, Under Development, Major Modification). If more than one status is selected, list which part of the system is covered under each status.

8. Information System Type: Indicate if the system is a major application or a general support system. If the system contains minor applications, list them in Section 9, General System Description/Purpose.

9. General System Description/Purpose: Describe the function or purpose of the system and the information processes.

10. System Environment: Provide a general description of the technical system. Include the primary hardware, software, and communications equipment.

11. System Interconnections/Information Sharing: List interconnected systems and system identifiers (if appropriate); provide the system name, organization, system type (major application or general support system), indicate if there is an ISA/MOU/MOA on file, the date of agreement to interconnect, the FIPS 199 category, the C&A status, and the name of the authorizing official.

12. Related Laws/Regulations/Policies: List any laws or regulations that establish specific requirements for the confidentiality, integrity, or availability of the data in the system.

13. Minimum Security Controls: Select the appropriate minimum security control baseline (low-, moderate-, or high-impact) from NIST SP 800-53, then provide a thorough description of how all the minimum security controls in the applicable baseline are being implemented or planned to be implemented. The description should contain: 1) the security control title; 2) how the security control is being implemented or planned to be implemented; 3) any scoping guidance that has been applied and what type of consideration; and 4) an indication of whether the security control is a common control and who is responsible for its implementation.

14. Information System Security Plan Completion Date: Enter the completion date of the plan.

15. Information System Security Plan Approval Date: Enter the date the system security plan was approved and indicate if the approval documentation is attached or on file.


Risk Assessment Information

Vulnerability assessment is the process of identifying, quantifying, and

prioritizing (or ranking) the vulnerabilities in a system. Vulnerability assessment is an

integral part of IT risk assessment and is typically performed according to the following three

steps:

Step 1: Cataloging assets and capabilities (resources) in a system. These data are

essentially an inventory of the hardware, operating systems, application programs,

appliances, facilities, and any other IT asset that can in some way be exploited. Although

potentially a daunting task, there are (fortunately) a large number of tools available that

automatically discover and catalog IT assets. Indeed, the first step of a vulnerability

assessment is almost always running one or more discovery tools. Discovery tools exist

that can be specifically targeted to catalogue everything from software products to network

devices. Regardless of origin, the asset catalogue(s) must clearly identify each asset, including model, version, and patch status.

Where the assessment is most difficult though is in determining the value of

information assets. Determining the value of corporate data, corporate intellectual

property, and corporate business processes requires business-leader subject matter experts

and almost always requires some form of quantitative estimate.

Step 2: Identifying the vulnerabilities of each resource. Again, a potentially

daunting task, were it not for the large number of readily available databases that delineate the known vulnerabilities of the vast majority of IT-related assets.

Step 3: Identifying potential threats to each resource. Threat data that are

specific to a resource are again generally available through automated tools and often are

directly available as part of the data in the vulnerability databases mentioned above.

Threats to corporate or organizational information again require use of the appropriate

subject matter experts with expert-level knowledge of who or what organizations would be

interested in the corporate data.

Vulnerability Databases are systems for collecting, maintaining, and

disseminating information about discovered (known) vulnerabilities targeting IT

resources. The data within, in most cases, include the description of the discovered

vulnerability, its exploitability, its potential impact, and the workaround to be applied

to the vulnerable system. Examples of web-based vulnerability databases are the

National Vulnerability Database [http://nvd.nist.gov/] and the Open Source Vulnerability

Database [http://osvdb.org/]. Table 4.2 shows an example of typical database content.

Table 4.2: Typical vulnerability database fields.

Date/time: Normally the date and time of the discovery of the vulnerability.

Description: A free-form textual description of the vulnerability.

Product ID: As complete an identification as possible, to include model/version, release, and patch status if available.

Category or classification: A taxonomy-based breakdown of the vulnerability. Usually includes area(s) affected (e.g. remote network access), attack type, impact, solution, and type of exploit.

Mitigation or solution: Current status and, if mitigations exist, where the fixes are described.

CVSS score: Common Vulnerability Scoring System score, if available.

References: Anything that helps clarify the vulnerability. This field is often used to point to other vulnerabilities that are related or that enable this vulnerability.

Comments: Free-form comments field.
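As a minimal sketch, a record with the fields of Table 4.2 could be represented as follows; the field names are my own shorthand, not the schema of any particular database.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class VulnerabilityRecord:
        """One vulnerability database entry, mirroring Table 4.2."""
        discovered: str                     # date/time of discovery
        description: str                    # free-form description
        product_id: str                     # model/version, release, patch status
        category: str                       # area affected, attack type, impact, ...
        mitigation: str                     # status and where fixes are described
        cvss_score: Optional[float] = None  # CVSS score, if available
        references: list = field(default_factory=list)
        comments: str = ""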


The Common Vulnerability Scoring System (CVSS) is an open framework that

provides standardized vulnerability scores. Since it is an open framework, the methods

used can be evaluated and tailored for organization-specific needs. As shown in Figure 4.1,

it is composed of three metric groups: Base, Temporal, and Environmental.

Figure 4.1: CVSS Metrics Groups (from FIRST.org website).

The base metric group contains metrics that are common across all vulnerabilities; temporal metrics are (as the name suggests) time-based; and environmental metrics are specific to an organization's or user's unique environment. Each metric group's value is then a function of the metrics within the group. Each metric group can have an effect on the other

two. Fortunately, the scoring methods are well documented, with open and easily

accessible scoring matrices for each metric group. The system is maintained by the

Forum of Incident Response and Security Teams (FIRST) [http://first.org/]. Use of this

method does require some subject matter expertise. Most often, though, a security engineer with a security engineering background and limited CVSS-specific knowledge can use the table-based tools of the guide to complete the assessment for virtually all vulnerabilities. When in doubt about the metrics of an already known vulnerability, the security engineer can review and reassess that vulnerability. An online CVSS calculator for the National Vulnerability Database is available at http://nvd.nist.gov/cvss.cfm?calculator/. Help for

selecting the appropriate answer for each field of the calculator (completed by selecting

from drop-down menus) is provided as table-based guidance. Table 4.3 is an example of the type of guidance provided:


Table 4.3: Example scoring guidance extracted from the CVSS guide (Integrity Impact).

2.1.5 Integrity Impact (I). This metric measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and guaranteed veracity of information. The possible values for this metric are listed in Table 5. Increased integrity impact increases the vulnerability score.

None (N): There is no impact to the integrity of the system.

Partial (P): Modification of some system files or information is possible, but the attacker does not have control over what can be modified, or the scope of what the attacker can affect is limited. For example, system or application files may be overwritten or modified, but either the attacker has no control over which files are affected or the attacker can modify files within only a limited context or scope.

Complete (C): There is a total compromise of system integrity. There is a complete loss of system protection, resulting in the entire system being compromised. The attacker is able to modify any files on the target system.

(Table 5: Integrity Impact Scoring Evaluation)
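To illustrate the kind of table-based scoring the guide describes, the following Python sketch computes a CVSS version 2 base score from the published base-metric weights. The constants follow the CVSS v2 specification; treat this as an illustration, not a substitute for the official calculator.

    # CVSS v2 base score from the published base-metric weights.
    AV = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector
    AC = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity
    AU = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication
    CIA = {"N": 0.0, "P": 0.275, "C": 0.660}    # C, I, A impact values

    def base_score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
        exploitability = 20 * AV[av] * AC[ac] * AU[au]
        f = 0.0 if impact == 0 else 1.176
        return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

    # Network-accessible, low complexity, no authentication, partial C/I/A:
    print(base_score("N", "L", "N", "P", "P", "P"))   # 7.5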

One method often used to help organize the effort to assess the impact of potential

risk is the attack tree. Attack trees are multi-leveled diagrams that start with one root that

branches out into layers of parents and children. Starting with the root (the highest level

parent), any parent may have multiple children, but all children have only one parent.

From the bottom up, child nodes are conditions which must be satisfied to make the

direct parent node true; when the root is satisfied, the attack is complete. Each node may

be satisfied only by its direct child nodes. The Information Systems Security Engineer (ISSE) will determine the conditions

necessary for a risk to be realized (these become the child nodes) and then by associating

a probability with each child and following the logic rules associated with the attack tree

structure, the ISSE can estimate a risk value for each risk. It is important to note that

these early attack-tree-based models used to evaluate the potential risk associated with a system are sometimes the first and best estimate of “how secure” a system is, built on reasonable, expert-based qualitative estimates.
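A minimal sketch of that estimate, assuming the common attack tree convention that an AND node is satisfied only if all of its children are (probabilities multiply, treating children as independent) and an OR node is satisfied if at least one child is. The tree shape and probabilities are illustrative assumptions.

    # Propagate leaf probabilities up an attack tree (independence assumed).
    def node_probability(node):
        if "p" in node:                        # leaf: SME-assigned probability
            return node["p"]
        ps = [node_probability(c) for c in node["children"]]
        if node["type"] == "AND":              # all children must succeed
            prob = 1.0
            for p in ps:
                prob *= p
            return prob
        prob_none = 1.0                        # OR: at least one child succeeds
        for p in ps:
            prob_none *= 1.0 - p
        return 1.0 - prob_none

    # Illustrative tree: (steal credentials AND reach host) OR exploit service.
    tree = {"type": "OR", "children": [
        {"type": "AND", "children": [{"p": 0.3}, {"p": 0.5}]},
        {"p": 0.2},
    ]}
    print(round(node_probability(tree), 3))    # 0.32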


There are many vulnerability "scoring" systems managed by both commercial and

non-commercial organizations. Some examples are the Software Engineering Institute at

Carnegie Mellon's CERT/CC, available at http://www.cert.org/certcc.html/, and the SANS Institute [http://www.sans.org/]. While useful for very specific needs,

these methods typically lack the open nature of the CVSS, which offers a common scoring reference that provides a consistent measure across all IT

environments. CVSS is also useful when a vulnerability is not in a database, when data is

inadequate to evaluate the vulnerability, or when risk handling methods complicate the

assessment of the vulnerability (of course, subject matter experts will be needed).

An example of a risk handling method that can complicate vulnerability assessment is defense in depth. Defense in Depth is an Information Assurance (IA) strategy in which multiple layers of defense are placed throughout an information system. A description of Defense in Depth is available from the National Security Agency at http://www.nsa.gov/ia/_files/support/defenseindepth.pdf.

Some databases use the NIST Security Content Automation Protocol (SCAP) [NIST SP 800-126]. NIST's Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.0 provides the technical specification for SCAP. SCAP is a suite of eleven specifications arranged in five categories (see Table 4.4).


Table 4.4: SCAP specification suites.

CATEGORY: Languages

Extensible Configuration Checklist Description Format (XCCDF) 1.2: a language for authoring security checklists/benchmarks and for reporting results of evaluating them [XCCDF]

Open Vulnerability and Assessment Language (OVAL) 5.10: a language for representing system configuration information, assessing machine state, and reporting assessment results [OVAL]

Open Checklist Interactive Language (OCIL) 2.0: a language for representing checks that collect information from people or from existing data stores made by other data collection efforts [OCIL]

CATEGORY: Reporting formats

Asset Reporting Format (ARF) 1.1: a format for expressing the transport format of information about assets and the relationships between assets and reports [ARF]

Asset Identification 1.1: a format for uniquely identifying assets based on known identifiers and/or known information about the assets [AI]

CATEGORY: Enumerations

Common Platform Enumeration (CPE) 2.3: a nomenclature and dictionary of hardware, operating systems, and applications [CPE]

Common Configuration Enumeration (CCE) 5: a nomenclature and dictionary of software security configurations [CCE]

Common Vulnerabilities and Exposures (CVE): a nomenclature and dictionary of security-related software flaws [CVE]

CATEGORY: Measurement and scoring systems

Common Vulnerability Scoring System (CVSS) 2.0: a system for measuring the relative severity of software flaw vulnerabilities [CVSS]

Common Configuration Scoring System (CCSS) 1.0: a system for measuring the relative severity of system security configuration issues [CCSS]

CATEGORY: Integrity

Trust Model for Security Automation Data (TMSAD) 1.0: a specification for using digital signatures in a common trust model applied to other security automation specifications [TMSAD]


Security Controls

Under the NIST RMF, security controls are the safeguards and countermeasures used to protect the confidentiality, integrity, and availability of information systems and of the information that is processed, stored, and transmitted by those systems. The size of an organization, the size of the system being developed, and the sensitivity and importance of the information being processed are just a few of the many factors that affect which security controls are needed to adequately mitigate the risk incurred by using the information and the information systems.

During the Categorize phase of the RMF (see Figure 3.6), the system is assigned a potential impact in the three security objective areas of Confidentiality, Integrity, and Availability based on Table 4.5 below (extracted from NIST FIPS 199).


Table 4.5: FIPS 199 Security Objectives

Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. [44 U.S.C., SEC. 3542]

LOW: The unauthorized disclosure of information could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.

MODERATE: The unauthorized disclosure of information could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.

HIGH: The unauthorized disclosure of information could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

Integrity: Guarding against improper information modification or destruction, including ensuring information non-repudiation and authenticity. [44 U.S.C., SEC. 3542]

LOW: The unauthorized modification or destruction of information could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.

MODERATE: The unauthorized modification or destruction of information could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.

HIGH: The unauthorized modification or destruction of information could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

Availability: Ensuring timely and reliable access to and use of information. [44 U.S.C., SEC. 3542]

LOW: The disruption of access to or use of information or an information system could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.

MODERATE: The disruption of access to or use of information or an information system could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.

HIGH: The disruption of access to or use of information or an information system could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

These objective levels are used to identify the standard suites of controls that define the baseline set of security controls from NIST SP 800-53, applied to the system in RMF phase 2 (Select Security Controls). Using the risk analysis (risk assessment) and attack trees when needed, this starting set of controls is customized and augmented to create the total control set needed for the system.
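
As context for how the baseline is selected: under FIPS 200, the system's overall impact level is the "high water mark" of the three FIPS 199 objective levels, and that level picks the LOW, MODERATE, or HIGH baseline from NIST SP 800-53. A minimal sketch of that selection rule follows, with a purely illustrative categorization.

# FIPS 199 categorization with the FIPS 200 "high water mark" rule:
# the highest of the three objective impact levels selects the SP 800-53 baseline.
LEVELS = ["LOW", "MODERATE", "HIGH"]   # ordered lowest to highest

def high_water_mark(confidentiality, integrity, availability):
    return max((confidentiality, integrity, availability), key=LEVELS.index)

# Illustrative system: MODERATE confidentiality and integrity with LOW
# availability places the system in the MODERATE control baseline.
print(high_water_mark("MODERATE", "MODERATE", "LOW"))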

Security controls in NIST SP 800-53 are organized into 18 families, each identified by a two-character ID code (shown in Table 4.6). Each family contains the security controls that relate to the security topic of the family. The Audit and Accountability (AU) family is shown in Table 4.7 as an example.


Table 4.6: NIST SP 800-53 control families.

ID FAMILY NAME

AC Access Control

AT Awareness and Training

AU Audit and Accountability

CA Security Assessment and Authorization

CM Configuration Management

CP Contingency Planning

IA Identification and Authentication

IR Incident Response

MA Maintenance

MP Media Protection

PE Physical and Environmental Protection

PL Planning

PS Personnel Security

RA Risk Assessment

SA System and Services Acquisition

SC System and Communications Protection

SI System and Information Integrity

PM Program Management


Table 4.7: The AU family from NIST SP 800-53.

ID FAMILY CONTROLS

AU-01 Audit and Accountability Policy and Procedures

AU-02 Auditable Events

AU-03 Content of Audit Records

AU-04 Audit Storage Capacity

AU-05 Response to Audit Processing Failures

AU-06 Audit Review, Analysis, and Reporting

AU-07 Audit Reduction and Report Generation

AU-08 Time Stamps

AU-09 Protection of Audit Information

AU-10 Non-repudiation

AU-11 Audit Record Retention

AU-12 Audit Generation

AU-13 Monitoring for Information Disclosure

AU-14 Session Audit

AU-15 Alternate Audit Capability

AU-16 Cross-Organizational Auditing

Security controls consist of the following components:

a control section;

a supplemental guidance section;

a control enhancements section;

a references section;

a priority and baseline allocation section.


Figure 4.2, an extract from NIST SP 800-53, shows an example from the Audit and Accountability family and illustrates the structure of a typical security control.

AU-3 CONTENT OF AUDIT RECORDS

Control: The information system produces audit records that contain sufficient information to, at a

minimum, establish what type of event occurred, when the event occurred, where the event

occurred, the source of the event, the outcome of the event, and the identity of any user or subject

associated with the event.

Supplemental Guidance: Audit record content that may be necessary to satisfy the requirement of

this control includes, for example, time stamps, source and destination addresses, user/process

identifiers, event descriptions, success/fail indications, filenames involved, and access control or

flow control rules invoked. Event outcomes can include indicators of event success or failure and

event-specific results (e.g., the security state of the information system after the event occurred).

Related controls: AU-2, AU-8, AU-12, SI-11.

Control Enhancements:

(1) CONTENT OF AUDIT RECORDS | ADDITIONAL AUDIT INFORMATION

The information system includes [Assignment: organization-defined additional, more detailed information] in the audit records for audit events identified by type, location, or subject.

Supplemental Guidance: Detailed information that organizations may consider in audit records

includes, for example, full-text recording of privileged commands or the individual identities

of group account users. Organizations consider limiting the additional audit information to

only that information explicitly needed for specific audit requirements. This facilitates the use

of the audit trails by not including information that could potentially be misleading or could

make it more difficult to locate information of interest. (2) CONTENT OF AUDIT RECORDS | CENTRAL MANAGEMENT OF AUDIT RECORDS

The organization centrally manages the content of audit records generated by [Assignment: organization-defined information system components].

Supplemental Guidance: This control enhancement requires that the content to be captured in

audit records be configured from a central location (necessitating automation). Related

controls: AU-6, AU-7.

References: None.

Priority and Baseline Allocation:

P1 LOW AU-3 MOD AU-3 (1) HIGH AU-3 (1) (2)

Figure 4.2: Example of typical NIST SP 800-53 security control
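
The component structure shown in Figure 4.2 maps naturally onto a simple record type. The sketch below is one hypothetical way to represent a control such as AU-3 in code; the field names are mine, not part of any NIST schema, and the text fields are abbreviated.

# A hypothetical record type mirroring the sections of an SP 800-53 control.
from dataclasses import dataclass, field

@dataclass
class SecurityControl:
    control_id: str
    title: str
    control: str                                   # the control section
    supplemental_guidance: str = ""
    enhancements: list[str] = field(default_factory=list)
    references: str = "None"
    priority: str = "P1"
    baselines: dict[str, str] = field(default_factory=dict)  # LOW/MOD/HIGH selections

au3 = SecurityControl(
    control_id="AU-3",
    title="Content of Audit Records",
    control="The information system produces audit records that contain ...",
    enhancements=["(1) Additional Audit Information",
                  "(2) Central Management of Audit Records"],
    baselines={"LOW": "AU-3", "MOD": "AU-3 (1)", "HIGH": "AU-3 (1) (2)"},
)
print(au3.control_id, au3.baselines["HIGH"])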

Since most systems receive support from an enterprise level infrastructure, organizations will define common security controls that are pervasive across all tenants. Common controls, then, are security controls that are inheritable by one or more organizational information systems. Common controls can be inherited from many sources, including the organization, organizational mission/business lines, sites, enclaves, environments of operation, or other information systems. Regardless of where a common control is provided or inherited from, all controls are designated and documented in the system's security plan.


FISMA Metrics

Every year, the federal Chief Information Officer establishes the FISMA reporting metrics. Note that metrics are the end result of taking and assessing measurements, the distinction being that a measurement is a single, usually quantitative, comparison to a known standard (e.g. a ruler) or an ordinal or computational assessment (e.g. a count or a weight). Although reporting is an annual event, the changes in the metrics reported year to year are minimal, with changes primarily instituted to address changing technology. For 2012, metrics were required in the following eleven areas:

SYSTEMS INVENTORY: The system inventory reporting required is very high level; reporting is at the level of an entire project or program. In general, a PM (ISO) is responsible for one system. The primary goal of this metric is to determine the number of systems that are authorized to operate and that are functioning with adequate security.

ASSET MANAGEMENT: This metric consists of the total number of hardware and software assets (as this is a constantly changing number, the metric is usually reported using an "as of" date). For federal systems, an automated asset discovery tool is expected to be used. One of the measures of interest in this metric is how long it takes for the automated tool to discover all of the assets (obviously, for large enterprises, this is an estimate).

CONFIGURATION MANAGEMENT: In this case, the metric covers those assets whose configurations must be managed (e.g. operating systems). The metric focuses on the percentage of assets requiring periodic maintenance (e.g. patching) that are managed by an automated configuration management capability.


VULNERABILITY AND WEAKNESS MANAGEMENT: A goal of the FISMA metrics effort is to improve federal systems' ability to determine, by automated means, the vulnerability of systems and their potential areas of weakness (this is the area of focus of this research). The measures for this metric require reporting the number of identified assets that are evaluated using an automated capability that checks for the vulnerabilities maintained by NIST in the National Vulnerability Database.
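
A minimal sketch of how such a metric might be rolled up is shown below; the input format, asset names, and scores are invented for illustration, where real reporting would draw on the output of SCAP-enabled scanning tools.

# Hypothetical roll-up of a vulnerability management metric: scan coverage
# as a percentage of the inventory, plus open findings by CVSS severity band.
from collections import Counter

inventory = {"web-01", "web-02", "db-01", "hr-laptop-17"}
scan_results = {            # asset -> CVSS base scores reported by the scanner
    "web-01": [7.5, 4.3],
    "web-02": [],
    "db-01": [9.3],
}

coverage = 100 * len(scan_results.keys() & inventory) / len(inventory)
severity = Counter(
    "HIGH" if s >= 7.0 else "MEDIUM" if s >= 4.0 else "LOW"
    for scores in scan_results.values() for s in scores
)
print(f"Scan coverage: {coverage:.0f}% of {len(inventory)} assets")
print(dict(severity))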

IDENTITY AND ACCESS MANAGEMENT: This simple metric requires

reporting the number of unprivileged and privileged user accounts.

DATA PROTECTION: This metric has many measures that are primarily

intended to identify the potential for data loss. The focus is on mobile devices and

unencrypted e-mail as these are primary sources of loss for sensitive data.

BOUNDARY PROTECTION: There are many measures in this area, ranging from a count of Trusted Internet Connections (TICs) to the number of e-mail protections directed at reducing the number of phishing attacks.

INCIDENT MANAGEMENT: For 2012, the CIO is interested in three measures: the number of assets that have been penetration tested; the number of security awareness reports (SARs) the organization has remediated; and the number of incidents that involved successful phishing attacks.

TRAINING AND EDUCATION: Although there are many measures in this category, the primary measure is the percentage of users provided annual security awareness training.

REMOTE ACCESS: This metric is an estimate of the total annual number of

remote connections.


NETWORK SECURITY PROTOCOLS: A simple measure of the total number

of outward (Internet) facing domains and outward facing servers.

FISMA metrics have a relationship to the controls discussed earlier in this chapter. The metrics clearly focus on quantifying the numbers of equipment types, interfaces, and software components, as well as the total counts of each type of device, but they also attempt to quantify the mitigation status of these components based on the automated assessments provided by SCAP-enabled scanning tools. FISMA reporting attempts to tie to the controls' implementation in a way that allows automated tools to assess continuous compliance with the controls. For example, the FISMA reporting area of VULNERABILITY AND WEAKNESS MANAGEMENT is focused on determining what percentage of the enterprise environment is monitored by automated tools and how well the enterprise support organizations are maintaining a low risk posture using these automated tools.


Chapter 5

PEOPLE

A rose by any other name – William Shakespeare

The NIST special publication 800 series (e.g. [NIST SP 800-37], [NIST SP 800-39]) uses the following definitions for the critical stakeholders in the Risk Management Framework:

Head of Agency (Element Head, Chief Executive, e.g. Director National

Security Agency or Secretary of Defense): The executive with the ultimate

responsibility for mission accomplishment and execution of business functions.

The Head of Agency sets priorities to ensure risk mitigation while balancing the

need for collaboration and information-sharing. The Element Head retains

ultimate responsibility for all IS authorization and the associated risk management

decisions made on his/her behalf but may appoint a subordinate as an information

systems approval authority.

Risk Executive (an individual or a function): The Risk Executive function may

be fulfilled by an individual or a group within an organization. The Risk

Executive ensures risk-related decisions for individual information systems

maintain an organization-wide perspective and that managing information-system

related security risk is consistent across the organization, reflects organizational

risk tolerance, and is included along with other types of risk in the organization's

risk management process. The Risk Executive (organization or individual) is

primarily a source of expertise and consultation and is usually a department or

group (e.g. the technical security group or Cyber Security Group).


Chief Information Officer (CIO): The CIO ensures that information systems

are acquired and information resources are managed in a manner consistent with

laws, Executive Orders, directives, policies, regulations, as well as priorities

established by the Element Head. The CIO develops, maintains, and ensures the

implementation of sound, integrated, IS architectures and promotes the effective,

efficient design, development, and operations of all major information-resource-management processes.

Senior Agency Information Security Officer (SAISO)/Chief Information

Security Officer (CISO): A SAISO or CISO executes the CIO’s responsibilities

under the Federal Information Security Management Act (FISMA) of 2002 and

serves as the CIO's liaison to the organization's Authorizing Official. It is this individual who will aggregate the FISMA reporting of all the organization's systems and programs into a single agency report to the OMB.

Authorizing Official (AO): An AO is an agency or element CIO or executive of

sufficient seniority to execute the decision-making and approval responsibilities

for information system authorizations to operate (called an ATO) on behalf of

the Element Head. The AO assumes responsibility for operating an IS at an

acceptable level of risk to the organization.

Delegated Authorizing Official (DAO): A DAO is delegated authority by an

AO to carry out the same activities as an AO (e.g., authorize system operations).

Security Control Assessor (SCA): An SCA (sometimes called a certifier) is

responsible for performing the evaluation (Assess Security Controls phase) of the

security-controls and features of an IS and determining the degree to which the

system meets its security requirements.


Common Control Provider (CCP): A CCP is responsible for all aspects of

providing common controls (i.e. the security controls from SP 800-53, modifications to the SP 800-53 recommended controls, and any custom controls

augmenting SP 800-53). Organizations may have multiple CCPs.

Information Owner/Steward: An Information Owner/Steward is an

organization official who "owns the data". The IO has statutory, management, or

operational authority for specific information and is responsible for establishing

the policies and procedures governing its generation, collection, processing,

dissemination, and disposal.

Mission/Business Owner (MBO): An MBO has operational responsibility for

the mission or business process supported by the mission/business segment or the

information system. The MBO is a key participant/stakeholder regarding system

life-cycle decisions.

Information System Owner (ISO)/Program Manager (PM): An ISO (aka PM)

is responsible for the overall procurement, development, integration,

modification, operation, maintenance, and disposal of an information system (as

well as the system components), to include development and provision of the

system's Security Plan (SSP).

Information System Security Engineer (ISSE): An ISSE ensures that

information-security requirements are effectively implemented throughout the

security architecting, design, development, configuration, and implementation

processes. The ISSE coordinates his/her security-related activities with ISOs,

ISSOs/ISSMs, and CCPs. The ISSE also provides the definition, design,

development, and deployment support to development systems as part of the system under development's systems engineering activity.


Information System Security Officer (ISSO)/Information System Security

Manager (ISSM): An ISSM or ISSO is responsible for maintaining the day-to-day security posture and continuous monitoring of a system.

Although all of these stakeholders affect the metrics associated with an information system, the individual roles directly involved in metrics collection and assessment are the ISSE, the ISSO/ISSM, and the ISO/PM. As stated earlier, the stakeholders most involved in security metrics are the program manager (PM or ISO), the security officer or security manager (ISSO/ISSM), and the security engineer (ISSE). Capturing the measurements and producing the metrics is primarily the responsibility of the ISSE during the development phases and primarily the responsibility of the ISSO during the operational phases. We will review each phase of the RMF to see how the ISSE and ISSO generate these metrics.

During the Categorize phase, the system is assigned a potential impact in the three security objective areas of Confidentiality, Integrity, and Availability based on Table 4.5. It is these objective levels that define the security controls from NIST SP 800-53 that are applied to the system in phase 2 (Select Security Controls). During these two initial phases, the metrics collected relate to the number of requirements (controls) selected, augmented, or created. However, this categorization is insufficient to provide the guidance needed for phase 3 (Implement Security Controls). Because of this, the IATF and the NIST Guide for Mapping Types of Information and Information Systems to Security Categories [NIST SP 800-60] define additional assessment steps that require the security engineer to perform a detailed security risk analysis and assessment.

During phase 3, the ISSE must evaluate alternative architectures to provide the most cost-effective security implementation. Attack tree based models allow the ISSE to normalize each alternative. Although the metrics produced during this phase are estimates, to provide a reasonable assessment of the alternatives for each subsystem of a system, the values in the attack trees would have to be "dollarized". This dollarized set of estimates has the added benefit of providing (albeit as a qualitative estimate) an estimate of the unmitigated risk, the mitigated risk, and the cost associated with each mitigation.
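
One hypothetical way to use such dollarized estimates is to compare mitigation alternatives by expected loss: the expected loss is the estimated probability of a successful attack multiplied by its dollar impact, and the net benefit of a mitigation is the reduction in expected loss minus the mitigation's cost. A sketch with purely illustrative numbers:

# Comparing two hypothetical mitigation alternatives by dollarized expected loss.
unmitigated_p, impact_usd = 0.30, 2_000_000    # illustrative estimates

alternatives = {
    "network segmentation": {"residual_p": 0.10, "cost": 150_000},
    "host hardening only":  {"residual_p": 0.20, "cost": 40_000},
}

baseline_loss = unmitigated_p * impact_usd     # expected loss with no mitigation
for name, alt in alternatives.items():
    residual_loss = alt["residual_p"] * impact_usd
    net_benefit = (baseline_loss - residual_loss) - alt["cost"]
    print(f"{name}: net benefit ${net_benefit:,.0f}")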

During phases 1 through 3, the ISSE (and others) will generate the system's security plan (SSP). One metric often captured during these phases is the amount (rate) of change the SSP has endured during this portion of the development.

During phase 4 (Assess Security Controls), the SCA will conduct testing to evaluate the security controls implemented for the system. This assessment involves an analysis of the SSP (now complete) and a thorough inspection of the system with automated tools (called scanning). The metrics collected during this phase include the number of known vulnerabilities encountered, unapplied patches, and failures to follow best security practices. Penetration testing may also be conducted to evaluate the system's ability to prevent or detect intrusions. The attack tree based models developed earlier in the system's development cycle may be used if additional security measures are determined to be necessary, but once this phase is complete, these models are typically no longer maintained. The end result of a successfully completed phase 4, for most systems, is that the design is considered complete.

During phase 5 (Authorize Information System), any deficiencies identified by the SCA must be resolved, and any additional security measures are implemented. The only metrics generally gathered during this phase relate to working off action items from phase 4. For most program managers and most systems, the ISSE is no longer needed; the role of the ISSO/ISSM, however, is dramatically increased.

During phase 6 (Monitor Security Controls), a full suite of measurement tools and a full set of metrics may be defined. Recall, though, that for government program managers there is a legal requirement to "be cost effective". The government PM is only required, and (in reality) only authorized, to spend funds to monitor what is necessary and reasonable. For most PMs, the metrics they believe are required are limited to the set of metrics defined in the annual FISMA requirement. Regardless, Appendix D of the NIST Special Publication Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations [NIST SP 800-137] provides guidance and recommended tools for establishing an automated capability for continuous monitoring.


Chapter 6

RESEARCH

What we've got here is failure to communicate – The Captain, in the movie Cool

Hand Luke.

My research activity spanned two years and involved four separate but related and overlapping activities. In the fall of 2011, I started my research by informally working with (and observing) several groups of associates who are professionals in IT program management, information assurance, and systems engineering. As the objective of my research was then (and continued to be) to determine, from the perspective of IA professionals, how probable it is that the issues stated in Chapter 1 are real, what the causes of the problem are, and what the potential solutions are, I knew I would need the help of others in the field. I was confident that having discussions with a broader range of subject matter experts would help both expand and refine my perspective. In addition to increasing my interactions with other IA professionals during work activities, I used my need for "research consultation" as an excuse for moving discussions away from the workplace. Information Assurance professionals are hesitant to talk about what they do for a living, but have less of a problem "opening up" to a colleague doing academic research (this might make a good sociological study). Another difficulty in working with this highly conservative (almost paranoid) community is an aversion to having informal discussions recorded or having notes taken during those discussions. These "consultations" were informal, unstructured, and usually held during a social gathering (lunch, dinner, or a company sponsored happy hour), but always focused on the central theme of the "hard problem: is it real and does it matter". In addition, I consistently discussed the concepts of models (ontologies) both of and for information systems expressed in "The Double Role of Ontologies in Information Science Research" [Fonseca 2007]. I discuss the results of these activities in the section of this chapter titled "Observations".

As a way of validating the observations, perspectives, and opinions created by observation of peers and associates, I:

Presented my initial findings at the Systems Engineering District of Columbia conference (see http://www.sedcconference.org/) in May of 2012 [Marchant SEDC 2012].

Attended the Information Assurance Symposium sponsored by the Information Assurance Directorate of the National Security Agency in August 2012 (see http://www.nsa.gov/ia/events/).

Co-authored and presented a paper [Marchant/Bonneau 2012], created as a direct result of my research, at the 2012 Information Systems Educators Conference (ISECON) (see http://www.isecon.org/).

I was particularly interested in having validation of my observations, as much of what I observed could not easily be recorded. I discuss these activities in the section of this chapter titled "Collaborations".

By December of 2012, I believed that the issues related to the apparent difficulty of constructing enterprise level security metrics were the result of a mismatch of skills during Continuous Monitoring, a critical phase of the Risk Management Framework. It was my belief that the role of the Information Systems Security Engineer is needed to maintain an enterprise level metric of IT risk exposure, but that the ISSE is not present during this critical phase. In order to explore this belief, I created and distributed a questionnaire intended to determine whether ISSEs are, in fact, scarce during the continuous monitoring phase. I discuss this activity in the section titled "Questionnaire".


During my presentation at the 2012 SEDC conference, I made the claim that "90% of the IA professionals who claim to be security engineers aren't". To my surprise, this controversial claim, made in front of a relatively large audience, was received with widespread agreement. Subsequent discussion continued to validate my belief. I believe that the reason these claims exist is that the positions IA professionals are hired into are ambiguously titled, essentially eroding the meaning of the title Security Engineer. As a result, I conducted a survey of classified advertisements to determine the skills requested under the title "Security Engineer". The findings of this survey of over 60 classified advertisements are discussed in the section titled "Classified Advertisement Survey".


Observations

I am an engineer; my entire career has been in engineering. I knew I lacked the full perspective of the range of professionals that make up the Information Assurance profession because I view everything from the perspective of an engineer. In particular, I lacked the perspective of the ISSO, ISSM, and ISO roles, and I was also weak in understanding the total range of professionals who fulfill the roles and responsibilities listed in Chapter 5. I found, after a short time observing, interacting with, and socializing with other IA professionals, that I was categorizing them loosely into one of the four groups shown in Table 6.1 below.

Table 6.1: Categories of IA professionals.

Scientist A scientist is one engaging in a systematic activity to acquire knowledge.

Scientists perform research toward increasing understanding of nature,

including physical, mathematical and social realms. Scientists use

empirical methods to study things. In risk assessment, the knowledge needed from scientists includes understanding how value is assessed and assigned to assets; what threats are (identity, motivation, enablers, vectors); and what vulnerabilities are.

Engineer An engineer applies knowledge of applied science and applied

mathematics to develop solutions for technical problems. Engineers

design materials, structures, technology, inventions, machines and

systems. Engineers use ingenuity to create things. In risk assessment,

engineers apply knowledge of asset value, threats, and vulnerabilities,

using defined, repeatable processes to determine risk and (more importantly)

to determine methods to mitigate risk. In risk mitigation, engineers apply

their skills to architect solutions that implement the controls assigned to a

system and to create and allocate requirements on the system to enact

those controls.


Technician A technician is a worker in a field of technology who is proficient in the

relevant skills and techniques of that technology. Technicians apply

methods and skill to build, operate and maintain things. In risk

assessment, technicians (e.g. IT systems administrators) are most involved

in controlling the automated risk assessment (scanning) tools used to

evaluate vulnerabilities.

Manager One who handles, controls, or directs an activity or other enterprise,

including allocation of resources and expenditures. A manager uses

qualitative methods to control the build, operation, and maintenance of

things. In risk assessment, the focus of most of the managers I observed was

on ensuring that the Select, Implement, and Assess stages of the RMF

progressed smoothly. Technical details were, at least for them, the realm

of the security engineer.

The roles discussed in Chapter 5 are assigned differently depending on the organizational level the assignee is working within. NIST defines three tiers for the RMF, notionally captured in Figure 6.1.


Figure 6.1: Tiers.

Tier 1 is the organizational level that establishes policy. As such, tier 1 is the federal agency (e.g. CIA, NSA), department (e.g. Treasury, Energy, Justice), major DoD element (e.g. Army, Air Force), business or corporate entity, or institution (e.g. university, hospital). Tier 2 is the mission/business process level (often referred to as the enterprise architecture level). Tier 3 contains the information systems, ranging in size and complexity from specific business or mission tasks (e.g. a website providing a catalog for ordering office supplies, or a small group's special purpose local area network) to monstrous wide area networks providing supporting infrastructure and collaborative environments for hundreds of thousands of end users. An ISSM at tier 1 would have assignments that are overarching and policy related. At tier 2, an ISSM could be assigned the task of translating overarching policy into more specific guidance for mission or business processes (e.g. how does an audit policy at tier 1 get translated into policy for human relations, for business development, or for marketing). At tier 3, an ISSM may have as few as one system assigned (e.g. an ISSM assigned to a development project going through the Systems Development Life Cycle defined in Chapter 3) or may have dozens of systems assigned.

Typically, individuals assigned a few systems are also assigned multiple roles. For example, I have often been assigned the roles of ISSM, ISSO, and ISSE simultaneously for the same project. Focusing primarily on enterprise level professionals, my opinion of which category from Table 6.1 each of the roles defined in Chapter 5 falls into is contained in Figure 6.2.

Figure 6.2: My Categorization of RMF Stakeholders

The job titles of the professionals filling these roles are a little more difficult to categorize, as the titles are nebulous and (especially in tier 3) the professionals filling the roles may assume multiple roles. For example, I talked with many systems administrators who clearly performed and demonstrated a technician's perspective and skill set, who filled the role of an ISSO, and yet carried the title "security engineer". Some of this title confusion may be egocentric, some may be financial (e.g. the perception that security engineers earn more than security administrators), and some may be simply the result of how the contract vehicle or company job title descriptors label the positions. As a result of this ambiguity, in the discussions in this section I focus on the roles defined in the NIST special publication series and ignore (as much as possible) job titles.

My focus is on the enterprise level professionals (at tier 3) who are directly involved in risk assessment activities and in the enterprise processes surrounding and supporting risk assessment for those types of systems that provide supporting infrastructure and pervasive mission and business process support. Since a common trait of this category of system is that they are common control providers (i.e. they do not inherit controls from another supporting infrastructure), I will not discuss the role of Common Control Provider (CCP).

Recall from Chapter 5 that the individual roles in the RMF directly involved in

metrics collection and assessment are the ISSE (engineer), the ISSO/ISSM

(technician/manager), and the ISO/PM (manager). Capturing the measurements and

producing the metrics during the development phases (selecting and implementing controls, conducting security architecture and design) is primarily the responsibility of the ISSE, an engineer, and during the operational phases (continuous monitoring) primarily the responsibility of the ISSO, a technician.

Although risk assessment is performed whenever necessary in an enterprise environment, it can often result in the system under assessment being reverted to the Implement phase of the risk management framework. Most often, though, risk assessment is performed when an enterprise project (system) is in the late stages of the design step or early in the development step, as this is the point in the system's life cycle when security architecture and security design are easiest to implement. Since returning a system to earlier phases of the RMF almost always results in at least a scaled-down security risk assessment, most of the IA professionals involved in continuous monitoring resist change (or, at least, changes that are defined as "security relevant changes", which lead to re-assessment).

Changes to an enterprise come from multiple sources. Environmental changes can occur from changes in the threat environment, technological advancement, changes in mission or business focus, legislative changes, and even societal attitudes. Enterprises change as a result of the insertion of new or recently upgraded subsystems (e.g. a new payroll system). Enterprises also change, and grow, by assimilating smaller networks and enclaves (I discuss this phenomenon in Security Engineering Lessons Learned for Migrating Independent LANs to an Enterprise Environment [Marchant/Bonneau 2012]).

Large development projects destined to be supported in an enterprise environment usually have security engineers assigned to the project (programs that do not will typically assign this role to a systems engineer). The systems security engineers working in the professional arena of Information Assurance supporting systems development are very similar to professional systems engineers working in systems engineering. Both fields must have a meta-model of the real world that allows the creation of a finished system. Systems engineers view the real world using tools that allow for the creation of a hierarchical taxonomy of requirements that can be decomposed into lower levels and translated into tests that can be used to verify that the system was built as required and to validate that the system was built to match the real world. Architects prefer to model the world using those tools that best help them create their product (hardware, software, and network infrastructure architectures); their tools create products that fit nicely into the views described in the architecture frameworks (e.g. The Open Group Architecture Framework, see http://www.opengroup.org/togaf/, or the Federal Enterprise Architecture Framework, see http://www.whitehouse.gov/omb/e-gov/fea/). Designers and developers have their favorite tools as well (e.g. UML, OML, RUP). During the development of a system, the knowledge captured in these models and produced as design artifacts is repeatedly reviewed and enriched during the gate reviews shown in Figure 3.3 (recall that a gate review is conducted during the transition from one phase to the next as a system progresses through its development). The security engineers working development projects will support the creation of gate review artifacts, but under the RMF they are directed to record their "knowledge" in the Systems Security Plan (SSP).

When we compare the six steps of the systems engineering life cycle model with the six steps of the Risk Management Framework (see Figure 6.3), there appears to be an almost one-for-one correspondence, but further analysis highlights some perturbations.

Figure 6.3: Comparison of Models.

To help clarify the differences between the two life cycle models and to enable discussions with my colleagues, I created and (occasionally) introduced a presentation-based sample scenario for a simple IT system development. The project scenario revolves around a central theme of a small "college town" bank deciding to implement a mobile branch office to support the many large, extended duration activities the college community sponsors. The presentation I used is captured in Figure 6.4 below.


Figure 6.4: Sample Scenario.

Although very lightweight in content, the scenario helped significantly to focus the conversations I held, both on the models used to help develop secure architectures and designs and on the specific models used to estimate the level of risk mitigation provided by a particular solution. When I used the scenario, I observed that during the systems engineering Definition phase, the overarching objectives for the system under development are defined and usually captured as requirements. This phase best compares to the Categorize phase and the Select Security Controls phase of the RMF, as these two phases are where the overarching security defense control environment is defined. This is clearly an engineering role, and it is important to note that the system's requirements often drive the categorization, essentially making categorization a process subordinate to definition. Since the initiation of the Implement Security Controls phase requires (as a minimum) a system design (an engineering role), these two phases roughly align, though again, system design will slightly precede implementing security controls. Completion of Implement Security Controls and Assess Security Controls appear to align with Development and Deployment, respectively. In practice, however, the implement and assess activities of the RMF are iterative and occur multiple times throughout the period covered by the Development and Deployment phases. Authorization aligns roughly with Deployment, as does the Monitor phase with the Operate step. There is no RMF phase matching the systems engineering Retirement life cycle phase. Figure 6.5 shows this comparison graphically.

Figure 6.5: Comparison of the SDLC and RMF [Marchant 2013].

The relationship of the RMF to the systems engineering review cycle makes the "knowledge" review complicated for the systems security engineer if, as is the case with most developments, the review cycle is tied to the systems engineering gating structure without regard to the delay needed to complete the RMF cycle. Although helpful in maintaining the security requirements history and risk analysis basis through the development portions of the life cycle, systems engineering gates do not exist in the operations phase, where the authorize/monitor phases occur repetitively for the security professional. In practice, what I observed is that in most cases the security knowledge is captured in the system's security documentation, most often in the system's security plan (SSP). Some organizations require additional documentation, which will often include a privileged user guide and a systems administrator guide that can, depending on the organization, contain the rationale for the creation and assignment of security controls, requirements, and risk mitigations.

With the transition of a program to the Authorize stage, the role of the ISSE in the RMF is no longer needed. Although notionally available as an advisor, the ISSE is essentially released once the tangible products of the ISSE's tasks are captured in the system's security plan and other security documentation and the assessment is complete. Continuous monitoring then becomes a process of ensuring compliance with those controls that require technician (system administrator) attention and support, and of cyclical evaluation with automated scanning tools.

Systems in the operation and maintenance mode do not need a strong engineering presence. The primary focus of this environment is on monitoring for anomalies and system failures, performing normal periodic administrative functions (e.g. help desk, user administration, storage and equipment expansion), and maintaining system components (e.g. ensuring components are at the most current patch level available). The technicians prevalent in the operations and maintenance mode are not involved in the active definition and monitoring of existing or evolving threats and do not perform asset value assessment. In fact, the only consistently reliable measures (metrics) available from the operation and maintenance environment are those directly related to the output of automated tools that scan using data from the large vulnerability databases. These measures are essentially only an estimate of the unmitigated impact and the unmitigated probability of occurrence of known vulnerabilities.


Collaborations

Since the majority of my investigation is based on literature review and immersion-based observation, I reached out to external organizations for validation and collaboration. I decided that a somewhat eclectic mix of conference-based collaborations would be helpful. I was primarily interested in the opinions of systems engineers, information assurance professionals familiar with the RMF, and academicians familiar with IT security.

To access systems engineers (as defined by the International Council on Systems Engineering, INCOSE) who specialize in security engineering, I became a member of the INCOSE working group for security engineering and vetted my observations through the group, both by presenting at the Systems Engineering District of Columbia (SEDC) conference (see http://www.sedcconference.org/) in May of 2012 [Marchant SEDC 2012] and by authoring a submission to INCOSE's peer-reviewed journal (INSIGHT) containing a description of my observations of the relationship of the RMF to the systems development life cycle.

My presentation at the SEDC was very interactive. Attended by over 60 systems engineers, the lively 30-minute session confirmed my belief that security engineers are needed to provide truly meaningful, context-based assessments of IT risk. Without the engineering skills needed for the architecting and design of attack tree based security solutions, anyone involved in automated scan based assessments will only be able to assess the implementation status of known "patches and fixes" for vulnerabilities recorded in the national vulnerability database.

I attended the Information Assurance Symposium (IAS) sponsored by the Information Assurance Directorate of the National Security Agency in an effort to gain the perspective of a broad community of professionals familiar with the RMF. Although not an active participant in this symposium (I did not have a paper or presentation submitted), there was ample opportunity to participate in working sessions, panels, and social networking. During the symposium, I used both the slides from my SEDC presentation and my simple scenario to "encourage discussion". In general, the discussions I participated in validated my observations. My categorization of the RMF roles was (much to my surprise) often a topic of interest, apparently helping several program managers better understand the skills of their programs' professionals. During the IAS, I determined that the prevailing attitude of these more frequent users of the RMF is that quantitative measurement of IT risk is not required during the continuous monitoring phase.

Although not as productive, my attendance at the Information Systems Educators Conference (ISECON) did validate my opinions on the nature of operations and maintenance roles. I was often asked, though, what skills these professionals need regardless of role. The educators at this conference in particular were understandably interested in knowing what Information Assurance professionals' skill sets and roles are; although not a topic of interest for this dissertation, the topic is of interest to me for future research.


Questionnaire

For the questionnaire, volunteers experienced with the risk management framework were solicited through direct e-mail contact (directly or indirectly through known associates). Volunteers were e-mailed the questionnaire shown in Figure 6.6 below and requested to respond via e-mail within a reasonable period of time. Once a response was received, any identifying information was removed from it, an identifier was assigned to the response, and the responses were analyzed (data reduced) and moved to an Excel spreadsheet; once all responses were transferred and the results reviewed, the e-mails were deleted. I received 25 useful responses.

Fellow Information Assurance Professional.

You are receiving this e-mail because you have agreed to participate in a research project

I am conducting. Thank you for allowing me to send you this e-mail. If you have not

agreed to participate, I apologize for the inconvenience.

I am completing a PhD program at Pennsylvania State University and am the principal

investigator in the research project described in this e-mail. My contact information is

available here:

http://ist.psu.edu/directory/rlm325

My committee chair and advisor is Dr. William McGill. Dr. McGill’s contact

information is here:

http://ist.psu.edu/directory/wlm142

The title of my project: Elicitation of Methodologies for Monetizing Enterprise Level

Security Metrics

1. Purpose of the Study: I am attempting to determine what obstacles keep IA

professionals from maintaining quantitative measures for the security levels of the

enterprises we are working to defend. Our Chief Information Officers must constantly

evaluate the costs and benefits associated with securing our enterprises. To help our


CIOs accomplish their budgetary requirements, security should be carefully examined in

both monetary and non-monetary terms to ensure that the cost of controls does not exceed

expected benefits. Security should be appropriate and proportionate to the value of and

degree of reliance on the computer systems and to the severity, probability and extent of

potential harm. The purpose of this short questionnaire is to help determine what metrics

we are creating or can create that can be used to help our decision makers place a

monetary value on security. As an example, during the initial phases of the Risk

Management Framework, security professionals must evaluate the security risk associated

with anticipated threats and vulnerabilities. Our deliverables during this phase include

some form of security assessment report. If monetized, the measures used in the security

assessment would provide the type of metric decision makers need to determine the

cost/benefit ratio. To be of value to our program decision makers, the monetized

assessment discussed above would need to be maintained through the life of a project (i.e.

from development through retirement).

2. Procedures to be followed: You will be asked to answer 3 questions and provide a

small amount of not personally identifying demographic data.

3. Duration: It will take about 10 minutes to complete the survey.

4. Statement of Confidentiality: Your participation in this research will be

anonymous. The survey does not ask for any information that would identify who the

responses belong to. In the event of any publication or presentation resulting from the

research, no personally identifiable information will be shared because your name is in no

way linked to your responses.

5. Right to Ask Questions: Please contact me at [email protected] with questions

or concerns about this study.

6. Voluntary Participation: Your decision to be in this research is voluntary. You can

stop at any time. You do not have to answer any questions you do not want to answer.

You must be 18 years of age or older to take part in this research study.

Completion and return of the survey implies that you have read the disclosure above and

consent to take part in the research. Please respond by simply replying to this e-mail

(should be addressed to [email protected]).

The results of this survey will be compiled as part of my dissertation and may be made

available for public review; if you wish to review a copy, please request it in a separate e-mail. To protect your privacy and to ensure your answers remain anonymous, all

personal references to all participants will be removed from all responses received for

this research. If you are willing to participate, please answer the three questions and

provide the demographic information requested below in line (with brief answers) by

simply responding to this e-mail.


Thank you for your help.

-------------------------------------------------------------------------------------------------------

Question 1: Do you agree with the statement above that government decision

makers are required to perform cost/benefit analysis? I am asking for your opinion.

Do you think that our CIOs must perform this type of trade? Why (or why not)?

Question 2: What methods are you aware of that can be used to evaluate the costs

and benefits associated with mitigating systems vulnerabilities (e.g. attack tree,

OCTAVE)? What skills are needed to perform this type of analysis? If you have

used any of these methods, during what phase of the systems development (or C&A

process) did you use the methods?

Question 3: What methods do you recommend for generating cost/benefits metrics

at the enterprise level? What I am asking you for here, is your recommendation for

how you would provide a metrics program that could help decision makers decide

where best to spend their security dollars. Do not be concerned about the cost or

complexity of the method.

Demographics:

The RMF (see http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf) defines multiple roles and responsibilities; please circle the role(s) that most closely define your current position: ISSE, ISSO, ISSM, ISO (PM), AO or DAO, SCA (certifier), Other.

Which of the following roles have you ever performed? ISSE, ISSO, ISSM, ISO

(PM), AO or DAO, SCA (certifier)?

What certifications do you currently have (e.g. CISSP, CEH, PMP, CAP)?

How many years of experience do you have in Information Assurance?

How many years of experience do you have as an Information Technology professional?

This completes my questions. Thank you for your time.

Figure 6.6: Contents of questionnaire.

Although the three questions in the questionnaire are important, the demographic information is equally important. I was interested in the experience, the different roles assumed, and the certifications because these are strong indicators of whether the individual has developed engineering skill sets.

During data analysis, I entered the responses from the questionnaire for current and past job title as entered, with the current role recorded as "current" in the spreadsheet and any prior roles recorded as "past". In some cases, respondents were filling two roles simultaneously, so I entered "current" in both of the appropriate columns. Table 6.2 shows the job title responses from the participants.

Table 6.2: Job answers from questionnaire response.

ID ISSE ISSO ISSM ISO AO SCA Other

BB Current

BH Current Past Past Forensics

CL Current

CR Current Incident response

DB Current

DC Past Current

DC2 Current

DG Current

EC Current Past

EL Current Current

EO Current

GM Current Past Past

JB Current Incident response

JK Past Past Current Past

JS Current Past

KJ Past Past Current Incident response

MH Current

NH Past Network Engineer

RC Current Past Past

RJ Current

RO Current

RV Current

SL Current Past

SS Current Current Past

UR Current Past Past Pen tester


Answers for years of experience and certifications were also transcribed from the e-mails to the spreadsheet and are shown in Table 6.3 below.

Table 6.3: Years of experience and Certifications.

ID  Sec exp  IT exp  CISSP  ISSEP  CISA  CISM  CEH  GSEC or Security+

BB 11 19 X

BH 27 13

CL 11 23 X

CR 10 10 X

DB 7 20 X

DC 22 22 X X X

DC2 14 14 X X

DG 13 30 X

EC 13 23 X X

EL 15 23 X X X X X

EO 1 6

GM 20 35 X

JB 2 14

JK 30 30 X X

JS 17 17 X

KJ 6 34 X

MH 7 9 X X

NH 11 25 X X

RC 14 14

RJ 11 11 X

RO 7 17 X

RV 3 23

SL 13 13 X

SS 13 15 X X X

UR 14 20 X

The certificates listed in Table 6.3 are:

CISSP: Certified Information Systems Security Professional. This certificate is governed by the International Information Systems Security Certification Consortium (ISC)², see http://www.isc2.org/. The certificate requires a minimum of 5 years of experience in information assurance and a passing grade on a 6-hour, 300-question examination that covers the following 10 areas of knowledge:

Access control

Telecommunications and network security

Information security governance and risk management

Software development security

Cryptography

Security architecture and design

Operations security

Business continuity and disaster recovery planning

Legal, regulations, investigations and compliance

Physical (environmental) security

ISSEP: Information Systems Security Engineering Professional. Also governed by the International Information Systems Security Certification Consortium (ISC)², the certificate requires an active CISSP certification and a passing grade on a 3-hour, 150-question examination that covers the following areas of knowledge:

Systems Security Engineering

Certification and Accreditation (C&A)/Risk Management Framework

Technical Management

U.S. Government Information Assurance Related Policies and Issuances

CISA: Certified Information Systems Auditor. This certificate is governed by ISACA® (formerly known as the Information Systems Audit and Control Association), see http://www.isaca.org/. As its name implies, the CISA is a certificate for IS auditors.


CISM: Certified Information Security Manager. Also governed by ISACA®, this certificate is similar to the CISSP but is more management oriented; the exam for this certificate covers these broad areas:

Access Control

Identity Management

Information Security Management

Information Security Policies/Procedures

Intrusion Prevention/Detection

Network Security

Physical Security

Security Tools

Security Trends

CEH: Certified Ethical Hacker. Governed by the International Council of

Electronic Commerce Consultants (EC-Council), see http://www.eccouncil.org/; the CEH

is achieved by passing the CEH examination. The exam is 4 hours long with 125

questions that cover a broad range of “ethical” hacking topics.

GSEC: The GIAC (Global Information Assurance Certification) Security Essentials Certificate, see http://www.giac.org/. The certificate requires passing a 180-question, 5-hour exam that covers hands-on IT topics (systems administrator skills).

Security+: The Security+ certificate is governed by CompTIA, see http://www.comptia.org. The certificate is similar to the GSEC and is primarily for systems administrators, but is slightly more focused than the GSEC. It requires passing a 90-minute exam of at most 100 questions that covers the following general areas:

Network security

Compliance and operational security


Threats and vulnerabilities

Application, data and host security

Access control and identity management

Cryptography

Of these certificates, the CISSP, CISA, and CISM are potential, but not significant, indicators of the skills needed to perform a reasonable risk assessment; the ISSEP (as an indicator of engineering skill) is a clear indicator of the skills needed.

In encoding the responses to questions 1 through 3 (see Table 6.4), I used the

following rules:

Question 1 part 1 (Q1.1): Do you agree with the statement above that government decision makers are required to perform cost/benefit analysis? I entered the exact answer received. Question 1 part 2 (Q1.2): Why (or why not)? I entered “by law” if the response was anything related to the CIO being bound by law, policy, agency regulation, or other government guidance. Note that all of the responses I received for this question indicate that most responders believe the CIO is obligated by law, policy, or regulation.

Question 2 part 1 (Q2.1): What methods are you aware of that can be used to evaluate the costs and benefits associated with mitigating systems vulnerabilities (e.g. attack tree, OCTAVE)? I entered the response as close to the verbiage used as possible. I entered RMF if the responder entered RMF or any of the NIST SP 800 series documents (e.g. 800-39); a small sketch of this collapsing rule follows this list.

Question 2 part 2 (Q2.2): What skills are needed to perform this type of

analysis? I entered the response as close to the verbiage used as possible.

Question 2 part 3 (Q2.3): If you have used any of these methods, during what phase of the systems development (or C&A process) did you use the methods? I entered only two response values: CM if the responder indicated the continuous monitoring phase, or All if the response listed multiple phases.

Question 3 (Q3): What methods do you recommend for generating cost/benefits metrics at the enterprise level? I entered the response as close to the verbiage used as possible, but shortened the response. Several of the participants provided long, detailed discussions of how to perform this effort. NOTE: Xacta© is an automated risk management tool; more information is available at http://www.telos.com/cybersecurity/risk-management/.
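As an illustration of the Q2.1 collapsing rule above, the following minimal Python sketch shows one way the rule could be automated. The function name and patterns are illustrative assumptions of mine; the actual encoding was performed by hand in the spreadsheet.

    import re

    def encode_q2_1(response: str) -> str:
        # Collapse any mention of the RMF or a NIST SP 800-series document
        # (e.g. 800-39) into the single code "RMF"; otherwise keep the
        # responder's verbiage as close to the original as possible.
        if re.search(r"\bRMF\b|NIST\s*SP\s*800|\b800-\d+\b", response, re.IGNORECASE):
            return "RMF"
        return response.strip()

    print(encode_q2_1("We rely on NIST SP 800-39"))   # -> RMF
    print(encode_q2_1("Attack tree analysis"))        # -> Attack tree analysis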

Table 6.4: Responses to questions.

ID Q1.1 Q1.2 Q2.1 Q2.2 Q2.3 Q3
BB Yes
BH Yes By law RMF CM FISMA
CL No None None
CR Yes
DB Yes By law
DC
DC2 Yes By Law RMF Know RMF All RMF
DG Yes By law
EC Yes By Law RMF Know the RMF Risk Watch, Xacta, FISMA compliant scanners
EL Yes By law Many Good interview skills All Qualitative analysis, use automated scanning tools then address highest priorities
EO Yes By law Nessus
GM Yes By law Risk Watch Know tool CM
JB Yes By Law RMF Know CVSS CM scanning tools
JK Yes By law There are many scanning tools
JS Yes XACTA All XACTA
KJ Yes By law Attack Tree SME Use SRA
MH Yes By Law Nessus Know RMF Nessus
NH Yes XACTA All XACTA
RC Yes By Law RMF scanning tools
RJ Yes By law RMF SME CM FISMA
RO Yes
RV Yes None None
SL Yes RMF SME CM FISMA
SS Yes By Law XACTA All XACTA
UR Yes By law XACTA All XACTA

The almost unanimous “yes” response to question 1 was expected, as was the

large number of By Law comments.

Although not intended as a “trick” question, Question 2 is intended to determine if the responder really understands the role of the ISSE and when a risk assessment would be performed. An RMF-aware ISSE would respond to question 2 part 3 with phase 2, 3, or 4, or possibly all, but would most likely not respond with an answer of only the Authorize (phase 5) or the Continuous Monitoring (phase 6) phase. Six responders answered correctly, indicating to me that they at least understood when in the RMF cost/benefit (risk assessment) vulnerability analysis is performed. Five responders entered CM, and 14 did not answer that part of the question, which indicates to me a lack of understanding of either the question or the RMF.
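The 6/5/14 split can be reproduced directly from the Q2.3 column of Table 6.4; the following minimal Python sketch (my transcription of that column, with None for blank cells) is included only as a checking aid.

    from collections import Counter

    # Q2.3 codes transcribed from Table 6.4 (None = no answer to that part).
    q2_3 = {"BB": None, "BH": "CM", "CL": None, "CR": None, "DB": None,
            "DC": None, "DC2": "All", "DG": None, "EC": None, "EL": "All",
            "EO": None, "GM": "CM", "JB": "CM", "JK": None, "JS": "All",
            "KJ": None, "MH": None, "NH": "All", "RC": None, "RJ": "CM",
            "RO": None, "RV": None, "SL": "CM", "SS": "All", "UR": "All"}
    print(Counter(q2_3.values()))  # Counter({None: 14, 'All': 6, 'CM': 5})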


The response to question 2 part 1 was disappointing. I was expecting to see more responses indicating methodologies that showed an understanding of the quantitative risk assessment needed for control implementation, architecture and design selection, and requirements definition and allocation. RMF or attack tree is a reasonable response, but Xacta, Risk Watch, or Nessus is not. Since only two of my responders fully met these criteria (DC2 and EL), my statement at the SEDC that 90% of those who claim to be security engineers are not, at least in regards to filling the role of security engineer in the RMF, may have been close to the truth.

My intent for question 3 was to determine what tools or methods I should look for when evaluating the classified advertisements for information systems security engineers. Again I was disappointed by the results, seeing reasonable responses from only three responders (DC2, EL, and KJ).

Although the responses to the questionnaire were disappointing, the results confirmed my belief that there is ambiguity and confusion between the role of security engineer as defined in the RMF and the job title security engineer most likely being used by my volunteers responding to this questionnaire. This is perhaps a subject for future research. The years of experience (an average of over 12 in security, almost 20 in IT), the clearly evident strength in certifications held by the responders, and the number of responses showing experience in multiple roles leave me convinced that there is not a shortage of the ISSE skill set available to assist in quantitative risk analysis during the later stages of the RMF.
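These averages can be reproduced directly from Table 6.3; the short Python sketch below simply recomputes them from the transcribed values.

    # Years of experience transcribed from Table 6.3, in participant order.
    sec_exp = [11, 27, 11, 10, 7, 22, 14, 13, 13, 15, 1, 20, 2, 30,
               17, 6, 7, 11, 14, 11, 7, 3, 13, 13, 14]
    it_exp = [19, 13, 23, 10, 20, 22, 14, 30, 23, 23, 6, 35, 14, 30,
              17, 34, 9, 25, 14, 11, 17, 23, 13, 15, 20]
    print(sum(sec_exp) / len(sec_exp))  # 12.48 -> "over 12" years in security
    print(sum(it_exp) / len(it_exp))    # 19.2  -> "almost 20" years in IT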


Classified Advertisement Survey

Using the role definitions for the RMF as shown in chapter 5, I evaluated classified advertisements for positions advertised as some variant of security engineer (e.g. information systems security engineer, IT security engineer, cyber security engineer, Information Assurance engineer) collected in one session from a single large employment database, targeting advertisements no more than 1 week from date of posting (I determined that this method resulted in the most consistent responses without redundancy). The job descriptions were analyzed to determine to what extent the advertisement met one or more of the RMF roles. Where no match was identified, I categorized the position using my more general categories of engineer, manager, technician, or scientist. I was interested both in what percentage of the advertisements were for engineers (particularly ISSEs) and in what percentage were for technicians (particularly ISSO/ISSM and systems administrators).
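To make the categorization procedure concrete, the sketch below shows a keyword-based triage of the kind described. The actual analysis was performed manually; the keyword lists here are illustrative assumptions only.

    # Hypothetical keyword lists for a few RMF roles; an advertisement is
    # assigned to the role with the most prevalent identifiers.
    ROLE_KEYWORDS = {
        "ISSE": ["security architecture", "engineering design", "requirements"],
        "ISSO": ["hbss", "epo", "install", "configure", "patch", "monitor"],
        "ISSM": ["policy", "compliance program", "manage security"],
    }

    def categorize(posting: str) -> str:
        text = posting.lower()
        scores = {role: sum(text.count(kw) for kw in kws)
                  for role, kws in ROLE_KEYWORDS.items()}
        return max(scores, key=scores.get)

    print(categorize("Install, configure, and patch the HBSS ePO server"))  # ISSO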

The posting shown in Figure 6.7 is an example of a typical posting. This example is for a position that is advertised to be a Cyber Security Engineer. The responsibilities indicate, however, that this position is for a systems administrator who will be responsible for implementation and configuration of HBSS, a security system used to monitor information systems infrastructure and systems platforms. This is clearly a “technician” job description and is most closely related to the role of ISSO. Many advertisements, however, were much harder to categorize, often asking for skills or identifying responsibilities from multiple roles. In these cases, I assigned the advertisement to the role with the most prevalent identifiers. The results of my evaluation are shown in Figure 6.8.

As a Cyber Security Engineer, you will implement, maintain, and engineer systems, software, and data integration solutions for business technical problems through the use of COTS hardware and software. Responsibilities include: Management, development, and optimization of the LAN/WAN infrastructure. Will ensure thorough implementation of all security procedures through physical measures, operating system, and industry security standards.


• Provide support in developing an appropriate Host Based Security System (HBSS) architecture and engineering design. • Create and update system design documentation. • Identify and implement improvements in existing methodology, processes, and procedures. • Assist with integration of enterprise tool sets such as Microsoft System Center Configuration Manager (SCCM), HP Server Automation (HP SA), Active Directory, etc. • Determine how best to leverage HBSS (and associated products) to meet the strategic goals by defining "use cases". • Install, configure, manage, and troubleshoot McAfee ePolicy Orchestrator (ePO) as well as the HBSS Framework package to include all additional modules (VirusScan, Host Intrusion Prevention, IntruShield, Secure Web and Email Gateways, Policy Auditor, and Foundstone). • Integrate alerts and ePO data with existing ArcSight SIEM solution. • Provide right-seat training to team leads and HBSS implementation and sustainment team members in new techniques, tools, and other HBSS-related skills. • Install, configure, maintain, and troubleshoot the HBSS ePO server and associated policies and content. • Develop and maintain policies for HBSS modules. • Monitor and assist with deployment of HBSS framework to workstations and servers via automated solutions (e.g. SCCM and HP SA). • Strategically plan for load balancing HBSS agent traffic for locations with high latency lines using SADRs. • Report progress and metrics to senior leadership on a weekly and monthly basis. • Oversee future deployments of HBSS to partner organizations as needed. • Provide leadership and guidance to subordinate engineers for the successful operation and maintenance of the HBSS environment. Requirements: Must possess 3+ years of security/systems engineering experience working in a non-management technical role to integrate COTS products. Must possess a CISSP/Security + certification and/or have a pre-registered date for when the certification test will be attempted. Must be familiar with Linux/Unix server functions and must be proficient in Microsoft Windows platforms. Active Directory experience is desired. Must be familiar with security analytical/vulnerability assessments tools. Will be working in a highly active environment where multiple tasks are expected to be worked simultaneous; consequently, the candidate must be able to work independently of others and be efficient with his/her time. • Experience with Windows and Linux system administration • Experience with Windows Active Directory • Experience with enterprise-wide security programs • Experienced in large scale network security design, deployment and support • Knowledge of security compliance policy, programs, processes, and metrics • Knowledge of Cyber Security and Information Protection and Privacy • Knowledge of Internal audit and corrective action plans for information protection and security • Knowledge of network engineering concepts • Experience with leading or mentoring a team of network security practitioners • Experience with security engineering, including security testing and evaluation, certification and accreditation, or penetration testing • Strong Networking background combined with Strong Security • Must possess excellent interpersonal and communication skills • Possess the ability to be a self driven quick learner with attention to details and quality • BA/BS in Computer Science, Information Security, or related field; a Master's degree is preferred; several years of relevant experience may be substituted.

Figure 6.7: Example of Talent Acquisition Posting.


Figure 6.8: Talent Acquisition Review Results.

Clearly, there is some ambiguity in the job title security engineer!

Chapter 7

CONCLUSIONS

I consider a goal as a journey rather than a destination. And each year I set a

new goal. - Curtis Carlson

Very early in my research, I established a “goal” for my research by expressing three hypotheses:

H1: Holistic quantitative information systems enterprise security metrics are not available because executives' and managers' perceptions are that they are not needed. Essentially, managers believe qualitative measures are sufficient.

H2: Information Systems Security Engineers (ISSEs) use methods that can

produce reasonable estimates of federal information systems enterprise security. Further,

ISSEs create reasonable models during the execution of their functions that, if

maintained, could provide these quantitative metrics.

H3: The existing Risk Management Framework used by Information Assurance

Professionals may result in the lack of sufficient skill set and motivation to maintain an

ontology (or whatever IA professionals call an ontology) that could be used to provide

reasonable quantitative estimates of federal information systems enterprise security

metrics.

Very early in my research, during the “observations phase”, I determined that the prevailing attitude of the IA professionals involved in IT risk management is that maintaining enterprise level quantitative metrics is not required. In essence, the almost unanimous opinion of my “subjects” was that maintaining quantified or “dollarized” metrics was unnecessary. Further, almost all who understood the regulatory environment for federal systems agreed that federal managers have an obligation to be cost effective but do not have an obligation to quantify the basis for the cost effective decision at the enterprise level (i.e., qualitative prioritization is sufficient). The exceptions to this opinion all worked at the tier 1 level, an opinion I did not encounter until I attended the Information Assurance Symposium. Measuring security by equating security spending of $X to a total enterprise value of $Y was interesting, but did not help determine where best to allocate the security budget.
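For context, the classic “dollarized” formulation from traditional quantitative risk management is annualized loss expectancy (ALE = single loss expectancy × annualized rate of occurrence). The sketch below works one example; all input values are assumptions for illustration, not data from this study.

    # Classic quantitative ("dollarized") risk arithmetic; all inputs assumed.
    asset_value = 500_000          # dollars at risk
    exposure_factor = 0.3          # fraction of value lost per incident
    sle = asset_value * exposure_factor   # single loss expectancy: $150,000
    aro = 0.2                      # expected incidents per year
    ale = sle * aro                # annualized loss expectancy: $30,000/year
    print(f"SLE = ${sle:,.0f}; ALE = ${ale:,.0f} per year")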

What is unquestionably needed, however, is some method that provides the relative impact of a risk and a related cost for mitigating that risk. What the enterprise level security managers and systems owners in my groups consistently said is that they need prioritized lists that indicate the order of importance for focusing their limited security assets (budget). Where quantified data would occasionally be helpful, however, is in determining whether to organizationally accept a risk (again, a tier 1 issue).

Most of my participants understood that the quantitative measures that arise in risk assessment are in reality estimates based on guesses from subject matter experts. Although certainly valuable during the evaluation of alternative architectures and designs early in the development of an information system, these quantified guesses are often not worth the expense of maintaining in an operations and maintenance environment. Qualitative assessment (e.g. assigning a qualitative value of low, medium, or high) can provide a quick and easy method of communicating risk assessment results and risk mitigation needs to decision makers. Although qualitative methods may still require the use of subject matter experts, relaxing the level of detail can greatly reduce the time and expense of conducting the assessment.
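A minimal sketch of how such a qualitative rollup might be automated follows; the dollar thresholds are illustrative assumptions of mine, not values drawn from the research.

    def qualitative_level(expected_annual_loss: float) -> str:
        # Bin a quantitative estimate into the low/medium/high scale
        # discussed above; thresholds are assumptions for illustration.
        if expected_annual_loss < 10_000:
            return "low"
        if expected_annual_loss < 250_000:
            return "medium"
        return "high"

    for estimate in (5_000, 75_000, 1_000_000):
        print(f"${estimate:,} -> {qualitative_level(estimate)}")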

As I studied the role of ISSO during the Continuous Monitoring phase of the RMF, I discovered that continuous monitoring does not evaluate threat vectors. In fact, the focus of continuous monitoring is, with very rare exception, completely compliance and vulnerability oriented. In this final phase of the RMF, both the ISSO and the ISSM focus on ensuring that the information system maintains the controls defined in the SSP and that the system's security preventive maintenance posture is as current as vendor update cycles allow. The CM phase is essentially grounded in the use of automated scanning tools.

In one method of categorizing risk assessment approaches, three orientations are presented: threat oriented, asset impact oriented, and vulnerability oriented. In threat

oriented, the threat environment is evaluated to identify threat sources and threat vectors

(or events). The goal of threat oriented analysis is to create and then use threat scenarios

to identify and mitigate the risks. Similarly in asset impact oriented assessment, the asset

impacts are prioritized and used as the source for developing risk scenarios. In

vulnerability oriented assessments, the focus is on the known vulnerabilities associated

with information systems assets. Each method has advantages and weaknesses. Threat

oriented tends to require extensive awareness of the potential adversaries but has the

advantage of providing clear focus to the defense posture. Asset impact provides clear

focus on what is the most important set of assets to defend. Vulnerability oriented is perhaps the most effective at negating unknown or evolving threats and protecting assets

of unknown or shifting value but requires the most exhaustive inventory processes and

relies most heavily on automated tools.

Continuous monitoring as defined in the RMF is vulnerability oriented but excludes any attempt to assign asset values to the risk assessment function. In addition, no attempt is made to determine if the threat vectors defined in the vulnerability databases are relevant to the vulnerabilities identified by the automated scanning tools. Essentially, the qualitative method most often used by adopters of the RMF (and the associated ISO/IEC framework) assesses risk using the qualitative metric values of whatever vulnerability database has been selected to support the automated scanning tools being used to assess the systems being monitored. Although this type of approach may result in unnecessary maintenance activity, at the enterprise level, where change is diverse and dynamic, attempting to expend effort clearly defining potential threat vectors is arguably more expensive and less effective than simply taking whatever steps are necessary to minimize all identified vulnerabilities (applicable or not).
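The vulnerability-oriented prioritization described above can be pictured as little more than sorting scanner findings by the severity score supplied by the selected vulnerability database (CVSS base scores in this sketch); the findings, identifiers, and field names below are hypothetical.

    # Hypothetical scanner output; severity comes from the vulnerability
    # database (CVSS base scores), with no threat-vector or asset-value input.
    findings = [
        {"host": "web01", "cve": "CVE-2013-0001", "cvss": 9.3},
        {"host": "db01",  "cve": "CVE-2013-0002", "cvss": 4.0},
        {"host": "app01", "cve": "CVE-2013-0003", "cvss": 7.5},
    ]
    for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
        print(f"{f['cvss']:>4}  {f['host']}  {f['cve']}")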

Coles-Kemp, in The Effect of Organisational Structure and Culture on Information Security Risk Processes [Coles-Kemp 2009], describes the results of an 8-year longitudinal study of 36 organizations that used an ISO 27001 Information Security Management System (ISMS). The paper states that although the type of assessment (qualitative vs. quantitative) will be determined by the problem being solved, most organizations typically used quantitative methods early in their existence but evolved into more qualitative methods as the organization matured.

Christine Kuligowski [Kuligowski 2009] concludes that the presence of a risk framework and a program to ensure compliance with the controls defined by the framework will result in measurable decreases in reportable security incidents. From the perspective of operations and maintenance leadership, this is the goal. All indications are that exclusively vulnerability-based compliance testing and a qualitative risk assessment methodology indeed provide a cost effective methodology for mitigation and control.

This discussion would indicate that my hypothesis H1 is correct. But if H1 is correct, is the hard problem wrong? Recall that the hard problem is that quantitative enterprise security metrics are hard to define, achieve, and maintain. I believe the hard problem does exist, but do not agree that this problem needs to be solved. I believe enterprise level leadership has the ability to provide reasonably quantified metrics when needed. Just as the ISSE uses asset values, threat vectors, and vulnerability analysis when architecting and designing security solutions and implementing controls, enterprise leadership can assemble the necessary subject matter expertise to address evolving threats, enterprise migration to new architectures, paradigm shifting advances in technology (e.g. cloud), and whatever technology change arises to adequately conduct risk assessment as needed.


A common problem of Information Assurance professionals involved in computer

network and information systems defense is that it is difficult to predict when and what

the next major threat will be. The recent disclosures by privileged insiders, for example,

have highlighted the importance of developing new methods to counter the potential for

uncontrolled and unwanted data loss. Having comprehensive enterprise level quantitative

metrics would most likely have helped very little to avoid this unanticipated threat. This

does not mean that a quality quantitative risk assessment can’t be performed.

Organizations across the country are, in fact, responding by assembling their appropriate

subject matter experts to perform the asset evaluations, threat vector analysis, and

vulnerability analysis necessary to craft risk mitigation responses to this “insider threat”.

In my second hypothesis, H2, I claim that Information Systems Security

Engineers (ISSEs) use methods that can produce reasonable estimates of federal

information systems enterprise security. Further, ISSE’s create reasonable models during

the execution of their functions that, if maintained, could provide these quantitative

metrics. I believe that this hypothesis too is true but, as with the hard problem, it does not matter. Maintaining these quantitative metrics is just not cost effective when compared to the cost of simply assembling the appropriate experts to

perform a risk assessment when needed (as has been the case in responding to the insider

threat).

In my third and final hypothesis, H3, I stated that the existing Risk Management

Framework used by Information Assurance Professionals may result in the lack of

sufficient skill set and motivation to maintain an ontology (or whatever IA professionals

call an ontology) that could be used to provide reasonable quantitative estimates of

federal information systems enterprise security metrics. The results of my evaluation are inconclusive; I can't determine definitively that the RMF is shaping the organizational structures in a way that eliminates the ISSE skill set needed to maintain these metrics. But whatever the cause, the results are the same: the role of ISSE is not maintained for an enterprise during continuous monitoring. But does this absence mean the skill set is not available? Again, my results are inconclusive, mainly because the metrics are not necessary. I suspect, though, that my hypothesis is wrong, as I have witnessed

firsthand enterprise leadership forming expert teams that contain the skillset for

creating the metrics, leading me to believe that the same teams could maintain the

metrics. In addition, my questionnaire responses (years of experience, certifications,

multiple roles filled) and my own observations of IA professionals leave me believing

that the skill sets of the technicians available during continuous monitoring could at least

maintain quantitative metrics if procedures for the maintenance were properly

documented.

I approached this research intending to evaluate the People, Technology, and Information foundational to this problem. I believe I accomplished that task. To recap some of the more interesting findings of my research: through literature review, extensive immersive observation, peer level collaboration, and an independent research project, I determined that:

The IRC hard problem is real but is not important. Using quantitative metrics during development is useful for supporting quantitative decisions (e.g. alternative design decisions), but in enterprise environments, where change tends to be less alternative based (e.g. product improvements but not architecture modifications), qualitative security metrics are sufficient. Further, continuous monitoring with automated tools is supported by empirical data as being effective at reducing the number of reportable incidents for organizations.

The technology is sufficient. As an example framework, the RMF is the result of intensive study of best industry practices and, as one of the two most used risk management frameworks, is applicable to all organizations (commercial, public, academic, international, and federal). Freely available, the RMF contains standards, templates, and procedures for producing quantitative security metrics.

Information is sufficient. The databases, templates, and methodologies for using subject matter experts to quantify asset values, threat vectors, and vulnerabilities are all available.

The roles and responsibilities of the people are well defined.

Creating quantitative metrics from the information depends on the skill set defined for the Information Systems Security Engineering role, a skill set that is not maintained once a system enters the Authorize phase of the RMF. Since quantitative enterprise security metrics apparently are not needed during continuous monitoring, this is not a problem.

The job title “security engineer” is ambiguous and leads to confusion in

establishing occupancy of the role Information Systems Security Engineer.

It took a long time and a lot of effort to come to the conclusion that the hard problem I was researching wasn't all that hard; solving it quite simply just isn't cost effective. But I did answer my research questions, contributed to general knowledge in systems engineering and information systems education, and identified some topics directly related to this effort that I intend to continue to research; and I thoroughly enjoyed the journey.

As a consequence of unanswered questions raised by this research, I have identified and initiated two future research activities. I am in the process of authoring a paper for presentation through INCOSE on the multiple roles of a security engineer and intend to continue to contribute to the INCOSE security working group as they mature the security engineering input to INCOSE's systems engineering handbook. I intend to submit the paper for consideration for the 2014 INCOSE International Conference (the call for papers requires submission by the end of November 2013).

The second activity will result in a similar paper but focused on the broader area

of the multiple professional titles in the Information Assurance Profession. I have not

selected a venue for this paper but intend to have it prepared for submission by January

2014 and will most likely select an academic venue that is pedagogy related.

Chapter 8

BIBLIOGRAPHY

[Coles-Kemp 2009] Coles-Kemp, L. (2009). The Effect of Organisational Structure and Culture on Information Security Risk Processes. Administrative Science Quarterly, 17(1), 1-25.

[Evans 2004] Evans, Karen, Testimony before the Committee on Government Reform,

Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the

Census, 16 March 2004.

http://www.whitehouse.gov/omb/assets/omb/legislative/testimony/

evans/karen_evans031604.pdf

[Fonseca 2007] Fonseca, F. (2007). The Double Role of Ontologies in Information

Science Research, Journal of the American Society for Information Science and

Technology, 58(6), pp. 786-793.

[Gruber 1993] Gruber, T. (1993). A translation approach to portable ontology

specifications. Knowledge Acquisition, 5, pp. 199-220.

[IEEE 1220] IEEE STD 1220-2005 (Sep 2005) IEEE Standard for Application and

Management of the Systems Engineering Process

[INCOSE SEH] Haskins, C., ed. (2010). Systems Engineering Handbook: A Guide for

System Life Cycle Processes and Activities. Version 3.2. Revised by M. Krueger, D.

Walden, and R. D. Hamelin. San Diego: INCOSE

[ISO/IEC 15288] ISO and IEC (International Organisation for Standardisation and

International Electrotechnical Commission). 2008. ISO/IEC 15288, System Life Cycle

Processes

[ISO/IEC 21827] ISO and IEC (International Organisation for Standardisation and

International Electrotechnical Commission). 2002. ISO/IEC 21827, Information

technology-systems security engineering-capability maturity model [SSE-CMM®]

[ISO/IEC 27000 Series] Wikipedia, The Free Encyclopedia, The ISO/IEC 27000 Series,

http://en.wikipedia.org/wiki/ISO/IEC_27000-series (accessed August 10, 2013).


[Kuligowski 2009] Kuligowski, C., Comparison of IT Security Standards. Master's thesis, accessed August 2013 from

http://www.federalcybersecurity.org/CourseFiles/WhitePapers/ISOvNIST.pdf/

[Marchant/Bonneau 2012] Marchant R., Bonneau, T, Security Engineering Lessons

Learned for Migrating Independent LANs to an Enterprise Environment. In: Proc

ISECON 2012, v29 (New Orleans)

[Marchant 2013] Marchant R., Security Engineering Models, in INSIGHT 16 (2): 34-36.

[NIST FIPS 199] FIPS (Federal Information Processing Standards). 2004. FIPS 199. Standards for Security Categorization of Federal Information and Information Systems. http://csrc.nist.gov/publications/PubsFIPS.html

[NIST FIPS 200] FIPS (Federal Information Processing Standards). 2006. FIPS 200. Minimum Security Requirements for Federal Information and Information Systems. http://csrc.nist.gov/publications/PubsFIPS.html

[NIST IR-7298] National Institute of Standards and Technology. 2013. NIST IR-7298

Rev 1. Glossary of Key Information Security Terms.

http://csrc.nist.gov/publications/nistir/ir7298-rev1/nistir-7298-revision1.pdf

[NIST SP 800-12] National Institute of Standards and Technology, 1995 NIST SP 800-

12, An Introduction to Computer Security: The NIST Handbook.

http://csrc.nist.gov/publications/nistpubs/800-12/handbook.pdf

[NIST SP 800-18] National Institute of Standards and Technology, 2006, NIST SP 800-

18 Revision 1, Guide for Developing Security Plans for Federal Information System.

http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-30] NIST (National Institute of Standards and Technology). 2012. NIST

SP 800-30. Guide for Conducting Risk Assessments.

http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-37] NIST (National Institute of Standards and Technology). 2010. NIST

SP 800-37 revision 1. Guide for Applying the Risk Management Framework to Federal

Information Systems: A Security Lifecycle Approach.

http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-39] NIST (National Institute of Standards and Technology). 2011. NIST

SP 800-39. Managing Information Security Risk: Organization, Mission, and

Information System View. http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-53] NIST (National Institute of Standards and Technology). 2009. NIST SP 800-53 revision 3. Recommended Security Controls for Federal Information Systems and Organizations. http://csrc.nist.gov/publications/PubsSPs.html


[NIST SP 800-53A] NIST (National Institute of Standards and Technology). 2010, NIST

SP 800-53A Revision 1. Guide for Assessing the Security Controls in Federal

Information Systems and Organizations. http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-60] NIST (National Institute of Standards and Technology). 2008. NIST

SP 800-60 revision 1. Guide for Mapping Types of Information and Information Systems

to Security Categories, Volume 1. http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-64] National Institute of Standards and Technology, 2004, SP 800-64

Revision 1, Security Considerations in the Information System Development Life Cycle.

http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-65] National Institute of Standards and Technology, 2005, SP 800-65,

Integrating Security into the Capital Planning and Investment Control Process

http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-70] National Institute of Standards and Technology, 2011, SP 800-70,

Revision 2, National Checklist Program for IT Products--Guidelines for Checklist Users

and Developers http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-126] National Institute of Standards and Technology, 2009, SP 800-126, The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.0. http://csrc.nist.gov/publications/PubsSPs.html

[NIST SP 800-137] National Institute of Standards and Technology, 2011, SP 800-137,

Information Security Continuous Monitoring (ISCM) for Federal Information Systems

and Organizations. http://csrc.nist.gov/publications/PubsSPs.html

[NSA IATF] NSA (National Security Agency), 2002. Information Assurance Technology

Framework, revision 3.1. http://www.dtic.mil/dtic/

[OMB A-130] OMB Circular A-130, Management of Federal Information Resources,

November 2000.

[PL 93-579] Privacy Act of 1974 (Public Law 93-579), September 1975.

[PL 97-225] The Federal Managers Financial Integrity Act (FMFIA), July 1982.

[PL 103-62] The Government Performance and Results Act (GPRA), August 1993.

[PL 104-013] Paperwork Reduction Act of 1995 (Public Law 104-13), May 1995.

[PL 104-106] Information Technology Management Reform Act of 1996 (Public Law

104-106), August 1996.


[PL 107-347] The E-Government Act of 2002 (Public Law 107-347), Title III of this Act

is the Federal Information Security Management Act of 2002 (FISMA), December 17,

2002.

[Popper 1959] Popper, Karl R. (1959). The Logic of Scientific Discovery. Springer, 1959.

[Verendel 2009] Verendel, Vilhelm (2009). Quantified Security is a Weak Hypothesis:

A critical survey of the results and assumptions. NSPW ‘09

VITA Robert L. Marchant

EXPERIENCE

SOTERA Defense Solutions - Technical Fellow (8/2009 – Present)

Raytheon – Senior Principal Systems Engineer (5/1984 – 8/2009)

Amoco Production Company - VM Systems Programmer (7/1981 – 4/1984)

Storage Technology Corporation - VM Systems Programmer (1/1981 – 6/1981)

United States Army - Officer (1/1977-12/1980)

EDUCATION

Bachelor of Science Computer Science (December 1976), Florida Institute of Technology

- Melbourne, Florida

Master of Business Administration (March 1990), Florida Institute of Technology - St.

Petersburg, Florida

PUBLICATIONS

Robert L. Marchant, Robert Cole, Chao-Hsien Chu: Answering the Need for Information

Assurance Graduates: A Case Study of Pennsylvania State University’s Security and

Risk Analysis Major, In: Proc ISECON 2007, v24 (Pittsburgh)

Robert L. Marchant: A survey of privacy concerns with dynamic collaborator discovery

capabilities. In: Proceedings of the 2007 Symposium on Usable Privacy and Security

2007. pp. 159-160

Robert L. Marchant: Answering Common Access Control Terminology Used in

Multilevel Security Systems, In: Proc ISECON 2012, v29 (New Orleans)

Robert L. Marchant, Thomas Bonneau: Security Engineering Lessons Learned for

Migrating Independent LANs to an Enterprise Environment. In: Proc ISECON 2012,

v29 (New Orleans)

Robert L Marchant. 2013. “Security Engineering Models” INSIGHT 16 (2): 34-36.

PATENTS

System and Method for Providing Voice Communications Over a Multi-Level Secure

Network. R Magon, R Marchant, J Masiyowski, M Tierney, US20100296444

User Interface for Providing Voice Communications over a Multi-Level Secure Network.

R Magon, R Marchant, J Masiyowski, M Tierney, US20100299724

Analog Voice Bridge. R Magon, R Marchant, J Masiyowski, M Tierney,

US20100296507