SI4000: Systems Engineering & Analysis [Expeditionary Warfare: Protection of the Sea Base]
Information Assurance Plan for the Protection of the Sea Base Information Systems

KWOK Chee Khan (James)
KWOK Vi-Keng (David)
NEO Soo Sim (Daniel)
SAW Tee Huu
TAN Boon Hwee (Nicholas)
TAN Kheng Lee (Gregory)
TEO Tiat Leng

Advisor: Prof Karen Burke
Jan - Sep 2003
Naval Postgraduate School
ABSTRACT

Increasingly complex information systems are being integrated into traditional war-fighting
disciplines such as mobility; logistics; and command, control, communications, computers,
and intelligence (C4I). These information systems are a double-edged sword: one edge
represents areas that war-fighting components must protect, while the other creates
new opportunities that can be exploited against adversaries or used to promote common
interests.
As part of the overall system integration project for the protection of the Sea Base, this IA
study focuses on establishing an IA plan to protect and defend the information and
information systems of the Sea Base, to ensure their confidentiality, integrity, availability,
authentication and non-repudiation. The information assurance requirement is scoped within
the Sea Base information systems’ mission, assets, operating environment, as well as their
likely threats and vulnerabilities.
A three-stage approach was adopted in our study. First, the team studied and identified
potential challenges in implementing the current Naval IA policy in the context of the
Sea Base. Second, guided by the current policy and the Navy defense-in-depth strategy,
the team researched a technology forecast in the information systems arena and
recommended nine technologies for deployment in the Sea Base in 2015; these nine
technologies address the requirements of the defense-in-depth strategy. Third, the
Information Assurance Analysis Model was applied to four of these technologies to
evaluate the relative costs and effects of implementing them over the baseline system.
Key Words: Protection of the Sea Base, Information Assurance, Computer Security Technology,
Information Assurance Modeling.
TABLE OF CONTENTS

1 Introduction
  1.1 Background
  1.2 Information Operations
  1.3 Offensive Information Operations
  1.4 Defensive Information Operations
  1.5 Information Assurance
  1.6 Defense in Depth Strategy
2 Mission, Assets & Environment
  2.1 Mission
  2.2 Assets
  2.3 Environment
3 Threats & Vulnerabilities
4 Policy
  4.1 The Information Assurance Policy
  4.2 Challenges
  4.3 Customizing to the Operational Context & Mitigating the Challenges
5 Protective Controls
  5.1 Network
  5.2 Wireless Communications
  5.3 IA Mechanisms
6 Technology Look Ahead
  6.1 E-Bomb
  6.2 Physical Access Control – Biometrics
  6.3 Laser Communication
  6.4 Secure Tunnels
  6.5 Intrusion Prevention and Immune Computer Systems
  6.6 Intelligent Software Decoy
  6.7 System Redundancy - ForceNet
  6.8 Security through Obscurity
  6.9 SimSecurity
7 Modeling Information Assurance
  7.1 Information Assurance Analysis Model (IAAM)
  7.2 Information Assurance Hierarchy (IA)
  7.3 The Impact of IA on System Operational Capability (IOC)
  7.4 The Impact of IA on Resource Costs
  7.5 Modeling the Future System
8 Recommendations
9 Conclusion
10 References
11 Glossary of Terms
Appendix 1
Appendix 2
Appendix 3
Initial Distribution List
1 INTRODUCTION
1.1 Background
Increasingly complex information systems are being integrated into traditional war-
fighting disciplines such as mobility; logistics; and command, control,
communications, computers, and intelligence (C4I). Many of these systems are
designed and employed with inherent vulnerabilities that are, in many cases, the
unavoidable consequences of enhanced functionality, interoperability, efficiency, and
convenience to users. The low cost associated with such technology makes it efficient
and cost effective to extend the capabilities (and vulnerabilities) to an unprecedented
number of users. The broad access to, and use of, these information systems enhances
war-fighting. However, these useful capabilities induce dependence, and that
dependence creates vulnerabilities. These information systems are a double-edged
sword: one edge represents areas that war-fighting components must protect,
while the other creates new opportunities that can be exploited against
adversaries or used to promote common interests.
1.2 Information Operations
Information Operations (IO) are actions taken to affect adversary information and
information systems, while defending one’s own information and information systems.
[Joint1 1998] IO require the close, continuous integration of offensive and defensive
capabilities and activities, as well as the effective design, integration, and interaction of
command and control (C2) with intelligence support.
Figure 1-1. Information Operations - Capabilities and Related Activities
IO are conducted through the integration of many capabilities and related activities.
Major capabilities to conduct IO include, but are not limited to, Operations Security
(OPSEC), Psychological Operations (PSYOP), military deception, Electronic Warfare
(EW), and physical attack/destruction, and could include computer network attack
(CNA). Public affairs (PA) and civil affairs (CA) activities are also considered IO
activities (see Figure 1-1).
IO can be further classified into Offensive IO and Defensive IO.
1.3 Offensive Information Operations
Offensive IO may be conducted in a variety of situations and circumstances across the
range of military operations and may have their greatest impact on influencing an
adversary decision maker in peacetime and the initial stages of a crisis.
Offensive IO involves the integrated use of assigned and supporting capabilities and
activities, mutually supported by intelligence, to affect adversary decision makers and
achieve or promote specific objectives. These assigned and supporting capabilities
and activities include, but are not limited to, OPSEC, military deception, PSYOP, EW,
physical attack/destruction, and special information operations (SIO), and could
include CNA.
Figure 1-2. Notional Information Operations Engagement Timeline
The initial IO goals are maintaining peace, defusing crises, and deterring conflict. As
the situation moves toward conflict, the ability to target and engage
critical adversary information and information systems becomes more difficult. If
deterrence fails, all capabilities may be applied to meet the stated objectives. As an
adversary prepares for conflict, information systems may become crucial to adversary
operations. (Figure 1-2)
1.4 Defensive Information Operations
Information systems serve as enablers and enhance warfighting capabilities; however,
increasing dependence upon rapidly evolving technologies makes joint forces more
vulnerable. Since it is a practical impossibility to defend every aspect of our
infrastructure and every information process, defensive IO ensure the necessary
protection and defense of information and information systems upon which joint
forces depend to conduct operations and achieve objectives. Four interrelated
processes comprise defensive IO: information environment protection, attack
detection, capability restoration, and attack response. Offensive actions play an
integral role in the defensive process in that they can deter adversary intent to employ
IO and/or neutralize adversary capabilities. The defensive IO processes integrate all
available capabilities to ensure defense in depth. Fully integrated offensive and
defensive components of IO are essential.
Defensive IO integrates and coordinates policies and procedures, operations,
personnel, and technology to protect and defend information and information systems.
Defensive IO is conducted and assisted through information assurance (IA), OPSEC,
physical security, counter-deception, counter-propaganda, counterintelligence (CI),
EW, and SIO. Defensive IO ensures timely, accurate, and relevant information access
while denying adversaries the opportunity to exploit friendly information and
information systems for their own purposes. Offensive IO also can support defensive
IO.
Figure 1-3 provides an overview of the defensive IO process and is a model scalable
to all levels of war.
Figure 1-3. Defensive Information Operations Process
1.5 Information Assurance
Within the framework of IO, the scope of this study is narrowed to the specific area of
IA. As part of the overall system integration project for the protection of the Sea Base,
this IA study focuses on establishing an IA plan to protect and defend the information
and information systems of the Sea Base, to ensure their confidentiality, integrity,
availability, authenticity, and non-repudiation.
The information systems encompass the entire infrastructure, organization,
personnel, and components that collect, process, store, transmit, display, disseminate,
and act on information. The IA challenge involves protecting all of these in an
integrated and coherent manner [DoN2 2000].
The key IA attributes are defined as follows:

Confidentiality supports the protection of information and resources from
unauthorized disclosures.

Integrity supports the protection of information against unauthorized modification or
destruction, and the ability to detect information that has been altered intentionally or
maliciously.

Availability supports the assurance of timely, reliable access to data and information
systems for authorized users, and precludes denial of service or access in order to
provide assurance of continuity of operations.
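As an illustration of the integrity attribute, the following sketch (the message and hard-coded key are invented purely for demonstration; real keys would come from an approved key-management system) uses an HMAC tag to detect whether information has been altered:

```python
import hmac
import hashlib

# Hypothetical shared key, hard-coded only for illustration.
KEY = b"example-key-material"

def tag(message: bytes) -> bytes:
    """Compute an integrity tag (HMAC-SHA256) over a message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Return True only if the message matches its tag, i.e. was not altered."""
    return hmac.compare_digest(tag(message), received_tag)

original = b"resupply at 0400"
t = tag(original)

assert verify(original, t)                 # unmodified message: accepted
assert not verify(b"resupply at 0500", t)  # altered message: detected
```

Any modification to the message changes the tag, so tampering is detectable even though the tag alone does not provide confidentiality.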
1.6 Defense in Depth Strategy
In the protection of the Sea Base, it is necessary to address the fundamental question
of which strategy should be used to accomplish its Information Assurance (IA) needs.
An ever-increasing organizational dependency on automated capabilities for survival,
coexistence, and growth requires a broad and integrated IA strategy. The Defense in
Depth strategy offers just that.
The Defense in Depth strategy takes a broad security approach by defining a number
of operationally interoperable and complementary technical and non-technical
Information Assurance layers of defense [Boyce 2002]. The totality of these layers is
what provides a cohesive and integrated process for defense. The Defense in Depth
strategy recognizes that because of the highly interactive nature of the various systems
and networks, no single system can be adequately secured unless all
interconnected systems are secured as well. There is a complementary aspect to
Defense in Depth: multiple layers offset the weaknesses of other layers.
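The complementary effect of the layers can be illustrated with a toy probability model. Assuming the layers fail independently (and using invented per-layer bypass probabilities, not figures from this study), an attacker must defeat every layer, so the overall bypass probability is the product of the per-layer probabilities:

```python
from math import prod

# Invented probabilities that an attacker bypasses each individual layer.
layers = {
    "physical security": 0.10,
    "network firewall":  0.20,
    "host hardening":    0.30,
    "data encryption":   0.05,
}

# Under the independence assumption, all layers must be defeated in turn.
p_full_bypass = prod(layers.values())
print(f"probability all layers are bypassed: {p_full_bypass:.4%}")
```

Even individually weak layers drive the combined bypass probability down sharply, which is the essence of the complementary aspect described above.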
The threats to confidentiality, integrity, availability, authenticity, and non-repudiation
of the organizational information have been perceived as existing outside the physical
and logical boundaries of the organization. However, there is a growing realization
that employees inside the organization pose a threat similar to that posed by those
outside it. Certainly, employees who have been granted higher levels
of privilege to create user accounts, establish configuration settings, and develop and
modify software code represent a potential source of this threat. According to “The
Insider Threat to US Government Information Systems” published by the NSTISSC,
the greatest potential threat to US government information systems comes from
insiders with legitimate access to these systems.
2 MISSION, ASSETS & ENVIRONMENT
2.1 Mission
The mission of the Sea Base is to support and sustain the Sea-To-Objective Maneuver
(STOM) operations.
2.2 Assets
The assets include all the resources that enable the Sea Base information systems to
collect, process, store, transmit, display and/or disseminate information. These
include the following broad categories:
• Information
• Hardware
• Software
• Communications
• Personnel
• Documentation
2.3 Environment
The operating environment is primarily centered on the naval operations of the ExWar
ships that constitute the Sea Base. Secondary operating elements include platforms
deployed from the ExWar ships. The Sea Base has to operate at sea for extended
periods under all-weather, day and night conditions.
3 THREATS & VULNERABILITIES
IA is defined in Joint Pub 3-13 [Joint1 1998] as “Information operations that protect
and defend information and information systems by ensuring their availability,
integrity, authentication, confidentiality, and non-repudiation. This includes
providing for restoration of information systems by incorporating protection,
detection and reaction capabilities.”
An initial analysis of the Sea Base communications system was conducted (see
Appendix 1.1) to facilitate a better understanding of the conceptual Sea Base
communications system. The methodology adopted for the analysis starts with the
division of Sea Base communications into two domains: the external and the internal
communications of the Sea Base. Each of the two domains was then analyzed further,
with the aim of uncovering characteristics of the communications systems and their
corresponding vulnerabilities and threats. The identified threats and vulnerabilities
then form the basis for the formulation of a holistic information protection plan for
the Sea Base. This methodology is illustrated in Figure 3-1.
Figure 3-1. Methodology for analysis of Sea Base communications
The safeguarding of information and resources is accomplished through the
continuous use of safeguards and protection mechanisms. These mechanisms include
administrative, procedural, physical, environmental, personnel, communications,
operations and information measures. A mix of safeguards is used to achieve the
necessary level of security or protection. The mix of safeguards is achieved based on
risk management principles: analyzing the risks and cost benefits; selecting and
implementing the appropriate mix of safeguards; and assessing the results to
determine the achievement of the desired levels of protection.
Risk is a combination of the likelihood that a threat will occur, the likelihood that a
threat occurrence will result in an adverse impact, and the severity of the resulting
impact. Residual risk is the portion of risk remaining after security measures have
been applied
Risk is the expected loss of accountability, access control, confidentiality, integrity, or
availability from an attack or incident. This risk should be identified and analyzed to
assess the impact on the war-fighting capabilities of the forces. A command decision
would then determine whether the risk is acceptable or whether stronger protection
mechanisms are required to mitigate it to an acceptable level.
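The risk definition above, a combination of occurrence likelihood, impact likelihood, and severity, can be sketched as a simple multiplicative scoring model. The threat entries and numbers below are purely illustrative assumptions, not results from this study:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    p_occurrence: float  # likelihood the threat occurs
    p_impact: float      # likelihood an occurrence causes adverse impact
    severity: float      # severity of the resulting impact (0-10 scale)

    def risk(self) -> float:
        # Risk combines all three factors, per the definition above.
        return self.p_occurrence * self.p_impact * self.severity

# Illustrative entries only.
threats = [
    Threat("jamming of external comms", 0.4, 0.7, 8.0),
    Threat("malicious insider",         0.1, 0.9, 9.0),
    Threat("virus via removable media", 0.5, 0.5, 5.0),
]

# Rank threats so mitigation effort goes to the highest risks first.
for t in sorted(threats, key=lambda t: t.risk(), reverse=True):
    print(f"{t.name:32s} risk={t.risk():.2f}")
```

Ranking the scores supports the command decision described above: high-scoring threats warrant stronger protection mechanisms, while low residual risks may be accepted.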
The results of the analysis of threats, vulnerabilities and protections are summarized
in Appendix 1.2 for external communications and Appendix 1.3 for internal
communications.
4 POLICY
The current Naval IA policy [DoN2 2000] serves as the baseline policy model for the
Sea Base. Rather than define a new policy, we adopt the existing policy for the Sea
Base. However, we shall examine its implementation challenges and adapt it to the
operational context of Expeditionary Warfare, in particular the protection of the
Sea Base.
4.1 The Information Assurance Policy
[DoN2 2000] advocates a layered defense strategy which incorporates both technical
and non-technical security measures in an integrated manner. This concept of
defense-in-depth requires a strong, enforced security policy and a thorough security
education, training and awareness program.
The respective layers include the following:

Communications Security (COMSEC). COMSEC provides protection to deny
unauthorized individuals information derived from the possession and study of
communications. Technical measures include approved COMSEC products, VPN
solutions, etc.
Computer Security (COMPUSEC). COMPUSEC involves measures to obstruct
deliberate or unintentional disclosure, modification, unauthorized use, loss of
information, or denial of service. Technical measures include combinations of
hardware and software solutions.
Emanations Security (EMSEC). Electronic communications can produce
unintentional emanations that could be analyzed to obtain information of intelligence
value. Solutions include adoption of TEMPEST countermeasures.
Personnel Security (PERSEC). Personnel security provides assurance that personnel
entrusted to perform their duties are trustworthy. Measures would involve
background checks and clearance to the required level of information access.
Physical Security. Physical security involves measures to protect the information
technology resources from damage, loss, theft and unauthorized access. Such
resources include assets such as installations, personnel, equipment, electronic media
and documents.
Procedural Security. Procedural security, including OPSEC measures, is usually
adopted to mitigate residual risks. Risk mitigation activities include:
• Establishment of a System Security Authorization Agreement (SSAA).
• Establishment of an Information System Security Policy (ISSP).
• Proper configuration management to maintain accredited security posture.
• Acquisition management to encompass and build-in IA requirements.
• Procedures for dealing with multi-level security and systems interoperability.
• Contingency planning to prepare for worst-case scenarios to ensure system
availability and reconstitution, e.g., redundancy design, backup operations,
and post-disaster recovery.
• Incident response procedures.
• Vulnerability assessments to assess and improve the IA posture, including
support from Blue Team and Red Team operations.
• Management of information systems infrastructure connections, including
issues related to web-site operation, firewalls, remote-access, mobile
computing, encryption, certificate/key management, and anti-virus and
malicious code protection.
• Data management procedure, especially to preserve confidentiality. This
includes data marking (security classification), release and access procedures.
• Accounts and password management.
Security Education, Training and Awareness. As suggested in both [DoN2 2000] and
[Boyce 2002], policies and measures are unlikely to be effective if personnel involved
do not know they exist or how to use them. Training and awareness measures ensure
that users are familiar with the technology measures in place, and the policies and
procedures that need to be followed. For instance, system administrators have to be
trained to configure the system correctly so that the protection measures are
effectively implemented.
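Several of the procedural measures above, such as accounts and password management, lend themselves to automated enforcement. The sketch below checks a candidate password against a rule set; the rules themselves are invented for illustration and are not drawn from DoN policy:

```python
import re

# Illustrative password rules only; an actual rule set would be taken
# from the applicable security policy, not this sketch.
RULES = [
    (r".{8,}",   "at least 8 characters"),
    (r"[A-Z]",   "an uppercase letter"),
    (r"[a-z]",   "a lowercase letter"),
    (r"\d",      "a digit"),
    (r"[^\w\s]", "a special character"),
]

def password_violations(candidate: str) -> list[str]:
    """Return descriptions of the rules a candidate password fails."""
    return [desc for pattern, desc in RULES
            if not re.search(pattern, candidate)]

assert password_violations("Tr0ub4dor&3") == []            # satisfies all rules
assert "a digit" in password_violations("WeakPassword!")   # missing a digit
```

Encoding such checks in software, rather than relying on users to remember them, is one small way procedural security measures can be made consistently enforceable.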
The entire system and the respective IA mechanisms adopted require a certification
and accreditation process in order to proceed to operations. Certification involves
evaluations of the design of technical security features and implementations of non-
technical security features against the security requirements. Accreditation is the
management decision to accept the residual risk remaining in the system after
countermeasures are applied. Site accreditation can be conducted against the
aggregate of all information systems, networks and other resources to support mission
accomplishment at a site.
It is important that the IA policy does not lose sight of the primary purpose of a
system, which is to provide operational value to the users. As suggested in
[Beauregard 2002] and [Boyce 2002], trade-offs are needed to achieve a desired
balance.
4.2 Challenges
[All references in this section are to [DoN2 2000].]
A review of [DoN2 2000] revealed no apparent weaknesses. However, a number of
areas were identified that may face implementation challenges.
a. Risk management approach (Ref: Chap 3.0, Pg 9; Chap 3.4, Pg 14). A risk-based
approach depends on the identification of threats and vulnerabilities. These evolve
and require regular review. The current guideline is to perform such a review once
every three years, which suggests the IA posture could remain largely unchanged for
that duration. Any vulnerability identified by an aggressor thus has an opening for
exploitation during that window. It may be prudent to conduct reviews on shorter
cycles for high-value, high-risk systems. In practice, life cycle upgrades would have
forced reviews, and hence re-accreditation, to be conducted anyway; consequently,
the predictability of three-yearly reviews matters only for systems whose security
functionality remains unchanged throughout that period.
b. Documentation (Ref: Chap 3.1.6, pg 12). Documentation has to include the
description of risks associated with operating the system “and [to] plan improvements
to the system, including security improvements, as technology permits”. The
difficulty lies in enforcing such a policy: what provides the motivation to make such
improvements to a deployed system? Motivation to update the documentation of
deployed systems tends to be low unless forced, such as during system upgrades, and
funding and resources may not have been provided for it.
c. Updating of policy, standards and procedures (Ref: Chap 4.2.2, pg 19).
Information systems and networks are required to have written SOPs that are
routinely updated and tailored to reflect changes in the operational environment.
However, no guideline is suggested on how frequently this should be performed;
nothing may be done until an incident actually occurs.
d. Software development (Ref: Chap 3.1.3, Pg 11). The policy dictates that the
development process needs to ensure unintentional errors and malicious software are
not introduced, and to ensure adequate protective features are engineered into the
design to protect system software and data during operations. However, this is
difficult to impose when COTS are adopted, such as the operating systems, or
compilers used for development.
e. Communications (Ref: Chap 3.1.4, Pg 11). System connectivity may include
the use of modems. The positioning of modem access points within a network could
introduce vulnerabilities. If improperly implemented, this could create a backdoor
entry point into a secured network, compromising the network's defense in depth.
This problem is recognized in Chap 4.2.9.6, Remote Access.
f. Legacy systems (Ref: Chap 4.2.2, pg 18). The guidelines recognize the need
for integration with legacy systems. A legacy system may not be as secure as the
system it is being integrated with. The recommended practice is to place legacy
systems in a separate “safe” zone so as to partition the risk. Nonetheless, there would
still be an avenue for data to be exchanged between the safe zone and the secure zone,
especially where downward compatibility is required. This channel could be
exploited, with the weaker system used as a foothold for an attack vector. As an
example, consider the downward-compatibility mode of Link-16 used to support
Link-11.
g. Configuration management (Ref: Chap 4.2.3, pg 19; Chap 4.2.9.4, pg 26).
Although not a central issue of configuration management per se, it was highlighted
that, “Software that is personally procured or developed, or obtained as “public
domain” or “shareware” shall not be installed … without ISSM and System
Administrator evaluation for compatibility, correct operation and absence of viruses.”
If source code is not available, reverse engineering may need to be carried out. This
is non-trivial and, where code obfuscation is used, practically infeasible. In addition,
the ISSM and System Administrator may not be sufficiently trained or skilled to
undertake such an assessment. Given the context of the Sea Base and the necessity to
minimize shipboard manning, it would not be unusual to find ISSMs and ISSOs
overloaded with dual responsibilities1, holding both the security portfolio and another
operational role. Such overload could lead to oversights or even malicious insider
behavior, resulting in security compromises or denial of service.
Also, this paragraph doesn’t really talk about configuration management. CM is an
engineering management discipline required for the protection of assets while
implementing change control management.
h. National Information Infrastructure (Ref: Chap 4.2.9.2, pg 25). DoN systems
that connect directly to the Internet are required to implement firewalls to protect
against unauthorized external activities. A firewall alone is insufficient protection, as
recognized in Chap 4.2.9.5.
i. Email address (Ref: Chap 4.2.9.4, pg 26). Web page source code is required
to include the developer's name, organizational code, contact number, and e-mail
address. This stems from the motivation to provide greater accountability and
supportability of the software. At the same time, however, the developer's e-mail
address is indicative of the user account (userID) and could be exploited by an
attacker to focus an attack on that account.
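The concern is easy to see: the local part of a published e-mail address often doubles as the account name. A minimal sketch, using a hypothetical address:

```python
# A hypothetical developer e-mail published in a web page's source.
email = "jdoe@example.navy.mil"

# The local part of the address frequently matches the account name,
# handing an attacker a ready-made userID to target.
user_id, _, domain = email.partition("@")
print(user_id)  # jdoe
```

This is why publishing individual developer addresses trades supportability against a small but real reconnaissance aid to an attacker.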
4.3 Customizing to the Operational Context & Mitigating the Challenges
To address the challenges, the following mitigations and considerations are suggested:
1 Overloaded with dual responsibilities, the ISSM and ISSO may well have the greatest potential to become disgruntled insiders. As privileged and trusted users, the damage they could potentially incur as insiders can be extremely significant. This is especially true for the ISSO, who is well-versed and technically trained to conduct day-to-day system administration and support of the systems under their care.
a. Risk management approach. It may be prudent to conduct reviews on shorter
cycles for high-value, high-risk systems. In addition, when significant changes in
threats and vulnerabilities occur, ad hoc reviews should be carried out to address
them. One way to do so is for the ISSO and ISSM to conduct quarterly assessments
to determine whether a need exists, proceeding with the more intensive task of
performing the review only when it does.
b. Documentation. This activity could be conducted as part of the review (as per
the “Risk management approach”). It is difficult to motivate stakeholders on the need
to ensure that this is done, unless they are trained and recognize the possible impacts
of not doing it. SimSecurity2 could be a useful solution in this respect. It is also
important for life cycle planning to include funding for such activities. In reality,
such activities are often traded off to respond to operational tasks which are perceived
to be more critical. A balanced view can be struck if the trade-offs are actually
modeled and analyzed as suggested by Joseph Edward Beauregard in his paper
“Modeling Information Assurance”.
c. Updating of policy, standards and procedures. These can be updated during
system upgrades. In addition, routine Certification and Accreditation audits can
provide the necessary impetus to review and update them.
d. Software development. With respect to COTS software, although a theoretical
solution exists in the form of open-source software, this is not a universally accepted
practice in the commercial industry. There is no real means to guarantee that software
is well-behaved, other than observing and testing a COTS product in a quarantine
environment and watching for anomalies; this would not detect malicious behavior
triggered only by a specific future event. Consequently, a risk assessment approach
is necessary. Further considerations would include assessing the reputability of the
software provider (who and where it is, and so on) in order to gauge trustworthiness.
A more systematic and consistent approach would be to conduct an Evaluation
Assurance Level (EAL) assessment on all software products.
2 The SimSecurity project, headed by the Center for Information Systems Security Studies and Research (CISR) at NPS, in collaboration with Rivermind Inc., aims to develop an educational tool that teaches information assurance concepts through game play. SimSecurity is described in greater detail in Chapter 6.9.
e. Communications. Modem entry points into a system should be via the DMZ
segment of the network. These should be secured with the necessary identification
and authentication means, and further monitored with an intrusion detection system.
f. Legacy systems. In general, such interfaces with legacy systems should be
avoided3. However, the reality is that the evolutionary nature of system-of-systems
development and integration makes them unavoidable. The legacy systems
should be upgraded over time or phased out to reduce the risks.
g. Configuration management. The ISSM should undertake an evaluation only
if he/she is suitably qualified to do so, or enlists one who is. The ISSM could
therefore engage the support of the software development team to perform the
required technical evaluation, examining the security risks and implementations.
Hence, the same ideas as suggested for “4.3(d) Software development” could apply.
h. National Information Infrastructure. Connectivity to the Internet is fraught
with risks. Consequently, the NIPRNET-SIPRNET separation should be adhered to.
Enforcing a gateway to the Internet only via the NIPRNET ensures that a
consistent and vigilant posture can be maintained.
i. E-mail address. The published e-mail address should be a surrogate address
and not the actual user e-mail address. The surrogate address must not correspond to
a user account with privilege access rights on any system. This will limit and contain
the extent of exposure in the event it is attacked.
As the Sea Base is deployed at sea for extended periods, it would require
an organic incident response capability to deal with incidents occurring during
operations. An IA Logging-and-Analysis Team within its IO organization is
needed to monitor and respond to cyber attacks.
If a centralized team is deployed at the Task Force level, then logging data has to be
transmitted from each ship to the command ship. As logging is bandwidth intensive,
it would be impractical to do so in the Sea Base context, as such transmissions would
have to be carried over bandwidth-limited wireless communication links. Hence, a
ship-level local logging and analysis capability is required.

3 This is only applicable where a legacy system has a weaker security posture than the Sea Base system being integrated into.
The initial log analysis would hence be decentralized. The smaller-volume
observations and results are then forwarded to the command ship, where they are
fused centrally for integrated analysis and to effect a coordinated cyber-response.
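The decentralized scheme above can be sketched in outline. This is a hypothetical illustration, not actual Navy software: `summarize_local_logs` and `fuse_summaries` are invented names, and real ship-level analysis would do far more than count event categories. The point is only that summaries are much smaller than raw logs, so only they need cross the wireless link.

```python
from collections import Counter

def summarize_local_logs(raw_events):
    """Ship-level analysis: collapse raw log events into per-category counts."""
    return Counter(event["category"] for event in raw_events)

def fuse_summaries(summaries):
    """Command-ship fusion: merge per-ship summaries for integrated analysis."""
    fused = Counter()
    for summary in summaries:
        fused.update(summary)
    return fused

# Two ships reduce their raw logs locally; only the counts are transmitted.
ship_a = [{"category": "port_scan"}, {"category": "port_scan"},
          {"category": "auth_failure"}]
ship_b = [{"category": "auth_failure"}, {"category": "malware_alert"}]
fused = fuse_summaries([summarize_local_logs(ship_a),
                        summarize_local_logs(ship_b)])
print(dict(fused))  # {'port_scan': 2, 'auth_failure': 2, 'malware_alert': 1}
```

The fused view lets the command ship see, for instance, that two ships are reporting authentication failures, without ever shipping the raw event logs over the link.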
5 PROTECTIVE CONTROLS
This chapter examines the various protective controls and IA mechanisms already
available that can be adopted for the Sea Base. These form the constituents of a
baseline information assurance model for Sea Base systems (reference Baseline
Model).
5.1 Network
This section examines the network and communication network architecture for an
ExWar vessel. In particular, two possible models are studied for the network
design. These are referred to respectively as the Segmentation network design (Fig 5-
1) and the Encapsulation network design (Fig 5-2).
[Figure: lower- and higher-security networks, each with its own outer perimeter defenses and <Comms> interfaces, separated by internal perimeter defenses.]
Fig 5-1. Segmentation network design.

[Figure: the higher-security network encapsulated behind the lower-security network, with outer and inner perimeter defenses and <Comms> interfaces on the outer network.]
Fig 5-2. Encapsulation network design.
The respective designs are premised on separating networks of differing sensitivity
while, at the same time, supporting a requirement to allow controlled flow of selected
traffic between them.
Neither model is necessarily better than the other. The following table further
contrasts the two designs.
Segmentation:
- Traffic of different security levels enters the system via different paths. Hence, there is a separation of the types of data over the networks. This has the added advantage of distributing the network load, possibly improving availability.
- The LSN could potentially be designed as a failover (‘redundant’) network for the HSN, providing redundancy and hence improved availability. However, this may be at the expense of the strength of protection offered.
- Should there be a need to isolate the system, it is possible to isolate the HSN from the LSN, or from the external comms interfaces selectively. This offers some degree of granularity when executing containment measures.

Encapsulation:
- HSN security traffic has to flow through the LSN, and hence may be exposed to attacks during its carriage. This could be mitigated through encryption, and in particular VPN technology, to tunnel the traffic in. The risk stems from an insider resident in the LSN attacking the traffic going into or out of the HSN.
- Layering the HSN behind the LSN provides an additional layer of defense. Hence, this design offers more layers of protection from external attacks.
- Redundancy would have to be built into the entire network.
- In this model, it is only possible to isolate the HSN from the LSN. Should this happen, links with external comms interfaces are also denied.

HSN: Higher Security Network; LSN: Lower Security Network
Table 5-1. Segmentation vs encapsulation.
Within the network, a combination of various technical controls (see 5.3) should be
implemented.
5.2 Wireless Communications
Radio waves have been the primary wireless medium for the exchange of
signals and messages between forces dispersed in a large theatre of action. With
advances in electro-optics and microwave technologies, alternative wireless media
such as LASER and Microwave are gaining widespread usage on the battlefield. The
main impetus for the shift from radio wave to LASER and Microwave
communications has been the greater bandwidth offered, primarily attributable to
their higher carrier frequencies.
Wireless communications are critical to the success of any military operation
spanning a large theatre of operations. The protection of wireless communications
thus warrants considerable attention and study, to continually seek improvements to
current technologies and to explore new concepts for greater security and protection.
An example of how current technologies are implemented to protect a radio-wave-
based link is Link 16.
Fig 5-3. Radio Link
In Link 16, both the message and the transmission are encrypted. The message is
encrypted by the KGV-8B encryption device in accordance with a crypto-variable
specified for message security, or MSEC. Transmission security, or TSEC, is
provided by another crypto-variable that controls the specifics of the radio waveform.
One important feature of this waveform is its use of frequency hopping. Both the net
number and the TSEC crypto-variable determine the hopping pattern. This
instantaneous relocation of the carrier frequency spreads the signal across the
spectrum, making it both difficult to detect and difficult to jam. The TSEC crypto-
variable also determines the amount of jitter in the signal, and a predetermined,
pseudorandom pattern of noise that is mixed with the signal prior to transmission.
Anti-jam performance is the ability of a communications system to operate in a
hostile electromagnetic environment. The Link 16 waveform was developed to
provide significant performance enhancements against optimized, band-matched
jammers. Its spread-spectrum waveform, coupled with flood relay, allows it to provide
continued communication services in all anticipated scenarios.
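The relationship described above, in which the net number and the TSEC crypto-variable jointly determine the hopping pattern, can be illustrated in outline. The actual Link 16 hopping algorithm is classified; this sketch only shows the general principle of deriving a deterministic pseudorandom hop sequence from shared secret material. The function name, key material, and derivation are all invented for illustration.

```python
import hashlib
import random

CHANNELS = 51  # Link 16 hops over a set of 51 frequencies

def hop_pattern(tsec_variable: bytes, net_number: int, length: int):
    """Derive a deterministic pseudorandom hop sequence from the TSEC
    crypto-variable and net number (illustrative only)."""
    seed_bytes = hashlib.sha256(
        tsec_variable + net_number.to_bytes(2, "big")).digest()
    prng = random.Random(int.from_bytes(seed_bytes, "big"))
    return [prng.randrange(CHANNELS) for _ in range(length)]

# Stations holding the same crypto-variable and net number compute the same
# hops and can follow the signal; a different net number yields an unrelated
# pattern, so an observer without the variable sees a pseudorandom spread.
a = hop_pattern(b"example-tsec-variable", net_number=7, length=10)
b = hop_pattern(b"example-tsec-variable", net_number=7, length=10)
c = hop_pattern(b"example-tsec-variable", net_number=8, length=10)
print(a == b)  # True
```

Because the sequence is a function of the secret variable, the pattern looks random to an eavesdropper yet is perfectly predictable to every authorized participant.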
These current protection methods for radio frequency communication links form the
foundation for the protection of LASER and Microwave communications, wherever
applicable. Due to the increasing reliance on LASER and Microwave communications,
newer and more robust protection mechanisms need to be explored to ensure the
reliability of these links.
5.3 IA Mechanisms
The following lists various fundamental mechanisms that can be applied to secure
networked systems.
Prevention mechanisms

Firewall. Firewalls perform a traffic-filtering function to manage and reduce undesired types of traffic flowing into and out of the networks. These may be in the form of stateful and/or stateless firewalls. A typical configuration includes a stateless firewall to perform basic filtering, followed by a stateful firewall to perform more intelligent filtering.

Virtual Private Network (VPN). VPNs provide confidentiality and integrity assurance by encrypting traffic for transmission over a shared network [Kaufman 2002].

Vulnerability scanner. Vulnerability scanners detect (known) potential weaknesses of a host configuration. These weaknesses can then be patched and/or locked down accordingly to ‘harden’ the host’s security. Examples of such tools include NESSUS [Nessus 2003] and Retina Network Security Scanner [Eeye 2003].

Software patches. Software vulnerabilities are often identified only after systems have been distributed and deployed, and most attacks are made against known vulnerabilities. It is hence necessary to constantly keep up with security patches.

Detection mechanisms

Intrusion detection system (IDS). IDSs use techniques based on anomaly detection and/or misuse detection to identify potential attacks. Typically, IDSs are signature-based and can only detect known patterns of attack. An IDS can be network-based (monitoring network traffic) or host-based (monitoring logs). Examples include SNORT [Snort 2003] and ISS RealSecure [ISS 1998].

Integrity detection. Integrity assurance tools, such as Tripwire [Tripwire 2003], can be used to fingerprint an installed system and to determine changes that have occurred following a suspected attack. Such a tool monitors key attributes of files that should not change, including binary signature, size, expected change of size, etc. It is also useful as part of a configuration management suite.

Anti-virus software. Anti-virus software tools are typically real-time signature-based tools. They serve to detect and prevent various forms of known virus attacks, which may have been introduced via the loading of files from external sources such as offline media, e-mail attachments, etc. An example is the Norton Anti-Virus (NAV) suite of products [Symantec2 2003].
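The fingerprint-and-compare idea behind integrity tools like Tripwire can be sketched as follows. This is a minimal illustration, not Tripwire's actual implementation: it records only a SHA-256 digest and the file size, whereas real tools track many more attributes (permissions, inode, timestamps). The file names in the demo are hypothetical.

```python
import hashlib
import os
import tempfile

def fingerprint(path):
    """Record attributes that should not change: a SHA-256 digest and size."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "size": os.path.getsize(path)}

def detect_changes(baseline_db):
    """Re-scan and return the paths whose fingerprint differs from baseline."""
    return [p for p, fp in baseline_db.items() if fingerprint(p) != fp]

# Demo: baseline two files, then tamper with one and re-scan.
tmpdir = tempfile.mkdtemp()
binary = os.path.join(tmpdir, "login")
config = os.path.join(tmpdir, "inetd.conf")
for path in (binary, config):
    with open(path, "wb") as f:
        f.write(b"original contents of " + path.encode())

baseline = {p: fingerprint(p) for p in (binary, config)}
with open(binary, "ab") as f:  # simulated tampering with the binary
    f.write(b"injected payload")

changed = detect_changes(baseline)
print([os.path.basename(p) for p in changed])  # ['login']
```

Keeping the baseline database itself on read-only media matters in practice: an attacker who can rewrite both the file and its recorded fingerprint defeats the check.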
6 TECHNOLOGY LOOK AHEAD
Change being the only constant in technology, a technology look-ahead surveying some of
the more promising emerging technologies will provide us with a template of what is possible
in the realm of information systems. In the interest of security and information assurance,
some of these emerging technologies may turn out to be must-have systems rather than just
nice-to-have, thereby maintaining the survivability and sustainability of Sea Base operations.
The list of concepts and technologies is:
a. E-Bomb
b. Physical Access Control - Biometrics
c. Laser Communication
d. Secure Tunnels
e. Intrusion Prevention and Immune Computer Systems
f. Intelligent Software Decoy
g. System Redundancy – ForceNet
h. Security Through Obscurity
i. SimSecurity
Each of these technologies could be used as part of the DoN defense-in-depth
requirement. However, not all technologies are relevant to every system deployed on the Sea
Base; for example, biometric technology may not be relevant to radar systems. It is
therefore recommended that engineers of these systems determine the relevance of these
technologies and explore the possibility of deploying them for their systems.
6.1 E-Bomb
Current Situation
E-bomb is short for ‘Electromagnetic Bomb’, a warhead designed to damage targets
with a very intense pulse of electromagnetic energy. An E-bomb which is well
matched to its intended target set can cause electrical damage over a footprint which
might be as large as hundreds of meters in diameter. Victim devices may suffer
secondary damage through their power supplies. If the weapon does not generate enough
power to produce permanent damage, it can cause computers to crash, hang or reboot,
yielding a temporary disruptive effect. Though very harmful to computer
systems and other electrical devices, E-bombs fortunately do not damage buildings or
harm people.
There are two major concerns with E-bombs.
• The first is that very high-frequency pulses, in the microwave range, can worm their
way through vents in Faraday cages.
• The second is known as the “late-time EMP effect”. The EMP that
surges through electrical systems creates localized magnetic fields. When these
magnetic fields collapse, they cause electric surges to travel through the power and
telecommunications infrastructure.
Guided and unguided aerial bombs, cruise missiles, glide bombs, artillery shells, and
guided and unguided ballistic missiles could all be used to deliver an E-bomb
warhead.
Stanley Jakubiak, senior civilian official for nuclear command, control,
communications and EMP policy for DOD said, “We know it will impact electronic
equipment, but due to the variation of tolerances built into commercial equipment and
the different system configurations, we can't accurately predict how wide spread any
damage or disruption will be." Still, military systems are becoming increasingly
reliant on commercial off-the-shelf components, which are not designed to withstand
the effects of an EMP attack, he said. [Daniel 1999]
What makes E-bombs a likely threat is as follows:
• Flux Compression Generator (FCG) E-bombs can be produced cheaply
(one source quotes $400).
• An E-bomb is an area weapon, so a direct impact on the target is not required.
• Current warfare technologies depend heavily on electronics and computers.
An E-bomb could effectively neutralize the Sea Base’s:
• Ship control systems
• Targeting systems, on the ship and on missiles and bombs
• Communication systems
• Navigation systems
• Long- and short-range sensor systems.
State of Technology
As with all hostile weapons, the best form of defense is to destroy all possible means
of delivery, for example by destroying missile sites and aircraft during the initial
softening of the target. Forming a protective shield of weapons and sensors will allow
the ExWar ship to engage all incoming airborne and seaborne threats.
The second form of protection is hardening. A wide range of measures
can be applied to harden a site against E-bomb attacks.
Site hardening is based upon the model of protecting electromagnetically “soft”
computer equipment from exposure to damaging voltages and electromagnetic
fields.
There are a few areas of concern for site hardening:
a. Networking. Networking cables are exceptionally good at propagating
damaging voltages, and network interfaces are often not designed to handle
anything beyond trivial levels of power. Fortunately, the network is probably
the easiest part of a site to harden, because optical fiber variants of most
networking interfaces are readily available. Optical fiber network cables are
wholly immune to any kind of electromagnetic attack, as well as being
inherently immune to problems with building earths and lightning strikes.
b. Power supply. High-voltage spikes or high RF voltages injected
into the power supply will find their way into almost all electrically
powered machines. The spikes could potentially damage or disrupt a
device's power supply and electronics. Surge suppressors and
uninterruptible power supplies may be quite ineffective, as most of
these are designed to cope with much less destructive circumstances. A
possible solution is to use a motor-generator power isolator: the power
source is connected to this generator, which produces clean power for
internal distribution within the site.
c. Cabling. To counter attacks via the local mains wiring and computer
cables, a Faraday cage or electrostatic shielding can be used to simply exclude
RF signals from the environment occupied by the equipment. If the site is
critical, then comprehensive Faraday cage shielding may be warranted, where
even small gaps and apertures will need to be thoroughly sealed. This implies
that incoming and outgoing cables will need to be routed through RF traps or
ferrite grommets; doors, windows and air conditioning vents will need proper
flexible seals; and an airlock arrangement may be needed for the door.
The weakness of site hardening is that an attacker who can penetrate the site’s
security perimeter (such as insider attack) may have the opportunity to do much
damage.
Equipment Hardening
Electromagnetically hardened equipment will require the ability to cope with high
RF fields, voltages and spiked power lines. This includes preventing RF fields
from getting in through gaps, cracks and cooling grilles in the equipment.
There are a few areas of concern for equipment hardening:
a. Monitors. Monitors will need conductive materials embedded in the screen or
screen cover to protect them from RF radiation.
b. Main power supply. Many techniques can be used for non-electrical coupling
between the mains and the low-voltage side of the equipment. The simplest is
the miniature motor-generator scheme; hydraulic power transfer or fuel cell
schemes are alternatives.
c. Machine’s immediate interfaces. External interfaces such as the keyboard,
mouse, serial ports, external drives, etc. can be protected by using optical fiber
interfaces rather than copper interfaces. The machine's back panel of
connectors will have to be removed because these are all potential entry points.
A hardened machine chassis can be used to cope with an electromagnetically
hostile environment.
Applicability and Implementation
Both site and equipment hardening are applicable to the ExWar ship. It may not be
commercially viable to harden the entire ExWar ship against
electromagnetic attack; critical areas in the ExWar ship will require
comprehensive hardening, while equipment hardening can be deployed for critical
equipment. The command and control centre should be hardened to protect it from
potential electromagnetic threats. With a hardened C2 centre, the equipment
within the centre can potentially be normal commercial off-the-shelf (COTS)
systems. Defense systems are another aspect: the missiles themselves should be
built with hardened integrated circuits and cabling.
6.2 Physical Access Control – Biometrics
The goal of any access control system is to allow authorized personnel to enter
specific sites. There are many ways of identifying a person. Card-based access
systems can authorize pieces of plastic, but can't distinguish who is carrying the card.
Systems using personal identification numbers (PINs) require only that an individual
know a specific number in order to gain entry. Biometric devices verify a person's
identity by unique, unalterable physical characteristics, such as hand dimensions, eye
features and/or measurements, fingerprint, or voice.
Current Situation
Biometrics can eliminate the need for cards. Although the initial cost of these cards
has fallen dramatically, the real benefit from eliminating cards is realized in reduced
administrative time. Lost cards must be replaced, but eyes and hands are seldom lost,
stolen or forgotten. They cannot be shared with others, and they do not wear out and
never need to be replaced. However, biometric access devices are not silver bullets for
all access vulnerabilities. In fact, if not applied correctly, they can create new gaps in
security.
As the core premise of biometric authentication is certainty that the identified
person is in fact who he/she is supposed to be, ensuring the integrity of a
biometric sample is mission-critical. The biosensor is the first line of defense against
impostors. Strong encryption of communications and data prevents a false image from
being forced into the system through other means. Naturally, it is not enough to provide
system integrity as a whole; biometric security also depends on the
recognition performance of the technology and on the overall system design and
architecture. At present, biometrics technology is based on static recognition. This
limits the system to close-range identification and increases the chances of a false
identity.
State of the Technology
In the current scope of biometrics research, the Human Identification at a Distance
(HumanID) program aims to develop automated biometric identification technologies to
detect, recognize and identify humans at great distances [Jonathon 2003]. These
technologies will provide critical early warning support for force protection and
defense against terrorist, criminal, and other human-based threats, and will prevent or
decrease the success rate of such attacks against operational facilities and installations.
Methods for fusing biometric technologies into advanced human identification
systems will be developed to enable faster, more accurate and unconstrained
identification of humans at significant standoff distances.
One primary focus of identifying humans at a distance is gait recognition [Georgia
2003], a technique that recovers static body and stride parameters of subjects as
they walk. This approach is an example of an activity-specific biometric: a method of
extracting identifying properties of an individual, or of an individual's behavior,
that is only applicable when the person is performing that specific action. The research
is also analyzing time-normalized joint angle trajectories in the walking
plane as a means of gait recognition, as well as locating and tracking faces (with
expressions and speech), detecting occlusions, and performing activity-specific
background subtraction.
Evolving to 2015
Gait recognition aims to further improve the security of biometrics
technologies. At the same time, research is also focused on increasing their
accuracy.
Biometrics got their start in high security applications, where the primary design
consideration was to keep the "bad guys" out. Little attention was paid to letting the
good guys in. For those applications, a low False Acceptance Rate was the most
important specification. As biometrics moved into broader military applications, the
False Rejection Rate became more critical.
The definitions of False Rejection Rate and False Acceptance Rate are as follows:
• FRR (False Rejection Rate): the frequency of rejections of people who
should be correctly verified. When an authorized user is rejected, he/she must
re-present his/her biometric characteristic to the system. Note that a false rejection
does not necessarily mean an error of the system; for example, in the case of a
fingerprint-based system, an incorrect positioning of the finger on the sensor, or
dirt, can produce false rejections.
• FAR (False Acceptance Rate): the frequency of fraudulent accesses due to
impostors claiming a false identity.
There are many strategies that can be adopted to improve overall authentication
system performance. Biometric systems can be used together with RFID, smart cards,
passwords and PINs to reduce the FRR and FAR.
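Given match scores from a score-based biometric matcher, the two rates defined above can be estimated as simple frequencies. The scores and threshold below are made-up illustrative values; scores at or above the threshold are treated as acceptances, so raising the threshold trades a lower FAR for a higher FRR.

```python
def frr(genuine_scores, threshold):
    """Fraction of authorized users wrongly rejected (score below threshold)."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def far(impostor_scores, threshold):
    """Fraction of impostor attempts wrongly accepted (score at/above threshold)."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Illustrative match scores: genuine users should score high, impostors low.
genuine = [0.91, 0.85, 0.78, 0.60, 0.95]
impostor = [0.10, 0.42, 0.55, 0.72, 0.30]

print(frr(genuine, 0.7), far(impostor, 0.7))  # 0.2 0.2
```

In practice both score sets are collected over many trials, and the operating threshold is chosen from the resulting FRR/FAR curves to match the application: a weapons-space door favors a low FAR, a mess-deck terminal a low FRR.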
Applicability and Implementation
Biometric-based authentication applications include workstation, network, and
domain access, single sign-on, application logon, data protection, remote access to
resources, transaction security and Web security. Trust in these electronic transactions
is essential to the operation of the Sea Base. Utilized alone or integrated with other
technologies such as smart cards, encryption keys and digital signatures, biometrics
are set to pervade nearly all security aspects of the physical access control. Biometrics
for personal authentication is becoming convenient and considerably more accurate
and secure than current methods (such as the utilization of passwords or PINs). This is
because biometrics links the event to a particular individual (e.g. a password or token
may be used by someone other than the authorized user), is convenient (i.e. nothing to
carry or remember), accurate (i.e. it provides for positive authentication), can provide
an audit trail and is becoming socially acceptable and inexpensive.
6.3 Laser Communication
Current Situation
The signal from current wireless communication means, whether using radio or
microwave frequencies, can be easily tapped by an adversary because of the wide
angle of divergence of the signal. The strength of encryption of these signals has been
greatly diminished by distributed computing. There is a need to limit the accessibility
of classified wireless communications to eavesdropping.
An ideal electromagnetic transmission will travel directly to the receiver. However,
this requires a fully coherent source. That is, the electromagnetic waves in the source
have a constant phase relationship. This implies that there is only a single frequency
being transmitted, which is difficult to achieve practically. If a source is non-coherent,
the electromagnetic waves in the source will interfere with one another, causing the
beam to diverge.
Fig 6-1. An ideal transmission source
Fig 6-2. A non-coherent transmission source
As shown in Figure 6-2, when a transmitted signal diverges, it forms a cone with the
tip at the transmitter end. Practically, this is a spherical cone i.e. the base of the cone
is spherical. However, for simplicity, we assume that the base is flat, giving a normal
cone.
A diverging beam implies that there are many more locations in which an
eavesdropper could intercept the transmitted signal.
State-of-the-art radio and microwave transmitters have a divergence angle (θ) of a few
degrees. Lasers, on the other hand, are generated through wavelength-controlled
photon emissions. Therefore, laser emissions are highly coherent. In addition, laser
beams can be controlled with the use of collimators. Laser systems have been built
with a divergence angle of less than 2 arcseconds (1 arcsecond = 1/3600 degree).
The locations in which an eavesdropper can place a “listening” device effectively fill
the volume of the cone. This is found by the formula V = (1/3)πr²h, where r is the
radius of the base of the cone, and h is the furthest distance from the transmitter at
which the transmitted signal can still be received.
For comparison purposes, assume that the transmitted signal can reach a distance of 20
km (i.e. h = 20,000 m). For a microwave transmitter with a divergence angle of 2
degrees,

r = h · tan(θ/2) = 20,000 × tan(1°) = 349.1 m
Volume of cone = (1/3)πr²h = (1/3)π(349.1)²(20,000) = 2.55 × 10⁹ m³

For a laser transmitter with a divergence angle of 2 arcseconds (1 arcsecond = 1/3600 degree),

r = h · tan(θ/2) = 20,000 × tan((1/3600)°) = 0.1 m
Volume of cone = (1/3)πr²h = (1/3)π(0.1)²(20,000) = 209 m³
The volume of the transmission cone for the microwave transmitter is more than 12
million times greater than that of the laser transmitter. It may be interpreted that an
eavesdropper is 12 million times more likely to find an appropriate location for
his/her listening device.
Many factors may affect whether an eavesdropper may be able to locate a listening
device within the volume of the cone. These are not considered in this report as the
above calculations were only meant to illustrate how the reduced divergence angle
can greatly decrease the likelihood of tapping by an eavesdropper.
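The comparison above can be reproduced numerically from the two formulas, r = h·tan(θ/2) and V = (1/3)πr²h. Note that carrying the unrounded laser-beam radius of about 0.097 m gives roughly 197 m³; the 209 m³ figure comes from rounding r up to 0.1 m. The ratio is on the order of 13 million either way.

```python
import math

def cone_volume(divergence_deg, h=20_000.0):
    """Eavesdropping volume V = (1/3)*pi*r^2*h with r = h*tan(theta/2)."""
    r = h * math.tan(math.radians(divergence_deg / 2))
    return math.pi * r * r * h / 3

microwave = cone_volume(2.0)       # 2-degree divergence -> ~2.55e9 m^3
laser = cone_volume(2 / 3600)      # 2-arcsecond divergence -> ~197 m^3
print(f"{microwave:.3e} {laser:.0f} ratio {microwave / laser:.2e}")
```

The ratio, not the absolute volumes, is the operative figure: it quantifies how much the narrow laser beam shrinks the region an eavesdropper must occupy.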
State of the Technology
One of the biggest challenges for laser communications technology is the distance
limitation. This is because the various gases in the atmosphere absorb and scatter
EMR at different wavelengths and to various extents.
Fig 6-3. Absorptance of Various Atmospheric Gases vs Wavelength
Figure 6-3 shows the absorptance of the various atmospheric gases with respect to
wavelength. The bottom graph in this figure shows the total absorptance of EMR
in the atmosphere.
Fig 6-4. Atmospheric Transmittance of EMR Wavelengths
Figure 6-4 shows the resultant atmospheric transmittance of the various EMR
wavelengths. It can be seen that microwave wavelengths enjoy a generally high
transmittance, while laser communications (usually in the infrared wavelengths) have
only limited transmission “windows”.
One of the longest laser communication links was demonstrated by the Lawrence
Livermore National Laboratory (LLNL) in 2001. The LLNL successfully conducted a
trial between the LLNL and Mount Diablo, which was 28 km away. Data was
transmitted at 2.5 Gbps with reasonable bit-error rates.
Fig 6-5. Laser communications team member Jeff Cooke stands next to the
transceiver telescope on top of Mount Diablo.

Current laser communication products are also limited to installations at fixed sites
(usually in or on top of buildings). No known terrestrial link has been established on
mobile platforms, which would be necessary if laser communication is to be
employed on military ships and/or planes.
The only known deployment of laser communications on mobile platforms was in space. The European Space Agency (ESA) is formed by 15 member states (Austria, Belgium,
Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway,
Portugal, Spain, Sweden, Switzerland and the United Kingdom) and its mission is to
shape the development of Europe’s space capability.
In November 2001, the ESA established the first laser communications data link
between satellites. The Artemis (Advanced Relay TEchnology MISsion) satellite was
in a parking orbit at 31,000 km, while the SPOT 4 satellite was orbiting at an altitude
of 832 km. Links were established 4 times, lasting 4 to 20 minutes, where data was
transferred at a data rate of 50 Mbps. This was using the SILEX (Semiconductor laser
Inter-satellite Link Experiment) system on-board the Artemis satellite.
Fig 6-6. Artemis and SPOT 4 communicating via SILEX system (artist’s impression).

Artemis, at an altitude of 31,000 km, was orbiting at a speed of about 7,000 mph,
while SPOT 4, at an altitude of 832 km, would be orbiting at about 16,600 mph. The
SILEX link by the ESA demonstrated a few capabilities:
• Directional precision and tracking capability in communicating between the two
orbiting satellites, which were on average 38,500 km apart. The ESA later claimed that
all 26 of 26 attempts to establish communication links were successful.
• Bit error rates consistently in the range of 10⁻⁹ to 10⁻¹⁰.
• Beam divergence of approximately 2 arcseconds (0.000556 degrees), at a wavelength of ~800 nm.
Evolving To 2015
As part of the Secure Air-Optic Transport and Routing Network (SATRN) program,
the Lawrence Livermore National Laboratory (LLNL) intends to demonstrate an FSO
link with a data rate of 100 Gbps over a distance of 28 km. If this were to be
successful, it would not only demonstrate the scalability of the bandwidth of laser
communications, it would also prove that laser communications can be achieved over
greater distances. The success of this trial depends on the technology of Adaptive
Optics.
If a laser beam passes through a uniform medium, its speed is slowed but the pattern
of phases still moves together. In a non-uniform medium of different densities and
temperatures, however, some parts of the beam are slowed more than others, leading
to distortions in the uniform wavefront. This not only presents a problem of range, but
also sets a limitation on the data rate that can be transmitted.
This same problem is encountered by astronomers studying the faint and blurred images of distant galaxies. Adaptive Optics (AO) is used to alleviate this problem.
Fig 6-7. Adaptive Optics used in Astronomy
All AO systems work by determining the shape of the distorted wavefront, and using
an "adaptive" optical element (usually a deformable mirror) to restore the uniform
wavefront by applying an opposite canceling distortion. Current AO systems are able
to update the shape of the deformable mirror several hundred times a second.
Fig 6-8. Adaptive Optics System used in Astronomy
Adaptive Optics makes use of a known light source (labeled as a reference beacon in
the diagram) like an adjacent star to determine how much the light from the target has
been deformed. However, in our application of using a laser source for
communication, the light source is well understood and largely in phase. Therefore,
there is no need for this reference beacon. The laser signal received can be duplicated
to the wavefront sensor so that required changes can be made to the deformable
mirror.
Using adaptive optics, the signal from a laser transmitter can be reconstructed,
resulting in better range and allowing higher data rates for the system.
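The correction loop described above can be illustrated numerically: the sensor estimates the distortion, and the deformable mirror applies the opposite (canceling) shape, collapsing the residual error. The sketch below is a toy model with an invented sinusoidal aberration, not an optics simulation.

```python
import math

N = 64  # samples across the aperture

# A flat (uniform) wavefront, plus an arbitrary made-up aberration standing in
# for the distortion caused by a non-uniform medium.
aberration = [0.5 * math.sin(2 * math.pi * i / N) + 0.2 * math.cos(6 * math.pi * i / N)
              for i in range(N)]
distorted = [0.0 + a for a in aberration]

# The wavefront sensor estimates the distortion; the deformable mirror applies
# the opposite shape (here assumed to be measured perfectly).
mirror = [-a for a in aberration]
corrected = [d + m for d, m in zip(distorted, mirror)]

rms = lambda w: math.sqrt(sum(x * x for x in w) / len(w))
print(f"RMS wavefront error before: {rms(distorted):.3f}, after: {rms(corrected):.3f}")
```

A real AO loop repeats this measure-and-correct cycle several hundred times a second, and the sensor estimate is never perfect, so in practice the residual error is reduced rather than eliminated.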
Although laser communication terminals have been demonstrated on satellites, they have yet to be demonstrated on mobile terrestrial platforms such as ships and aircraft. This may pose an additional challenge, as tracking aircraft may require faster and more accurate trackers.
Applicability and Implementation
According to Jane’s Information Group, the U.S. Air Force has proposed to invest
US$7.6 billion over FY04 – FY09 on laser communication terminals for satellites, the
Global Hawk UAV, U-2 reconnaissance aircraft, E-3 Airborne Warning & Control
System, E-8 Joint STARS ground surveillance platform, E-10 Multi-mission
Command and Control Aircraft and the 'Smart Tanker'.
Fig 6-9. Platforms with Notional Plans for Laser Communication Terminals
It is noted that the platforms in which the USAF intends to install laser
communication terminals are Intelligence, Surveillance and Reconnaissance (ISR) platforms. In the event of international deployments, these platforms may not have an optical ground terminal to which to download their ISR data. Instead, the ships of the Sea Base present an ideal downlink for this ISR information, enabling effective situational awareness. However, these ships will need to be equipped with laser communication terminals in order to communicate with the Air Force platforms.
The Sea Base ships could also make use of the higher bandwidth of laser communication to download data from various other sensors, to communicate with other ships, and to communicate with the home base via satellite.
6.4 Secure Tunnels
Current Situation
Access to classified or sensitive data used to involve the procurement of leased lines
or secure dialup connections. However, both of these tended to be costly and were not
easily scalable, which did not suit the increasing requirements for remote access. This
created the demand for tunneling methods as secure alternatives.
Present State Of Technology
A tunnel is a private connection between two machines or networks over a shared public network [WRQ 2003]. In a secure tunnel, the original data is packaged within a passenger protocol, such as IPX, NetBEUI, or IP, which can be easily read by other PCs and servers. The passenger packet is then wrapped in an encapsulating protocol, which can be understood only by an authorized machine. Finally, the encapsulated packet is wrapped in a carrier protocol (typically IP) that is understood by the network over which the packet travels (i.e. the Internet).
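The three layers of wrapping described above can be sketched in a few lines. The field names, the destination address, and the single-byte XOR "encryption" below are purely illustrative stand-ins for a real encapsulating protocol such as IPSec.

```python
# Toy illustration of tunneling: passenger packet -> encapsulating layer ->
# carrier packet. KEY and the XOR cipher are illustrative only, not real crypto.
KEY = 0x5A

def xor_cipher(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)  # XOR is its own inverse

def build_tunnel_packet(payload: bytes) -> dict:
    passenger = {"proto": "IP", "data": payload}    # original, readable packet
    encapsulated = xor_cipher(passenger["data"])    # readable only to tunnel endpoints
    carrier = {"proto": "IP",                       # carrier protocol the public
               "dst": "vpn.example.net",            # network understands (dst is invented)
               "inner": encapsulated}
    return carrier

def unwrap(carrier: dict) -> bytes:
    return xor_cipher(carrier["inner"])

pkt = build_tunnel_packet(b"GET /secret HTTP/1.0")
assert b"secret" not in pkt["inner"]                # opaque in transit
assert unwrap(pkt) == b"GET /secret HTTP/1.0"       # intact at the far end
```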
Virtual Private Networks (VPNs) use the Internet infrastructure to provide scalability
and are less costly to set up and maintain. VPNs can connect a single PC to a VPN
server, or two VPN servers together (Fig 6-10). Any data flowing through the secure
tunnel is encrypted, but data outside the tunnel is not. The tunnel is established
through an authentication method (usually some kind of public key exchange), which
is usually proprietary and not interoperable with standard authentication methods.
Consequently, it can often be difficult for consumers to evaluate the relative strengths
(or weaknesses) of a VPN vendor's encryption algorithms or authentication methods.
Fig 6-10: Virtual Private Networks
The most popular encapsulating protocols used by VPNs today are GRE (Generic
Routing Encapsulation), PPTP (Point-to-Point Tunneling Protocol), L2TP (Layer 2
Tunneling Protocol), and IPSec (IP Security). GRE is most commonly used with site-
to-site VPNs; PPTP, IPSec, and L2TP, which rely on PPP (Point-to-Point Protocol),
are most often used in remote-access VPNs.
Open standards: OpenSSH and TLS
Open standard protocols overcome proprietary issues of VPNs and allow clients and
servers to interoperate fully. The strengths of their encryption algorithm and
authentication methods are public knowledge and weaknesses are identified and
addressed in the open forum. These open standards include Open Standard SSH
(OpenSSH) and Transport Layer Security (TLS) [OpenSSH 2003].
OpenSSH is an open, compatible implementation of the SSH (Secure Shell) protocol, used for end-to-end encryption of data from a client PC to a specific host. SSH/OpenSSH establishes a secure tunnel using two pieces: an SSH or OpenSSH client on the PC and an SSH server on the host (Fig 6-11). Once the secure tunnel is
established, any arbitrary TCP/IP protocol, such as Telnet or FTP, can be forwarded
to or from the host through the secure tunnel. OpenSSH encrypts all traffic (including
passwords) to effectively eliminate eavesdropping, connection hijacking, and other
network-level attacks.
Fig 6-11: OpenSSH
Transport Layer Security (TLS) is a protocol that ensures privacy between
communicating applications and their users on the Internet. When a server and client
communicate, TLS ensures that no third party may eavesdrop or tamper with any
message. TLS works in a similar fashion to SSH/OpenSSH. It is made up of two
layers: the TLS Handshake protocol, which allows authentication between the client
and server and negotiates keys and encryption prior to data transmittal, and the TLS
Record protocol, which encrypts and encapsulates the data. The secure tunnel can
either be configured to go directly from the client PC to a TLS-enabled host, or from a
client PC to a TLS-enabled gateway (Fig 6-12).
Fig 6-12: SSL/TLS Encryption
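On the client side, the handshake parameters described above are configured before any application data flows. Python's standard ssl module is used here as one illustrative implementation (it is not referenced by the source); the configuration step itself generates no network traffic.

```python
import ssl

# Configure the client side of a TLS connection. The Handshake protocol will
# authenticate the server and negotiate keys; the Record protocol then
# encrypts and encapsulates application data.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older, weaker protocol versions

# Certificate verification and hostname checking are on by default, which is
# what provides the authentication half of the handshake.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Wrapping a socket with ctx.wrap_socket(sock, server_hostname="host") would
# perform the handshake and return a socket whose send/recv transparently
# pass through the TLS record layer.
```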
Evolving To 2015
Quantum encryption holds the potential of taking tunneling methods into the next
paradigm by creating encryption codes that are absolutely unbreakable and key
distribution schemes that are un-interceptable [Johnson 2002]. Quantum encryption
promises the ability to completely eliminate the possibility of eavesdropping. Current
encryption / decryption methods are only as good as the length of the key. In theory, the secret key of a one-way function lets only the receiver decrypt the scrambled bits, but in practice even the most secret key can be found by trial and error. For instance, the product of two large prime numbers is a difficult code to crack, since there is no known efficient algorithm for finding prime factors. But a brute-force
approach, in which a hacker tries a large number of multiplications in the hope of
hitting the result, might pay off. In 1999, the standard 56-bit DES encryption code
was cracked in less than 23 hours [Farber 1999]; its next-generation successor, AES,
ups the ante to a 256-bit key, but code-cracking computers are also speeding up, so
the security is only temporary.
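The gap between a 56-bit and a 256-bit key can be made concrete with simple arithmetic. The search rate below is back-calculated from the quoted 23-hour DES crack and is only a rough figure.

```python
# Rough brute-force arithmetic. The rate is inferred from the quoted DES
# crack: searching (on average) half of the 2^56 keyspace in about 23 hours.
des_keys_tried = 2**55
rate = des_keys_tried / (23 * 3600)    # keys per second, circa 1999

# Time to search half of a 256-bit keyspace at that same, fixed rate:
aes_seconds = 2**255 / rate
aes_years = aes_seconds / (365.25 * 24 * 3600)
print(f"inferred DES search rate: {rate:.2e} keys/s")
print(f"AES-256 at that rate: {aes_years:.2e} years")
```

The 256-bit figure stays astronomically out of reach even if the rate improves by many orders of magnitude; the moving part, as the text notes, is the speed of code-cracking machines, not the size of the keyspace.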
Instead of depending on the computational difficulty of cracking one-way functions,
quantum encryption creates uncrackable codes that employ the laws of physics to
guarantee security. Different quantum states, such as photon polarization, can be used
to represent 1s and 0s in a manner that cannot be observed without the receiver's
discovering it (Fig 6-13). For instance, if hackers observe a polarized photon, then 50
percent of the time they will scramble the result, making it impossible to hide the
eavesdropping attempt from the receiver.
Fig 6-13: Quantum Encryption
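This detection property can be demonstrated with a small classical simulation in the style of the BB84 key-distribution protocol (the protocol name and the resulting ~25% sifted-key error rate are standard results, not taken from the source): an eavesdropper who guesses the measurement basis wrongly, which happens half the time, scrambles the photon, and the damage shows up when sender and receiver compare a sample of their sifted key.

```python
import random

random.seed(1)
N = 20_000  # photons per run

def measure(bit, prep_basis, meas_basis):
    """Right basis returns the bit; the wrong basis gives a random result."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(eavesdrop: bool) -> float:
    errors = kept = 0
    for _ in range(N):
        bit = random.randint(0, 1)
        basis_s = random.choice("+x")              # sender's preparation basis
        carried_bit, carried_basis = bit, basis_s
        if eavesdrop:
            basis_e = random.choice("+x")          # Eve guesses a basis...
            carried_bit = measure(carried_bit, carried_basis, basis_e)
            carried_basis = basis_e                # ...and re-sends in it
        basis_r = random.choice("+x")              # receiver's measurement basis
        received = measure(carried_bit, carried_basis, basis_r)
        if basis_r == basis_s:                     # sifting: keep matching-basis rounds
            kept += 1
            errors += (received != bit)
    return errors / kept

print(f"error rate without Eve: {run(False):.3f}")  # zero
print(f"error rate with Eve:    {run(True):.3f}")   # about 0.25
```

Half the time Eve picks the right basis and goes unnoticed; the other half she scrambles the photon, which then reads wrongly half the time, giving the roughly one-in-four error rate that betrays her.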
Application And Implementation
The use of tunneling methods is sufficiently mature for deployment in new and existing communications networks. In peacetime, the Internet can be used to access Intranets to form VPNs; alternatively, open standards could be employed to overcome proprietary concerns. Users would then be able to connect to a local Internet Service Provider (ISP) and use the tunnel connection through the Internet to access a ship or task force Intranet.
Quantum encryption promises to take secure tunnels a step further by providing
extremely high levels of confidentiality and integrity between parties. Although this
technology is in its infancy, experiments have shown that it can work through optical
fiber as well as in support of satellite communications, providing high bandwidth,
secure communications. As such, its employment potential is very great and could
certainly be applied to the communications channels within the Sea Base
environment, as well as between the Sea Base and external agencies.
6.5 Intrusion Prevention and Immune Computer Systems
Current Situation
Modern computers are plagued by security vulnerabilities such as buffer overflows, viruses, Trojan horses and hacker attacks. Traditionally, developers have protected their systems by coding rules that identify and block specific events. In this signature-based approach, scanners look for corrupted data, firewalls enforce hard-coded permissions, virus definitions guard against known infections, and intrusion detection systems look for activities deemed in advance to be suspicious by the system administrators.
Firewalls are often used as a perimeter defense. However, firewalls are not always
effective against intrusion attempts. The average firewall is designed to deny clearly
suspicious traffic, but it is also designed to allow some traffic through (e.g. HTTP
traffic). Many exploits take advantage of weaknesses in these protocols to tunnel
through and penetrate the system, for instance by inducing buffer overflow attacks on the web server, creating trapdoors, or simply exploiting insecure configurations [IntruVert 2003]. New weaknesses continue to be discovered daily.
Layered behind the firewall therefore are Intrusion Detection Systems (IDSs) as the
next layer of ‘defense’ to detect attacks that may be taking place [Rossi 2003]. An
IDS may be a Network IDS or Host-based IDS. A Network IDS monitors the
network traffic to look for suspicious traffic by matching against a signature database.
It then flags these as alerts. A host-based IDS, on the other hand, examines log files on a host computer to achieve a similar objective. Current IDS technology has a few significant weaknesses:
a. Both types of IDSs are reactive in nature and are typically able to raise an alert only after an attack has already taken place. Although they are useful for conducting computer forensics to help determine the nature of the attack, they are unable to prevent the attack from taking place.
b. The current generation of IDS solutions is built around the idea of anomaly
detection and misuse detection. In the case of anomaly detection, these IDSs tend to generate voluminous numbers of alerts, which require a tremendous commitment of manpower to sift through painstakingly. This can be extremely overwhelming and is largely a result of large numbers of false
positives. Attempts are being made to reduce the false positives through
enterprise-wide event correlation and analysis [Symantec1 2003]. In the case
of misuse detection, because of its dependency on known signatures, it has
little or no means of detecting novel forms of attacks. There is also a constant
need to keep up with signature updates.
Beyond these network defenses, hosts are further protected using anti-virus software.
Again, anti-virus software packages are signature-based and are only effective against known forms of viral infection or Trojan horses. New forms of virus attack can often have crippling effects; anti-virus software thus suffers from the same deficiency as the IDS.
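The shared weakness of signature-based IDS and anti-virus tools can be seen in miniature below: a scanner built from known byte patterns flags only what it has seen before. All signatures and "attacks" here are invented for illustration.

```python
# Invented signatures for illustration; real IDS/AV databases hold thousands.
SIGNATURES = {
    b"\x90\x90\x90\x90/bin/sh": "shellcode-nop-sled",
    b"GET /../../etc/passwd":   "path-traversal",
}

def scan(payload: bytes) -> list:
    """Return the names of any known signatures found in the payload."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]

known_attack = b"GET /../../etc/passwd HTTP/1.0"
novel_attack = b"GET /%2e%2e/%2e%2e/etc/passwd HTTP/1.0"  # same intent, encoded differently

print(scan(known_attack))  # matched by the database
print(scan(novel_attack))  # missed: no signature for the encoded variant
```

The second request has the same malicious intent but a different byte pattern, so it slips past the database, which is exactly the gap that anomaly detection and the complementary technologies below aim to close.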
It may be possible to overcome these deficiencies using a combination of both
Intrusion Prevention System [Doty 2002] and Immune Computer System as
complementary technologies.
6.5.1 Intrusion Prevention System (IPS)
State of the Technology
Intrusion prevention goes a step further than a Host-based IDS in protecting against
Trojan horses by recognizing unusual behavior and blocking it in real time, before the intrusion can execute. IPS technology appears best suited for deployment as a network perimeter defense against network-based attacks.
Upon detecting an anomaly, the IPS can perform a number of mitigation actions, such as notifying the network operations center, dynamically throttling traffic (rate shaping), redirecting traffic to honey-pots, or denying the unsanctioned traffic.
Rules can be configured to control which actions applications can perform on file,
network and system resources. These run unobtrusively by intercepting system
actions, checking policies and then allowing or denying the action. Preconfigured rules could be packaged for protecting web servers, mail servers and end-user desktops, as proposed by [Doty 2002].
Statistical log data can be used to generate reports that indicate overall network health
which the Security Administrator can then use to monitor the rule sets and fine-tune
them.
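A minimal version of such a rule set might look like the following. The rule format, actions and traffic fields are invented to illustrate the intercept-check-allow/deny flow; they are not taken from any product.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    field: str     # attribute of the intercepted event to test
    value: object  # value that triggers the rule
    action: str    # "deny", "rate-limit", "redirect-honeypot", ...

# Illustrative policy, evaluated top-down; first match wins.
POLICY = [
    Rule("dst_port", 23, "deny"),                       # no Telnet through the perimeter
    Rule("src", "known-scanner", "redirect-honeypot"),  # divert a flagged source
    Rule("rate_exceeded", True, "rate-limit"),          # crude flood control
]

def decide(event: dict) -> str:
    """Intercept an action, check the policy, and allow or deny it."""
    for rule in POLICY:
        if event.get(rule.field) == rule.value:
            return rule.action
    return "allow"  # sanctioned traffic passes by default

print(decide({"dst_port": 23, "src": "10.0.0.5"}))       # denied
print(decide({"dst_port": 80, "src": "known-scanner"}))  # diverted
print(decide({"dst_port": 80, "src": "10.0.0.5"}))       # allowed
```

The Security Administrator's tuning loop described above amounts to adjusting this rule list in response to the statistical reports, tightening rules that miss attacks and relaxing those that generate false positives.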
Requirements
The design requirements of an IPS are articulated as follows [IntruVert 2003]:
• In-line operation. This is required so that an IPS device has the ability to
discard all suspicious packets immediately and to block the remainder of that
flow. This has the consequence of significantly reducing the number of
successful attacks.
• Granularity of control. Decisions can then be made with some granularity
over which malicious traffic to block. This could be controlled by rule-based
definitions in terms of type of attack, by policy or by host.
• Detection accuracy. False positives need to be kept low to prevent self-
induced denial-of-service (DOS). There has to be a high-level of confidence
in the accuracy of the detection.
• Advanced alerting and forensic analysis. Alert mechanisms have to support
quick decision-making and downstream analysis. This may involve the ability
for automated reconstruction of a complex series of attacks, such as
correlation techniques as suggested by [Cuppens 2002] and [Ning 2003].
Such tools can significantly alleviate the current difficulties faced by security
administrators who are swamped by the multitude of alerts from IDSs today.
• Reliability and availability. Should an inline device fail, it has the potential to
close a vital network path, leading to DOS. Hence either a fail-open or fail-over design is required; otherwise availability could be significantly impacted.
• High performance. Packet processing rates must be at wire speed, keeping up with network technology that currently operates on the order of gigabits per second. This may involve the use of Field-Programmable Gate Array (FPGA) and/or Application-Specific Integrated Circuit (ASIC) technology.
• Low latency. Processing has to be fast to minimize latencies introduced into
the network.
Evolving To 2015
Clearly, the above requirements pose significant challenges. Because of its need to operate inline, the IPS can become a potential choke point. Firstly, failure of the IPS could close down the network path, resulting in self-induced denial-of-service. Secondly, its throughput has to keep up with high-bandwidth networks, lest it introduce an unacceptable level of latency. Hence, maintaining a high-reliability and high-availability IPS is essential.
In addition, false positives need to be reduced. Beyond the general desirability of this goal, the problem is more severe in an inline IPS than in a traditional IDS, because the IPS performs active responses whereas an IDS performs only passive detection. False positives may therefore lead to undesired reactions, again resulting in self-induced denial-of-service.
Applicability and Implementation
IPS technology is applicable to any networked system. It can be built as an extension to an IDS and can therefore operate in its minimalist form in a traditional IDS-only mode. This enables a staged implementation: initially deploying the system in an IDS-only mode, then progressively increasing the level of prevention capability until full IPS implementation is reached. A staged implementation would allow assessments to be carried out to gain confidence and fine-tune the policies.
6.5.2 Immune Computer System
State of the Technology
Current research on immune computer systems at the Department of Computer Science, University of New Mexico, has developed a revolutionary approach to security implementation. The work, currently sponsored by the Defense Advanced Research Projects Agency, the National Science Foundation, IBM, and Intel Corporation, is led by Associate Professor Stephanie Forrest.
The idea behind the work is to develop a computer security system that mimics the biological immune system. Biological immune systems protect animals from dangerous foreign pathogens, including bacteria, viruses, parasites, and toxins, by distinguishing "self" from dangerous "other" and eliminating the "other". The problem of protecting computer systems from malicious intrusions can similarly be viewed as the problem of distinguishing self from dangerous non-self. In this case, non-self might be an unauthorized user, foreign code in the form of a computer virus or worm, unanticipated code in the form of a Trojan horse, or corrupted data.
Learning from biological immune systems, Forrest has identified several distinguishing features that provide important clues about how to construct robust computer security systems.
• Multi-layered protection. The body provides many layers of protection
against foreign material. Many computer security systems are monolithic, in
the sense that they define a periphery inside which all activity is trusted.
When the basic defense mechanism is violated, there is rarely a backup
mechanism to detect the violation.
• Distributed protection. The immune system's detection and memory systems are highly distributed; there is no centralized control that initiates or manages a response. Its success arises from the highly localized interactions among individual detectors and effectors, allowing the immune system to allocate resources (cells) where they are most needed.
• Unique copies of the detection system. Each individual in a population has a unique set of protective cells and molecules. Computer security often involves protecting multiple sites, and when a way is found to avoid detection at one site, all sites become vulnerable. A better approach would be to provide each protected location with a unique set of detectors, or even a unique version of the software.
• Detection of previously unseen foreign material. Many virus- and intrusion-detection methods scan only for known patterns, leaving systems vulnerable to attack by novel means; anomaly detection is needed to detect novel attacks.
Architecture
One approach to building computer security is to design systems based on the direct
mapping between immune system components and current computer architecture. A
few possibilities are being explored by the research team:
• Protecting Static Data. Computer viruses typically infect programs or boot
sectors by inserting instructions into program files stored on disk. Self is
interpreted as uncorrupted data and non-self is interpreted as any changes to
self.
• Protecting Active Processes on a Single Host. Each computer supports many
active processes. For active processes, self would be defined by normal
behavior and non-self would be abnormal behavior in the form of intrusions.
The Immune Computer System consists of a “lymphocyte” process which is
able to query other processes, to see whether they are functioning normally. In
response, the lymphocyte process could slow, suspend, kill, or restart the
misbehaving process. To protect the “lymphocyte” process from attack,
“lymphocyte” process could have a randomly generated detector or set of
detectors, living for a limited amount of time, after which it would be replaced
by another “lymphocyte”.
• Protecting a Network of Mutually Trusting Computers. The "lymphocyte" process could migrate between computers, making them mobile agents. There
is no centralized server to coordinate a response to a security breach; the
detecting lymphocyte can take whatever action is necessary, possibly
replicating and circulating itself to find similar problems on other hosts. In this
way, anomalies found on one computer could also be quickly eliminated from
other computers in the network.
Design
• The idea of the immune computer system has been implemented to solve the problems of network intrusion detection, computer virus detection and host-based intrusion detection. In a network intrusion detection system, detectors are used to detect patterns comprising a source address, a destination address and the communicating program. All normally observed and acceptable connections, both those within the LAN and those connecting the outside world to the LAN, form the set of self patterns, and all others form the set of non-self patterns.
• A detector is initially created at random and then remains immature for a certain period of time, known as the toleration period. If the detector matches any pattern even once during toleration, it is replaced by a new randomly generated detector string. If a detector survives immaturity, it will
exist for a finite lifetime. At the end of that lifetime it is replaced by a new
random detector string, unless it has exceeded its match threshold and
becomes a memory detector. If the activation threshold is exceeded for a
mature detector, it is activated. If an activated detector does not receive co-
stimulation, it dies. However, if the activated detector receives co-stimulation,
it enters the competition to become a memory detector with an indefinite
lifespan. Memory detectors are analogous to long-lived immune memory cells,
in that they have much extended life spans, and have lower thresholds of
activation. Memory detectors greatly enhance detection of previously seen
attacks by automatically extracting and encoding signatures of attacks.
• Similar implementations were used for computer virus detection and host-based intrusion detection.
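The tolerization step described above, random generation followed by deletion of anything that matches self, is the negative-selection algorithm. It can be sketched with toy bit-string patterns; the pattern length and the exact-match rule are simplifications of the published r-contiguous-bits matching scheme.

```python
import random

random.seed(7)
L = 10  # toy pattern length in bits

def rand_pattern():
    return tuple(random.randint(0, 1) for _ in range(L))

# "Self": the patterns observed during the toleration period (normal activity).
SELF = {rand_pattern() for _ in range(100)}

def tolerize(n: int):
    """Negative selection: a candidate detector that matches self even once
    during toleration is discarded and regenerated."""
    detectors = set()
    while len(detectors) < n:
        d = rand_pattern()
        if d in SELF:
            continue           # matched self -> replaced by a new random detector
        detectors.add(d)       # survived immaturity
    return detectors

detectors = tolerize(50)
assert not (detectors & SELF)  # survivors never flag normal activity

def is_anomalous(pattern) -> bool:
    return pattern in detectors  # any detector match signals non-self

print(len(detectors), "detectors survived tolerization")
```

Because every surviving detector was tolerized against self, a match can only mean non-self; the maturation, activation and memory stages of the lifecycle then govern how long each detector lives and how cheaply repeat attacks are recognized.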
Applicability and Implementation
The principle of the immune computer system can be applied to all computer-based systems involved in the protection of ExWar assets, spanning from the command and control center in the ExWar ships to the sensors and the defensive and offensive weapon systems.
At the individual machine level, detectors should be installed to monitor both active processes and static data.
Immune-based host and network intrusion detection systems should be installed to protect host and network computers.
6.6 Intelligent Software Decoy
Intelligent Software Decoy is a concept that implements an intelligent means of
intrusion detection and response to patterns of suspicious behavior. It is an abstraction
for protecting information systems from malicious attacks by using mobile agents. It
employs deception techniques to convince the attacker that it is the object it advertises itself to be, and to reveal the presence of the attacker so that an appropriate response can be made.
Current Situation
Two strategies for defending against attacks in cyberspace are widely used: (1) identifying and fixing known vulnerabilities of an information system, and (2) detecting attacks before they inflict significant damage on an information system or the legitimate users of the system [Michael2 2003]. These strategies are not sufficient to
ensure either the survivability or the intrusion tolerance of critical information
systems, such as those comprising the information infrastructure of an organization.
These systems have to both survive and tolerate attacks perpetrated by highly trained aggressors who, unlike script-kiddies, continually customize their existing arsenal of attack programs and create new ones in order both to avoid detection and to achieve the maximum desired effect.
The intelligent software decoy has both a protection and a counterintelligence component. The decoy consists of one or more software wrappers [Calvin 2000] placed around a
unit of software (e.g., component or method), with each wrapper consisting of a set of
rules for detecting and responding to suspicious behaviors. Instead of indicating to the
attacker that he has been detected, the decoy keeps the attacker occupied by creating
the illusion for the attacker that the attack is progressing as expected, using techniques
ranging from fake error messages to redirecting the interaction with the attacking
computer process to a virtual environment. The goal is threefold: to gather information about the nature of the attack, to adjust the system's defenses based on the intelligence gathered, and to impose an opportunity cost on the attacker (e.g., wasting attack resources that could have been better applied elsewhere, or exposing sources and methods).
There are two basic requirements for this approach to be successful: being able to
detect the attack, and responding without human intervention. The concept proposes
the use of an event-based language to meet these two requirements. This language
uses event patterns to define suspicious behaviors and the actions to be taken when
the events occur.
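Such a wrapper can be caricatured in a few lines: illustrative event patterns map suspicious commands to deceptive responses rather than outright rejection. All patterns and reply strings below are invented, and the actual prototype uses NAI's wrapper toolkit, not Python.

```python
# Invented event patterns mapped to deceptive responses; a real decoy wraps a
# software component and matches sequences of events, not single command strings.
intel_log = []  # what the defender quietly gathers about the attacker

def log_for_counterintelligence(cmd: str) -> None:
    intel_log.append(cmd)

DECOY_RULES = [
    ("SITE EXEC",        "500 'SITE EXEC': command not understood"),  # fake error
    ("USER root",        "331 Password required for root"),           # string the attacker along
    ("RETR /etc/passwd", "150 Opening data connection"),              # pretend the theft worked
]

def ftp_decoy(command: str) -> str:
    for pattern, deceptive_reply in DECOY_RULES:
        if command.startswith(pattern):
            log_for_counterintelligence(command)  # gather intelligence...
            return deceptive_reply                # ...while keeping the attacker occupied
    return "200 Command okay"                     # benign traffic behaves normally

print(ftp_decoy("SITE EXEC %p%p%p%n"))  # attacker sees a plausible error message
print(ftp_decoy("NOOP"))                # legitimate command is unaffected
print(intel_log)                        # defender has recorded the attempt
```

The attacker receives responses consistent with a real server while every suspicious event is logged, which is the protection-plus-counterintelligence pairing described above.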
State of the Technology
A prototype of such a system has been developed using NAI Labs' Generic Software
Wrapper Toolkit. A case study was done on an FTP-based intrusion. The intelligent
software decoy concept implemented with this wrapper was able to deceive the
attacker into believing that the attack was successful. The deception ends when the attacker tries to interact with the shell, as shell functionality is not currently simulated. The attacker will then discover that something went wrong and possibly suspect that the targeted FTP server utilizes a deception mechanism.
Researchers are currently working through more case studies to identify the language
features before beginning work on building a compiler that will automatically create
wrapper definitions (using NAI’s Wrapper Definition Language).
Evolving to 2015
There are three main aspects in the evolution of the intelligent software decoy concept.
These include:
• Improving detection accuracy by simultaneously applying multiple and
complementary intrusion detection techniques to the same event stream. This
has the potential of providing more accurate detections.
• Strengthening the deception concept through statistical study of attacker
behaviors. This involves the study of human behavior to develop appropriate
deception plans under various environments. The decoy must be able to
change its appearance via polymorphism [Michael 2001].
• Responses to attack are currently limited to revealing the presence of the attacker and containing the attack. The research could be further extended to include counterattacking the attacker's system. However, this area of research is restricted by the law of information conflict [Michael1 2003].
Applicability and Implementation
Present protective software is focused mainly on detection capabilities. Upon detecting an intrusion, responses are often limited to simply isolating the system under attack from the network or tracing the attack to its source. In the context of the Sea Base, the Intelligent Software Decoy can provide a better solution in the information warfare environment, where responses can take the form of deception or counterattack. The goal of the former is to tie down enemy information resources, while the latter aims to destroy the enemy's information systems.
6.7 System Redundancy - ForceNet
Current Situation
Information system redundancy and survivability are critical aspects of information
assurance that seek to provide high availability in software, hardware, network
systems and components. The need for information system redundancy and
survivability in a Sea Base environment is obvious enough – to ensure availability of
information systems throughout the mission of the task force. Given that the forces
are typically far removed from maintenance or support facilities and thus need to be
self-sufficient for the duration of the operation, it is essential that Sea Base
information systems have adequate redundancy and survivability built in to ensure
continuous system availability.
Present State of Technology
Information system availability is currently achieved through [Kalra 2000]:
• cloning, providing multiple instances of servers and other critical systems to
meet high demand;
• load balancing, used to distribute the load across the clones to ensure that a
failed server’s load is seamlessly taken over by other servers;
• fault tolerance, the ability of the system to tolerate failures in both hardware and software, which is closely intertwined with availability requirements.
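Cloning, load balancing and failover fit together as sketched below. The server names are invented, and a production balancer would use active health checks and weighted scheduling rather than this simple round-robin.

```python
from itertools import cycle

class Balancer:
    """Round-robin over cloned servers, skipping any marked unhealthy."""
    def __init__(self, clones):
        self.health = {name: True for name in clones}
        self._ring = cycle(clones)

    def route(self) -> str:
        for _ in range(len(self.health)):
            server = next(self._ring)
            if self.health[server]:
                return server
        raise RuntimeError("no healthy clones available")

lb = Balancer(["web-1", "web-2", "web-3"])  # three clones of the same service
assert [lb.route() for _ in range(3)] == ["web-1", "web-2", "web-3"]

lb.health["web-2"] = False                  # a clone fails...
served = {lb.route() for _ in range(6)}
assert served == {"web-1", "web-3"}         # ...its load shifts to the survivors
```

The failed clone's traffic is absorbed seamlessly by the remaining instances, which is the availability behavior the three mechanisms above are meant to deliver in combination.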
Information systems survivability refers to the ability of a computer-communication
system-based application to satisfy critical requirements in the face of adversities such
as hardware faults, software flaws, attacks on systems and networks perpetrated by
malicious users, and electromagnetic interference.
The problems faced by information systems with respect to system availability today
include the following [Neumann 2000]:
• commercially available mass-market software systems tend to be very poor
with respect to security and reliability, and worse in overall system and
network survivability;
• software components are often incompatible with one another, even when
obtained from the same developer;
• interoperability and reusability are much less than what should reasonably be expected;
• compatibility with legacy systems is driving many systems toward their lowest common denominator.
Extending the above into the Sea Base context, these limitations result in a system with networks that are stove-piped, single-path and fragile, and command and control that is only somewhat centralized [Mayo 2003].
Evolving To 2015
The development of ForceNet over the next decade seeks to provide [Clark 2002]:
• the integration of warriors, sensors, networks, command and control,
platforms, and weapons into a fully netted combat force;
• the operational realization of network-centric warfare;
• distributed and interoperable systems that provide high availability and
remove single points of failure.
When realized, it will achieve a distributed, collaborative command and control
system supported by a dynamic, multi-path and survivable network; thereby providing
the assurance of system redundancy and survivability.
Application And Implementation
With ForceNet as “the operational construct and architectural framework for Naval
warfare in the information age”, the effectiveness of Naval operations such as Sea
Strike, Sea Shield and Sea Basing will be greatly increased, while providing for
interoperability with other joint forces [Mayo, Nathman 2003].
Fig 6-14: ForceNet Architecture [Navy 2001]
With an extensive network of sensors, processes, databases, applications, weapons,
and forces, ForceNet would be able to support dynamic command and control
between Naval Forces globally (see Fig 6-14). Network defensive measures will
incorporate defense in depth to ensure that networks are reliable [Navy 2003].
6.8 Security through Obscurity
Current Situation
Many security implementations advocate “Layers of Defense” over “Security through
Obscurity”. However, obscurity should not be lightly discounted, as it can
considerably enhance the security of system implementations. Obscurity is therefore
seen here as a potential enhancement to security, not a replacement for it.
a. Encryption Algorithms
Many security systems employ well-known public encryption algorithms such as
DES, Triple-DES, AES and SkipJack. However, the strength of these public
algorithms is diminished by distributed computing. For example, RSA’s
team of 100,000 computers managed to break DES encryption within 22 hours.
Furthermore, the strength of a public encryption algorithm is ultimately probabilistic:
it is always possible, however improbable, to crack a public encryption on the first
try. The question is whether the owners of the data are willing to live with the
risk that their encrypted data carries this probability of being cracked by a hacker.
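The scale of the brute-force threat can be made concrete with back-of-the-envelope arithmetic. The 10^12 keys-per-second aggregate search rate below is an assumption, chosen to be roughly consistent with the cited 22-hour distributed DES crack, not a measured figure:

```python
# Worst-case time to exhaust the 56-bit DES keyspace at an assumed
# aggregate search rate of 1e12 keys/second (an assumption).
des_keyspace = 2 ** 56                        # 72,057,594,037,927,936 keys
rate = 1e12                                   # keys per second (assumption)
worst_case_hours = des_keyspace / rate / 3600
average_hours = worst_case_hours / 2          # on average, half the keyspace suffices

# The same arithmetic for a 128-bit keyspace shows why key length matters:
aes_keyspace = 2 ** 128
aes_worst_case_years = aes_keyspace / rate / 3600 / 24 / 365
```

At this rate DES falls in under a day, while exhausting a 128-bit keyspace would take on the order of 10^19 years; the residual risk the data owner accepts is the small chance of an early lucky hit.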
b. Security Hardware
There is a tendency to use publicly proven security hardware such as Checkpoint
firewalls and Cisco routers. However, no “proven” hardware can claim to be
vulnerability-free: bugs and loopholes are continually being found and exploited by
hackers. Moreover, popular security hardware attracts many more attack attempts,
since the discovery of a flaw in a widely deployed product allows a hacker to break
into many networks at once.
c. Network Configuration
Higher classification should be placed on the physical and software configuration of
networks. Physical configuration includes details of how the network equipment is
connected, whether there is a DMZ, the modem numbers, and so on. Software
configuration includes IP addresses, the services running on servers, and the versions
of firmware in use. A network configuration is like a roadmap: letting this roadmap
fall into the hands of a hacker allows that person to navigate through the network
directly to specific targets of interest.
d. Operating Systems and Applications
Popular software makes a popular exploitation target. Examples include the
Microsoft operating systems and Internet applications (e.g. Microsoft Outlook,
Internet Explorer).
State of the Technology
a. Encryption Algorithms
Technological know-how in developing obscure encryption algorithms is rare;
training and experience in this area are lacking.
b. Security Hardware
Security hardware tends to be dominated by a few major players. Once again, the
technological know-how needed to develop obscure security hardware is rare.
c. Network Configuration
Insufficient importance is placed on the classification of network configuration
details. Addressing this requires education, and possibly a mindset change, among
network managers.
d. Operating Systems and Applications
Operating systems and applications are also dominated by major vendors, such as
Microsoft, which is not known for producing secure products. Knowledge in secure
operating systems is lacking and applications support is almost non-existent.
Evolving To 2015
a. Encryption Algorithms
Knowledge and experience should be built up in developing obscure algorithms. If a
hacker does not know how an algorithm works, he will not know which part of a
transmission constitutes the key he is trying to guess. He will therefore be unable to
conduct dictionary or brute-force attacks by observing the bit-stream transmissions,
which may well be within his reach.
b. Security Hardware
Obscure security hardware should be used in tandem with publicly proven hardware.
In the event that a flaw or bug is found in a popular firewall, for example, a hacker
may be able to bypass it, but will then encounter an obscure firewall that is unfamiliar
and for which no popular break-in tools exist.
c. Network Configuration
If a hacker does not know which servers are crucial and how they are connected, it
will be much more difficult to carry out attacks. Obscuring the network configuration
can certainly hinder a hacker’s actions, forcing more probing attacks and increasing
the likelihood of detection by an Intrusion Detection System.
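The effect of forcing extra probes can be quantified with a simple independence model. This is a hedged sketch: the 5% per-probe detection rate is an illustrative assumption, not an IDS benchmark.

```python
# If an IDS flags each probe independently with probability p, then the
# chance that at least one of n probes is detected is 1 - (1 - p)**n.
def detection_probability(p, n):
    return 1.0 - (1.0 - p) ** n

p = 0.05  # per-probe detection rate (illustrative assumption)
after_1_probe   = detection_probability(p, 1)    # 0.05
after_10_probes = detection_probability(p, 10)   # ~0.40
after_50_probes = detection_probability(p, 50)   # ~0.92
```

Obscurity that forces an attacker from one precise probe into dozens of exploratory ones raises the odds of detection from 5% to over 90% under these assumptions.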
d. Operating Systems and Applications
With the use of hardened operating systems and applications, hackers will not be able
to use the hacking tools readily available on the Internet to attack a network. Fewer
security flaws can be expected, resulting in higher availability.
Applicability and Implementation
In the “Layers of Defense” model, obscurity should certainly be one of the
cornerstones. Encryptors, security hardware such as firewalls and packet filters, and
hardened operating systems and applications are all necessary in building a secure IT
infrastructure. Adding obscurity hardens all these components and makes it difficult,
if not impossible, for a hacker to break through the layers of defense.
6.9 SimSecurity
Current Situation
Automated Information Systems (AIS) are now pervasively used by non-traditional IT
users, many of whom are trained only on the use of the specific applications deployed
on the platform, and little else. Although various technical IA measures may be in
place to secure these systems, human error and operator ignorance may leave systems
open to other forms of attack. Consequently, security education, training and
awareness supplement the technical security measures to form an essential part of a
layered defense strategy.
However, these are time-consuming activities that take users away from their
operational functions. Such training, whether classroom-based or delivered online, is
costly as well. Practical considerations prevent these courses from providing all the
prerequisite knowledge for security awareness; as a result, the training is too broad
and too shallow. Training usually covers all common modes of attack, while
individuals are more interested in those their enclave is most likely to experience.
Given limited time, each exploit can only be addressed superficially, and little
emphasis is possible on post-incident response and reaction [Tanner 2002].
If the training is not captivating enough, interest and knowledge are likely to ebb
away rapidly, marginalizing the value of the training. Repeat or follow-up training at
regular intervals is one mitigation, but it increases costs.
A novel approach to overcoming these shortfalls is to adopt gaming technology as a
pedagogical tool. Using games to put ideas across enables the individual to explore
possibilities and consequences. Simulation-based training can be more focused and
less expensive than today’s lecture- and lab-based training, and the simulations can be
customized for greater relevance to the individual’s enclave. If sufficiently
captivating, the gamer goes away with the lessons learnt ingrained, and high replay
value encourages self-motivated follow-on training.
“I hear and I forget. I see and I remember. I do and I learn.”
-- Confucius.
State of the Technology
[Saunders 2003] evaluated several such games, including CyberProtect and the
Information Security War Gaming System.
CyberProtect is a simulation built under contract by the Defense Information Systems
Agency. The game revolves around the acquisition and deployment of defensive
information assurance measures which are applied to a network of computers. The
networked system is then subjected to a variety of randomly generated attacks at the
end of each of four quarters. The attacks that have taken place are briefly narrated
and a summary of the player’s progress is given at the end of the quarter.
Fig 6-15. CyberProtect.
The Information Security War Gaming System, developed by the National Defense
University, is a tutorial-type simulation that provides a more in-depth focus on
specific attacks and defenses. The player selects a list of defenses to employ against
each form of attack, depicted pictorially.
The current generation of these games is, however, fairly restrictive in terms of
options and flexibility. As the game scenarios are static, there is little room for
exploration and replay value is limited.
Design Objectives
[Tanner 2002] describes the design objectives of such a training tool as follows:
• Content: Understand the threat. The first step is to know the enemy.
• Content: Awareness of known weakness and attack techniques. The trainee
learns the weaknesses of his networked computer system and how it can be
exploited. The trainee should be aware of which vulnerabilities can be
eliminated and which are unavoidable exposures inherent in the design of the
system.
• Pedagogy: Support multiple training objectives. These include:
o Connecting concepts to practice.
o Repeatability.
o Progressing from novice through more sophisticated scenarios.
o Examining “what if’s” by reconfiguring and trying again.
o Practicing skills in a realistic training environment.
o Developing problem-solving and decision-making skills.
o Learning to recognize operational indicators of normal, abnormal and
emergency conditions.
• Pedagogy: Support multiple views. Enable understanding of cause and effect
by allowing the trainee to take actions and see their manifestations and
effects from the perspective of the attacker, the defender and the forensic
analyst.
• Content: Model the trainee’s environment. By matching the simulated
environment with the trainee’s operational environment, the training can be
more relevant, in-depth, focused and effective.
• Portable, self-contained laboratory. This expands the reach of the tool by
placing it in the hands of the trainee, at his convenience.
Evolving To 2015
[Irvine 2003] describes a concept for an extensive game simulation that closely
matches these design objectives. This has started to take shape in the form of the
development of SimSecurity undertaken by the Center for the Information Systems
Studies and Research (CISR) at the Naval Postgraduate School (NPS). SimSecurity is
designed to be an educational tool for teaching information assurance concepts
through game play. By exercising through various scenarios, each designed to bring
across one or more information assurance concepts, the student gains new insights
and builds-up his/her knowledge of information assurance through hands-on
‘experiences’.
Fig 6-16. SimSecurity.
SimSecurity adopts an experiential interface [Seo 2002] with a 3D world-view
popularized by game genres such as Populous and SimCity 2000. The 3D world
presented to the student shows a physical organizational setup and system
configuration initialized from the scenario definition, including resources such as an
operating budget, computing and network resources, and personnel. The student then
proceeds to acquire and deploy resources, establish security policies and apply
procedures to equipment, implement physical security in zones (a zone defines a
physical space such as a room), and so on. The experiential, interactive interface,
executing in real time, engages the student and helps captivate the student’s attention
with events taking place, such as alerts of possible ongoing penetrations and security
breaches. As highlighted by [Irvine 2003], making the game fun and challenging is
essential, not just to captivate the student’s interest, but also to help ingrain the
lessons learnt.
Due out in 2004, the initial version is slated for a single-player mode only, played
from the defensive point of view. Future versions would extend this to support a
multi-user environment and allow the player to examine scenarios from the
perspective of the attacker as well. The game would also need to be expanded to
support a richer range and depth of concepts.
Applicability and Implementation
With SimSecurity, scenarios can be crafted using the SimSecurity Scenario Editor to
take the novice through basic scenarios introducing various IA concepts, progressing
to more complex scenarios that more closely represent the problems that may be
experienced in the protection of the Sea Base. New scenarios can be created as the
threat environment the Sea Base faces changes. Scenarios can be organized into
packages with the SimSecurity Campaign Editor to structure the learning process.
After Action Reviews (AARs) can then be performed using the SimSecurity
Campaign Analyzer tool, which processes the scenario log file generated after each
game played, to study the cause-and-effect relationship between policy
implementations and their results.
As the game is playable on the Windows operating system, it can be made pervasively
available to trainees, exercising through the scenarios at their own time and pace, and
even while deployed shipboard!
With both richness and reach, SimSecurity can significantly improve the effectiveness
of security education, training and awareness.
7 MODELING INFORMATION ASSURANCE
7.1 Information Assurance Analysis Model (IAAM)
The Information Assurance Analysis Model (IAAM) [Beauregard 2001] was
developed to aid DoD organizations in their efforts to protect valuable information
and information systems. The model is composed of three separate hierarchies: the
level of information assurance, both as a system baseline and as the effect of a new
strategy, technology, or group of technologies; the impact of an information
assurance system on system operational capability; and the impact of an information
assurance approach on resource costs. Figure 7-1 depicts the three hierarchies of the
IAAM: IA, IOC and IRC.
Figure 7-1: Information Assurance Analysis Model
An IA strategy is defined to be either a physical upgrade (hardware, software, or
physical security), a change in policy with the intent of improving information
assurance, or some combination of the two. The best IA strategies increase
information assurance, increase system operational capability, and can be
implemented at a low cost to the organization.
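Each IAAM hierarchy is, in essence, a weighted additive value model: leaf values are scored, then rolled up through the tiers by their weights. A minimal sketch of this roll-up for the IA hierarchy follows; all weights and scores here are illustrative assumptions, not the weights used in [Beauregard 2001]:

```python
def hierarchy_score(weights, scores):
    """Roll up one tier of a value hierarchy as a weighted sum of sub-values.
    Weights are assumed to be normalized to sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical top-tier weights for the IA hierarchy (assumptions).
ia_weights = {"protection": 0.4, "detection": 0.3, "reaction": 0.3}

# Hypothetical leaf scores for a baseline system and a candidate strategy.
baseline_scores = {"protection": 0.6, "detection": 0.5, "reaction": 0.5}
strategy_scores = {"protection": 0.8, "detection": 0.7, "reaction": 0.7}

baseline_ia = hierarchy_score(ia_weights, baseline_scores)  # 0.54
strategy_ia = hierarchy_score(ia_weights, strategy_scores)  # 0.74
```

The same roll-up is repeated for the IOC and IRC hierarchies, so each candidate strategy yields a triple of scores that can be compared against the baseline.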
7.2 Information Assurance Hierarchy (IA)
The IA hierarchy measures the ability of the system and system personnel to assure
information, information systems, and information processes. Information Assurance
is a process that involves the ability to protect information and information systems
(IS), detect events that may interfere with information or IS, and properly react to
situations where information or IS may have been compromised.
The entire IA value hierarchy is given as Figure 7-2, with Information and IS
Protection, Detection, and Reaction composing the highest sub-tier of values.
Figure 7-2: Information Assurance (IA) Value Hierarchy
Information and IS Protection is defined to be the measures taken to ensure that
information and information systems are protected from unauthorized change. This
includes assuring information and IS availability, confidentiality, and integrity.
Detection is defined to be the ability of the system or system personnel to detect an
event. For an organization to gain value from its detection capabilities, detection
must be done quickly, accurately, and at a sufficient level. Detection is therefore
separated into three sub-values: Timely, Accountability, and Flexibility.
Reaction, in this study, is defined to be measures taken to (1) appropriately respond to
an identified attack, (2) restore the information and IS capabilities to an acceptable
state, their original state, or an improved state, and (3) the ability to learn from
previous events so that they do not cause damage in the future. Reaction is thus
separated into Respond, Restore, and Adapt.
7.3 The Impact of IA on System Operational Capability (IOC)
The impact that an information assurance strategy will have on the system’s
operational capability must be considered when determining what IA strategy or
strategies are best for a given organization. The purpose of an information system is
to help personnel accomplish their mission more efficiently; if the system cannot do
this effectively then it is not a useful system. However, if the user cannot trust that
information in the system is available, accurate, up to date, and secure, then the
user’s willingness to depend on the system is greatly decreased. A fine balance
between information assurance and system operational capability must exist in order
to have a secure but usable system.
Figure 7-3: Impact of Information Assurance on System Operational
Capability (IOC) Value Hierarchy
As shown in Figure 7-3 above, the main parameters taken into IOC consideration are
efficiency, functionality, convenience, ease of implementation and flexibility.
7.4 The Impact of IA on Resource Costs
The final consideration when determining what IA strategies to implement is the
impact the strategy will have on information system resources. Resource costs
comprise both the fiscal cost and the manpower cost that an IA strategy will require.
All other things being equal, the strategy that requires the least resources, whether
financial or in personnel time, will be preferred. The complete IRC
value hierarchy is presented as Figure 7-4.
Life Cycle Acquisition Cost is the dollar cost needed to implement and maintain an
IA strategy over its lifetime. As in any acquisition, an IA strategy that costs the least
to acquire, implement, and maintain will be valued more highly than more expensive
strategies, assuming they provide an equal amount of assurance.
Along with a dollar cost, implementing and maintaining an IA strategy will certainly
consume organizational manpower. Users and support personnel are again separated
since they are valued differently when considering information assurance strategies,
with preference again given to the user.
Figure 7-4: Impact of Information Assurance on Resource Costs (IRC) Value
Hierarchy
7.5 Modeling the Future System
The IAAM is applied to study four of the future systems listed in section 6 of this
report. These four areas were selected as they offer the greater pay-off potential for
the protection of the Sea Base. The objective of the study is to highlight the relative
costs and effects of implementing these four technologies. The weights given to each
system are based on the findings of the Information Assurance Analysis Model
(IAAM) [Beauregard 2001], and the assigned scores are based on the team’s
understanding of each system’s operational cost and effectiveness.
Each new technology is evaluated against a baseline system performing the same or a
similar task:

New Technologies            Baseline system
Immune Computer System      Present anti-virus software solution
ForceNet                    Present standalone task force group
SwDecoy                     Present Intrusion Detection System
Laser                       Radio and microwave communication
The details are shown in Appendix 2, and a summary of the results obtained from the
model is shown in Table 7-1.
               Baseline   ICS     ForceNet   SwDecoy   Laser
Impact on IA   0.545      0.759   0.804      0.737     0.634
Impact on IOC  0.717      0.627   0.780      0.633     0.641
Impact on IRC  0.356      0.355   0.064      0.355     0.146
Table 7-1: Summary results obtained from the IAAM model.
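The report compares these scores qualitatively, but the triples in Table 7-1 can also be compared programmatically. The sketch below ranks the candidates by their IA gain over the baseline; the ranking scheme itself is an illustrative choice, not part of the IAAM:

```python
# (IA, IOC, IRC) scores transcribed from Table 7-1.
results = {
    "ICS":      (0.759, 0.627, 0.355),
    "ForceNet": (0.804, 0.780, 0.064),
    "SwDecoy":  (0.737, 0.633, 0.355),
    "Laser":    (0.634, 0.641, 0.146),
}
baseline = (0.545, 0.717, 0.356)

# Rank candidates by their gain in the IA hierarchy over the baseline.
by_ia_gain = sorted(results, key=lambda k: results[k][0] - baseline[0],
                    reverse=True)
# by_ia_gain == ["ForceNet", "ICS", "SwDecoy", "Laser"]
```

ForceNet leads on assurance and operational capability but shows the lowest resource-cost score, matching the qualitative discussion that follows.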
Immune Computer System
The effectiveness of the Immune Computer System was evaluated using the
Information Assurance Analysis Model (IAAM), with the present-day virus and
intrusion detection system as the baseline. The analysis shows that the Immune
Computer System would significantly increase the Information and IS Protection
ability of the Sea Base system, attributable to its increased capability to detect and
react to attack. Implementing the Immune Computer System would impose additional
system overhead, resulting in a reduction in the System Operational Capability (IOC)
and Resource Cost (IRC) scores. However, it should be noted that the analysis was
based on the current state of technology, such as processing power and memory
capacity; improvements in these areas are likely to offset the overhead of the Immune
Computer System.
ForceNet
An analysis of ForceNet was conducted on the Information Assurance Analysis
Model (IAAM) with the baseline set by known present day naval systems. Appendix
2 provides details of the analysis. The result of the analysis shows that ForceNet is
expected to significantly increase the Impact on Information Assurance (IA) and
Impact of IA on System Operational Capability (IOC). However, since ForceNet
involves a new capability buildup, the Impact of IA on Resource Costs (IRC) is
significantly lower. It should be qualified that the above are based on broad present
and future concepts. A more detailed, system-for-system comparison could be
conducted with the IAAM to provide greater granularity as ForceNet begins to deliver
actual systems.
Intelligent Software Decoy
The Intelligent Software Decoy was modeled in the IAAM using the present Intrusion
Detection System (IDS) as the baseline. The analysis showed that a system
implementing the Intelligent Software Decoy concept scores better in the Information
Assurance (IA) hierarchy but lower in the Impact of IA on System Operational
Capability (IOC) and Impact of IA on Resource Costs (IRC) hierarchies. A better
score on the IA hierarchy is expected, since protection of information is one of the
primary goals in the design of the concept. The lower IOC and IRC scores are due to
the concept still being in the experimental stage. The weighting of each component in
the IAAM is based on present requirements, and this weighting may vary over time as
the technology develops.
Laser Communication
Laser communication is regarded as a technology that could greatly increase the
confidentiality of communications through the reduction of the divergence of
transmission signals. Therefore, it would certainly provide a positive boost to the
Information Assurance aspects of a mission.
However, as laser communication is a new technology, significant time would be
needed to implement, test, and train users and support personnel. Availability is also
expected to drop until new technologies like Adaptive Optics are more effectively
used to improve the reliability of laser communication links. Therefore, the overall IA
System Operational Capability decreases even though user throughput and system
capacity is seen to increase. These, together with the enabling of new missions are not
given significant weights as compared to the issues of availability and time to deploy.
Being a new technology, the initial cost in equipping the Sea Base with laser
communication terminals is expected to be high. Therefore, a greatly negative impact
is seen on the IA Resource Cost.
7.6 Integrated Test and Evaluation Model
Whilst the IAAM serves as a useful tool for comparative analysis of the respective
emerging and future technologies vis-à-vis the current technology baseline, an
integrated system-of-systems test and evaluation (T&E) model is necessary for a total
systems engineering approach. The Information Assurance Track supported this
latter development, which was led by the Operations Research Track. The IA portion
of this T&E model, which adopts the dendritic approach proposed by [Hoivik 2003],
is detailed in Appendix 3.
8 RECOMMENDATIONS
8.1 Recommendations for Future Study
The technology areas covered in the look-ahead (Chapter 6) provide many options for further
study, especially as the technologies mature and specific systems can be realized. One area of
special relevance to the USN is ForceNet. As an operational realization of network-centric
warfare, ForceNet integrates multiple sensors, communications systems, sensor and
communications platforms, as well as information systems. It also delves into aspects of system
redundancy, survivability and systems engineering; together these provide excellent
opportunities for future integration between project teams within the Sea Base study.
8.2 Preliminary Information Enablers
This system-of-systems study was intentionally left unstructured at the outset so as to leave the
door open to a divergence of ideas. The experience of the IA sub-group was that a framework
was nevertheless needed to guide the study, so we created one for ourselves. However, new
requirements were introduced during the course of the study: one example was the need for an
Information Assurance model to integrate with the Operations Research study, adopting the
dendritic approach to obtain measures of effectiveness and performance for the systems.
It is the view of the IA project team that certain information (such as a threat analysis) provided
at the beginning of the study would have served to guide the entire study with greater focus and
certainty.
8.3 Format of the Study
The format of this study represents but one way of applying Information Assurance to project
management. It is not an ideal one, as the IA study was conducted in relative isolation from the
rest of the sub-groups in the Sea Base Protection study (viz. the Communication Systems, Sensor
Systems (EE), Sensor Systems (PH), Operations Research / Modeling & Simulation, and
Weapon Systems sub-groups), resulting in information assurance solutions developed for
plausible information systems within the Sea Base system. Unfortunately, none of these
information systems is related to the systems being studied or developed by the other sub-groups,
as it was felt then that we could not wait in limbo for the others to brainstorm and conceive their
approaches. This go-it-alone approach was deemed at the beginning of the study to be the most
suitable one given the information available and the situation at hand.
The nature of information assurance, or security for that matter, is that it is really everyone’s
business, and a more ideal approach would have been to integrate an information assurance
component into each of the other project teams, to ensure that each of these sub-group studies
consider the requirements for security and information assurance from their inception. Again, in
an ideal world, this would entail assigning an IA expert to each of the project teams to provide
the requisite advice and inputs. Given that the members of the IA project team are themselves
beginners in this field, such assignments were not viable.
A possible compromise, however, could be to assign one IA member to each of the other project
teams in the initial period if only to understand what the other teams are developing, and then
coming back together subsequently to determine which of the sub-group project teams to work
with to develop a system that meets the system or capability requirements as specified by the Sea
Base study, while simultaneously meeting the information assurance requirements as specified
by the various DoD directives. This approach would be similar to what the Operations Research
sub-group is doing with its Testing and Evaluation requirements, and would serve to provide
better integration and realization of common objectives among the sub-groups.
9 CONCLUSION
In this report, we have presented the requirements for information assurance in the protection of
the Sea Base. With the key aspects of information assurance, namely confidentiality, integrity
and availability, as the foundation for our analysis, we discussed the employment of a defense-
in-depth strategy as the strategy of choice in the complex operational, technical and non-
technical environment. This strategy also served as the basis for an analysis of the Sea Base
information systems’ mission, assets, operating environment, as well as their likely threats and
vulnerabilities.
In response to this analysis, the baseline DoN IA policy model showed that while there were no
inherent weaknesses to the policy, several implementation challenges were identified which
could compromise the efficacy of the policy if incorrectly implemented or executed. In line with
these challenges, we also proposed an array of protective controls that could be employed to
enhance the Sea Base information systems.
Acknowledging that the operational environment and threats are never constant, a technology
forecast in the information systems arena was researched, guided by the different layers of the
defense in depth strategy. In order to facilitate a deeper appreciation of the benefits provided by
some of these emerging technologies, an information assurance model was used to examine the
attributes introduced by these technologies. The IAAM not only compares the differences
between IA strategies, it also highlights the impact of information assurance on system
operational capabilities and resource costs. The four technology areas modeled returned a higher
level of information assurance than the baseline system, but some came at a cost to operational
capability and/or resources.
10 REFERENCES
[Anil 1997] Anil Somayaji, Steven Hofmeyr, Stephanie Forrest. “Principles of a Computer Immune System”. http://citeseer.nj.nec.com/11313.html. Department of Computer Science, University of New Mexico. 1997.
[Boyce 2002] Joseph G. Boyce, Dan W. Jennings. “Information Assurance: Managing Organizational IT Security Risks”. 2002.
[Beauregard 2001] Joseph E. Beauregard. “Modeling Information Assurance”. Thesis, Department of the Air Force, Air University. March 2001.
[Beauregard 2002] Joseph Beauregard, Richard F. Deckro, Stephen P. Chambal. “Modelling Information Assurance: An Application”. Military Operations Research, Vol 7, No 4. 2002.
[Calvin 2000] Calvin Ko, Timothy Fraser, Lee Badger, Douglas K. “Detecting and Countering System Intrusions Using Software Wrappers”. NAI Labs, Network Associates, Inc. Proc. of the 9th USENIX Security Symposium, Denver, Colorado, Aug 14-17, 2000.
[Captus 2003] Captus. “Intrusion Detection and Prevention”. Website on the Captus IPS solution. http://www.captusnetworks.com/solutions/pna_idp.html. Jul 2003.
[Carlo 1997] Carlo Kopp. “Hardening Your Computing Assets”. http://www.globalsecurity.org/military/library/report/1997/harden.pdf. Global Security.
[Carlo 1996] Carlo Kopp. “The Electromagnetic Bomb – a Weapon of Electrical Mass Destruction”. http://www.cs.monash.edu.au/~carlo/. University of Monash, Melbourne, Australia.
[Carlo 2003] Carlo Kopp. “E-Bomb Frequently Asked Questions (FAQ)”. http://www.globalsecurity.org/military/library/report/2003/hpm-faq.htm. University of Monash, Melbourne, Australia.
[Clark 2002] Admiral Vern Clark, U.S. Navy. “Sea Power 21: Projecting Decisive Joint Capabilities”. Proceedings, October 2002.
[Cuppens 2002] Frederic Cuppens, Alexandre Miege. “Alert Correlation in a Cooperative Intrusion Detection Framework”. May 2002.
[Daniel 1999] Daniel Verton. “Experts say electromagnetic pulse devices threaten U.S.”. http://www.fcw.com/fcw/articles/1999/FCW_101899_42.asp. Federal Computer Week. Oct 18, 1999.
[Desai 2003] Neil Desai. “Intrusion Prevention Systems: The Next Step in the Evolution of IDS”. http://securityfocus.com/printable/infocus/1670. 27 Feb 2003.
[DoN1 1999] “Fleet Information System Security - Manager (ISSM) Checklist”. Navy Information Assurance Website - https://infosec.navy.mil. Jan 1999.
[DoN2 2000] “Introduction to Information Assurance Publication”. Dept of the Navy - IA Pub-5239-01 - https://infosec.navy.mil. May 2000.
[Doty 2002] Ted Doty. “New Approach to Intrusion Detection: Intrusion Prevention”. http://www.itsecurity.com/papers/doty1.htm. 23 Jan 2002.
[Eeye 2003] eEye. “Retina Network Security Scanner”. http://www.eeye.com/html/Products/Retina/index.html. 2003.
[ESA 2001] “SILEX Program – Laser Communication between Artemis and Spot 4”. European Space Agency, 9 Jul 2001.
[Farber 1999] Dave Farber. “IP: DES Cracked in 22 hrs, 30 mins”. http://www.interesting-people.org/archives/interesting-people/199901/msg00071.html.
[Frodigh 2001] M. Frodigh, S. Parkvall, C. Roobol. “Future Generation Wireless Networks”. Oct 2001.
[Gray 2002] Gray H. Anthes. “Future Watch: Immune Computer Systems”. http://www.computerworld.com/security/story. ComputerWorld. 9 Dec 2002.
[Georgia 2003] Georgia Tech. “Human Identification at a Distance”. GVU Center/College of Computing. http://www.cc.gatech.edu/cpl. Aug 2002.
[Harris 2003] Tom Harris. “How E-Bombs Work”. http://science.howstuffworks.com/e-bomb.html. HowStuffWorks.
[Hoivik 2003] Thomas H. Hoivik. Lecture slides on “A Dendritic Approach for Establishing Issues”. Jul 2003.
[Irvine 2003] Cynthia Irvine, Michael Thompson. “Teaching Objectives of a Simulation Game for Computer Security”. Jun 2003.
[IRIDIUM1 2003] “Enhanced Mobile Satellite Services (EMSS)”. Iridium Satellite LLC website - http://www.iridium.com. 2003.
[ISS 1998] Internet Security Systems. “Network vs Host-based Intrusion Detection”. 2 Oct 1998.
[InruVert 2003] IntruVert Networks. “Intrusion Prevention: Myths, Challenges and Requirements”. Apr 2003.
[Jane’s 2003] “US eyes 'transformational' communications”. Jane's Information Group, 2003.
[Johnson 2002] R. Colin Johnson. “Hackers beware: Quantum Encryption is coming”. EETimes, Nov 12, 2002.
[Joint1 1998] “Joint Doctrine for Information Operations”. Joint Publication 3-13. 9 Oct 1998.
[Jonathon 2003] Jonathon Phillips. “Human Identification at a Distance (HumanID)”.
Electronic Frontier Foundation. http://www.eff.org/Privacy/TIA/hid.php Aug 2003. [Kalra 2000] Vimal Kalra, NIIT (USA), Inc., “System Redundancy”,
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/ecommerce/plan/sysredun.asp
[Kapp 2002] Steve Kapp. “IEEE 802.11: Leaving the Wire Behind”. Feb 2002. [Kaufman 2002] Charlie Kaufman, Radia Perlman, Mike Speciner. “Network Security: Private
Communication in a Public World”. 2002. [Mayo 2003] VADM Dick Mayo, Commander, Naval Network Warfare Command, “Delivering
ForceNet”, Navy and Marine Corps Symposium, 22 Apr 2003. [Mayo, Nathman 2003] Vice Admiral Richard W. Mayo, U.S. Navy, and Vice Admiral John
Nathman, U.S. Navy, “ForceNet: Turning Information into Power”, Naval Institute Proceedings – February 2003, pp. 42-46.
[MCCDC 2000] “Sea Basing”. A VideoCD published by the Marine Corps Combat
Development Command. 2000. [Michael 2001] Michael J. B.,Riehle, R. D. “Intelligent Software Decoys.” Proc. Monterey
Workshop: Eng. Automation for Software Intensive Syst. Integration, Naval Postgraduate School (Monterey, Calif., June 2001), 178-187.
[Michael1 2003] Michael J. B., Thomas C. Wingfield. “Lawful Cyber Decoy Policy”. Naval
Postgraduate School. Proc IFIP Eighteenth Int. Inf. Security Conf. May 2003. [Michael2 2003] Michael J. B., Georgios Fragkos, Mikhail Auguston, “An Experiment in
Software Decoy Design”. Naval Postgraduate School. Proc IFIP Eighteenth Int. Inf. Security Conf. May 2003.
- 85 -
[Navy 2001] Navy Warfare Development Command 6th Annual Expeditionary Warfare Conference, 29 October - 1 November 2001, “Expeditionary Power Projection”, http://www.dtic.mil/ndia/2001ewc/ncde.pdf
[Navy 2003] Naval Operating Concept for Joint Operations, 2003 [Neumann 2000] Peter Neumann, “Practical Architectures for
Survivable Systems and Networks”, SRI International. http://www.csl.sri.com/users/neumann/survivability.html [Ning 2003] Peng Ning, Douglas Reeves, Yun Cai. “Correlating Alerts Using Prerequisites of
Intrusions”. Jan 2003. [Nessus 2003] Nessus.org. “Nessus”. http://www.nessus.org. 2 Jul 2003. [NPS1 2002] “C4 Architecture”, Chapter 17 of the ExWar Report. Systems Engineering &
Integration project by NPS students/faculty. 2002. [OpenSSH 2003] The OpenSSH Homepage Apr 1, 2003, http://www.openssh.com [Oppenhäuser 2001] Gotthard Oppenhäuser, “A world first : Data transmission between
European satellites using laser light”. European Space Agency, http://www.esa.int/export/esaCP/ESASGBZ84UC_Improving_0.html, 22 Nov 2001
[Rossi 2003] Sandra Rossi. “Intrusion Detection Debate Heats Up”. Article published in
ComputerWorld Australia. http://security.itworld.com/4363/030627/iddebate/page_1.html. 27 Jun 2003.
[Ruggiero 2002] Tony Ruggiero, “Laser Zaps Communication Bottleneck”. Lawrence
Livermore National Laboratory, http://www.llnl.gov/str/December02/Ruggiero.html, Dec 2002
[Shamrock1 2003] “The Immarsat Satellite System”. Shamrock Software website -
http://www.shamrock.de/inm_engl.htm. 2003. [Saunders 2003] John Saunders. “The Case for Modeling and Simulation of Information
Security”. http://www.johnsaunders.com/papers/securitysimulation.htm. 2003. [Sharath 2000] Sharath Pankanti, Ruud M. Bolle, Anil Jain. “Biometrics: The Future of
Identification”. IBM Watson Research Center. IEEE 2000. [Seo 2002] James Jung-Hoon Seo. “Reading the Look and Feel: Interface Design and Critical
Theories”. http://acg.media.mit.edu/people/jseo/courses/cms800/final-paper.html. 14 Jan 2002.
- 86 -
[Snort 2003] Snort. “Snort: The Open Source Network Intrusion Detection System”. http://www.snort.org. Snort website on the open-source NIDS.
[Stephanie ] Steven A. Hofmeyr and Stephanie Forrest. “Immunity by Design: An Artificial
Immune System”. http://ww.cs.unm.edu/~immsec/publications/gecco-steve.pdf Dept. of Computer Science. University of New Mexico.
[Stephanie 1997] Stephanie Forrest, Steven A. Hofmeyr, and Anil Somayaji. “Computer
Immunology”. http://citeseer.nj.nec.com/forrest96computer.html Communications of the ACM. 40(10):88-96. 1997.
[Stephanie 2000] Stephanie Forrest, Steven A. Hormeyr. “Immunology as Information
Processing”. http://www.cs.unm.edu/~steveah/ox.pdf. 2000. [Symantec1 2003] Symantec. Press release by Symantec on their ManTrap “honey-pot”-based
intrusion detection technology, the ManHunt Intrusion Detection product (Network IDS), and the Symantec Host Intrusion Detection System 4.0 (Host-based IDS). http://www.symantec.co.jp/region/au_nz/press/au_021119.html Jul 2003.
[Symantec2 2003] Symantec. “Norton Anti-virus 2003”. http://www.symantec.com/nav. 2003. [Szafranski 1998] COL Richard Szafranski. “A Theory of Information Warfare: Preparing for
2020”. [Tanner 2002] Michael Tanner, Christopher Elsasser, Gregory Whittaker. “Security Awareness
Training Simulation”. http://www.mitre.org/work/tech_papers/tech_papers_01/tanner_security/tanner_security.pdf. 14 Jan 2002.
[Tripwire 2003] Tripwire.org. “Tripwire”. http://www.tripwire.org. 2003. [USNA1 2003] “What is Link16?”. Division of professional development, US Naval Academy
website - http://prodevweb.prodev.usna.edu/SeaNav/NS40x/NS401_old/introduction.html/introduc.html. 2003
[Walt 2001] Charl van der Walt. “Introduction to Security Policies”. A 4-part article -
http://www.securityfocus.com/infocus/1487. Part 1 - 27 Aug 2001, Part 2 - 24 Sep 2001, Part 3 - 9 Oct 2001, Part 4 - 22 Oct 2001.
[WRQ 2003] WRQ Security Primer, “Secure Tunnels: VPNs Versus Open Standards”,
http://www.wrq.com/products/reflection/sec_primer.html
- 87 -
THIS PAGE INTENTIONALLY LEFT BLANK
- 88 -
11 GLOSSARY OF TERMS
AIS - Automated Information Systems
AO - Adaptive Optics
C&A - Certification & Accreditation
COMPUSEC - Computer Security
COMSEC - Communications Security
COTS - Commercial Off-the-Shelf
DoN - Department of the Navy
EMP - Electromagnetic Pulse
EMR - Electromagnetic Radiation
EMSEC - Emanations Security
ESA - European Space Agency
ExWar - Expeditionary Warfare
FAR - False Acceptance Rate
FCG - Flux Compression Generators
FRR - False Rejection Rate
GCCS - Global Command and Control System
IA - Information Assurance
IAAM - Information Assurance Analysis Model
ICS - Immune Computer System
IDS - Intrusion Detection System (host-based or network-based)
IO - Information Operations
IOC - Impact of IA on System Operational Capability
IPS - Intrusion Prevention System
IRC - Impact of IA on Resource Costs
ISR - Intelligence, Surveillance, and Reconnaissance
ISSP - Information System Security Policy
JSIPS - Joint Service Imagery Processing System
LAN - Local Area Network
OpenSSH - Open-source implementation of the Secure Shell (SSH) protocol
PERSEC - Personnel Security
PPTP - Point-to-Point Tunneling Protocol
PSYOPS - Psychological Operations
RFID - Radio Frequency Identification
SOP - Standard Operating Procedures
SSAA - System Security Authorization Agreement
SwDecoy - Intelligent Software Decoy
TLS - Transport Layer Security
USAF - United States Air Force
VPN - Virtual Private Network
APPENDIX 1

Appendix 1.1 Sea Base Communications Linkages
During an operation involving the Sea Base assets, it is envisaged that several generic communications links will continue to exist. Such links include voice and data communications with the Carrier Strike Group, external sensors, platforms, headquarters and the MEB base ashore. Figure A1 illustrates the linkages between the Sea Base and other cooperating units in a STOM/Sea Base operation.
Figure A1 – Sea Base Communication Linkages
In an operation involving Sea Base assets, one of the foremost concerns is the protection of the Sea Base itself. An array of air and naval assets is needed to enforce the air, surface and subsurface security of the Sea Base assets. The Sea Base assets are required to operate under the air umbrella provided by an adjacent Carrier Strike Group (CSG). This drives the need for the Sea Base to maintain both voice and data communications with the CSG in order to facilitate the command and control of air assets, threat prioritization and the Common Operating Picture (COP). In addition to the aircraft deployed from the CSG, the Sea Base's organic protection capabilities will include surface ships, submarines, UAVs and other off-board sensors. These platforms
primarily need to sense the environment in order to achieve early threat detection. This is the necessary preamble for threat localization, prioritization, engagement, re-engagement where necessary, and battle damage assessment. As such, the Sea Base needs to maintain a COP with these platforms in order to achieve superior battlespace awareness for its protection.
Due to the extended nature of the area of operations (AO), the organic protection platforms may receive surveillance data from non-organic (external) platforms in order to cover the last mile of the AO. These external platforms may include UAVs, aircraft and the like. Some of these external platforms, e.g. missiles, are deployed for target engagement in addition to sensing.
One of the missions of the Sea Base is to provide logistics and other support to the MEB that has been projected ashore. As such, the Maritime commander needs to maintain communication links with the MEB commander ashore; it is envisaged that these links will include high-bandwidth voice, data and video linkages. Because the Sea Base's deployment will always be in response to a larger strategic scenario, the maritime force commander also needs to maintain links with the Theatre Command, e.g. Central Command or CINCPAC. These links are likely to be via satellite, given the remoteness of the Command.
Appendix 1.2 Summary of results of analysis of Sea Base External Communications

C3I Links
  Characteristics: wireless; latencies; burst/block; volume of data; dynamic channelling; manned/unmanned
  Vulnerabilities: physical capture/destruction of terminals; malicious users; shared bandwidth; signal degradation
  Threats:
    Confidentiality: eavesdropping by passive EW devices; insider leakage of classified data
    Integrity: spoofing by the enemy; synthetic messages inserted by the enemy (man-in-the-middle)
    Availability: destruction of platforms/sensors by guns, bombs and missiles; jamming and data corruption by active EW equipment
  Protection mechanisms: encryption; emission control; access control; authentication/secret keys; link redundancies; filters

ISR Links
  Characteristics: wireless; latencies; burst/block; volume of data; dynamic channelling; manned/unmanned
  Vulnerabilities: limited or no organic self-protection and cognitive ability of sensors; finite bandwidth and processing capability at sensors; shared and limited bandwidth
  Threats:
    Confidentiality: interception by passive EW devices; insider leakage of classified data
    Integrity: spoofing by the enemy
    Availability: limited bandwidth due to bandwidth mismanagement; jamming by active EW equipment; inhibition of sensors by directed-energy weapons
  Protection mechanisms: encryption; emission control; authentication/secret keys; link redundancies; filters
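The authentication/secret-keys mechanism listed above counters the spoofing and synthetic-message (man-in-the-middle) threats by letting a receiver verify both the origin and the integrity of each message. The sketch below is a minimal, hypothetical illustration using an HMAC over a shared secret key; the key and message contents are invented for this example and do not represent any fielded system.

```python
import hashlib
import hmac

# Hypothetical shared secret, distributed out of band (e.g. by crypto fill).
SHARED_KEY = b"pre-shared-link-key"

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"COP update: track 042 hostile"
t = tag(msg)
assert verify(msg, t)                                     # authentic message accepted
assert not verify(b"COP update: track 042 friendly", t)   # altered/spoofed message rejected
```

An adversary without the shared key cannot forge a valid tag for an inserted message, which is the property the table's "authentication / secret keys" entry relies on.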
Appendix 1.3 Summary of results of analysis of Sea Base Internal Communications

General protection mechanisms+: redundancy with separation; self-defence weapons; vulnerability reduction; access control

C2 Centre
  Threats: physical destruction by guns, bombs and missiles; equipment failure; infiltration/sabotage; power failure; malicious software; hacking; intrusion
  Specific protection mechanisms: administration procedures; software integrity tools; intrusion detection system; encryption; software patches; firewall; trusted equipment and software

Communications Equipment
  Threats: physical destruction by guns, bombs and missiles; equipment failure; infiltration/sabotage; power failure
  Specific protection mechanisms: organic maintenance capability; design for supportability

Power Source
  Threats: physical destruction by guns, bombs and missiles; equipment failure; infiltration/sabotage
  Specific protection mechanisms: organic maintenance capability; design for supportability

Antenna / Sensors
  Threats: physical destruction by guns, bombs and missiles; equipment failure; infiltration/sabotage; power failure; jamming/directed energy
  Specific protection mechanisms: organic maintenance capability; design for supportability; jam-resistance features, e.g. spread-spectrum technology

Internal Linkages / LANs
  Threats: physical destruction by guns, bombs and missiles; equipment failure; infiltration/sabotage; eavesdropping/tapping
  Specific protection mechanisms: organic maintenance capability; encryption; intrusion detection system

Staying Power
  Threats: physical destruction by guns, bombs and missiles; infiltration/sabotage
  Specific protection mechanisms: vulnerability reduction; access control

People
  Threats: physical destruction by guns, bombs and missiles; infiltration/sabotage; intentional/unintentional disclosure
  Specific protection mechanisms: administration procedures; encryption; intrusion detection system

+ Some threats are common to several vulnerable areas; their protection mechanisms are therefore applied to all the affected vulnerable areas.
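The value of "redundancy with separation" as a general protection mechanism can be quantified with a simple availability calculation: with n physically separated links that fail independently, each available with probability a, at least one link remains up with probability 1 - (1 - a)^n. The sketch below is illustrative only; the 0.99 per-link figure is an assumed value, not drawn from this report.

```python
def redundant_availability(a: float, n: int) -> float:
    """Availability of n parallel, independently failing links,
    each with per-link availability a."""
    return 1.0 - (1.0 - a) ** n

single = redundant_availability(0.99, 1)  # one link: ~99% available
dual = redundant_availability(0.99, 2)    # two separated links: ~99.99%
triple = redundant_availability(0.99, 3)  # three links: ~99.9999%
print(round(single, 6), round(dual, 6), round(triple, 6))
```

The independence assumption is exactly why the table specifies redundancy *with separation*: co-located redundant equipment shares the same physical-destruction and power-failure threats and gains far less.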
APPENDIX 2

Appendix 2.1 Information Assurance Hierarchy (Immune Computer System)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Information and IS Protection | Availability | | System up time | 0.063 | 99% | 99.90% | 0.98 | 0.99 | 0.06174 | 0.06237 |
| | Confidentiality | | Change in confidentiality | 0.253 | No change | Increased | 0.5 | 0.75 | 0.1265 | 0.18975 |
| | Integrity | | Change in integrity | 0.190 | No change | Increased | 0.5 | 0.8 | 0.0952 | 0.15232 |
| | Compliance | | % Automated compliance procedures | 0.012 | 20% | 80% | 0.2 | 0.8 | 0.0024 | 0.0096 |
| | | | % Validated compliance | 0.051 | 50% | 80% | 0.1 | 0.5 | 0.0051 | 0.0255 |
| Detection | Timely | | Physical internal time to detect | 0.003 | 2 hrs | 2 hrs | 1 | 1 | 0.003 | 0.003 |
| | | | Electronic internal time to detect | 0.030 | 3 hrs | 3 hrs | 1 | 1 | 0.0302 | 0.0302 |
| | | | Physical external time to detect | 0.012 | 4 hrs | 4 hrs | 0.12 | 0.12 | 0.00144 | 0.00144 |
| | | | Electronic external time to detect | 0.060 | 1 min | 1 min | 1 | 1 | 0.06 | 0.06 |
| | Accountability | Ability to detect event | % Automated detection | 0.132 | 60% | 80% | 0.35 | 0.6 | 0.0462 | 0.0792 |
| | | Ability to categorize event | % Automated detection | 0.025 | 60% | 80% | 0.4 | 0.6 | 0.01 | 0.015 |
| | Flexibility | | Is system flexible? | 0.021 | No? | Yes | 0 | 1 | 0 | 0.0211 |
| Reaction | Respond | Timely | Time to notify support personnel (SP) | 0.031 | Instantaneous, indirect | Instantaneous, indirect | 0.9 | 0.9 | 0.0279 | 0.0279 |
| | | | Time to correctly identify event | 0.016 | 0.75 hrs | 0.25 hrs | 0.625 | 0.85 | 0.01 | 0.0136 |
| | | | Time to take proper action | 0.016 | 15 min | 5 min | 0.75 | 0.92 | 0.012 | 0.01472 |
| | | Flexible deterrence | Point at which event isolated | 0.006 | Single service? | Single service | 0.75 | 0.75 | 0.004725 | 0.004725 |
| | Verify | | Did SP detect, identify, act properly? | 0.016 | Yes | Yes | 1 | 1 | 0.016 | 0.016 |
| | Restore | Timely | Time to restore full infrastructure | 0.016 | 2 hrs | 2 hrs | 0.5 | 0.5 | 0.008 | 0.008 |
| | | | Time to restore data | 0.005 | 0.5 hrs | 0.5 hrs | 1 | 1 | 0.005 | 0.005 |
| | | Accurately | % Recoverable information | 0.030 | 90% | 85% | 0.35 | 0.3 | 0.0105 | 0.009 |
| | Adapt / Learn | | Ability of SP to teach system | 0.001 | Partially taught | Fully taught | 0.5 | 1 | 0.0005 | 0.001 |
| | | | Ability of system to teach self | 0.010 | Adapts with SP help | Adapts automatically | 0.9 | 1 | 0.009 | 0.01 |
| Total | | | | 1.000 | | | | | 0.545405 | 0.759425 |
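The hierarchy above is a weighted additive value model: each measure is scored on [0, 1], and the total is the sum of weight times value over all measures (e.g. 0.063 x 0.98 = 0.06174 for system up time). The sketch below shows the mechanics with three hypothetical (weight, value) rows rather than the table's full set of 22 measures; the numbers are invented for illustration.

```python
# Weighted additive value model: score = sum of weight * value, weights sum to 1.

def ia_score(rows):
    """rows: iterable of (weight, value) pairs; values are scored on [0, 1]."""
    total_weight = sum(w for w, _ in rows)
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v for w, v in rows)

baseline = [(0.5, 0.50), (0.3, 0.98), (0.2, 0.2)]  # hypothetical baseline values
new      = [(0.5, 0.75), (0.3, 0.99), (0.2, 0.8)]  # same weights, post-strategy values
improvement = ia_score(new) - ia_score(baseline)
print(round(ia_score(baseline), 3), round(ia_score(new), 3), round(improvement, 3))
```

Comparing the baseline and new sums in this way is exactly how the tables arrive at their bottom-line totals (0.545405 versus 0.759425 for the Immune Computer System).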
Appendix 2.2 Information Assurance Hierarchy (ForceNet)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Information and IS Protection | Availability | | System up time | 0.063 | 99% | 99.90% | 0.98 | 0.999 | 0.06174 | 0.062937 |
| | Confidentiality | | Change in confidentiality | 0.253 | No change | Increased | 0.5 | 0.75 | 0.1265 | 0.18975 |
| | Integrity | | Change in integrity | 0.190 | No change | Increased | 0.5 | 0.8 | 0.0952 | 0.15232 |
| | Compliance | | % Automated compliance procedures | 0.012 | 20% | 80% | 0.2 | 0.8 | 0.0024 | 0.0096 |
| | | | % Validated compliance | 0.051 | 50% | 80% | 0.1 | 0.5 | 0.0051 | 0.0255 |
| Detection | Timely | | Physical internal time to detect | 0.003 | 2 hrs | 1 hr | 1 | 0.95 | 0.003 | 0.00285 |
| | | | Electronic internal time to detect | 0.030 | 3 hrs | 1 hr | 1 | 0.95 | 0.0302 | 0.02869 |
| | | | Physical external time to detect | 0.012 | 4 hrs | 2 hrs | 0.12 | 0.25 | 0.00144 | 0.003 |
| | | | Electronic external time to detect | 0.060 | 1 min | 1 min | 1 | 1 | 0.06 | 0.06 |
| | Accountability | Ability to detect event | % Automated detection | 0.132 | 60% | 90% | 0.35 | 0.8 | 0.0462 | 0.1056 |
| | | Ability to categorize event | % Automated detection | 0.025 | 60% | 90% | 0.4 | 0.8 | 0.01 | 0.02 |
| | Flexibility | | Is system flexible? | 0.021 | No? | Yes | 0 | 1 | 0 | 0.0211 |
| Reaction | Respond | Timely | Time to notify support personnel (SP) | 0.031 | Instantaneous, indirect | Instantaneous, direct | 0.9 | 1 | 0.0279 | 0.031 |
| | | | Time to correctly identify event | 0.016 | 0.75 hrs | 1 hr | 0.625 | 0.5 | 0.01 | 0.008 |
| | | | Time to take proper action | 0.016 | 15 min | 10 min | 0.75 | 0.85 | 0.012 | 0.0136 |
| | | Flexible deterrence | Point at which event isolated | 0.006 | Single service? | Single service | 0.75 | 0.75 | 0.004725 | 0.004725 |
| | Verify | | Did SP detect, identify, act properly? | 0.016 | Yes | Yes | 1 | 1 | 0.016 | 0.016 |
| | Restore | Timely | Time to restore full infrastructure | 0.016 | 2 hrs | 1 hr | 0.5 | 0.7 | 0.008 | 0.0112 |
| | | | Time to restore data | 0.005 | 0.5 hrs | 0.5 hrs | 1 | 1 | 0.005 | 0.005 |
| | | Accurately | % Recoverable information | 0.030 | 90% | 98% | 0.35 | 0.75 | 0.0105 | 0.0225 |
| | Adapt / Learn | | Ability of SP to teach system | 0.001 | Partially taught | Fully taught | 0.5 | 1 | 0.0005 | 0.001 |
| | | | Ability of system to teach self | 0.010 | Adapts with SP help | Adapts automatically | 0.9 | 1 | 0.009 | 0.01 |
| Total | | | | 1.000 | | | | | 0.545405 | 0.804372 |
Appendix 2.3 Information Assurance Hierarchy (Intelligent Software Decoy)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Information and IS Protection | Availability | | System up time | 0.063 | 99% | 99.90% | 0.98 | 0.99 | 0.06174 | 0.06237 |
| | Confidentiality | | Change in confidentiality | 0.253 | No change | Increased | 0.5 | 0.75 | 0.1265 | 0.18975 |
| | Integrity | | Change in integrity | 0.190 | No change | Increased | 0.5 | 0.8 | 0.0952 | 0.15232 |
| | Compliance | | % Automated compliance procedures | 0.012 | 20% | 80% | 0.2 | 0.8 | 0.0024 | 0.0096 |
| | | | % Validated compliance | 0.051 | 50% | 80% | 0.1 | 0.5 | 0.0051 | 0.0255 |
| Detection | Timely | | Physical internal time to detect | 0.003 | 2 hrs | 2 hrs | 1 | 1 | 0.003 | 0.003 |
| | | | Electronic internal time to detect | 0.030 | 1 hr | 1 hr | 1 | 1 | 0.0302 | 0.0302 |
| | | | Physical external time to detect | 0.012 | 4 hrs | 4 hrs | 0.12 | 0.12 | 0.00144 | 0.00144 |
| | | | Electronic external time to detect | 0.060 | 30 min | 30 min | 0.125 | 0.125 | 0.0075 | 0.0075 |
| | Accountability | Ability to detect event | % Automated detection | 0.132 | 80% | 90% | 0.6 | 0.8 | 0.0792 | 0.1056 |
| | | Ability to categorize event | % Automated detection | 0.025 | 60% | 90% | 0.4 | 0.8 | 0.01 | 0.02 |
| | Flexibility | | Is system flexible? | 0.021 | No? | Yes | 0 | 1 | 0 | 0.0211 |
| Reaction | Respond | Timely | Time to notify support personnel (SP) | 0.031 | Instantaneous, indirect | Instantaneous, indirect | 0.9 | 0.9 | 0.0279 | 0.0279 |
| | | | Time to correctly identify event | 0.016 | 1 hr | 0.5 hrs | 0.5 | 0.75 | 0.008 | 0.012 |
| | | | Time to take proper action | 0.016 | 15 min | 5 min | 0.75 | 0.92 | 0.012 | 0.01472 |
| | | Flexible deterrence | Point at which event isolated | 0.006 | Single service? | Completely isolated | 0.75 | 1 | 0.004725 | 0.0063 |
| | Verify | | Did SP detect, identify, act properly? | 0.016 | Yes | Yes | 1 | 1 | 0.016 | 0.016 |
| | Restore | Timely | Time to restore full infrastructure | 0.016 | 2 hrs | 2 hrs | 0.5 | 0.5 | 0.008 | 0.008 |
| | | | Time to restore data | 0.005 | 0.5 hrs | 0.5 hrs | 1 | 1 | 0.005 | 0.005 |
| | | Accurately | % Recoverable information | 0.030 | 90% | 85% | 0.35 | 0.3 | 0.0105 | 0.009 |
| | Adapt / Learn | | Ability of SP to teach system | 0.001 | Partially taught | Fully taught | 0.5 | 1 | 0.0005 | 0.001 |
| | | | Ability of system to teach self | 0.010 | Adapts with SP help | Adapts with SP help | 0.9 | 0.9 | 0.009 | 0.009 |
| Total | | | | 1.000 | | | | | 0.523905 | 0.7373 |
Appendix 2.4 Information Assurance Hierarchy (Laser Communication)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Information and IS Protection | Availability | | System up time | 0.063 | 99% | 90% | 0.98 | 0.25 | 0.06174 | 0.01575 |
| | Confidentiality | | Change in confidentiality | 0.253 | No change | Greatly increased | 0.5 | 1 | 0.1265 | 0.253 |
| | Integrity | | Change in integrity | 0.190 | No change | No change | 0.5 | 0.5 | 0.0952 | 0.0952 |
| | Compliance | | % Automated compliance procedures | 0.012 | 20% | 90% | 0.2 | 0.9 | 0.0024 | 0.0108 |
| | | | % Validated compliance | 0.051 | 50% | 50% | 0.1 | 0.1 | 0.0051 | 0.0051 |
| Detection | Timely | | Physical internal time to detect | 0.003 | 2 hrs | 2 hrs | 1 | 1 | 0.003 | 0.003 |
| | | | Electronic internal time to detect | 0.030 | 3 hrs | 3 hrs | 1 | 1 | 0.0302 | 0.0302 |
| | | | Physical external time to detect | 0.012 | 4 hrs | 4 hrs | 0.12 | 0.12 | 0.00144 | 0.00144 |
| | | | Electronic external time to detect | 0.060 | 1 min | 1 min | 1 | 1 | 0.06 | 0.06 |
| | Accountability | Ability to detect event | % Automated detection | 0.132 | 60% | 60% | 0.35 | 0.35 | 0.0462 | 0.0462 |
| | | Ability to categorize event | % Automated detection | 0.025 | 60% | 60% | 0.4 | 0.4 | 0.01 | 0.01 |
| | Flexibility | | Is system flexible? | 0.021 | No? | No? | 0 | 0 | 0 | 0 |
| Reaction | Respond | Timely | Time to notify support personnel (SP) | 0.031 | Instantaneous, indirect | Instantaneous, indirect | 0.9 | 0.9 | 0.0279 | 0.0279 |
| | | | Time to correctly identify event | 0.016 | 0.75 hrs | 0.75 hrs | 0.625 | 0.625 | 0.01 | 0.01 |
| | | | Time to take proper action | 0.016 | 15 min | 15 min | 0.75 | 0.75 | 0.012 | 0.012 |
| | | Flexible deterrence | Point at which event isolated | 0.006 | Single service? | Single service? | 0.75 | 0.75 | 0.004725 | 0.004725 |
| | Verify | | Did SP detect, identify, act properly? | 0.016 | Yes | Yes | 1 | 1 | 0.016 | 0.016 |
| | Restore | Timely | Time to restore full infrastructure | 0.016 | 2 hrs | 2 hrs | 0.5 | 0.5 | 0.008 | 0.008 |
| | | | Time to restore data | 0.005 | 0 | 0 | 1 | 1 | 0.005 | 0.005 |
| | | Accurately | % Recoverable information | 0.030 | 90% | 90% | 0.35 | 0.35 | 0.0105 | 0.0105 |
| | Adapt / Learn | | Ability of SP to teach system | 0.001 | Partially taught | Partially taught | 0.5 | 0.5 | 0.0005 | 0.0005 |
| | | | Ability of system to teach self | 0.010 | Adapts with SP help | Adapts with SP help | 0.9 | 0.9 | 0.009 | 0.009 |
| Total | | | | 1.000 | | | | | 0.545405 | 0.634315 |
Appendix 2.5 IA on System Operational Capability (Immune Computer System)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Efficiency | Ability to process users | | Change in user throughput | 0.049 | No change | No change | 0.6 | 0.6 | 0.0294 | 0.0294 |
| | Impact on system overhead | | Change in system capacity | 0.198 | No change | Decrease | 0.6 | 0.2 | 0.1188 | 0.0396 |
| Functionality | Missions enabled | | Did strategy enable new mission? | 0.03 | No | No | 0 | 0 | 0 | 0 |
| | Availability | | Change in availability | 0.242 | No change | Significantly increased | 0.9 | 1 | 0.2178 | 0.242 |
| | Compatibility | | Degree of difficulty | 0.122 | No difficulty | Simple | 1 | 0.9 | 0.122 | 0.1098 |
| Convenience | Accessibility | | Change in accessibility | 0.055 | No change | No change | 0.5 | 0.5 | 0.0275 | 0.0275 |
| | Complexity | User | Change in user complexity | 0.154 | No change | No change | 0.5 | 0.5 | 0.077 | 0.077 |
| | | Support personnel (SP) | Change in SP complexity | 0.038 | No change | Minimal increase | 0.6 | 0.45 | 0.0228 | 0.0171 |
| | Ease of implementation | Software | Time to implement and test | 0.012 | 4 hrs | 5 hrs | 0.5 | 0.2 | 0.006 | 0.0024 |
| | | Hardware | Time to implement and test | 0.006 | 2 days | 2 days | 0.6 | 0.6 | 0.0036 | 0.0036 |
| | | Physical | Time to implement and test | 0.002 | 4 weeks | 4 weeks | 0 | 0 | 0 | 0 |
| | Usage history | | Exposure in similar industry | 0.006 | Industry standard | Moderate exposure | 1 | 0.55 | 0.006 | 0.0033 |
| | | | SP experience | 0.024 | High experience | Moderate experience | 1 | 0.55 | 0.024 | 0.0132 |
| Flexibility | Upgradability | | Can system be upgraded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| | Expandability | | Can system be expanded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| Total | | | | 1.000 | | | | | 0.7169 | 0.6269 |
Appendix 2.6 IA on System Operational Capability (ForceNet)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Efficiency | Ability to process users | | Change in user throughput | 0.049 | No change | Increase | 0.6 | 0.85 | 0.0294 | 0.04165 |
| | Impact on system overhead | | Change in system capacity | 0.198 | No change | Increase | 0.6 | 0.85 | 0.1188 | 0.1683 |
| Functionality | Missions enabled | | Did strategy enable new mission? | 0.03 | No | Yes | 0 | 1 | 0 | 0.03 |
| | Availability | | Change in availability | 0.242 | No change | Significantly increased | 0.9 | 1 | 0.2178 | 0.242 |
| | Compatibility | | Degree of difficulty | 0.122 | No difficulty | No difficulty | 1 | 1 | 0.122 | 0.122 |
| Convenience | Accessibility | | Change in accessibility | 0.055 | No change | Minimal increase | 0.5 | 0.8 | 0.0275 | 0.044 |
| | Complexity | User | Change in user complexity | 0.154 | No change | Minimal increase | 0.5 | 0.2 | 0.077 | 0.0308 |
| | | Support personnel (SP) | Change in SP complexity | 0.038 | No change | Minimal increase | 0.6 | 0.45 | 0.0228 | 0.0171 |
| | Ease of implementation | Software | Time to implement and test | 0.012 | 4 hrs | 5 hrs | 0.5 | 0.2 | 0.006 | 0.0024 |
| | | Hardware | Time to implement and test | 0.006 | 2 days | 2 days | 0.6 | 0.6 | 0.0036 | 0.0036 |
| | | Physical | Time to implement and test | 0.002 | 4 weeks | 4 weeks | 0 | 0 | 0 | 0 |
| | Usage history | | Exposure in similar industry | 0.006 | Industry standard | Moderate exposure | 1 | 0.55 | 0.006 | 0.0033 |
| | | | SP experience | 0.024 | High experience | Moderate experience | 1 | 0.55 | 0.024 | 0.0132 |
| Flexibility | Upgradability | | Can system be upgraded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| | Expandability | | Can system be expanded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| Total | | | | 1.000 | | | | | 0.7169 | 0.78035 |
Appendix 2.7 IA on System Operational Capability (Intelligent Software Decoy)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Efficiency | Ability to process users | | Change in user throughput | 0.049 | No change | No change | 0.6 | 0.6 | 0.0294 | 0.0294 |
| | Impact on system overhead | | Change in system capacity | 0.198 | No change | Decrease | 0.6 | 0.2 | 0.1188 | 0.0396 |
| Functionality | Missions enabled | | Did strategy enable new mission? | 0.03 | No | Yes | 0 | 1 | 0 | 0.03 |
| | Availability | | Change in availability | 0.242 | No change | No change | 0.9 | 0.9 | 0.2178 | 0.2178 |
| | Compatibility | | Degree of difficulty | 0.122 | No difficulty | Simple | 1 | 0.9 | 0.122 | 0.1098 |
| Convenience | Accessibility | | Change in accessibility | 0.055 | No change | No change | 0.5 | 0.5 | 0.0275 | 0.0275 |
| | Complexity | User | Change in user complexity | 0.154 | No change | No change | 0.5 | 0.5 | 0.077 | 0.077 |
| | | Support personnel (SP) | Change in SP complexity | 0.038 | No change | Minimal increase | 0.6 | 0.45 | 0.0228 | 0.0171 |
| | Ease of implementation | Software | Time to implement and test | 0.012 | 4 hrs | 5 hrs | 0.5 | 0.2 | 0.006 | 0.0024 |
| | | Hardware | Time to implement and test | 0.006 | 2 days | 2 days | 0.6 | 0.6 | 0.0036 | 0.0036 |
| | | Physical | Time to implement and test | 0.002 | 4 weeks | 4 weeks | 0 | 0 | 0 | 0 |
| | Usage history | | Exposure in similar industry | 0.006 | Industry standard | Moderate exposure | 1 | 0.55 | 0.006 | 0.0033 |
| | | | SP experience | 0.024 | High experience | Moderate experience | 1 | 0.55 | 0.024 | 0.0132 |
| Flexibility | Upgradability | | Can system be upgraded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| | Expandability | | Can system be expanded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| Total | | | | 1.000 | | | | | 0.7169 | 0.6327 |
Appendix 2.8 IA on System Operational Capability (Laser Communication)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Efficiency | Ability to process users | | Change in user throughput | 0.049 | No change | Significantly increased | 0.6 | 1 | 0.0294 | 0.049 |
| | Impact on system overhead | | Change in system capacity | 0.198 | No change | Significantly increased | 0.6 | 1 | 0.1188 | 0.198 |
| Functionality | Missions enabled | | Did strategy enable new mission? | 0.03 | No | Yes | 0 | 1 | 0 | 0.03 |
| | Availability | | Change in availability | 0.242 | No change | Decrease | 0.9 | 0.2 | 0.2178 | 0.0484 |
| | Compatibility | | Degree of difficulty | 0.122 | No difficulty | No difficulty | 1 | 1 | 0.122 | 0.122 |
| Convenience | Accessibility | | Change in accessibility | 0.055 | No change | No change | 0.5 | 0.5 | 0.0275 | 0.0275 |
| | Complexity | User | Change in user complexity | 0.154 | No change | No change | 0.5 | 0.5 | 0.077 | 0.077 |
| | | Support personnel (SP) | Change in SP complexity | 0.038 | No change | No change | 0.6 | 0.6 | 0.0228 | 0.0228 |
| | Ease of implementation | Software | Time to implement and test | 0.012 | 4 hrs | Significant | 0.5 | 0 | 0.006 | 0 |
| | | Hardware | Time to implement and test | 0.006 | 2 days | Significant | 0.6 | 0 | 0.0036 | 0 |
| | | Physical | Time to implement and test | 0.002 | 4 weeks | Significant | 0 | 0 | 0 | 0 |
| | Usage history | | Exposure in similar industry | 0.006 | Industry standard | Minimal exposure | 1 | 0.3 | 0.006 | 0.0018 |
| | | | SP experience | 0.024 | High experience | Minimal experience | 1 | 0.1 | 0.024 | 0.0024 |
| Flexibility | Upgradability | | Can system be upgraded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| | Expandability | | Can system be expanded? | 0.031 | Yes | Yes | 1 | 1 | 0.031 | 0.031 |
| Total | | | | 1.000 | | | | | 0.7169 | 0.6409 |
Appendix 2.9 IA on Resource Cost (Immune Computer System)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Life Cycle | Initial acquisition | | Computer system | 0.134 | $0.8M | $1.0M | 0.075 | 0 | 0.01005 | 0 |
| | | | Physical construction | 0.133 | $0.0M | $0.0M | 1 | 1 | 0.133 | 0.133 |
| | Recurring | | Normalized cost per year | 0.133 | $0.1M | $0.1M | 0.5 | 0.5 | 0.0665 | 0.0665 |
| Personnel | User | | Time needed to learn IA strategy | 0.545 | 2 hrs | 2 hrs | 0.2 | 0.2 | 0.109 | 0.109 |
| | Support personnel (SP) | Time | Initial time to train SP | 0.001 | 7 days | 8 days | 0.45 | 0.35 | 0.00045 | 0.00035 |
| | | | Frequency of training | 0.005 | Quarterly | Quarterly | 0.5 | 0.5 | 0.0025 | 0.0025 |
| | | | Time per training session | 0.003 | 1 day | 1 day | 0.75 | 0.75 | 0.00225 | 0.00225 |
| | | Number | % Change in SP needed | 0.046 | 0% | -15% | 0.7 | 0.9 | 0.0322 | 0.0414 |
| Total | | | | 1.000 | | | | | 0.35595 | 0.355 |
Appendix 2.10 IA on Resource Cost (ForceNet)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Life Cycle | Initial acquisition | | Computer system | 0.134 | $0.8M | $1.0M | 0.075 | 0 | 0.01005 | 0 |
| | | | Physical construction | 0.133 | $0.0M | $5.0M | 1 | 0 | 0.133 | 0 |
| | Recurring | | Normalized cost per year | 0.133 | $0.1M | $0.2M | 0.5 | 0 | 0.0665 | 0 |
| Personnel | User | | Time needed to learn IA strategy | 0.545 | 2 hrs | 4 hrs | 0.2 | 0.1 | 0.109 | 0.0545 |
| | Support personnel (SP) | Time | Initial time to train SP | 0.001 | 7 days | 10 days | 0.45 | 0.25 | 0.00045 | 0.00025 |
| | | | Frequency of training | 0.005 | Quarterly | Quarterly | 0.5 | 0.5 | 0.0025 | 0.0025 |
| | | | Time per training session | 0.003 | 1 day | 1 day | 0.75 | 0.75 | 0.00225 | 0.00225 |
| | | Number | % Change in SP needed | 0.046 | 0% | 10% | 0.7 | 0.1 | 0.0322 | 0.0046 |
| Total | | | | 1.000 | | | | | 0.35595 | 0.0641 |
Appendix 2.11 IA on Resource Cost (Intelligent Software Decoy)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline | New | Baseline Value | New Value | Baseline Total | New Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Life Cycle | Initial acquisition | | Computer system | 0.134 | $0.8M | $1.0M | 0.075 | 0 | 0.01005 | 0 |
| | | | Physical construction | 0.133 | $0.0M | $0.0M | 1 | 1 | 0.133 | 0.133 |
| | Recurring | | Normalized cost per year | 0.133 | $0.1M | $0.1M | 0.5 | 0.5 | 0.0665 | 0.0665 |
| Personnel | User | | Time needed to learn IA strategy | 0.545 | 2 hrs | 2 hrs | 0.2 | 0.2 | 0.109 | 0.109 |
| | Support personnel (SP) | Time | Initial time to train SP | 0.001 | 7 days | 8 days | 0.45 | 0.35 | 0.00045 | 0.00035 |
| | | | Frequency of training | 0.005 | Quarterly | Quarterly | 0.5 | 0.5 | 0.0025 | 0.0025 |
| | | | Time per training session | 0.003 | 1 day | 1 day | 0.75 | 0.75 | 0.00225 | 0.00225 |
| | | Number | % Change in SP needed | 0.046 | 0% | -15% | 0.7 | 0.9 | 0.0322 | 0.0414 |
| Total | | | | 1.000 | | | | | 0.35595 | 0.355 |
Appendix 2.12 IA on Resource Cost (Laser Communication)

| Objective | Sub-objective 1 | Sub-objective 2 | Measure | Weight | Baseline Value | New Value | Baseline Score | New Score | Baseline Total | New Total |
| Life Cycle Acquisition | Initial | | Computer System | 0.134 | $0.8M | Significant | 0.075 | 0 | 0.01005 | 0 |
| Life Cycle Acquisition | Initial | | Physical Construction | 0.133 | $0.0M | Significant | 1 | 0 | 0.133 | 0 |
| Life Cycle Acquisition | Recurring | | Normalized cost per Year | 0.133 | $0.1M | $1.0M | 0.5 | 0 | 0.0665 | 0 |
| Personnel | User | | Time needed to train users | 0.545 | 2 hrs | 2 hrs | 0.2 | 0.2 | 0.109 | 0.109 |
| Personnel | Support Personnel (SP) | Time | Initial Time to train SP | 0.001 | 7 days | 7 days | 0.45 | 0.45 | 0.00045 | 0.00045 |
| Personnel | Support Personnel (SP) | Time | Frequency of training | 0.005 | Quarterly | Quarterly | 0.5 | 0.5 | 0.0025 | 0.0025 |
| Personnel | Support Personnel (SP) | Time | Time per training session | 0.003 | 1 day | 1 day | 0.75 | 0.75 | 0.00225 | 0.00225 |
| Personnel | Support Personnel (SP) | Number | %Change in SP needed | 0.046 | 0% | 0% | 0.7 | 0.7 | 0.0322 | 0.0322 |
| Total | | | | 1.000 | | | | | 0.35595 | 0.1464 |
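The weighted totals in the resource-cost tables above follow a simple additive value model: each measure's total is its weight multiplied by its normalized score, and the bottom-line value is the sum across all measures. A minimal sketch of this roll-up, using the "new" column of the ForceNet table (Appendix 2.10); the function name is ours, not part of the study:

```python
# Additive value model used in the IRC tables: total = weight * score,
# summed over all measures. Values below are the "New Score" column of
# the ForceNet table in Appendix 2.10.
measures = [
    # (measure, weight, new score)
    ("Computer System",                   0.134, 0.0),
    ("Physical Construction",            0.133, 0.0),
    ("Normalized cost per Year",         0.133, 0.0),
    ("Time needed to learn IA strategy", 0.545, 0.1),
    ("Initial Time to train SP",         0.001, 0.25),
    ("Frequency of training",            0.005, 0.5),
    ("Time per training session",        0.003, 0.75),
    ("%Change in SP needed",             0.046, 0.1),
]

def irc_value(rows):
    """Sum of weight * score; the weights must sum to 1."""
    assert abs(sum(w for _, w, _ in rows) - 1.0) < 1e-9
    return sum(w * s for _, w, s in rows)

print(round(irc_value(measures), 4))  # 0.0641, matching the table's New Total
```

The same roll-up reproduces every other total in Appendices 2.10 through 2.12, which makes the tables easy to audit.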
APPENDIX 3
Appendix 3 Operations Research Testing and Evaluation
1.1 Background
As part of the systems engineering integration effort, the Operations Research Track was tasked
to develop a Joint Test and Evaluation Model. The following is the Information Assurance
Track’s segment that spells out the relevant critical operational issues (COI), measures of
effectiveness (MOE) and measures of performance (MOP).
1.2 Purpose
The IA Plan serves to support the defensive information operation component of the Sea Base.
With the expected widespread adoption of information technology in the Sea Base, there will be
considerable dependency and hence the need to ensure the confidentiality, integrity and
availability of these systems. It is not inconceivable that a future threat in the information sphere
may involve the capability to compromise these elements of information assurance via external
and internal (insider) attacks if defensive measures are not appropriately in place.
2. System Description
The IA Plan examines the current Navy IA Policy to seek out challenges that may well result in
weak implementations. Various mitigation measures are therefore recommended to overcome
these potential challenges and to ensure relevancy for the Sea Base.
In addition, a technology review is also performed to examine various emerging technologies for
applicability in protecting Sea Base information systems in 2015. Specifically, the review examines
weaknesses of current technical IA measures and how these emerging technologies can
improve upon or overcome them.
| No | Technology | Capability |
| 1 | E-Bomb | Protective measures to harden physical system against electromagnetic attacks. |
| 2 | Physical access controls - Biometrics | Non-intrusive identification and authentication. |
| 3 | Laser communication | Narrow-beam, high bandwidth transmission to reduce susceptibility to interception and jamming. |
| 4 | Secure tunnels | Secure transmission over shared medium and detection of interception attempts. |
| 5 | Intrusion prevention and immune computer system | Detection and prevention against known and novel forms of intrusions. |
| 6 | Intelligent software decoy | Deception approach to contain intrusion attempts. |
| 7 | System redundancy - ForceNet | Improved availability through large-scale distributed system redundancy. |
| 8 | Security through obscurity | Augmentation approach to supplement the Defense-in-Depth strategy. |
| 9 | SimSecurity | Novel approach for user training on security. |
3. Mission Need Statement
The mission area is in the information sphere and encompasses the requirement of enhancing the
confidentiality, integrity and availability of Sea Base systems. The IA policy is applicable for all
Sea Base information systems. Consequently, it has to be consistently applied to all systems
being developed for protecting the Sea Base. The emerging IA technologies apply in specific
contexts and serve as solution "templates" to be inserted into target information-based systems
under development, improving their overall security posture. Collectively, these technologies
meet the need to protect against both internal and external attacks, to counter known and novel
attack forms, and to prevent, detect, and contain such attacks.
4. Operational Requirements
a. Identification and authentication of users.
b. Prevention and minimization of both internal and external attacks.
c. Maximize the fundamental attributes of IA, namely confidentiality, integrity and
availability, with respect to specified requirements.
d. Around the clock detection of known and novel forms of attacks.
e. Containment and flexibility in response to successful penetrations.
f. Heightened awareness of IA and conformance to IA policies.
5. Dendritics
Figure C1: Information Assurance Analysis Model
Information Assurance (IA) Hierarchy
The IA hierarchy measures the ability of the system and system personnel to assure information,
information systems, and information processes [Beauregard 2001]. Information Assurance is a
process that involves the ability to protect information and information systems (IS), detect
events that may interfere with information or IS, and properly react to situations where
information or IS may have been compromised.
The entire IA value hierarchy is given as Figure C2, with Information and IS Protection,
Detection, and Reaction composing the highest sub-tier of values.
Figure C2: Information Assurance (IA) Value Hierarchy
Information and IS Protection is defined to be the measures taken to ensure that information and
information systems are protected from unauthorized change. This includes assuring information
and IS availability, confidentiality, and integrity.
Detection is defined to be the ability of the system or system personnel to detect an event. For
an organization to gain value from its detection capabilities, detection must occur quickly,
accurately, and at a sufficient level. Detection is therefore separated into three sub-values:
Timeliness, Accountability, and Flexibility.
Reaction is defined to be measures taken to (1) appropriately respond to an identified attack, (2)
restore the information and IS capabilities to an acceptable state, their original state, or an
improved state, and (3) the ability to learn from previous events so that they do not cause damage
in the future. Reaction is thus separated into Respond, Restore, and Adapt.
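The tiered value structure described above (Protection, Detection, and Reaction, each with its own sub-values) lends itself to a bottom-up weighted roll-up. The sketch below illustrates only the mechanics; the weights and leaf scores are illustrative placeholders, not values from this study:

```python
# IA value hierarchy in the style of Figure C2, scored bottom-up: a leaf
# holds a normalized score in [0, 1]; a branch's score is the weighted
# sum of its children. All weights and scores below are placeholders.
ia_hierarchy = {
    "Protection": (0.4, {"Availability":    (0.4, 0.8),
                         "Confidentiality": (0.3, 0.9),
                         "Integrity":       (0.3, 0.7)}),
    "Detection":  (0.3, {"Timeliness":      (0.4, 0.6),
                         "Accountability":  (0.3, 0.5),
                         "Flexibility":     (0.3, 0.7)}),
    "Reaction":   (0.3, {"Respond":         (0.4, 0.6),
                         "Restore":         (0.3, 0.8),
                         "Adapt":           (0.3, 0.4)}),
}

def score(node):
    """Weighted roll-up: branches are dicts of (weight, child); leaves are floats."""
    if isinstance(node, dict):
        return sum(w * score(child) for w, child in node.values())
    return node

ia_value = score(ia_hierarchy)  # overall IA value for these placeholder inputs
```

Swapping in the study's actual weights and measured scores would yield the IA column used in the trade-off against IOC and IRC.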
Impact of IA on System Operational Capability (IOC)
The impact that an information assurance strategy will have on the system’s operational
capability must be considered when determining what IA strategy or strategies are best for a
given organization. The purpose of an information system is to help personnel accomplish their
mission in a more efficient manner; if the system cannot do this effectively then it is not a useful
system. However, if the user cannot trust that information in the system is available, accurate,
updated, and secure, then the user’s willingness to depend on the system is greatly decreased.
There is a fine balance between information assurance and system operational capability that
must exist in order to have a secure but usable system.
Figure C3: Impact of Information Assurance on System Operational Capability (IOC) Value Hierarchy

As shown in Figure C3 above, the main parameters taken into IOC consideration are efficiency,
functionality, convenience, ease of implementation, and flexibility.
Impact of IA on Resource Costs (IRC)
The final consideration when determining what IA strategies to implement is the impact the
strategy will have on resources. Resource Costs are both the fiscal and manpower costs that an
IA strategy will require. All other things being equal, the strategy that requires the least amount
of resources, either financially or with respect to personnel time, will be preferred. The complete
IRC value hierarchy is presented as Figure C4.
Life Cycle Acquisition Costs is the dollar cost needed to implement and maintain an IA strategy
over its lifetime. As in any acquisition, an IA strategy that costs the least to
acquire, implement, and maintain will be valued higher than more expensive strategies, assuming
that they provide an equal amount of assurance. Along with a dollar cost, implementing and
maintaining an IA strategy will certainly consume organizational manpower. Users and support
personnel are again separated since they are valued differently when considering information
assurance strategies, with preference again given to the user.
Figure C4: Impact of Information Assurance on Resource Costs (IRC) Value Hierarchy
6. Critical Operational Issues
1. Prevention. Can the system prevent all forms of successful information attack
with minimum resources?
2. Detection. In the event that a penetration has taken place, can the system detect
the attack?
3. Containment and response. Can the system minimize the effects and degradation
of the system following a successful penetration and respond to it?
4. Availability and Recovery. To what extent can the system maintain operational
availability, and how quickly and accurately can a damaged system be restored to permit the
system to resume normal operations?
7. MOE and MOP
CI 1. Prevention
MOE 1.1 Confidentiality and intrusion prevention
MOP 1.1.1 Percentage of intrusions prevented.
MOP 1.1.2 Number of illegitimate/unintended accesses to protected information.
MOP 1.1.3 Time needed for interceptor to decode encrypted data.
MOE 1.2 Identification and authentication
MOP 1.2.1 Rate of false acceptance
MOP 1.2.2 Rate of false rejection
MOP 1.2.3 Time taken to authenticate with a system to begin operations.
MOE 1.3 Conformance
MOP 1.3.1 Percentage of compliance defects (in conformance to security policy).
MOP 1.3.2 Frequency of security audit and review.
MOP 1.3.3 Operations time taken away to train a user to reach required proficiency.
MOE 1.4 Processing overheads
MOP 1.4.1 Percentage of network tied-up by encryption overheads.
MOP 1.4.2 Processing time for encryption.
MOP 1.4.3 Processing time for decryption.
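MOPs 1.2.1 and 1.2.2 can be computed directly from labeled authentication trials. A minimal sketch; the data layout and function name are our own, not prescribed by the study:

```python
# False acceptance rate (MOP 1.2.1) and false rejection rate (MOP 1.2.2)
# from labeled authentication trials. Each trial records whether the
# subject was a legitimate user and whether the system accepted them.
def far_frr(trials):
    """trials: list of (is_legitimate_user, was_accepted) boolean pairs."""
    impostors  = [t for t in trials if not t[0]]
    legitimate = [t for t in trials if t[0]]
    far = sum(1 for _, accepted in impostors if accepted) / len(impostors)
    frr = sum(1 for _, accepted in legitimate if not accepted) / len(legitimate)
    return far, frr

# Illustrative data: 100 legitimate attempts (5 wrongly rejected) and
# 100 impostor attempts (2 wrongly accepted).
trials = ([(True, True)] * 95 + [(True, False)] * 5
          + [(False, False)] * 98 + [(False, True)] * 2)
far, frr = far_frr(trials)  # far = 0.02, frr = 0.05
```

For a biometric access-control technology, these two rates trade off against each other, so both MOPs must be reported together.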
CI 2. Detection
MOE 2.1 Timeliness
MOP 2.1.1 Time to detect an internal physical attack.
MOP 2.1.2 Time to detect an external physical attack.
MOP 2.1.3 Time to detect an internal electronic attack.
MOP 2.1.4 Time to detect an external electronic attack.
MOE 2.2 Accountability and forensics
MOP 2.2.1 Percentage of attacks detected.
MOP 2.2.2 Rate of false intrusion alerts.
MOP 2.2.3 Percentage of attacks accurately categorized.
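MOPs 2.2.1 through 2.2.3 can similarly be derived from an event log carrying ground-truth labels. A minimal sketch with an illustrative data layout (field and function names are ours):

```python
# Detection MOPs 2.2.1-2.2.3 from a labeled event log. Each event notes
# the ground truth (attack or benign), whether an alert fired, and, for
# detected attacks, whether the assigned category was correct.
from dataclasses import dataclass

@dataclass
class Event:
    is_attack: bool
    alerted: bool
    category_correct: bool = False  # only meaningful for detected attacks

def detection_mops(events):
    """Assumes at least one attack, one benign event, and one detection."""
    attacks  = [e for e in events if e.is_attack]
    benign   = [e for e in events if not e.is_attack]
    detected = [e for e in attacks if e.alerted]
    pct_detected     = len(detected) / len(attacks)                      # MOP 2.2.1
    false_alert_rate = sum(e.alerted for e in benign) / len(benign)      # MOP 2.2.2
    pct_categorized  = sum(e.category_correct for e in detected) / len(detected)  # MOP 2.2.3
    return pct_detected, false_alert_rate, pct_categorized

# Illustrative log: 10 attacks (8 detected, 6 categorized correctly)
# and 90 benign events (3 false alerts).
events = ([Event(True, True, True)] * 6 + [Event(True, True)] * 2
          + [Event(True, False)] * 2 + [Event(False, False)] * 87
          + [Event(False, True)] * 3)
pct_detected, false_alert_rate, pct_categorized = detection_mops(events)
```

Reporting the false-alert rate alongside the detection percentage matters: a detector tuned for coverage alone can swamp operators with spurious alerts.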
CI 3. Containment and response
MOE 3.1 Flexibility
MOP 3.1.1 Average time the system is able to continue to engage the attacker (while
formulating a response).
MOE 3.2 Updatability
MOP 3.2.1 Average time needed to generate and propagate new rules to guards (e.g.
firewall, intrusion prevention systems, anti-virus software, etc).
CI 4. Availability and Recovery
MOE 4.1 Availability
MOP 4.1.1 Operational availability - percent of system uptime during operational use.
MOP 4.1.2 Average time needed to resume system operations.
MOE 4.2 Restoration
MOP 4.2.1 Average time needed to restore system.
MOP 4.2.2 Percentage of data correctly and successfully restored.
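MOP 4.1.1 is the standard operational-availability ratio: uptime divided by total operational time. A minimal sketch (the period length and outage figures are illustrative):

```python
# Operational availability (MOP 4.1.1): fraction of the operational
# period during which the system was up. All times are in hours.
def operational_availability(period_hours, outage_hours):
    """Ao = (period - total downtime) / period."""
    downtime = sum(outage_hours)
    return (period_hours - downtime) / period_hours

# Illustrative month: 720 operational hours with two outages totalling
# 7.2 hours of downtime.
ao = operational_availability(720.0, [2.0, 5.2])  # approximately 0.99
```

MOPs 4.1.2 and 4.2.1 (average time to resume and to restore) feed directly into the downtime term here, which is why recovery speed dominates availability for systems that rarely fail but fail hard.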
INITIAL DISTRIBUTION LIST

1. Prof Karen Burke
   Department of Computer Science
   Naval Postgraduate School
   Monterey, California

2. Dudley Knox Library
   Naval Postgraduate School
   Monterey, California

3. Temasek Defence Systems Institute
   Block E1, #05-05
   1 Engineering Drive 2
   Singapore 117576

4. Mr Kwok Chee Khan James
   Defence Science & Technology Agency
   Singapore

5. Mr Kwok Vi-Keng David
   Defence Science & Technology Agency
   Singapore

6. Mr Neo Soo Sim Daniel
   Defence Science & Technology Agency
   Singapore

7. CPT Saw Tee Huu
   Singapore Armed Forces
   Singapore

8. MAJ Tan Boon Hwee Nicholas
   Singapore Armed Forces
   Singapore

9. LTC Tan Kheng Lee Gregory
   Singapore Armed Forces
   Singapore

10. Mr Teo Tiat Leng
    Defence Science & Technology Agency
    Singapore