THE USE OF INTEGRATED
ARCHITECTURES TO SUPPORT AGENT
BASED SIMULATION: AN INITIAL
INVESTIGATION
THESIS
Andrew W. Zinn, Captain, USAF
AFIT/GSE/ENY/04-M01
DEPARTMENT OF THE AIR FORCE AIR UNIVERSITY
AIR FORCE INSTITUTE OF TECHNOLOGY
Wright-Patterson Air Force Base, Ohio
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED
The views expressed in this thesis are those of the author and do not reflect the official
policy or position of the United States Air Force, Department of Defense, or the United
States Government.
THE USE OF INTEGRATED ARCHITECTURES TO SUPPORT AGENT BASED SIMULATION: AN INITIAL INVESTIGATION
THESIS
Presented to the Faculty
Department of Aeronautics and Astronautics
Graduate School of Engineering and Management
Air Force Institute of Technology
Air University
Air Education and Training Command
In Partial Fulfillment of the Requirements for the
Degree of Master of Science in Systems Engineering
Andrew W. Zinn, BS
Captain, USAF
March 2004
DISTRIBUTION STATEMENT PENDING
THE USE OF INTEGRATED ARCHITECTURES TO SUPPORT AGENT BASED
SIMULATION: AN INITIAL INVESTIGATION
Andrew W. Zinn, BS
Captain, USAF
Approved:

____________________________________   ____________
David R. Jacques (Chairman)            date

____________________________________   ____________
John O. Miller (Member)                date

____________________________________   ____________
Larry B. Rainey (Member)               date
Abstract
Architecture descriptions following the DoD Architectural Framework (DoDAF)
will be used to aid in the acquisition of all major defense information systems. One of
the primary purposes of these architectures is to help conduct military-worth analysis.
Recent work in operations analysis of information driven combat is showing that agent
based simulation technology is needed to understand the military value of Command,
Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance
systems. This thesis investigates the utility of architecture descriptions based on the
DoDAF to provide the needed data for agent based simulation. This is accomplished by
means of a case study where data from a proposed Air Operations Center architecture is
used in the combat model System Effectiveness Analysis Simulation (SEAS). The
research concludes that the DoDAF, if implemented properly, does provide the needed
information. A process for taking information from a DoDAF architecture and importing
it into agent based simulation is proposed.
To my wife
Acknowledgments
I would like to thank my advisor, Dr. David Jacques, for his guidance throughout
this effort. Comments and suggestions for improvement by Dr. Jacques, as well as Dr.
J.O. Miller, Lt Col Colombi and Dr. Larry Rainey contributed greatly to the quality of
this document and the research in general. I would especially like to thank my sponsor,
Mr. Robert Weber, who had the foresight and vision to propose this topic. His expertise
in combat simulation and systems development was a key resource, and the background
information he provided was indispensable. I would also like to thank Jay Vittori, Dr.
Tom Pawlowski, Steve Ring, Scott Surer, Lt Col Mark Hall, and others in the system
architectures community who took the time to lend me advice and guidance.
Greg DeStefano, my partner on this research, deserves the credit for turning
theory into results. A special thanks goes out to Greg for his hard work, analyst insight,
and modeling skills.
Finally, I would like to thank my wife. Her encouragement and support helped
me keep things in perspective when the going got tough.
Andrew W. Zinn
Table of Contents
Abstract
Table of Contents
List of Figures
List of Tables
I. Introduction and Problem Statement
1.1 Introduction
1.2 Problem Formulation
1.2.1 General Problem
1.2.2 Specific Problem
1.3 Relevant Activity
1.4 Scope of Work
II. Background
2.1 Background on Architectures in the DoD
2.1.1 The DoD Architectural Framework Version 1.0
2.1.2 The CADM and DARS
2.1.3 DoDAF Guidance
2.2 Popkin System Architect
2.2.1 IDEF3 Notation
2.3 The AOC Architecture
2.4 The C4I Support Plan
2.5 Background on DoD Modeling and Simulation
2.5.1 SEAS
2.6 Summary
III. Methodology
3.1 Introduction
3.2 Agents in SEAS
3.3 The Kosovo War File
3.4 Applying Previous Research to the Case Study
3.5 Fitness of the DoDAF to support Agent Based Simulation
3.6 Choosing the Architecture(s)
3.7 Setting up the Case Study
3.7.1 Mapping the General Attributes
3.7.2 Mapping the Communications
3.7.3 Mapping the Orders
3.8 Summary
IV. Results and Analysis
4.1 Results
4.1.1 General Attributes
4.1.2 Communications
4.1.3 Orders
4.1.4 The Modified SEAS War File
4.1.5 Using the New War File
4.2 Analysis
4.2.1 Missing Pieces
4.2.2 Potential for Automation
4.2.3 Comparison to DoDAF's "Architectural Uses" Figure 2-2
V. Conclusions and Recommendations
5.1 Conclusions
5.2 Recommendations
5.2.1 Recommendations for Architecture Developers
5.2.2 Recommendations for Tool Vendors
5.2.3 Recommendations for Further Research
Appendix A: DoDAF Products list (from section 2.2 of Volume II)
Appendix B: DoDAF's Architecture Products by Use (from section 2.2 of Volume II)
Bibliography
Vita
List of Figures
Figure 1 Requirements and Acquisition Process Depiction (DoD 5000.2, 2003:3)
Figure 2 Architecture Impact Areas (AF-CIO/A https://cao.hanscom.af.mil/af-cio.htm)
Figure 3 Using Architectures in SE and Acquisition (Dickerson and Soules, 2000)
Figure 4 EAMA Viewchart (Pawlowski et al, 2003:slide 24)
Figure 5 Zachman Framework (www.zifa.com)
Figure 6 Three Views of Architecture (DoDAF, 2003)
Figure 7 IDEF0 Model Syntax (Levis, 2003:3-26)
Figure 8 Integrated Architectures (DoDAF Vol I, 2003:Fig 3-3)
Figure 9 Building Architectures (DoDAF Volume I, 2003:5.2)
Figure 10 IDEF3 Symbols (Mayer et al, 1995:22)
Figure 11 Example of IDEF3 diagram (Mayer et al, 1995:40)
Figure 12 Agent Hierarchy (from DeStefano, 2004:2.3.3.1)
Figure 13 SEAS Top-Level Event Processing (Gonzales et al, 2001:56)
Figure 14 The 3 Entities in SEAS (www.teamseas.com SEAS_Agents.ppt:3)
Figure 15 A Conceptual SEAS Agent (www.teamseas.com SEAS_Agents.ppt:8)
Figure 16 The Synthesis Phase (Levis and Wagenhals, 2000)
Figure 17 Mapping General Attributes from a DoDAF Architecture to SEAS
Figure 18 Sample SV-2 (DoDAF Volume II, 2003:5-13)
Figure 19 Mapping Communications from a DoDAF Architecture to SEAS
Figure 20 Example of IDEF3 diagram (Mayer et al, 1995:40)
Figure 21 Sample OV-5 diagram for "Select Contractor"
Figure 22 Example of OV-5 Report + Pseudocode from OV-6a
Figure 23 Mapping Orders from a DoDAF Architecture to SEAS
Figure 24 Mapping DoDAF Products to Agent Based Simulation
Figure 25 OV-5 Activity Report Generated by Popkin SA for TCT Activity 6
Figure 26 IDEF3 diagram for "Conduct Dynamic Assessment of Target" (TCT 2005 Architecture, 2003:OV-6a)
Figure 27 Comparison of Baseline vs. Modified War File - Kosovar Casualties (DeStefano, 2004:4.3)
Figure 28 Sorties Flown - TCT vs. Baseline War File (DeStefano, 2004:4.3)
x
List of Tables
Table 1 Popkin DoDAF Product Implementation (Popkin, 2003:slide 45)
Table 2 Sample SV-6
Table 3 Modified Sample SV-6
Table 4 System Data Exchange Spreadsheet (AOC Architecture, 2003:SV-6)
Table 5 Modified Information Exchange Matrix (AOC Architecture, 2003:OV-3 with SV-2)
Table 6 Paired T-Test Results for "All Agents Killed" (DeStefano, 2004:4.3)
THE USE OF INTEGRATED ARCHITECTURES TO SUPPORT AGENT BASED
SIMULATION: AN INITIAL INVESTIGATION
I. Introduction and Problem Statement
1.1 Introduction

The Department of Defense has committed itself to the use of integrated
architectures to aid in the development of all future weapons systems. These
architectures must follow the guidelines laid out in the Command, Control,
Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture
Framework (soon to be superseded by the DoD Architecture Framework v 1.0, known as
the DoDAF). The C4ISR Framework version 2.0 states that, “The purpose of C4ISR
architectures is to improve capabilities by enabling the quick synthesis of ‘go-to-war’
requirements with sound investments leading to the rapid employment of improved
operational capabilities and enabling the efficient engineering of warrior systems.”
Recently, the DoD revised the acquisition policy memorandums, known as the 5000
series. DoD 5000.1, in reference to the interrelationships between weapons systems,
makes it policy that “Joint concepts and integrated architectures shall be used to
characterize these interrelationships.” (DoD 5000.1, 2003:5). The new DoD 5000.2,
entitled “Operation of the Defense Acquisition System” explicitly states the DoD policy
toward architectures:
The Under Secretary of Defense (Acquisition, Technology, and Logistics) (USD(AT&L)), the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD(C3I)), the Joint Staff, the Military Departments, the Defense Agencies, Combatant Commanders, and other appropriate DoD Components shall work collaboratively to develop joint integrated architectures for capability areas as agreed to by the Joint Staff. (DoD 5000.2, 2003:2)

The 5000.2 goes on to say that these architectures should be used for developing
roadmaps to “conduct capability assessments, guide systems development, and define the
associated investment plans as the basis for aligning resources and as an input to the
Defense Planning Guidance, Program Objective Memorandum Development (POM), and
Program and Budget Reviews” (DoD 5000.2, 2003:3). In other words, architectures will
provide much of the foundation upon which the rest of the acquisition process depends.
Figure 1 illustrates the role of architectures in system development. Exactly how
architectures will be used in this process is the subject of ongoing research.
Figure 1 Requirements and Acquisition Process Depiction (DoD 5000.2, 2003:3)
The defense community’s commitment to the use of architectures in system
development is not only backed up by policy but also by law, including the Clinger-Cohen
Act and the Office of Management and Budget’s Circular A-130, Management of
Federal Information Resources. A complete list of architectural “Governance and
Guidance” can be found in section 2.1 of the Air Force Enterprise Architecture
Framework Version 2.0 or in section 2 of the new DoDAF, Volume I. A tell-tale
indication that architectures will be a part of future systems planning comes out of the
recent DoD transformation movement. As part of the transformation, the Joint
Capabilities Integration and Development System (JCIDS) was created to replace the old
Requirements Generation System. The new JCIDS points to integrated architectures as
the primary means to evaluate systems prior to milestone decision points. Furthermore,
the DoD states that integrated architectures will completely replace the Capstone
Requirement Document once architectures “are fully available and mature” (DoD
Business Transformation Brief, 2003:slide 13). As was shown in Figure 1, architectures
will be required for every high level review in a product’s life cycle as, “Integrated
architecture products must be included in mandatory appendixes for the Initial
Capabilities Document (ICD), Capability Development Document (CDD), and Capability
Production Document (CPD)” (DoDAF, 2003:2.3.2). Clearly, the DoD recognizes that
integrated architectures will provide the foundation for JCIDS analysis in the future. The
means by which this analysis will be performed are still being investigated.
The architecture mandate is well known within the defense community;
nevertheless, the full benefits of using architectures are not. Most often, the complexity
and required interoperability of modern day “systems of systems” is the reason cited
for the use of architectures. This, however, is just one of the benefits of the use of
architectures. The Air Force Chief Architect’s (AFCAO) website lists three key impact
areas where the use of architectures provides real benefit, shown in Figure 2:
Figure 2. Architecture Impact Areas (AF-CIO/A at https://cao.hanscom.af.mil/af-cio.htm)
Often forgotten are the benefits offered in the programming and planning stage, of
which “mil-worth analysis” is a part. The AFCAO lists M&S as a potential way to
harness the information in an architecture to aid in the planning of a system. Furthermore,
the C4ISR Framework v 2.0 states that architectures built under the framework can be
compared to judge military utility and aid decision makers. Dr. Alexander Levis, the
Chief Scientist of the Air Force, notes that this is a worthy goal, but the framework lacks
explicit guidance on how to achieve it. He presents his own ideas on how to do this:
“The derivation of an executable model of the architecture from the three views and the associated integrated dictionary provides a basis for understanding the interrelationships among the various architecture products and establishes the foundation for implementing a process for assessing and comparing architectures” (Levis and Wagenhals, 2000:226).
An executable architecture (referred to by Levis as an executable model) is defined by the
DoDAF as “use of dynamic simulation software to evaluate architecture models”
(DoDAF, 2003:7.3). Levis goes on to assert the value of the executable model, “To fully
describe and understand the dynamic aspects of the system requires an executable model”
(Levis and Wagenhals, 2000:228). Essentially, the static diagrams presented in the
architecture do not adequately convey, “the logical, behavioral, and performance
properties of the architecture.” Thus, to Levis, the creation of the executable model is
essential to be able to portray a dynamic system and measure its utility versus other
systems. To implement the executable model, Levis (along with Handley and Zaidi) has
advocated the use of the Colored Petri Net (CPN) (Handley et al, 2000). CPNs are a
means to create an executable model that
“. . . combines all the information in the various static models or views into one model that can illustrate dynamic behavior: the flow of data, the conditions under which functions are performed, and the order in which they are performed. Behavior analysis and performance evaluation can then be carried out using scenarios consistent with the operational concept.” (Handley et al, 2000:15).
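The token-firing idea behind such an executable model can be illustrated with a minimal sketch. The following is an uncolored Petri net with invented place and transition names; a true CPN additionally attaches typed ("colored") data to each token.

```python
# Minimal sketch of the token-firing idea behind Petri-net executable models.
# Place and transition names are invented for illustration only.

places = {"order_received": 1, "target_assessed": 0, "weapon_assigned": 0}

# Each transition consumes one token from every input place and deposits one
# token in every output place, modeling a single step of an operational activity.
transitions = [
    {"name": "assess_target", "in": ["order_received"], "out": ["target_assessed"]},
    {"name": "assign_weapon", "in": ["target_assessed"], "out": ["weapon_assigned"]},
]

def enabled(t):
    return all(places[p] > 0 for p in t["in"])

def fire(t):
    for p in t["in"]:
        places[p] -= 1
    for p in t["out"]:
        places[p] += 1

# Fire enabled transitions until none remain; the resulting firing sequence is
# the dynamic behavior that static architecture diagrams cannot show.
fired = []
while any(enabled(t) for t in transitions):
    t = next(t for t in transitions if enabled(t))
    fire(t)
    fired.append(t["name"])

print(fired)  # ['assess_target', 'assign_weapon']
```

The firing sequence, not the diagram itself, is what permits behavior analysis and performance evaluation against a scenario.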
1.2 Problem Formulation
1.2.1 General Problem
Research by Levis and Wagenhals has shown that data from architectural
products can be used to generate an executable CPN (Wagenhals et al, 2000). Relatively
little work has been put into taking data from the architectures of future weapon systems
and exercising them in a constructive combat simulation environment. This capability
would have far reaching effects on the ability of planners to evaluate and shape the future
force structure.
1.2.2 Specific Problem
While the CPN is certainly a valuable tool for understanding the dynamic
behavior of a system, it falls short in its ability to model a combat environment where the
rules of engagement (ROE) are changing and the enemy model is learning and evolving.
Clearly, the military worth of a future system must be evaluated in a simulated
combat environment – one in which the enemy has his own architecture. The CPN is not
able to effectively model a variable environment like a battlefield (Klügl et al, 2002).
To address these concerns, one must look toward the Modeling and Simulation
(M&S) community, which has been taking on these issues for decades. Lately, the M&S
community has been undergoing a transformation of its own. Recent work in operations
analysis of information driven combat is showing that agent based simulation technology
is needed to understand the military value of C4ISR systems when matched with suitable
operations concepts (Gonzales et al, 2001 and Weber, 2003). Older “aggregated” models
based on Lanchester equations were not originally designed to model the effects of
C4ISR systems or take into account the opposing forces’ concepts of operations
(CONOPS). More background on this issue will be provided in Chapter Two of this
document. To address this problem, the System Effectiveness Analysis Simulation
(SEAS) was developed for the Air Force’s Space and Missile Systems Center (SMC).
Independent studies have confirmed that SEAS, partially due to its ability to characterize
“hierarchies of interacting agents that can adapt their behavior,” can successfully model
the impact of Information Superiority (IS) on the battlefield (Gonzales et al, 2001:11).
Unfortunately, the accurate representation of C4ISR systems in agent based simulation
software such as SEAS constitutes a significant interface problem. Because agent based
models like SEAS model systems at a high level of detail, programming them “by hand”
requires extensive research and exhaustive testing (Weber, 2003). Given the emergence
of architectures in the DoD mentioned at the beginning of this thesis, it is logical to look
to architectures as a source for the needed simulation data. The research explained below
has looked into doing just that.
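For contrast, the aggregated Lanchester-style attrition model mentioned above can be sketched in a few lines; the force sizes and effectiveness coefficients below are invented for illustration.

```python
# Sketch of the Lanchester "square law", the kind of aggregated attrition model
# the text contrasts with agent based simulation. All numbers are illustrative.

def lanchester_square(blue, red, b_eff, r_eff, dt=0.01):
    """Forward-Euler integration of dB/dt = -r_eff*R, dR/dt = -b_eff*B."""
    t = 0.0
    while blue > 0 and red > 0:
        blue, red = blue - r_eff * red * dt, red - b_eff * blue * dt
        t += dt
    return max(blue, 0.0), max(red, 0.0), t

blue_left, red_left, t_end = lanchester_square(blue=100, red=80, b_eff=0.02, r_eff=0.02)

# With equal effectiveness, the square law predicts the larger force wins with
# about sqrt(100**2 - 80**2) = 60 survivors. Note what is absent: no C4ISR
# systems, no CONOPS, no adapting enemy -- the limitation described above.
print(round(blue_left), round(red_left))  # 60 0
```

The model's entire state is two numbers; there is simply nowhere to represent information flow or enemy behavior, which is the gap agent based simulation fills.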
1.3 Relevant Activity
Supporting Levis’ assertion on the value of the executable model is a study by the
US Navy, which has begun to use architectures not only to aid in the design of new
systems, but also to improve the acquisition process. In a study conducted in 1999, the
Navy showed that C4ISR architectures can be used effectively “to enable a capabilities
based approach to the planning and acquisition of DoD families of systems.” (Dickerson
and Soules, 1999:2). The study documented the Navy’s Time Critical Targeting process
in a Fleet Battle Experiment using DoDAF products. The results were then successfully
used to influence the next POM cycle. Figure 3 summarizes the process used by
Dickerson and Soules. Note that the “3rd Order Analysis: Dynamic Interoperability” is a
key step in forming the acquisition strategy.
Figure 3. Using Architectures in SE and Acquisition (Dickerson and Soules, 2000)
There is a huge push in the simulation community to become more interoperable
with C4ISR systems. The C4ISR Technical Reference Model (TRM) is currently being
developed with the goal of becoming the standard frame of reference between C4ISR and
Modeling and Simulation (M&S) systems (Griffin, 2003:2.2). In a position paper for the
2003 International Conference on Grand Challenges for M&S (ICGCMS ’03), Dr.
Andreas Tolk proposed the grand challenge of “merging of C4ISR and M&S components
into heterogeneous systems supporting the Warfighter” (Tolk, 2003:1). These and other
efforts have the top level goal of increasing interoperability between the two communities,
but are focusing on the interface of existing C4ISR systems with virtual simulations. Left
largely unaddressed is the possibility of building constructive combat simulations based
on architectural data, a capability which would have significant impact on both
communities.
Pertinent research in this area is currently being conducted by Pawlowski, Barr,
and Ring at MITRE Corporation with the Executable Architecture Methodology for
Analysis (EAMA). Recognizing the limits imposed by the static nature of most
architecture products, EAMA is developing a methodology to convert static architecture
products into executable architectures to:
• Support dynamic analysis of a system or capability
• Measure process performance
• Measure organizational work efforts & resource utilization over time
(Pawlowski et al, 2003:slide 5)

EAMA uses architectural products as the foundation for a Petri-Net based executable
model, much like Levis has advocated above. The Petri-Net is executed in a business
process model called Bonapart, which uses a communications network model (NS2) to
simulate communications delays. The goal of all this is to essentially provide a time
delay estimate to Eagle, a top level combat simulation. Figure 4 gives an example of how
this might work for a simulated artillery duel; the horizontal axis represents time. The
figure shows how analysts can use the results to determine Measures of Force
Effectiveness (MOFEs) – in this case the ability to destroy enemy artillery, and Measures
of Effectiveness (MOEs) – in this case the time required to process and conduct counter-
battery fire.
Figure 4 - EAMA's "Model Interactions and Sample Measures of Merit" (Pawlowski, 2003:slide 24)
The EAMA process generates its own Measures of Performance (MOPs) from Bonapart
and NS2 simulations, which were built with the aid of architectural products. For
example, the time it takes to send target information over the network is a potential
bottleneck - it is collected as an MOP, and used to calculate the overall process time
(which is collected as the above mentioned MOE). EAMA allows for a high resolution
look at important details like the identification of communications bottlenecks and areas
for improvement for internal business processes. Furthermore, unlike the isolated Petri-
Net executable models, exercising the architecture in Eagle allows it to be tested against a
simulated enemy. However, since Eagle is a deterministic combat model, the application
of this federation of simulations to the evaluation of C4ISR architectures is not as useful
as one using a stochastic combat model such as SEAS which plays system errors and
flawed perceptions of the various agents. Whenever assessing risk of undesirable
outcomes is important, the analyst needs to start with a stochastic simulation of the
dynamic activity (Weber, 2003). Furthermore, what if one is only interested in a top
level evaluation of a weapons system? In a higher level analysis, MOFEs like “time to
halt” or “loss/kill ratio” take precedence over 2nd and 3rd order MOPs like
communications delays and processing times. Often, the objective is to evaluate how a
new weapons system fits into the existing battlespace. In this case, the performance
parameters of the weapons system could potentially be taken directly from the
architecture. One would expect that the transition from architecture to simulation could,
therefore, be less complex and theoretically go directly to the combat simulation model.
Agent based simulations like SEAS are ideally suited for conducting this top level
analysis of first order effects (Gonzales et al, 2001:54).
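The distinction drawn above between deterministic and stochastic models can be made concrete with a toy Monte Carlo sketch (all numbers invented): replications expose the spread and risk of outcomes, where a deterministic model yields only a single trajectory.

```python
# Toy illustration of why risk assessment needs a stochastic model: Monte Carlo
# replications expose the distribution of outcomes. Numbers are illustrative.

import random

def engagement(pk=0.7, shots=10, seed=None):
    """One replication: count successes out of `shots`, each succeeding with probability pk."""
    rng = random.Random(seed)
    return sum(rng.random() < pk for _ in range(shots))

results = [engagement(seed=i) for i in range(1000)]
mean_kills = sum(results) / len(results)

# The risk metric a deterministic model hides: how often the outcome is
# unacceptably poor (here, fewer than half the targets killed).
p_bad = sum(r < 5 for r in results) / len(results)
print(mean_kills, p_bad)
```

A deterministic run would report only something close to the mean; the replication-level results are what let an analyst bound the probability of an undesirable outcome.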
The above researchers recognize the full potential of architectures can only be
reached through the help of modeling and simulation. While the creation of executable
models has been going on for some time, the debate goes on as to the best way to use
them to aid decision makers in the DoD. The solution will require a collaborative effort
between the systems engineering and operations research community to bridge the gap
between defense systems architectures and M&S.
1.4 Scope of Work
Planners and operations analysts would benefit greatly from a methodology that
would guide the transfer of information from a system’s architectural description into an
agent based simulation for further analysis of military value in specific conflict scenarios.
The combined and coordinated use of the new DoD Architectural Framework Template
(the DoDAF) representations from the engineering community in conjunction with
simulation of operational dynamics in agent based software offers great promise in
creating for the first time a collaborative, quantitative infrastructure which bridges the
current chasm between the system acquisition community and that of the operational
warfighting commands.
This thesis investigates the feasibility of using information from a weapon
system’s DoDAF based architecture to model the system in an agent based simulation.
Popkin System Architect, a commercially available architecting tool, and SEAS are the
primary software involved. Key questions being addressed include, “What data is
required to model a system in SEAS, and do architectures (as defined by the DoDAF)
provide it?”
The problem is attacked in three ways. First, the DoD Architectural Framework is
reviewed to assess its theoretical potential to provide data for agent based simulation.
Next, several current DoD architectures are evaluated to determine if actual architectures
are capturing relevant data required for simulation. One of these architectures will be
chosen to help validate initial theories. Simultaneously, Capt Greg DeStefano will be
researching exactly what is required in order to adequately model a weapons system in an
agent based simulation. Capt DeStefano’s research (DeStefano, 2004) and this research
will be a collaborative effort approaching the problem from the Operational Analyst’s
viewpoint and the Systems Engineer’s viewpoint, respectively. Finally, a case study is
conducted where data from the Air Operations Center (AOC) Architecture will be taken
from Popkin System Architect and imported into a SEAS scenario based upon the
Kosovo conflict. Conclusions are drawn on the best way to conduct the “mapping” of
information from architecture to simulation and the potential for automating the process.
Chapter 2 of this thesis reviews the use of architectures in the DoD and the rise of
agent based simulation. Chapter 2 also provides background on Popkin System Architect
and SEAS. Chapter 3 outlines the proposed methodology that will be used in the case
study, and provides a detailed description of the AOC Architecture and Kosovo SEAS
scenario. Chapter 4 provides the results of the case study. In order to determine if the
changes made to the existing SEAS model were significant, a comparison with the
baseline model is described. Chapter 4 also contains an analysis of the results, including
a look at the potential to automate the process. Chapter 5 will offer recommendations for
improving the tie between architectures and simulation and also proposes future research
in this area.
II. Background

2.1 Background on Architectures in the DoD
The series of events that culminated with the DoD mandate for the use of
architectures started with the lessons learned from the Persian Gulf War of 1991.
According to Dr. Levis, the primary drivers were uncertainty and interoperability (Levis,
2003:1-55). The operations in the Gulf showed a high degree of uncertainty in
requirements due to a constantly changing battlefield. While older platforms were
being used in ways never envisioned, newer technology was being applied piecemeal. In
the words of Dam and Willis, “the systems associated with C4ISR were becoming
increasingly complex and rapidly obsolete due to the explosion of capabilities provided
by the information technology revolution” (Dam and Willis, 2002:1). This contributed to
Levis’s other main driver: interoperability problems. The interoperability problem was
evident in Joint and Coalition operations as well as with communication within the DoD
(Levis, 2003:1-55). A classic example of service-specific systems not talking to each
other was how paper copies of the Air Tasking Order (ATO) had to be flown from the
command center in Riyadh to the decks of aircraft carriers. Clearly, there was a major
problem conducting joint operations. In order to fix this, joint planning was required, but
this was also lacking. According to Dam and Willis, “the services, commands, and DoD
agencies had a penchant for designing and developing their own independent capabilities
and solutions despite extraordinary similarities among the requirements, performance
criteria, and applications contexts. At the end of these independent trails, the resulting
systems would not work well together further complicating difficult joint operations”
(Dam and Willis, 2002:1). The information age had changed the nature of warfare, and
the acquisition process needed to adapt. As Eberhardt Rechtin, one of the leaders in the
field of Systems Architecting, said in 1991:
“The proliferation of information system architectures, as we shall see is accelerating. As a result, some of the newest approaches to architecting have come from information systems as the conventional approaches to searching for solution – trial, error, and analysis – prove inadequate.” (Rechtin, 1991:10)
Of course, trial and error on the battlefield can lead to disaster and loss of life. With the
lessons of the Gulf War fresh on the mind, the DoD began looking for a solution.
It was in this context that the services were studying the science of acquiring and
using information systems. Three important studies were conducted between 1993 and
1995 that pointed to architectures as the answer (Levis, 2003:1-56). Realizing that
architectures had the potential to address the issues of uncertainty and interoperability
and recognizing that architectures should be standardized across the services, the DoD
took the first steps to forming a framework. In an October 1995 Memo, the Deputy
Secretary of Defense stated, “I am directing the acceleration of the development of C4I
integration and architecture efforts through the creation of a DoD-wide C4I Integrated
Product Team,” the goals of which would include, “to define and develop better means
and processes to ensure C4I capabilities most effectively meet the needs of the
warfighters” (Nimz, 2000:8). This direction resulted in the creation of the Architecture
Working Group (AWG). The AWG had the daunting task of figuring out what a DoD
architecture should look like, but they weren’t starting from scratch.
The DoD was not the first organization to apply architectures to the design of
information systems. Rechtin states that, “Systems Architecting is the process of creating
(conceptualizing, designing, and building) unprecedented, complex systems” (Rechtin,
1991). According to IEEE, an architecture is, “the fundamental organization of a system
embodied in its components, their relationships to each other and to the environment and
the principles guiding its design and evolution” (IEEE STD 1471, 2000). The AWG
needed to bring standardization and structure to DoD organizations and services that
“traditionally developed their C4ISR architectures using techniques, vocabularies, and
presentation schemes that suited their unique needs and purposes” (C4ISR AF v 2.0,
1997:1-2). This would be done by creating a framework that would set standards based
on best practices, and allow different architectures to be compared. Several architectural
frameworks were already in existence at the time. The business community often uses
the Zachman Framework, created by IBM researcher John Zachman in the 1980s.
According to Zachman, the framework was created to help companies in “dealing with
the complexities and dynamics of the Information Age Enterprise” (Zachman, 1997).
The framework (shown below) is essentially a matrix of 36 cells representing the what
(data), how (function), where (network), who (people), when (time), and why
(motivation) at 6 different levels from context down to detail.
Figure 5. Zachman Framework (www.zifa.com)
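The grid structure just described can be pictured as a simple lookup keyed by perspective and aspect. This sketch is purely illustrative: the perspective labels are paraphrased, and the cell contents are invented examples, not Zachman's official names.

```python
# Illustrative sketch only: the Zachman Framework as a 6x6 grid keyed by
# (perspective, aspect). Labels are paraphrased; cell contents are invented.
PERSPECTIVES = ["Scope", "Business", "System", "Technology", "Detail", "Functioning"]
ASPECTS = ["What", "How", "Where", "Who", "When", "Why"]

# Each cell names the kind of model that belongs at that intersection.
framework = {(p, a): None for p in PERSPECTIVES for a in ASPECTS}
framework[("Business", "How")] = "business process model"  # hypothetical cell
framework[("System", "What")] = "logical data model"       # hypothetical cell
```

The point of the matrix is completeness: every intersection of a stakeholder perspective with an interrogative has a defined place in the enterprise description.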
While the Zachman and other frameworks had proven themselves in the business
world, the DoD required something different. Most frameworks are set up to aid in the
selling of goods or services for profit; this is not relevant to the DoD. At this time,
Congress passed a comprehensive bill designed to reform and streamline the federal
acquisition system, known as the Information Technology Management Reform Act (or
Clinger-Cohen). The Clinger-Cohen Act mandated the use of architectures in executive-
level departments to “develop, maintain, and facilitate integrated IT (Information
Technology)” (Levis, 2003:1-8). The Office of Management and Budget (OMB) circular
A-130 further defined this mandate by requiring the architectures to have a framework.
Now the use of architectures had not only been recommended but essentially made law.
In June 1996 the AWG released the C4ISR Architectural Framework version 1.0. After a
year and a half of much needed additions and revisions, version 2.0 was released. The
Undersecretary of Defense for Acquisition and Technology then mandated the use of the
new framework in a February 1998 memo saying,
“We see the C4ISR Architecture Framework as a critical element of the strategic direction in the Department, and accordingly direct that all ongoing and planned C4ISR or related architectures be developed in accordance with Version 2.0. Existing C4ISR architectures will be redescribed in accordance with the Framework during appropriate revision cycles.” (USD 23 Feb 1998 Memorandum)

2.1.1 The DoD Architectural Framework Version 1.0
Following the 1998 Memorandum, DoD agencies began building architectures
according to the C4ISR Architectural Framework v 2.0. While the overall response was
positive, there were many suggestions for improvement and requests for more guidance.
Taking these into account, the Architecture Framework Working Group (AFWG)
released a major revision in late 2003. To better reflect the wide applicability of the
framework, the “C4ISR” was dropped and the document was renamed the DoD
Architectural Framework Version 1.0 (hereafter referred to as the DoDAF – which is
currently in “Final Draft” form). Instead of designating certain products as “essential” or
“supporting,” the DoDAF suggests products based on the intended use of the architecture
(and specifies a minimum set required for the architecture to qualify as an “integrated
architecture”). The changes to the framework largely enhance the user friendliness and
better explain the applicability of architectures. For example, in response to the
increasing popularity of the Unified Modeling Language (UML), the DoDAF gives
examples of types of UML representations that can be used for each architectural
product. The core remained largely unchanged, such that minimum effort should be
required to make an architecture created under C4ISR 2.0 compliant with the new
DoDAF. The DoDAF consists of three parts – two volumes and a deskbook. The first
volume contains background, definitions and guidelines while the second volume
describes each product in detail. The DoDAF Deskbook provides supplementary
information, such as processes and implementation recommendations. The DoDAF does
not dictate modeling methodology or format. In fact, Levis and Wagenhals have shown
that architectures can be developed under the framework using Structured Analysis and
Object Oriented approaches (Levis and Wagenhals, 2000).
One challenge in building an architectural framework is dealing with the fact that
different people view a system from different perspectives. The commander of an
operational fighter squadron views his airplane differently than the chief engineer in
charge of its design. The former is concerned with concepts of operation (CONOPS)
while the latter may focus on the integration of subsystems, for example. Yet, they are
both dealing with the same “architecture.” In 1994, the Army Science Board proposed
the solution of having one architecture with three views: Operational, Systems, and
Technical. When data is properly shared across the three views, an “integrated
architecture” is born. According to the DoDAF:
An architecture description is defined to be an integrated architecture when products and their constituent architecture data elements are developed such that architecture data elements defined in one view are the same (i.e., same names, definitions, and values) as architecture data elements referenced in another view. (DoDAF, 2003:1.5)
In other words, the views are consistent. A key task in the engineering of a system is
mapping operational activities to system functions. To show both sides (operational and
systems) an integrated architecture is required. The DoDAF provides for this, as we shall
see. This is one of the reasons that Levis insists that the responsibility for generating
each view not be divided. “There is one architecture with multiple views. Consequently,
choose one Architect responsible for all the architecture views – not an Operational
Architect and a Systems Architect” (Levis, 2003:1-65). The figure below shows the
relationships between the three views.
Figure 6. Three Views of Architecture (DoDAF, 2003)
Some aspects of the architecture encompass each of the three views; they are
called the All-Views (AV) products. This document will give a brief overview of the
products, and a complete list of the products is provided in Appendix A. For more detail,
reference the DoDAF Volume II. The order in which they are presented and numbered
is not necessarily the order in which they are typically created, as will be shown later.
The first AV document (AV-1) is the Overview and Summary Information. The
AV-1 “provides executive-level summary information in a consistent form that allows
quick reference and comparison among architectures” (DoDAF, 2003:3.1.1). An AV-1 is
usually presented in a report format, and must contain the following information: product
definition, purpose, and detailed description. The product detailed description consists of
architecture product identification (e.g., project name, architect’s name), scope, purpose
and viewpoint, context, tools and file formats used, and findings (DoDAF, 2003:3.1.1).
Clearly this is an evolving product, as the “findings” section would not be started until
the effort is well underway. However, it is very important that the scope, purpose and
viewpoint are defined up front. Maier and Rechtin say that “Systems Architecting is a
process driven by a client’s purpose or purposes” (Maier and Rechtin, 2002:10).
According to Levis, a well defined purpose is crucial:
“Having a specific and commonly understood purpose before starting to build an architecture greatly increases the efficiency of the effort and utility of the resulting architecture. The purpose determines how wide the scope needs to be, which characteristics need to be captured, and what timeframes need to be considered. This principle applies equally to the development of an architecture as a whole and to the development of any portion or view of an architecture” (Levis, 2003:2-56).
The new DoDAF supports this in its Figure 2-2 “Architecture Products by Use” matrix
that provides guidance on what products should be developed based on the intended use
of the architecture (Appendix B in this document). Consequently, clearly defining the
purpose of the architecture early (during AV-1 development) will help in creating a
useful architecture and prevent the creation of products that do not add value.
AV-2 is the Integrated Dictionary (sometimes referred to as the Data Dictionary).
It is the repository for all the definitions of terms used in the architecture. The AV-2 also
includes taxonomies (common definitions) and metadata (information about architecture
data types). The AV-2 is crucial to understanding an architecture, as it “enables the set of
architecture products to stand alone, allowing them to be read and understood with
minimal reference to outside resources” (DoDAF, 2003:3.2.1).
Though simultaneous development of multiple views is feasible, most architects
begin with the operational view. According to the DoDAF, the operational view is “a
description of the tasks and activities, operational elements, and information exchanges
required to accomplish DoD missions” (DoDAF, 2003:1-2).
The High-Level Operational Concept Graphic (OV-1) is simply a graphical
representation of the CONOPS, with accompanying descriptive text. The OV-1 is a good
product to show the scope of the architecture (what is internal and what is external) and is
intended to be briefed at high levels in the chain of command.
The Operational Node Connectivity Description (OV-2) graphically depicts
dependencies between nodes to exchange information. The DoDAF describes a node as
an “element of the operational architecture that produces, consumes, or processes
information” (DoDAF, 2003: 4.2.1). Common nodes include organizations, people,
locations, etc. Nodes can be internal or external to the system, and are connected by
needlines that represent one or many information exchanges.
These information exchanges are detailed in the Operational Information
Exchange Matrix (OV-3). It identifies, “who exchanges what information, with whom,
why the information is necessary, and how the information exchange must occur.”
(DoDAF cites CJCSI 6212.01B, 2000). To accomplish this, the matrix relates the
information exchanges to the nodes and activities as well. For this reason, it is usually
created after the activity model, OV-5.
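The relationship just described can be sketched in a few lines; the node, activity, and information names below are invented for illustration, not drawn from any actual architecture.

```python
from dataclasses import dataclass

@dataclass
class InfoExchange:
    # One row of an OV-3: who sends what to whom, supporting which activity.
    sender: str
    receiver: str
    information: str
    activity: str   # leaf activity from the OV-5
    rationale: str  # why the exchange is necessary

# Hypothetical exchanges between two invented operational nodes.
exchanges = [
    InfoExchange("AOC", "Wing Ops", "Air Tasking Order", "Task Missions",
                 "direct daily sorties"),
    InfoExchange("Wing Ops", "AOC", "Mission Report", "Assess Results",
                 "provide battle damage input"),
]

# The OV-2 needlines fall out as the distinct (sender, receiver) pairs.
needlines = {(e.sender, e.receiver) for e in exchanges}
```

Each needline in the OV-2 may aggregate many such OV-3 rows, which is why the matrix is usually built after both the nodes and the activity model exist.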
The Organizational Relationships Chart (OV-4) is the stereotypical “org-chart.”
It is typically a diagram that shows the structure of the operational unit in “tree” format.
The DoDAF does not limit the relationships to supervisory or command, stating,
“Architects should feel free to define any kinds of relationships necessary and important
within their architecture to support the goals of the architecture.” (DoDAF, 2003:4.4.1).
At the heart of the architecture is OV-5, the Operational Activity Model. It
describes the activities that occur to accomplish the mission (as depicted in OV-1). It is a
diagram that contains one or more activities connected by inputs and outputs. The most
common method used to generate an OV-5 is the Integration Definition for Function
Modeling (IDEF0), though the DoDAF does not require it. IDEF0 allows the depiction
of controls and mechanisms for the activities (see FIPS 183 for more information on
IDEF0). Controls are “factors that affect the way that the activity is performed”
(DoDAF, 2003: 4.5.1) and, according to IDEF0 convention, every activity must have at
least one control. Mechanisms are the systems that perform the activities; note that it
may be premature to add these early in the OV development as it may restrict the system
design (DoDAF, 2003:4.5.1). In IDEF0 diagrams, Inputs, Controls, Outputs and
Mechanisms are collectively referred to as ICOM arrows. Like many diagrams in the
framework, the OV-5 can be decomposed into many other diagrams. For example, the
top level activity (called A0) alone would make up the “context diagram.” It may then be
decomposed into three lower level (or “detailed level”) activities, A1, A2, and A3 which
are depicted in another diagram. A1, A2, and A3 are a decomposition of A0. The
activities at the lowest level of decomposition are called the “leaf activities.” Figure 7
shows a decomposed IDEF0 diagram; inputs come in from the left (I1, I2), outputs exit
from the right (O1, O2, O3), controls come in from the top (C1, C2, C3), and mechanisms
come in from the bottom (M1).
Figure 7. IDEF0 Model Syntax (Levis, 2003:3-26)
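The decomposition scheme above can be sketched as a small recursive data structure; the activity names and controls here are hypothetical, not taken from any real OV-5.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    # An IDEF0 activity box with its ICOM arrows and child activities.
    name: str
    inputs: list = field(default_factory=list)
    controls: list = field(default_factory=list)   # IDEF0: at least one required
    outputs: list = field(default_factory=list)
    mechanisms: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def leaves(self):
        # Leaf activities: those with no further decomposition.
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Hypothetical context activity A0 decomposed into A1, A2, A3.
a0 = Activity("A0 Conduct Air Operations", controls=["Rules of Engagement"])
a0.children = [
    Activity("A1 Plan Missions", controls=["Commander's Guidance"]),
    Activity("A2 Execute Missions", controls=["Air Tasking Order"]),
    Activity("A3 Assess Results", controls=["Measures of Effectiveness"]),
]
```

Walking the tree down to `leaves()` yields exactly the leaf activities that feed the OV-3 and, later, functional analysis.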
The leaf activities in the OV-5 would be what the system engineer (SE) uses to
perform a functional analysis. Many in the SE community consider it to be the most
critical product in the DoDAF. The leaf activities from OV-5, combined with the
information on operational nodes from OV-2, join to form the OV-3. A common mistake
made in architectural development is to have one product decomposed down several
layers while other products are neglected. This leads to architectures that are far from
integrated. The DoDAF cautions against this in the case of the activity model, stating,
“The decomposition levels and the amount of detail shown on OV-5 should be aligned
with the operational nodes that are responsible for conducting the operational activities
(shown on corresponding OV-2 products).” (DoDAF, 2003, 4.5.1). This is one of the
steps on the way to an integrated architecture (see Figure 8).
Figure 8. Integrated Architectures (DoDAF Vol I, 2003:Fig 3-3)
The OV-6 products describe the dynamic behavior of the system. According to
the DoDAF: “The dynamic behavior referred to here concerns the timing and sequencing
of events that capture operational behavior of a business process or mission thread for
example” (4.6.1). This is accomplished via the following three documents. The OV-6a
is the Operational Rules Model. These rules are often simple IF-THEN-ELSE statements
tied to activities in the OV-5 (only leaf activities should be described by rules). The
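A rule of the IF-THEN-ELSE kind described above might look like the following sketch, where the leaf activity and its conditions are invented for illustration.

```python
# Hypothetical OV-6a-style rule for an invented leaf activity "Approve Target":
# IF the target is validated AND collateral risk is low, THEN approve,
# ELSE refer the decision to the commander.
def approve_target(validated: bool, collateral_risk: str) -> str:
    if validated and collateral_risk == "low":
        return "approve"
    return "refer to commander"
```

Because such rules attach only to leaf activities, they translate naturally into the condition-action logic that agent based simulations execute.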
Operational State Transition Description (OV-6b) is a diagram that shows various states
the system can be in and the events that cause it to transition between states. A template
often used is the familiar “statechart,” which marks transitions with the “Event [Guard
Condition]/Action” block. Event timing is best described by the OV-6c, the Operational
Event-Trace Description. It models the time history of nodal information exchanges of a
particular scenario. UML sequence diagrams and IDEF3 are the most common formats
used (DoDAF, 2003:4.6.11). A major thrust of this research is to look at the OV-6
products and determine their utility in modeling an architecture using agent based
simulation.
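An OV-6c event trace for a single scenario can be pictured as a time-ordered list of nodal exchanges; all nodes, times, and messages below are invented for illustration.

```python
# Hypothetical OV-6c-style event trace for one scenario:
# (time, sending node, receiving node, information element).
trace = [
    (0.0, "Sensor", "AOC", "Target detected"),
    (2.5, "AOC", "Wing Ops", "Tasking issued"),
    (8.0, "Wing Ops", "AOC", "Mission complete"),
]

# Like a UML sequence diagram, a well-formed trace is ordered in time.
times = [event[0] for event in trace]
assert times == sorted(times)
```

A different scenario gets its own trace, just as the DoDAF requires a separate OV-6c per scenario.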
The last OV diagram is the Logical Data Model (OV-7). The OV-7 “defines the
architecture domain’s system data types (or entities) and the relationships among the
system data types” (DoDAF, 2003:4.7.1). Those familiar with software engineering will
recognize this as the Entity-Relationship (ER) diagram. The IDEF1X convention is
frequently used to model this product.
The Systems View contains eleven products that describe the systems that support
operations. For this reason, work is usually started on the SV products during, but not
before, OV product development. The SV-1, Systems Interface Description, is a good
example of this. It shows the system nodes that support the operational nodes in the OV-
2 (Operational Node Connectivity Diagram) (DoDAF, 2003:5.1.1). It is only logical,
therefore, that the SV-1 would be created after the OV-2.
The Systems Communications Description (SV-2) provides information about
how systems communicate with each other, including the type of media involved.
According to the DoDAF, “SV-2 shows the communications details of SV-1 interfaces
that automate aspects of the needlines represented in OV-2” (5.2.1). Communication is
documented between internal nodes and entities external to the system.
The SV-3 Systems-Systems Matrix provides more information about the system
interfaces diagramed in the SV-1. Like an N2 diagram, systems are arranged in the rows
and columns, and the relation between the systems is shown in the corresponding cell.
This product may be useful for identifying redundancies.
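The N2 arrangement can be sketched as a dictionary keyed by (row, column) system pairs; the systems and interfaces below are invented for illustration.

```python
# Illustrative SV-3 sketch: an N2-style matrix held as a dictionary keyed by
# (row system, column system). System and interface names are invented.
systems = ["AWACS", "AOC", "F-16"]

sv3 = {(a, b): None for a in systems for b in systems if a != b}
sv3[("AWACS", "AOC")] = "surveillance track data"
sv3[("AOC", "F-16")] = "tasking messages"

# Cells left as None indicate no direct interface between the two systems.
defined = {pair for pair, interface in sv3.items() if interface is not None}
```

Scanning the populated cells for identical interface types is one simple way such a matrix can expose redundant connections.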
The rough equivalent to the OV-5 in the systems view is the Systems
Functionality Description (SV-4). Unlike the OV-5, the SV-4 is intended to describe
system functions and often shows the passing of data from system to system. For this
reason, the data flow diagram (DFD) is often the model of choice instead of IDEF0. Like
the OV-5, the SV-4 is often decomposed into several child diagrams to show the
functionality of subsystems.
The Operational Activity to Systems Function Traceability Matrix (SV-5) maps
operational activities to system functions. This activity is often referred to as “functional
allocation” and is a key step in the systems engineering process. As an example, an
automobile’s architecture would allocate the stopping function to the brakes and the
safety function to the seatbelt, airbag, and other components. “Such a matrix allows
decision makers and planners to quickly identify stovepiped systems,
redundant/duplicative systems, gaps in capability, and possible future investment
strategies all in accordance with the time stamp given to the architecture” (DoDAF,
2003:5.5.1). While in development, the ability of a system to fulfill a particular activity
isn’t always black and white. For this reason, the DoDAF outlines a red/yellow/green
“stoplight” system that adjusts depending on the maturity of the system. For the
automobile example, proven safety components like seatbelts and airbags may be marked
with a green light while a new Heads-Up Display that is still on the drawing board may
be marked as yellow.
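Continuing the automobile example, an SV-5 with the stoplight convention might be sketched as follows; all allocations and maturity markings are illustrative.

```python
# Illustrative SV-5 sketch for the automobile example: each operational
# activity maps to (system component, stoplight maturity) pairs.
sv5 = {
    "Stop vehicle":      [("Brakes", "green")],
    "Protect occupants": [("Seatbelt", "green"), ("Airbag", "green"),
                          ("Heads-Up Display", "yellow")],  # still in design
}

def capability_gaps(matrix):
    # Activities with no allocated system component are capability gaps.
    return [activity for activity, allocations in matrix.items() if not allocations]
```

A planner scanning such a matrix can read off gaps, redundancies, and immature allocations at a glance, which is exactly the use the DoDAF cites.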
The Systems Data Exchange Matrix (SV-6) describes the data exchanged between
systems. Because it is an SV product, the emphasis is on automated information
exchanges. At a minimum, the SV-6 usually describes “how the system data exchange is
implemented, in system-specific details covering periodicity, timeliness, throughput, size,
information assurance, and security characteristics of the exchange” (DoDAF,
2003:5.6.1). These are reflected as columns in the matrix.
The Systems Performance Parameters Matrix (SV-7) is simply a listing of
quantitative performance parameters for the systems. They are usually annotated as
threshold and objective, and tied to a specific time frame. Examples of common
parameters include availability, response time, and detection accuracy.
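One SV-7 row can be pictured as a parameter carrying threshold and objective values for a given timeframe; the parameters and values below are invented.

```python
from dataclasses import dataclass

@dataclass
class PerformanceParameter:
    # One SV-7 row: a quantitative parameter with threshold and objective
    # values, tied to a specific timeframe. All values here are invented.
    name: str
    threshold: float
    objective: float
    units: str
    timeframe: str

sv7 = [
    PerformanceParameter("Availability", 0.95, 0.99, "fraction", "FY06"),
    PerformanceParameter("Response time", 10.0, 5.0, "seconds", "FY06"),
]
```

Note that "better" can run in either direction: for availability the objective exceeds the threshold, while for response time it is smaller.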
The SV-8 is the System Evolution Description. It is essentially a timeline that
shows how the system will evolve over time. Often this timeline will denote
milestones such as the release of new versions. Supporting the SV-8’s evolution
description is the SV-9, System Technology Forecast. SV-9 attempts, within reason, to
forecast changes in technology relevant to the architecture. It is typically created in a
spreadsheet.
Just as OV-6a, b, and c represent the dynamic model of the
operational view, SV-10a, b, and c follow the same structure. SV-10a describes the rules
that constrain the system, SV-10b shows system changes in a state transition diagram,
and SV-10c diagrams how system data exchanges occur as time passes. Like the OV-6,
the SV-10 b and c diagrams must pertain to a specific scenario (different scenarios =
different diagrams). The SV dynamic models differ from their OV counterparts,
however, as they deal with system states and system functions rather than operational
states and activities.
The last SV product is the Physical Schema (SV-11). It essentially shows how
the OV-7 will be implemented. The SV-11 can take many forms, including a class
diagram (familiar to those with experience in UML). According to the DoDAF, the SV-11 is one “of the architecture products closest to actual system design in the Framework” (5.11.1).
The Technical View consists of two products. The TV provides general technical
guidance for the OV and SV as well as technical and engineering standards for the SV.
TV-1, the Technical Standards Profile, is a collection of standards that are relevant to the
architecture. For military systems, the Joint Technical Architecture (JTA) is often cited
in TV-1. The Technical Standards Forecast (TV-2) documents how the standards in TV-
1 are likely to change. Because TV-2 and SV-8 both deal with changing technology as
time passes, the DoDAF gives the architect the option of combining the products into one
document (DoDAF, 2003:6.2.1).
2.1.2 The CADM and DARS
Much has been written in the DoDAF on the benefits of “integrated architectures”
that use common data across products. Furthermore, the DoDAF stresses that one of the
major goals of the framework is to ensure interoperability. The DoD realized shortly
after C4ISR 2.0 was released in 1997 that these goals were not being achieved.
Architectures were being created using a myriad of different tools that did not reference a
common database. For example, a drawing tool like Visio might be used to create OV-5
while the OV-3 was created from scratch in Excel. Commercial products were emerging
at that time that used automated tools to create diagrams and used a “repository based”
architecture to store common data. It would be unwise, however, for the DoD to mandate
a specific commercial product. Something needed to be done to help standardize how
architectural data is stored so that comparisons could be made between architectures built
using different tools – the Framework needed a data model of its own. The DoDAF v 1.0
addresses this with the Core Architecture Data Model (CADM). “CADM provides the
logical basis for moving architectures from compendiums of documents, spreadsheets,
and graphics to architecture data that can be stored in architecture data repositories and
manipulated with automated tools” (DoDAF Vol I, 2003:6.2). This is accomplished by
providing a data element table for each product in Volume II of the DoDAF. Once a
DoD architecture is “CADM-compliant,” it may be stored in the DoD Architecture
Repository System (DARS). When operational, users will export their CADM-compliant
architectural data generated in a commercial tool into XML format. XML stands for
“eXtensible Markup Language”; it is non-proprietary and is becoming a worldwide
standard. After being parsed, the XML is uploaded into the DARS repository. From
there, it is available to authorized users who can then reverse the process and view the
architecture in a CADM-compliant product of their choosing.
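As a rough sketch of the XML round trip described above, architectural data could be exported and re-parsed like this; the element and attribute names are invented for illustration, not actual CADM tags.

```python
import xml.etree.ElementTree as ET

# Hypothetical CADM-like XML export of one operational node; the element
# and attribute names are invented, not actual CADM tags.
xml_data = """
<architecture name="AOC">
  <node id="N1" name="Combat Operations Division">
    <exchange to="N2" info="Air Tasking Order"/>
  </node>
</architecture>
"""

root = ET.fromstring(xml_data)
# A repository or simulation tool could rebuild its own structures from this.
nodes = {n.get("id"): n.get("name") for n in root.iter("node")}
```

Because the data model, not the drawing tool, defines these elements, any compliant tool on the receiving end can reconstruct the architecture from the parsed tree.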
2.1.3 DoDAF Guidance
The DoDAF also presents guidance on how to build and eventually use
architectures (Figure 9). This six-step process is to be seen as guidance only, as specific
situations may lead the architect to tailor it.
Figure 9. Building Architectures (DoDAF Volume I, 2003:5.2)
This research, of course, is focused on Step 6, “Use Architecture for Intended
Purpose.” On this, the DoDAF says,
“The architecture description will have been built with a particular purpose in mind. As stated in the discussion of Step 1, the ultimate purpose may be to support investment decisions, requirements identification, system acquisition, interoperability evaluation, operations assessment, or some other purpose. The architecture description facilitates and enables these purposes but does not provide conclusions or answers. For that, human and possibly automated analysis must be applied” (DoDAF Vol I, 2003:5.2).
The transition from the architecture to the so-called “automated analysis” is what
this thesis is investigating. One of the benefits cited by the DoDAF for using the CADM
to standardize repository based architectures is the “ability to use multiple architecture
tools and modeling, simulation, and analysis tools” (6.3). Because these compliant
architectures will all be stored in a common place, using a common language (XML),
with a common data model, the potential exists for an automatic export to M&S tools. In
fact, research has already been done on interfacing XML with M&S by Paul Fishwick
(Fishwick, 2002).
2.2 Popkin System Architect

The previous section mentioned commercial products that aid in architecture
development through the use of a data repository. One such product is System Architect
by Popkin Software from New York City. Popkin Software was founded by Jan Popkin
in 1986 and began providing tools to aid companies with business process modeling
through the use of enterprise architectures. After the C4ISR Architectural Framework
was released, there was demand for a tool that could help generate the myriad of
architectural products and store common data in a central repository. To meet this
demand, Popkin added a C4ISR “option” to System Architect designed specifically for
architectures using the framework. When starting a version of System Architect
purchased with the DoDAF (formerly C4ISR) option, the user is greeted with the familiar
framework matrix that can be used to browse through the OVs, SVs and TVs. System
Architect can be set up so that the architecture’s data repository (referred to by Popkin as
an “encyclopedia”) is resident on a central server and under configuration control. Much
like software development, members of the architecting team can “check out” particular
products for modification. This allows several team members to work on the same
encyclopedia at the same time. Because of this and other features, many DoD
organizations, including the Air Force, designated Popkin System Architect as their
“preferred tool.”
As previously mentioned, the DoD does not specify methodologies, tools, or
processes for architectural products. For example, nowhere in the DoDAF does it say
you must use IDEF0 for OV-5. In response to criticism of C4ISR v 2.0, DoDAF v 1.0
seems to have gone out of its way not to dictate certain methodologies. To aid in
describing certain products, the Framework sometimes shows examples or “templates”
that abide by specific methodologies but cautions, “Where applicable, the templates for
the Framework products reference industry standard methodologies and techniques,
although there is no requirement to comply with the template’s chosen standard”
(DoDAF Vol II, 2003:2.4.4). Therefore, it is up to the architect to determine the form of
the products. Likewise, it is up to the tool vendor to decide which automated modeling
technique to support. The table below shows how Popkin System Architect has chosen to
support the DoDAF.
Table 1. Popkin DoDAF Product Implementation (Popkin, 2003:slide 45)

Product | C4ISR Name                                  | System Architect Solution
AV-1    | Overview and Summary Information            | AV-1 Overview and Summary Information Definition*
AV-2    | Integrated Dictionary                       | The System Architect Repository
AV-3    | Capability Maturity Profile                 | None - Future
OV-1    | High-Level Operational Concept Description  | OV-1 Operational Concept Diagram*
OV-2    | Operational Node Connectivity Description   | OV-2 Operational Node Connectivity Diagram*
OV-3    | Operational Information Exchange Matrix     | OV-3 Operational Information Exchange Report*
OV-4    | Organizational Relationships Chart          | OV-4 Organization Chart Diagram*
OV-5    | Activity Model                              | OV-5 Activity Model Diagram and Node Tree Diagram
OV-6a   | Operational Rules Model                     | OV-6a Operational Rules Model Diagram
OV-6b   | Operational State Transition Description    | OV-6b Operational State Transition Diagram*
OV-6c   | Operational Event/Trace Description         | OV-6c Operational Event/Trace Diagram*
OV-7    | Logical Data Model                          | OV-7 Logical Data Model Diagram
SV-1    | System Interface Description                | SV-1 System Interface Diagram*
SV-2    | Systems Communications Description          | SV-2 Systems Communication Diagram*
SV-3    | Systems(2) Matrix                           | SV-3 System to System Matrix and Report*
SV-4    | Systems Functionality Description           | SV-4 Data Flow Diagram and Decomposition Diagram
SV-5    | Operational Activity to System Function Traceability Matrix | SV-5 System Function to Operational Activity Matrix and Report*
SV-6    | System Data Exchange Matrix                 | SV-6 System Data Exchange Report*
SV-7    | System Performance Parameters Matrix        | Performance Related Definitions and Report*
SV-8    | System Evolution Description                | Derived from use of SA repository information
SV-9    | System Technology Forecast                  | Technology & Technology Area Definitions + Report*
SV-10a  | System Rules Model                          | Property on System Function using BNF Syntax
SV-10b  | Systems State Transition Description        | SV-10b Systems State Transition Diagram*
SV-10c  | Systems Event/Trace Description             | SV-10c Systems Event/Trace Diagram*
SV-11   | Physical Data Model                         | SV-11 Physical Data Model Diagram
TV-1    | Technical Architecture Profile              | Technical Architecture Profile Definitions + Report*
TV-2    | Standards Technology Forecast               | Standards Technology Forecast Definitions + Report*

* Capability added with C4ISR extension to System Architect
The “System Architect Solutions” that are described as diagrams are created in a
completely different manner than those described as reports. Diagram solutions are
manually drawn in a System Architect workspace using symbols that are specific to the
product being created (e.g., an OV-2 Node Diagram will have Nodes and Needlines as
symbols). These symbols represent data that may or may not already exist in the
underlying architecture encyclopedia – they are called “definitions” by Popkin. If the
definitions already exist, Popkin provides the developer with a convenient list of
“choices” that can simply be dragged and dropped onto the workspace. Proprietary
features allow the drawing process to be partially automated for some products (for
example, the OV-4 symbols automatically snap to an Org Chart like form).
When generating reports, unlike diagrams, it is required that definitions already
exist in the encyclopedia (or else the report would be empty, of course). The data can be
imported from a variety of sources, or manually input via dialogs (this is often done
during diagram creation, as the above paragraph describes). While diagrams are viewed
inside System Architect, reports are commonly generated as stand alone documents in
various formats such as Word, Excel, HTML, or XML (recall that the DARS plans to use
XML format).
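Because reports can be exported as XML, a downstream analysis tool could in principle consume them programmatically. The short Python sketch below illustrates the idea only; the element and attribute names are invented and do not reflect System Architect's actual export schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical OV-3 export; the element/attribute names are illustrative only.
ov3_xml = """
<report product="OV-3">
  <exchange id="IE-001" sender="AOC" receiver="CRC" info="Air Tasking Order"/>
  <exchange id="IE-002" sender="CRC" receiver="AOC" info="Track Data"/>
</report>
"""

def load_exchanges(xml_text):
    """Parse an exported OV-3 report into a list of attribute dictionaries."""
    root = ET.fromstring(xml_text)
    return [dict(e.attrib) for e in root.findall("exchange")]

exchanges = load_exchanges(ov3_xml)
print(len(exchanges))  # 2
```

A tool consuming such a file would map each information exchange onto its own data structures rather than re-entering the data by hand, which is the motivation behind the DARS plan to standardize on XML.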
While report products like OV-3 are usually simple spreadsheets with column
headings defined by the user, diagram products must follow a previously defined format.
Therefore, Popkin chose several industry standard methods to portray the diagram
products. OV-5 is modeled in Popkin using IDEF0, a familiar standard for those in
the systems engineering community. Popkin’s method to portray the OV-6a diagram is
slightly less conventional. Rather than providing a dialog to describe the rules model for
each activity, Popkin chose to use a diagram to portray the OV-6a Operational Rules
Model. The format chosen for the diagram was IDEF3, which is less familiar to
engineers than IDEF0. Because the OV-6 will prove to be a key player in this research, a
brief description of IDEF3 notation is given below.
2.2.1 IDEF3 Notation
IDEF3 was created to aid in business systems modeling by capturing
“descriptions of sequences of activities” (Mayer, 1995:1). For a particular scenario, an
IDEF3 diagram can show you which processes precede others, which occur in parallel,
where decision points exist, etc. Thus, as Mayer indicates, the IDEF3 is closer to a
sequence diagram (OV-6c) than an OV-6a Rules Model as Popkin has labeled it.
Nevertheless, the IDEF3 is certainly a dynamic model and can be a powerful tool in
understanding the performance of a system. Figure 10 shows the standard symbols used
by IDEF3, and Figure 11 is a simple example of an IDEF3 diagram. The example in
Figure 11 shows a decision point (XOR junction) following a process labeled “evaluate
proposal.” If the decision is in the affirmative, the next junction, an OR junction, shows
that either path can be taken prior to the last process, “Contract Award.” While IDEF3
diagrams show a clear sequence of processes, the challenge for this research will be
deducing the rules that guard process completion. This challenge comes to light in
Chapter 3.
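The junction semantics described above can be stated precisely in a few lines of code. The following Python fragment is a toy illustration of XOR, OR, and AND junction rules, not an IDEF3 engine.

```python
# Minimal sketch of IDEF3 junction semantics (illustrative only).
# XOR: exactly one outgoing path fires; OR: one or more; AND: all of them.

def fire(junction, branches_active):
    """Check whether the set of active branches satisfies the junction type."""
    n = sum(branches_active)
    if junction == "XOR":
        return n == 1
    if junction == "OR":
        return n >= 1
    if junction == "AND":
        return n == len(branches_active)
    raise ValueError(f"unknown junction type: {junction}")

# After "Evaluate Proposal", an XOR decision: accept or reject, never both.
assert fire("XOR", [True, False])
assert not fire("XOR", [True, True])
# An OR junction before "Contract Award": either path (or both) may be taken.
assert fire("OR", [True, True])
```

Deducing the guard conditions that select which branch actually fires is exactly the challenge noted above: the diagram records that a choice exists, not the rule behind it.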
Figure 10 IDEF3 Symbols (Mayer et al, 1995:22)
Figure 11 Example of IDEF3 diagram (Mayer et al, 1995:40)
2.3 The AOC Architecture
A good example of a typical DoD architecture currently under development is the
Air Operations Center (AOC) Weapon System Block 10.1 Architecture. Using Popkin
System Architect as the primary tool, the AOC Architecture is being developed for the
Air Force by the MITRE Corporation in Hampton, VA. The AOC architectural effort
was started for several purposes, chief among them documenting a coherent
baseline for an AOC. According to the AV-1, “the reference material for the architecture
exists as a collection of mostly unrelated heaps of PowerPoint briefings, documents, and
emails. It is inconsistent, sparse in many areas, out of date, uneven in fidelity, or in some
cases, non-existent.” As the Air Force was considering releasing a Request for Proposal
(RFP) for an AOC integration contract, they realized that there would be nowhere to start
from except the above-mentioned myriad of documents. Thus, the architecture is
designed to support the RFP development and then “serve as a maintained, authoritative
decision making tool after contract award.” The use of the word “maintained” is
important – the architecture is not just used once and discarded. After contract award, the
architecture will be maintained by the contractor, but still used by the government for
other purposes. Thus the AOC, as a classic example of a “system of systems,” requires
an architecture description primarily for the following reason, as described in the DoDAF
Volume I:
Acquisition program management and system development – e.g., Services and Agencies developing systems can use architectures to determine system concepts related to operational concepts and ensure interoperability within a family of systems/system of systems (FoS/SoS). (DoDAF, 2003:3.5).
In short, the AOC architecture was created primarily to document a baseline.
According to AV-1, “This architecture does not reflect any substantive analysis. Rather,
it focuses on the capture of the as-is baseline architecture and will need to be analyzed in
the context of a decision or focused study” (AOC Architecture, 2003:AV-1). With
regards to time frame, “The architecture depicts the evolution of the weapon system from
its current AOC WS Block 10.0 state (2001) and certain evolutions expected to be
implemented through FY06.”
The research in this document is based on the AOC Architecture Version 2.0,
which consists of 12 products. Note that the current effort using Popkin SA took over
from a previous effort using multiple tools (see “Initial Working Form” vs. “Final
Working Form”). Diagram products (like OV-5) are created in the drawing space of the
Popkin SA tool, sometimes using previously defined data elements. Tabular products
(OV-3, for example) are generated by SA somewhat automatically, using data from the
repository “encyclopedia.” This appears to be an improvement over the “Initial Working
Form,” in which graphic and tabular data were generated independently and manually
with Netviz and Excel, respectively. The current methodology allows all the data in the
architecture to be stored in one place, the Popkin encyclopedia.
The AOC is considered an “integrated architecture,” based on the DoDAF
standards. It contains the 7 essential products required, and data elements are consistent
across the views. For example, the operational activities in OV-5 are mapped to SV-4
functions. This relationship is defined in the encyclopedia, so that the SV-5 matrix can
be generated automatically, with some manual tweaking required.
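The automatic generation of an SV-5 from stored mappings can be illustrated with a small sketch. The activity and function names below are invented for illustration; this is not how Popkin SA is implemented internally.

```python
# Sketch: deriving an SV-5-style traceability matrix from stored
# activity-to-function mappings, as a repository tool might.
# Activity and function names are illustrative only.
mappings = {
    "Develop ATO": ["Plan Missions", "Publish Tasking"],
    "Assess BDA": ["Correlate Reports"],
}

activities = sorted(mappings)
functions = sorted({f for fs in mappings.values() for f in fs})

def sv5_matrix():
    """Rows = operational activities, columns = system functions."""
    return [[f in mappings[a] for f in functions] for a in activities]

for a, row in zip(activities, sv5_matrix()):
    marks = " ".join("X" if cell else "." for cell in row)
    print(f"{a:12s} {marks}")
```

Because the mapping lives in one repository, regenerating the matrix after a change to OV-5 or SV-4 is mechanical, which is precisely the benefit of an integrated architecture over independently maintained products.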
The AOC architecture is a good choice for the case study undertaken by this
research. It is an integrated architecture representing a classic “system of systems.” It is
highly reliant on C4ISR systems, which SEAS is designed to model to a high degree of
fidelity. The architecture is relatively complete, with some products containing several
levels of decomposition. More about the AOC 2.0 architecture and its choice for use in
the case study will be written in Chapter 3 of this document.
2.4 The C4I Support Plan
As shown in Chapter 1 of this document, the DoD has mandated the use of
integrated architectures in the acquisition process. A specific example of this is the
Command, Control, Communications, Computers, and Intelligence Support Plan or
C4ISP, documented in DoD 4630.8. “For all DoD Acquisition Category (ACAT)
programs, C4ISP shall be used to document interoperability and supportability
requirements” (DoD 4630.8, 2002:4). The C4ISP must be in place for Milestone B and
“shall be maintained throughout the acquisition life cycle” (DoD 4630.8, 2002:33). The
following products must be included in the C4ISP: OV-1, OV-2, OV-3, OV-6c, SV-1,
SV-6, and TV-1 (DoDAF Vol I, 2003:3.6.4.2). According to Levis, “The purpose of a
C4ISP is not to build architectures – its purpose is to use architectures to support
requirements analysis and evaluation” (Levis, 2003:2-46). Various tools exist to support
this analysis. The following section will outline a simulation model that is particularly
relevant to C4ISR systems.
2.5 Background on DoD Modeling and Simulation
Modeling and Simulation (M&S) techniques used in the DoD evolve not only
with the nature of warfare, but also with the increase in computational power. Both of
these factors are currently forcing a major change in the M&S community. The
traditional (and most widely used) modeling technique in the DoD uses differential
equations to calculate casualties and changes in the frontlines. These equations are called
“Lanchester Equations” and were originally published in 1914 (Tighe, 1999:27). Many
modern simulation models use variants of the Lanchester Equations to predict the
attrition rates of opposed forces massed in parallel strips across the battlefront (the
“piston” analogy is often used here). According to a RAND study, “These models were
developed when computers had much more limited capabilities, making it necessary to
reduce the number of simulation entities and to use aggregation techniques” (Gonzales et
al, 2001:10). More detailed information on these models can be found in DeStefano’s
Thesis Chapter 2.
The aggregated Lanchester equation based models are beginning to fall out of
favor in some DoD agencies for several reasons. Tighe cites a paper by Battilega and
Grange that shows that the equations “do not accurately model many historical battles”
(Tighe, 1999:28). This is surprising given that many of these battles follow the massive
“force-on-force” attrition warfare formula that the equations are designed for. With
modern warfare increasingly relying on C4ISR systems, the old models break down even
further. Gonzales points out that the “legacy models” cannot model individual C4ISR
effects; such effects can only be represented by adjusting parameters.
“Because of the parametric approaches employed in legacy models and because these models are largely scripted, they also cannot take into account how information is used to support command decision making processes. Thus, it is not possible to assess many IS (Information Superiority) concepts, such as force synchronization, that may be enhanced or enabled by advanced C4ISR systems in traditional legacy models” (Gonzales et al, 2001:11).
In other words, what good is C4ISR when your forces’ moves are scripted?
These problems have been addressed by a new generation of models that attempt
to model “complex adaptive systems.” Forces are made up of individual “agents” that
are programmed to follow a rough set of rules. “The individual agents are then
responsible for making their own decisions as to how they should prosecute the battle”
(Tighe, 1999:33). Hence they are “adaptive.” While the rules that govern an individual
agent may be simple, a collection of agents will exhibit complex behavior. These “agent
based” models have begun to catch on in the DoD M&S community. The Navy’s Naval
Simulation System (NSS) is used to model Fleet Battle Exercises
(http://www.metsci.com/ssd/nss.html). The Marine Corps uses JIVES (Joint Integrated
Visualization Environment Simulation) to simulate urban combat (DeStefano, 2004:2.3.3).
This new generation of models is better suited for the analysis of modern concepts such
as Network Enabled Warfare and Effects Based Operations (Gonzales et al, 2001 and
Weber, 2003).
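The core idea, complex collective behavior emerging from simple per-agent rules, can be seen even in a toy model. In the Python sketch below (illustrative only, unrelated to any DoD simulation), each agent follows a single rule, yet the two groups converge on each other without any script telling them to.

```python
# Toy agent-based sketch: each agent follows one simple rule -- move one
# step toward the nearest opposing agent on a 1-D battlefield -- yet the
# groups close and cluster, behavior no single rule states explicitly.
def step_toward(pos, target):
    return pos + (1 if target > pos else -1 if target < pos else 0)

def tick(blue, red):
    """One time step: every agent moves toward its nearest opponent."""
    new_blue = [step_toward(b, min(red, key=lambda r: abs(r - b))) for b in blue]
    new_red = [step_toward(r, min(blue, key=lambda b: abs(b - r))) for r in red]
    return new_blue, new_red

blue, red = [0, 2], [10, 12]
for _ in range(5):
    blue, red = tick(blue, red)
print(blue, red)  # the two sides have closed most of the gap
```

Nothing in `step_toward` mentions engagement or clustering; those outcomes emerge from the interaction of the individual decisions, which is what makes agent based models attractive for analyzing adaptive concepts.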
2.5.1 SEAS
The System Effectiveness Analysis Simulation (SEAS) was developed for the Air
Force by the Sparta Corporation. The Aerospace Corporation and RAND have also
provided assistance with SEAS development. According to RAND, “SEAS is an entity-
based, time-stepped, stochastic, multimission-level model specifically designed to help
evaluate the military utility of C4ISR systems” (Gonzales et al, 2001:8). The terms
“entity based” and “agent based” are often used interchangeably. Individual agents are
not typically modeled in excruciating detail. This, combined with the lightning-fast speed of
modern computers, allows SEAS scenarios to contain many agents yet still exhibit decent
run times. The below figure shows how SEAS uses agents within the modeling hierarchy.
Figure 12. Agent Hierarchy (from DeStefano, 2004:2.3.3.1)
Every agent in SEAS runs a parallel execution thread and follows the “orders” that have
been coded to dictate its behavior. Interactions between agents are resolved at each
one-minute time increment. The figure below shows the sequence of events an agent goes
through each time step. This capability for adaptive behavior allows SEAS to credibly
model the effects of C4ISR. SEAS is particularly well suited to model space-based
platforms down to individual satellites. SEAS is also set up to model weather and
geographic effects (Gonzales et al, 2001).
Figure 13. SEAS Top-Level Event Processing (Gonzales et al, 2001:56)
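A time-stepped engine of this kind can be sketched as a loop that advances every agent in fixed increments. The Python fragment below is a minimal illustration of the pattern, not SEAS internals; SEAS resolves interactions in one-minute steps with far more elaborate per-agent processing.

```python
# Sketch of a time-stepped engine: each agent runs its per-step sequence
# (sense, decide, act) every fixed time step. Illustrative only.
class Agent:
    def __init__(self, name):
        self.name = name
        self.log = []

    def update(self, minute, world):
        # Stand-in for the per-step sequence in Figure 13: sense, decide, act.
        self.log.append(minute)

def run(agents, minutes):
    world = {"agents": agents}
    for minute in range(minutes):
        for agent in agents:          # interactions resolved each step
            agent.update(minute, world)

agents = [Agent("JSTARS"), Agent("F-15")]
run(agents, 3)
print([a.log for a in agents])  # [[0, 1, 2], [0, 1, 2]]
```

The key property of the pattern is that every agent sees the world once per step, so run time grows with the number of agents and the step count rather than with the number of scripted events.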
The Air Force Standard Analysis Toolkit (AFSAT) “is a collection of simulations
that are accepted and supported by the Air Force analysis community, and which have
AF-level oversight and investment priority” (http://afmrr.afams.af.mil). The AFSAT
divides simulation into three levels: Campaign, Mission, and Engagement. SEAS is
included in the toolkit as a mission level simulation. In practice, SEAS is better
described as a “fully capable multi-mission model which can and has been used to
perform campaign-level analysis” (Weber, 2003). SEAS accomplishes this using a
“vertical slice approach,” where significant entities (but not necessarily all platforms) are
modeled at every level of depth on the battlefield. Currently, SEAS is being used most
frequently by the Space and Missile Systems Center (SMC) and RAND to support
military worth analysis of C4ISR systems.
2.6 Summary
The DoDAF provides a well defined framework for architectural development. It
is important to remember that architectures are not the “end” but merely a means to an
end (Levis, 2003). Using the architecture to accomplish a purpose is the true goal. As
the AV-1 for the AOC architecture states: architectures are “potential energy.” While
architecture descriptions were originally brought into the DoD to help acquire C4ISR
intensive systems, agent based simulations like SEAS were created to aid in
understanding the effects of C4ISR systems. It is only natural then that an interface
between these two separate but related resources be investigated.
III. Methodology
3.1 Introduction
This chapter describes the methodology used to determine the most effective way
to build an agent based simulation using a DoDAF architecture. In order to understand
the unique structure of agent based simulation, a review of how agents function in SEAS
is presented. This is followed by a brief explanation of the SEAS scenario utilized in the
case study. Using previous research as a starting point, the case study is then set up for
an actual DoDAF architecture of our choosing. A mapping of DoDAF products to agent
attributes in SEAS is conducted to support the case study.
3.2 Agents in SEAS
Before digging into the architecture looking for data, it is important to understand
how agents operate. Figure 14 provides insight into the entities that make up SEAS.
Figure 14 The 3 Entities in SEAS (www.teamseas.com SEAS_Agents.ppt:3)
Recall from section 2.5 that an agent is simply an entity (such as a tank or a small special
forces unit) that has the ability to make decisions. Essentially, agents use devices to
interact with other agents and the environment. The environment includes factors such as
terrain and weather. Typical devices include weapons, sensors, and communications
equipment. These interactions occur during each one minute time step (See Figure 13
from section 2.5.1).
Agents are distinguished from devices by their ability to adapt and make decisions
based on pre-programmed logic. The figure below shows what makes up an agent in
SEAS.
[Figure 15 depicts a conceptual SEAS agent: a block of user-programmed behaviors
(perception, awareness, knowledge, understanding, decisions) surrounded by its devices
and data, including communications, weapons, sensors, owned platforms and sub-units,
a local target list, a local orders list, and broadcast variables. Four key concepts
apply to agent actions and interactions in SEAS: the local target list (LTL), the local
orders list (LOL), the target interaction range (TIR), and the broadcast interval (BI).]

Figure 15 A Conceptual SEAS Agent (www.teamseas.com SEAS_Agents.ppt:8)
Note the “User Programmed Behaviors” block. This is the brain behind the agent and is
what separates agent based simulation from traditional models. By modifying code in the
war file, the programmer can shape the behavior of particular agents. This does not mean
that programmers simply add scripted events that happen at a certain time, as you might
see in a traditional model. Rather, “orders” are programmed into each agent that
allow it to make decisions autonomously. For example, orders for a Surface-to-Air
Missile Radar may say not to activate for longer than a certain amount of time to avoid
detection. If no orders are written for an agent, it will fall back to default behavior
hard-wired into SEAS.
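The fallback from user-programmed orders to hard-wired default behavior can be sketched as a simple dispatch. The Python fragment below is hypothetical; actual SEAS orders are written in TPL, and the behaviors shown are invented examples.

```python
# Sketch: dispatching agent behavior, falling back to a built-in default
# when no user-programmed orders exist. Hypothetical, not actual TPL.
def default_behavior(agent):
    return f"{agent}: default (hold position)"

user_orders = {
    "SAM-Radar": lambda agent: f"{agent}: emit briefly, then shut down",
}

def behave(agent):
    """Run the agent's programmed orders, or the default if none exist."""
    return user_orders.get(agent, default_behavior)(agent)

print(behave("SAM-Radar"))   # uses programmed orders
print(behave("Truck-1"))     # falls back to default behavior
```

The point of the sketch is the lookup-then-fallback structure: only agents whose behavior matters to the study need orders written for them, and everything else still acts sensibly.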
Another important part of an agent’s description is how it communicates. While
units with onboard sensors can find their own targets, many targets are passed along
communications lines. These targets then become part of the agent’s “Local Target
List.” Furthermore, orders may be sent to subordinate units, and Battle Damage
Assessments (BDA) may be reported up the chain of command. Each communications
device has its own attributes such as reliability, susceptibility to jamming, and delay.
Furthermore, each communications device may have multiple channels
available for use. Like targets, orders may also be passed along communications lines
and become part of the agent’s “local orders list.” By default, orders specific to the agent
take precedence over orders generated externally (www.teamseas.com
SEAS_Agents.ppt:8). Information sent through communications channels in SEAS that
cannot be described as targets or orders is referred to as “broadcast variables.”
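The precedence rule, local orders over externally generated ones, can be sketched as a simple merge. The following Python fragment is illustrative only; the order categories and contents are invented.

```python
# Sketch: merging an agent's local orders with orders received over comm
# links; orders specific to the agent take precedence by default.
def effective_orders(local, received):
    """Received orders apply only where no local order exists."""
    merged = dict(received)
    merged.update(local)   # local orders win on any conflict
    return merged

local = {"movement": "hold at waypoint 3"}
received = {"movement": "advance", "reporting": "send BDA hourly"}
print(effective_orders(local, received))
```

This merge happens per agent, so a higher headquarters can broadcast general direction while individual units retain any behavior specifically programmed for them.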
As we can see in Figure 15, there are many other attributes of an agent that are not
orders or communications devices. For the purpose of this case study, I will group these
together as “general attributes.” These consist primarily of subordinate units,
performance parameters (such as how fast it can move), subordinate devices (such as
weapons and sensors), and general information (such as number of people). Other attributes
that fall into this category are more obscure, such as the time it takes to deploy and take
cover (dig in) if attacked. In this case study, general attributes, communications, and
orders will serve as the three major categories of information needed by SEAS to model
an agent. The context this agent will exist in is described below.
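The three categories of agent information used in this case study can be captured as a simple record. The field contents below are invented for illustration; only the three-way grouping itself comes from the case study.

```python
# Sketch: grouping the data SEAS needs for one agent into the three
# categories used in this case study. Field contents are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    general: dict = field(default_factory=dict)   # subordinates, speed, devices
    comms: list = field(default_factory=list)     # comm devices and attributes
    orders: list = field(default_factory=list)    # behavior logic (TPL in SEAS)

caoc = AgentSpec(
    general={"subordinates": ["F-15 flight", "Global Hawk"], "personnel": 500},
    comms=[{"device": "SATCOM", "delay_min": 1, "reliability": 0.95}],
    orders=["task nearest available strike asset against new targets"],
)
print(len(caoc.comms))  # 1
```

Organizing the required inputs this way makes the later mapping question concrete: for each of the three fields, which DoDAF products can populate it?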
3.3 The Kosovo War File
This case study utilized an existing SEAS war file that had been developed by
SMC for other purposes. It was not prudent or necessary to create an entire war file from
scratch to demonstrate the transition from an architecture to SEAS. Because certain
information needed for combat simulation (like the day’s weather) is scenario specific
and does not belong in a weapon system’s architecture, it was reasonable to simply reuse
these aspects of an existing war file. The scenario chosen is a scaled down representation
of the conflict in Kosovo. It essentially models Red forces conducting “ethnic cleansing”
operations against the Brown civilians. The objective of the Blue forces is to stop the
killing. This type of contingency was identified in a capabilities based planning study
prepared for the Office of the Secretary of Defense as a likely future conflict the United
States should be prepared for (Davis, 2000:20). Several different air assets, including
F-15s, F-16s, JSTARS, Global Hawk, and Predator, are available to the Blue side. To
control these assets there is a rudimentary AOC agent already present in the Kosovo war
file which will serve as a starting point for the case study.
Based on the hierarchy in Figure 12 in section 2.5.1, the AOC (referred to as the
CAOC in this scenario, for “Combined” AOC) would be best described as a “unit agent.”
It is important that the CAOC be some form of an agent, as only agents have logical
decision making ability in agent based simulation. As a unit agent, the CAOC controls
“platform agents” which may also have their own decision making ability defined by
orders. In SEAS, these orders are written in a particular format known as TPL code (for
Tactical Programming Language). A step down from platform agents on the hierarchy is
“equipment” such as sensors and weapons. Equipment, like agents, has its own
parameter definitions (like “speed,” for instance) but is devoid of orders, relying
on its parent platform agent for decision making. In summary, we must model the CAOC in the
format SEAS requires unit agents to be modeled (which was described previously).
3.4 Applying Previous Research to the Case Study
Prior to launching the case study mentioned in Chapter 1 of this document, an
analysis was conducted to determine how products in the DoDAF can best be
used to model a system in SEAS. Conclusions drawn in this analysis could then be
exercised in the case study. A starting point for this research is the work of Levis and
Wagenhals, who, as we have seen, have done similar research on the creation of an
executable model (albeit in Petri Net form) from architecture products (Levis and
Wagenhals, 2000). The figure below shows what Levis considers to be the essential
building blocks in this process.
Figure 16. The Synthesis Phase (Levis and Wagenhals, 2000)
Noteworthy is the requirement of the Dynamics Model – this will present a challenge in
the selection of a suitable real-world architecture given that most architectures in the
DoD do not yet have one. The building blocks outlined above should all be provided by
products described in the DoDAF. Levis notes that additional information is also needed,
specifically timing information and “scenarios to run simulations and collect data for the
Measures of Performance and the Measures of Effectiveness” (Levis and Wagenhals,
2000:243). To validate this theory, work was done by Wagenhals, Shin, Kim and Levis
to create an actual executable Colored Petri Net (CPN) based on architectural products.
A CPN is a graph with two types of nodes: circles indicate places and bars indicate
transitions. Tokens, which are distinguished by colors, move from place to place via
directed connections (“directed” meaning you can only move in the designated direction).
Wagenhals based his CPN model on the following four products: the activity model, data
model, rule model, and state transition diagram (Wagenhals et al, 2000:273). These
correspond with OV-5, OV-7, OV-6a, and OV-6b – respectively. These products
provided the necessary data to create a CPN based simulation that models the logical
behavior of the system. The qualifier “logical behavior” of the system is an important
one: you may analyze how data flows through functions yet you do not have enough
information to determine how long it takes. To capture the performance of the system,
elements of the “physical architecture” must be added to the recipe, as shown in
Figure 16 above. Wagenhals suggests that the System Performance Parameters Matrix
(SV-7) and a communications description would be needed in addition to the products
mentioned above. For their research, Levis, Wagenhals, et al. generated their own
architecture description (describing Mobil’s SpeedPass “pay at the pump” system) and
then used it to produce the executable model. The logical executable model they
created was detailed and complete while the performance model was more of an
abbreviated concept demonstration using a modified version of the logical model.
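The CPN mechanics described earlier, colored tokens moving from place to place through transitions along directed arcs, can be sketched minimally. The Python toy below illustrates the concept only; it is not the CPN formalism or tooling used by Wagenhals.

```python
# Toy colored Petri net: places hold colored tokens; a transition fires by
# consuming a token of a given color from its input place and producing it
# in its output place, moving along a directed arc. Illustrative only.
marking = {"received": ["red", "blue"], "processed": []}

def fire(transition, marking):
    src, dst, color = transition
    if color in marking[src]:        # transition enabled for this color?
        marking[src].remove(color)
        marking[dst].append(color)   # token moves along the directed arc
        return True
    return False

t = ("received", "processed", "red")
fire(t, marking)
print(marking)  # {'received': ['blue'], 'processed': ['red']}
```

Even this toy shows why a CPN captures logical behavior but not performance: it records which states are reachable and in what order, but says nothing about how long each transition takes, which is exactly the gap the physical architecture data must fill.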
The work in this thesis is partially an attempt to apply the findings of Levis and
Wagenhals to a real world DoD architecture created based on the DoDAF; it is also
something completely new. A performance model will be created that will not only be
able to show the effects of changing individual weapons system parameters, but also the
effects of changing the CONOPS that guide their operations against an enemy that has a
CONOPS of his own. Furthermore, the simulation paradigm the executable model is
based on is vastly different in this experiment. This research will be using the agent
based, mission level combat simulation SEAS rather than CPNs, which are process
oriented. Nevertheless, the findings of Levis and Wagenhals provide a strong basis upon
which to proceed.
3.5 Fitness of the DoDAF to support Agent Based Simulation
This section will briefly comment on the methods used to evaluate the DoDAF’s
ability to provide data to Agent Based Simulations such as SEAS. Findings from this
research, backed up by results from the case study will be presented later in this
document.
The first step in conducting this evaluation was to familiarize ourselves with how
SEAS works and what it needs in order to execute properly (see 3.2). We took advantage
of AFIT graduate courses on simulation (OPER 561 Discrete Event Simulation and
OPER 671 Combat Modeling, for example – see AFIT, 2003-2004 for description),
RAND studies using SEAS (Gonzales et al, 2001 and Davis, 2002), and relevant
literature (Tighe, 1997 and Cares, 2002) to get a good idea of how agent based simulation
works. Furthermore, trips to Los Angeles Air Force Base to go over recent SEAS
applications and discuss SEAS with modelers and developers from the Sparta
Corporation proved very helpful. Knowing SEAS requirements, the next step was to find
out if they could be met by the DoDAF. Knowledge of the DoDAF (then C4ISR 2.0)
stems from the AFIT Systems Architecture course (SENG 640). Further education on
Architectures came from countless briefings and papers (many less than a year old due to
the dynamic environment surrounding architectures in the DoD – Dandashi, 2003; Dam,
2002; Handley, 2003; and Pawlowski et al, 2003) as well as personal interviews from
architecture modelers and tool vendors (Ring, 2003; Vittori, 2003; Surer, 2003; and
Popkin, 2003).
As mentioned in the previous section, the information required by SEAS was
categorized as General Attributes (including chain of command), Communications
Information, and Orders. For each of these areas, the DoDAF Volume II was reviewed
for appropriate matches. The work of Levis and Wagenhals, as well as Figure 2-2
“Architectural Products By Use” in the DoDAF (Appendix B) served as a good starting
point. The findings are verified by evaluating several real world architectures and
choosing one for use in a case study, as explained below.
3.6 Choosing the Architecture(s)
As mentioned in Chapter 2, the AOC architecture was chosen for use in the case
study intended to validate hypotheses addressing architecture to agent based simulation
mapping. This section provides a detailed account of why the AOC architecture was
chosen, problems encountered in the early stages of modeling, and the steps taken to
address these problems.
As shown in Chapter 2, systems architecting in the DoD is still in its relative
infancy. It should be noted that for practical and administrative reasons, this research
attempted to survey only US Air Force architectures. This narrowing of candidates was
not a concern, as it was assumed that the Air Force was at least on par with the other
services in architecture development. Despite the fact that the use of architectures was
mandated in 1998 for information systems, there are surprisingly few architecture
descriptions in circulation compared with the total number of acquisition programs. The
AF acquisition center for Space and Missiles (SMC) is using architectures in its new
Space Based Radar System Program Office (SPO) but the architecture is in the early
stages of development. This was unfortunate as SMC is the primary user of SEAS and
the sponsor of this research. Little more success was encountered at the Aeronautical
Systems Center (ASC) which is collocated with AFIT at Wright-Patterson Air Force Base
in Ohio. While systems highly reliant on C4ISR (like those at the Reconnaissance SPO)
were using architectures, they were not available for academic use. An additional source
for candidate architectures is the Electronic Systems Center (ESC) at Hanscom AFB near
Boston. ESC is in charge of the acquisition of command & control and information
systems, ranging from airborne platforms (like the AWACS) to ground based
information systems (like air traffic control). Because architectures are intended to aid in the
development of C4ISR systems, it is logical that ESC would be at the forefront in this
area.
ESC is participating in the creation of a top level architecture for the Air Force
Deputy Chief of Staff for Warfighting Integration, AF/XI. The Command and Control
(C2) Constellation architecture is being created to help better define how the Air Force
conducts air operations, focusing on the Theater Air Control System (TACS). In addition
to more traditional uses such as aiding system development and budgeting, the C2
architecture is intended to define core data that will serve as a reference model for other
architectures (C2 Constellation Architecture, 2003:AV-1).
Some areas in an architecture deserve more attention and analysis than others.
For this reason, a subset of activities is often modeled at a higher level of detail than the
rest of the architecture. This subset is often called a “key thread.” In the C2
Constellation Architecture, there is a Time Critical Targeting (TCT) key thread. The
TCT thread is modeled in its own development environment (it has its own Popkin
encyclopedia) but uses data elements from the C2 and AOC architectures. The TCT
thread includes an OV-6a Rules Model, while the AOC and C2 architectures do not.
These and other architectures from ESC, AF/XI, and other organizations were
evaluated for use in this research. The following criteria were used in choosing an
architecture:
1) The architecture should be built using a repository based tool. This is clearly the future of systems architecting and facilitates data reuse and transfer to external programs.
2) It should describe a system that is C4ISR intensive and relevant to the combat modeling performed in SEAS.
3) It should be unclassified.
4) It should be an “integrated architecture” according to the DoDAF standard.
5) It should be as complete as possible in the total number of products and depth of detail within the products.
Although our choices were limited, the AOC architecture (described previously)
best met our parameters. The AOC not only had a multitude of different products, but
also had an impressive amount of depth within each product. The OV-5, for example,
routinely took top level internal functions down to four or five levels of decomposition.
External functions, like those performed by Bombers or JSTARS, were modeled down to
one level of decomposition – just enough for the viewer to understand where external
information exchanges originated from.
It was originally intended for this research to use the AOC architecture
exclusively. After an initial evaluation of the architecture, however, it was clear that
more would be needed. While the OV-5 was modeled in extreme detail, it was not
accompanied by any dynamics models (OV-6 products). The viewer may therefore know
the activities of the AOC in detail, but cannot tell what triggers the transition from one
activity to another or what their sequence is. This is why Levis and Wagenhals insist that
a dynamics model is essential to the creation of an executable model (shown in Figure 16).
To address this, the TCT thread mentioned above was used to complement the
AOC architecture. The TCT thread was created by MITRE with the help of existing data
from the AOC architecture encyclopedia. Activities relevant to TCT were reused in OV-
5 of the TCT architecture. In some cases, activities were created in TCT that represented
an aggregate of detailed level AOC activities (Vittori, 2003). Essentially, the TCT and
AOC architectures share much of the same data, were built by the same modeler, and
describe common systems (although at different levels of detail). Therefore, it is not
breaking with the spirit of the research to supplement the AOC architecture with the TCT
architecture. To portray the OV-6a, Popkin chose to use IDEF3 notation – resulting in a
product that is somewhat of a cross between a Rules Model and an Event-Trace diagram
(see discussion in 2.2.1). This presented benefits and challenges, both of which will be
commented on later in this document.
3.7 Setting up the Case Study
The following sections describe how the data from the Popkin encyclopedia was
used to populate the SEAS war file. The processes laid out by Levis and Wagenhals
provided a good basis on which to proceed, but would need to be modified and amended
to account for the specific needs of SEAS, and for an architecture that did not contain all
of the recommended products (SV-7, for example). The process was therefore iterative:
data thought to be relevant was collected based on Capt DeStefano’s requirements as a
SEAS modeler; Capt DeStefano would then model what he could and request more data
to fill in the gaps (DeStefano, 2004). The process would then repeat
itself until the SEAS scenario was populated to a satisfactory level of detail. In the end, a
satisfactory method was developed as outlined below. Specifics on how the method was
used in the case study are provided in Chapter 4.
Based on the previous section “Structure of SEAS,” the data was divided into
three sections (general attributes, communications, and orders) based on the natural
differences in the type of data required by SEAS. In all cases, the solution sought was
the least complicated one that would extract as much of the needed data as possible. For
example, if the data could be extracted from an existing product like an OV-3 or SV-6,
every effort was made to modify those products rather than create a new product. The
benefit of creating a new product is that it could be specifically tailored to exchange data
with SEAS. The drawbacks are an increased level of complication and more potential for
error. As this was a proof of concept, a decision was made to use existing products,
however manual this proved to be. The process used for each section is described below.
3.7.1 Mapping the General Attributes
As mentioned above, SEAS requires some general information about units being
modeled. This includes mass, number of personnel, what subordinate units are assigned,
and other attributes. One might think that some of this information could be found in the
AV-1 because it is seen as a general overview. The AV-1, however, is meant to be a
general overview of the architecture, not of the system itself, and this information is indeed not
present. The OV-4 Organizational Relationships Chart is the most likely place that this
information might exist, but it does not get down to the level of individual people. In fact,
there seems to be no DoDAF product that would indicate the number of people required
to man a particular system. As a result, we had to look elsewhere for that information.
For a mobile unit, attributes like speed and altitude are needed – these may be
found or derived from the SV-7 System Performance Parameters Matrix. Most SV-7s,
however, focus on detailed level sub-system performance parameters rather than the top
level parameters that SEAS is interested in. This may present a challenge in the
collection of the general attributes from an architecture.
Another key piece of the general attributes is the weapons and sensors assigned to
the weapons system. This is much more relevant to modeling a platform agent like an F-
15 rather than a unit agent like an AOC. The most likely place to find this information
would be in the SV-1 System Interface Description. The missiles, bombs, radar, etc. that
are attached to a platform should be modeled in the architecture as system nodes.
Mapping the command hierarchy in SEAS was another task in the case study.
The obvious source for this information would be the OV-4. For a DoD architecture, the
OV-4 should provide a good overview of the chain of command, which is exactly what
SEAS needs. The OV-2 also contains information that is valuable in this regard, but the
OV-4 presents it in a manner that is more relevant to what SEAS expects. Figure 17
depicts the recommended method for mapping general system attributes to SEAS based
on an architecture. The “External” block represents information derived from sources
other than the architecture.
Figure 17 Mapping General Attributes from a DoDAF Architecture to SEAS
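As an illustrative sketch of the mapping in Figure 17 (the source products follow the figure, but the attribute names and data values are hypothetical placeholders), the general-attribute mapping can be represented as a lookup from each SEAS unit attribute to the DoDAF product that supplies it:

```python
# Sketch of the general-attribute mapping in Figure 17. Attribute names
# and values are hypothetical; the source products (SV-7, OV-4, SV-1,
# External) follow the figure.

ATTRIBUTE_SOURCES = {
    "speed":        "SV-7",      # performance parameters
    "subordinates": "OV-4",      # command hierarchy
    "weapons":      "SV-1",      # attached system nodes
    "bodies":       "External",  # personnel count: not in any DoDAF product
    "deploy_delay": "External",
}

def build_unit(name, attributes):
    """Assemble a SEAS-style unit record, tagging each attribute
    with the architecture product it was drawn from."""
    return {
        "unit": name,
        "attributes": {
            attr: {"value": value, "source": ATTRIBUTE_SOURCES.get(attr, "External")}
            for attr, value in attributes.items()
        },
    }

caoc = build_unit("USAF_CAOC", {"speed": 0, "bodies": None})
```

A record built this way makes explicit which attributes must come from outside the architecture, which is the point the figure's "External" block makes.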
3.7.2 Mapping the Communications
Agents communicate in SEAS using specific communications devices. In a
typical SEAS war file, all the communications devices are defined in a sub-section
towards the end of the file. Therefore, multiple agent descriptions in the main body of
the code then can refer to one or more communications devices which have already been
defined. Communications devices are defined with several attributes including maximum
range, delay, reliability, and available channels. There are also ways to limit the
communications device to certain types of messages (like target sightings) and its ability
to both send and receive. Thus, not only is information about the communications
medium required, but also a description of the type of information passed.
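As a sketch of the data a modeler must collect for each device (the field names paraphrase the attributes described above; none of this is actual SEAS TPL syntax), a communications device definition can be captured as a simple record:

```python
from dataclasses import dataclass, field

# Hypothetical record for a SEAS communications device. Field names
# paraphrase the attributes described in the text, not real TPL keywords.
@dataclass
class CommDevice:
    name: str
    max_range: float                 # maximum range
    delay: float                     # transmission delay
    reliability: float               # probability a message gets through
    channels: int                    # available channels
    message_types: list = field(default_factory=list)  # e.g. target sightings
    can_send: bool = True
    can_receive: bool = True

tac_air = CommDevice("TacAirT", max_range=5000, delay=0.1,
                     reliability=0.95, channels=1,
                     message_types=["orders", "locations"])
```

Defining the devices once and referencing them from multiple agent descriptions mirrors the structure of the war file described above.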
Details on the communications mediums can be found in the SV-2 Systems
Communication Description. Information on the type of information passed would be
stored in the OV-3 Information Exchange Matrix or SV-6 System Data Exchange Matrix.
These products should be combined into a spreadsheet in order to provide the simulation
modeler with a single, consistent document. For example, the table below is a sample
SV-6 for a generic system.
Table 2 Sample SV-6
Interface              System Data Exchange   Format   Size   Classification   …
System 1 to System 3   Orders                 Text     128k   Unclass
System 3 to System 1   Task concurrence       Text     64k    Unclass
System 4 to System 2   Status                 Voice    n/a    Unclass
…
This table may provide the necessary information about the data being transferred but
does not allude to the medium of the transfer. For that, the SV-2 is required. The figure
below is a sample SV-2 diagram from the DoDAF.
Figure 18 Sample SV-2 (DoDAF Volume II, 2003:5-13)
Using the information from the SV-2, the SV-6 can be amended as shown below.
Table 3 Modified Sample SV-6
Interface              System Data Exchange   Format   Size   Classification   Communication
System 1 to System 3   Orders                 Text     128k   Unclass          2-way comm. links
System 3 to System 1   Task concurrence       Text     64k    Unclass          2-way comm. links
System 4 to System 2   Status                 Voice    n/a    Unclass          1-way comm. link
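The amendment shown in Table 3, in which each SV-6 exchange is joined with the communication medium taken from the SV-2, can be sketched as a simple keyed merge (the interface names and media are the sample values from the tables above):

```python
# Join SV-6 data exchanges with SV-2 media, keyed on the interface name.
# Values are the sample entries from Tables 2 and 3.

sv6_rows = [
    {"interface": "System 1 to System 3", "exchange": "Orders", "format": "Text", "size": "128k"},
    {"interface": "System 3 to System 1", "exchange": "Task concurrence", "format": "Text", "size": "64k"},
    {"interface": "System 4 to System 2", "exchange": "Status", "format": "Voice", "size": "n/a"},
]

sv2_media = {
    "System 1 to System 3": "2-way comm. links",
    "System 3 to System 1": "2-way comm. links",
    "System 4 to System 2": "1-way comm. link",
}

def merge_sv6_sv2(rows, media):
    """Amend each SV-6 row with the SV-2 medium for its interface."""
    return [dict(row, communication=media.get(row["interface"], "unknown"))
            for row in rows]

merged = merge_sv6_sv2(sv6_rows, sv2_media)
```

In practice the join key must be checked by hand, since the SV-2 is a diagram rather than a parseable spreadsheet.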
It is important to note that the examples above are abbreviated (the SV-6 example in the
DoDAF has 33 columns). A relatively complete SV-6 combined with SV-2 should
provide all the information needed to model system communication in SEAS. Because of
their similar nature, sometimes the OV-3 can be substituted for the SV-6. Either way,
both products need to be examined when preparing this spreadsheet as they each may
have relevant information. This is discussed further in Chapter 4, which gives the results
of this methodology for our case study. Figure 19 summarizes the methodology for
communications mapping from DoDAF products into agent based simulation.
Figure 19 Mapping Communications from a DoDAF Architecture to SEAS
3.7.3 Mapping the Orders
As discussed previously, the “orders” section of an agent’s description gives the
agent its ability to make logical decisions regarding its behavior on the battlefield. This
is a key concept, and is the main factor that allows SEAS and other agent based
simulations to model the effects of C4ISR. This section will describe our efforts to
author agent orders in SEAS based off of the AOC architecture, and specifically the Time
Critical Targeting thread. Like before, this thesis will focus on the capture of data from
the architecture while Capt DeStefano’s thesis will describe how the data was used.
A similar problem was tackled by Wagenhals et al. in the paper mentioned
previously (Wagenhals et al., 2000). Recall that Wagenhals et al. based their executable
model on the activities in the OV-5, which became transition blocks in a Colored Petri Net.
With that done, a means was needed to describe the guard conditions that allowed tokens
to pass through the arcs that connect places to transitions. These guard conditions, of
course, amount to the rules described in the OV-6a. Since SEAS does not inherently use
transition blocks that correspond to Operational Activities like the CPN, a hard look at
how the activities in the TCT thread would be represented was needed.
By necessity, the modeling of operational activities in SEAS would be less
structured than the modeling of activities in a CPN. As explained previously, SEAS
contains default behaviors for particular types of agents like airplanes and vehicles. In
some cases, these default behaviors automatically perform a desired activity. For
instance, an airborne sensor in SEAS will automatically try to detect enemies within its
detection range. In other cases, the default behavior of an agent might prompt activities
that do not match the architectural model. To prevent this, the SEAS modeler may write
TPL code in the orders section that overrides the default behavior. Giving the SEAS
modeler the best architectural information to do this without overwhelming him with
superfluous data is the challenge in this section.
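The default-behavior/override pattern described here can be sketched as follows (the class structure and behavior names are our own illustration; SEAS implements this in TPL, not Python):

```python
# Sketch of the default-behavior/override pattern described above.
# Behavior names are hypothetical illustrations, not SEAS internals.

class Agent:
    def default_behavior(self, situation):
        # e.g., an airborne sensor automatically detects enemies in range
        return "detect_enemies_in_range"

    def act(self, situation):
        return self.default_behavior(situation)

class OrderedAgent(Agent):
    """An agent whose orders section overrides the default behavior
    when it would conflict with the architectural model."""
    def __init__(self, orders):
        self.orders = orders  # mapping: situation -> directed activity

    def act(self, situation):
        if situation in self.orders:      # orders take precedence
            return self.orders[situation]
        return self.default_behavior(situation)

sensor = OrderedAgent({"target_fixed": "report_track_to_caoc"})
```

The point of the sketch is the precedence rule: orders written by the modeler win only where they exist, and the default behavior carries the rest.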
The solution lies in a combination of two important architectural products (much
like the communications). By merging the OV-5 and OV-6a into a single text based
document, the SEAS modeler has access to important data that is a step closer to the
desired TPL code. The OV-5 can be put in this format using reporting functions built
into Popkin. The OV6a, in IDEF3 format (see 2.2.1), will require much more human
intervention.
One method to convert the OV-6a would be to go directly from the IDEF3
to the Tactical Programming Language (TPL) used by SEAS. There are two major
reasons why this method was not employed. First off, this method is less flexible. The
results would only be relevant to SEAS. While this may work for this particular case
study, the goal of the research is to be applicable to agent based simulation in general.
Second, this method would require extensive manual intervention by someone who is
closely familiar with IDEF3 and has widespread knowledge of TPL coding practices.
Instead a more flexible and feasible method is proposed: translate the IDEF3 diagrams
into an interim “pseudocode” that can be understood by any software developer. From
there, the developer can translate the pseudocode into TPL code or any other language.
This is the method employed in the case study. This thesis will focus on the
interpretation of the IDEF3 to pseudocode while Capt DeStefano’s thesis explains the
translation into TPL code (DeStefano, 2004:3.3.2).
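A minimal sketch of this interim step, assuming a hypothetical in-memory representation of IDEF3 junctions (the real diagrams live in the Popkin encyclopedia and were translated by hand):

```python
# Hypothetical in-memory form of an IDEF3 fragment: units of behavior
# (UOBs) connected by XOR/OR/AND junctions. The real translation in the
# case study was done by hand.

def to_pseudocode(element, indent=0):
    """Emit pseudocode lines for a UOB or junction subtree."""
    pad = "    " * indent
    kind = element["kind"]
    if kind == "uob":                     # a single unit of behavior
        return [pad + element["name"]]
    op = {"xor": "EXACTLY ONE OF", "or": "ONE OR MORE OF", "and": "ALL OF"}[kind]
    lines = [pad + op + ":"]
    for branch in element["branches"]:
        lines.extend(to_pseudocode(branch, indent + 1))
    return lines

diagram = {"kind": "xor", "branches": [
    {"kind": "uob", "name": "Reject Proposal"},
    {"kind": "or", "branches": [
        {"kind": "uob", "name": "Accept Proposal for Core Contract"},
        {"kind": "uob", "name": "Accept Proposal for Options"},
    ]},
]}

pseudocode = "\n".join(to_pseudocode(diagram))
```

Because the output is plain pseudocode rather than TPL, any software developer can carry it the final step into the target language, which is the flexibility argument made above.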
To illustrate this method, the sample IDEF3 diagram from section 2.1.1 is
revisited in Figure 20.
Figure 20 Example of IDEF3 diagram (Mayer et al, 1995:40)
Envision Figure 20 as the OV-6a Rules Model for an activity called “Select Contractor.”
For this activity, a new proposal would be the input and an awarded contract would be
the output. Pertinent policies and regulations would act as controls to the process. In
IDEF0 format (the preferred standard for OV-5 Activity Models) the activity would look
like Figure 21.
Policy Law
Proposal Select Contractor
Contract
A1
Figure 21 Sample OV-5 diagram for "Select Contractor"
The task at hand is to put this information in a format that is closer to SEAS TPL
code and more conducive to automation. To do this, the OV-5 is broken down into its
parts and the OV-6 is translated into pseudocode. Figure 20 shows that after the proposal
is evaluated it can be accepted or rejected. For this example we assume that the criterion
for rejection is that the cost is outside the budget. The lack of explicit criteria at decision
junctions in an IDEF3 can be troublesome, as these are essentially the operational rules.
With this assumption, the pseudocode for the first part of the diagram would read:
Evaluate Proposal
IF (cost > budget) THEN
    Reject Proposal
ELSE
    …..
Proceeding to the latter half of the diagram, the OR junction indicates that the proposal
may be accepted for the core contract work or accepted for the options. Following this,
the contract would be awarded. The process in pseudo code is as follows:
    (Accept Proposal for Core Contract) OR
    (Accept Proposal for Options) OR
    ((Accept Proposal for Core Contract) AND (Accept Proposal for Options))
Award Contract

Figure 22 shows the two interpretations combined into one document. This is the
process followed in the case study.
Figure 22 Example of OV-5 Report + Pseudocode from OV-6a
Pseudocode for Activity 1:
    Evaluate Proposal
    IF (cost > budget) THEN
        Reject Proposal
    ELSE
        (Accept Proposal for Core Contract) OR (Accept Proposal for Options) OR
        ((Accept Proposal for Core Contract) AND (Accept Proposal for Options))
    Award Contract

Activity 1: Select Contractor
Description: The process used by the company to select the contractor for a new project
Inputs: Proposal – contains the cost, schedule, and technical information as proposed by the potential contractor
Outputs: Contract – the awarded contract
Controls: Policy – Company contracting policy
          Law – Federal, State, and Local regulations
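The inclusive OR junction above admits exactly three outcomes (core only, options only, or both), which can be checked by enumeration; a small sketch:

```python
from itertools import product

# Enumerate the outcomes of the inclusive OR junction in the pseudocode:
# at least one of (core, options) must be accepted before Award Contract.
valid = [(core, options)
         for core, options in product([True, False], repeat=2)
         if core or options]
```

An XOR junction would instead admit only the two single-path outcomes, which is exactly the distinction the pseudocode must preserve from the IDEF3 diagram.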
Another area to consider when mapping activities is overall system performance.
In order to accomplish this, performance parameters would be needed for the systems that
performed each activity. These would essentially amount to time delays, which can be
coded into the orders sections in SEAS. These individual time delays would add up and
manifest themselves in the results of the simulation runs – overall system performance
could then be analyzed. The DoDAF provides for this with SV-7, and Popkin provides a
dialog for this information to be recorded for each System Element. Therefore, the SV-7
is an important product for mapping both the general attributes and the orders.
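As a sketch (the activity names and delay values are hypothetical), summing per-activity time delays drawn from an SV-7 gives the end-to-end latency that would manifest in the simulation results:

```python
# Hypothetical per-activity time delays (seconds) drawn from an SV-7.
sv7_delays = {
    "Detect target": 30.0,
    "Determine target significance/urgency": 120.0,
    "Issue execution orders": 60.0,
}

def thread_latency(activities, delays):
    """End-to-end delay for a sequence of activities; activities
    missing from the SV-7 contribute no delay."""
    return sum(delays.get(a, 0.0) for a in activities)

total = thread_latency(list(sv7_delays), sv7_delays)
```

Coding each delay into the relevant orders section lets the individual delays accumulate across the thread, so overall system performance falls out of the simulation runs.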
One remaining issue is the role of OV-1 in the creation of the orders. Because the
OV-1 is a representation of the concept of operations (CONOPS), and the orders are
really a manifestation of the CONOPS, it seems natural that the OV-1 would play a key
role in the creation of the order. Practically, however, there probably isn’t anything to be
learned from the OV-1 that can’t be gathered from the above mentioned products. In fact,
Levis maintains that prior to creating any OV products you must have a CONOPS in
some form. Therefore, you expect an OV-1 to summarize many of the things the OV-5
and OV-6 products are saying in detail. As we know, OV-1 is a very top level document,
usually ornate with graphics, that provides a broad overview of how the system will be
employed. Therefore, we would recommend the OV-1 be viewed and understood as a
first step in the process.
The diagram below shows the process used to take architectural data to create the
orders section for an agent.
Figure 23 Mapping Orders from a DoDAF Architecture to SEAS
3.8 Summary
The methodology outlined above explicitly describes how to use data from a
DoDAF based architecture to populate an agent based simulation such as SEAS. The
effort was divided into three parts: mapping the general attributes, communications, and
orders. The diagram below gives an overall summary of where certain DoDAF products
can be used in a SEAS war file.
Figure 24 Mapping DoDAF Products to Agent Based Simulation
IV. Results and Analysis
4.1 Results
A key portion of this research is the evaluation of the DoDAF’s ability to support
agent based simulation. While some of this evaluation was conducted via a subjective
review of the Framework and of SEAS, the findings were validated by the more objective
case study. Because of this, the results of the case study will be presented below and
general conclusions regarding the Framework will be enunciated in Chapter Five.
4.1.1 General Attributes
Unlike the orders and communications, the procedure for mapping the general
attributes could not be exhaustively validated by the case study. Information from the
AOC architecture was not available to populate attributes like number of people in an
AOC. On the other hand, information in the architecture that would normally be used in
SEAS was not relevant to our particular scenario. An example of this is the SV-1
provided by the AOC architecture which contained information on individual AOC
subsystems. This information was at too low a level to be of any use to a campaign level
model like SEAS.
The fact that the case study was unable to verify the methodology for the general
attributes is much less of an issue than it would be for the orders, for example, as the
orders are at the heart of agent based simulation. Furthermore, it is clear which sections
of the architecture would provide this information (SV-7 for performance aspects, OV-4
for command structure). In the absence of an OV-4, the OV-2 showed that virtually all
systems report to the AOC. This is essentially how the command structure was modeled
in the original war file, so little modification was needed.
4.1.2 Communications
As previously stated, SEAS needs to know who is talking to whom, what
information is being exchanged, and how it is being exchanged. The “how” includes
such information as the means, frequency, capacity, and susceptibility to jamming. For
this case study, two methods were evaluated to extract this information from the
architecture.
The first method is based on the SV-6 System Data Exchange Spreadsheet shown
below. One look at this spreadsheet reveals why it would seem a natural fit for mapping
communications. Each row represents a different system data exchange
(essentially the information being exchanged – this is listed inside the parentheses) and
they are grouped by the interface name (the “who is talking to whom”). The “how” is
then enunciated by the different columns such as Media, Format, Protocols, and
Size/Units.
Table 4 System Data Exchange Spreadsheet (AOC Architecture, 2003:SV-6)
INTERFACE NAME            SYSTEM DATA EXCHANGE                    CONTENT   MEDIA   FORMAT   PROTOCOLS   SIZE/UNITS   …
TBMCS/IRIS to DCGS/DCGS   TBMCS/IRIS to DCGS/DCGS (ACO)           AOC
                          TBMCS/IRIS to DCGS/DCGS (ATO Changes)   AOC
While this would seem to be a perfect match for this case study, several problems
arose while trying to use this method. The first issue is based on the fact that System
Data Exchanges (SDEs) only represent data exchanged between automated systems.
Information Exchanges (IEXs), on the other hand, represent all information exchanged,
including between people (recall that the Information Exchange Matrix is represented by
OV-3). We, of course, are interested in all the information exchanged, not just that
passed between automated systems. This may not be important in this particular case study as we are
looking at the battlespace at a very high level where everything from Global Hawk and
Fighters as well as Special Operations Forces are looked upon as “systems.” In fact,
there is almost a one to one mapping of Operational Nodes to System Nodes, and most of
the discrepancies can be explained by aggregation (for example, the OV had a
Fighter/Bomber node while the SV has a Fighter node and Bomber node). Nevertheless,
it is important to make sure all information exchanges are included. Fortunately, this is
made easy in the SV-6 spreadsheet provided by the AOC architecture. The spreadsheet
has two tabs, one labeled SV-6(SV) (referred to previously) and one labeled SV-6(OV). The SV-
6(OV) is essentially the OV-3 mapped to its corresponding System Data Exchanges. One
could use this spreadsheet supplemented by the extra columns of the SV-6(SV) when
applicable. This method ultimately failed for two reasons. First, it was unclear where
communications information is found for information exchanges that did not correspond
to an SDE. Second, the information expected in the “Media, Format, Protocols, and
Size/Units” columns was typically nonexistent or useless. Clearly another method was
needed.
Knowing that Information Exchanges were the key elements on which to base the
communications mapping, we shifted focus to the OV-3. The OV-3 for the AOC
architecture lists all the information exchanges for a particular “Needline.” For instance,
the Needline “CAOC to Fighter/Bomber” contains the following information exchanges:
Operations Direction & Guidance, Mission Change Orders, Critical Aircrew Information,
etc. As it was intended to only model the top level effects of certain areas of the Kosovo
scenario, secondary needlines like “AOC to Mobility Aircraft” were not considered and
were hidden in the modified spreadsheet. Each information exchange is linked with its
sending and receiving node and activity, which is why the product is referred to as a
matrix. The primary interest, however, is simply the “who” and “what” information
provided in the first few columns. Based on this, an additional column was added for the
benefit of SEAS to classify the IEX as an “Order”, “Location”, or “Other.” Recall from
section 3.2 that Orders, Locations, and Other variables are the three types of messages
passed over communications lines in SEAS (for more detail, see DeStefano’s section
3.3.1.1). To get the “how” information, other products would be needed. The first
preference would be to check the SV-6. As we have seen in this case, however, the
information is lacking. The next place to look, naturally, would be the SV-2 System
Communications Interfaces Diagrams. The drawback here is that the SV-2 is solely a
diagram, rather than a spreadsheet that can easily be parsed and pasted alongside
Needlines in the OV-3. Therefore, at least for now, we had to fall back on a highly
manual method that involved opening the SV-2 diagram corresponding to every Needline
in the OV-3. Based on the diagram, three additional columns were added to aid in the
transition to SEAS:
1) Comm Overview: usually drawn from a description of one of the comm
nodes, such as “Theater Deployable Comm”
2) Comm Details: taken from the comm connection lines, for instance “UHF
Satcom”
3) Bandwidth and other info: as much technical information as possible. In
some cases the SV-2 will enunciate the throughput such as “128 kbs”
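The classification column described above can be sketched as follows (the keyword rules are our own illustration; in the case study the column was filled in by inspecting each IEX by hand):

```python
# Classify each information exchange as "Order", "Location", or "Other",
# the three message types SEAS passes over communications lines. The
# keyword rules are illustrative; the case study classified by hand.

def classify_iex(name):
    lowered = name.lower()
    if any(k in lowered for k in ("order", "direction", "guidance", "tasking")):
        return "Order"
    if any(k in lowered for k in ("track", "coordinates", "location", "position")):
        return "Location"
    return "Other"

row = {"needline": "CAOC to ASOC",
       "information_exchange": "Dynamic Targeting Execution Orders"}
row["message_type"] = classify_iex(row["information_exchange"])
```

Whatever the rules, ambiguous names will always need a human pass, which is why the case study treated this as a spreadsheet annotation step rather than an automated one.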
Fortunately, an SV-2 diagram existed for every Needline relevant to the Kosovo scenario.
Therefore, we were able to fill in the “Comm Details” column for every Needline. The
information in the other columns was much more hit and miss. The table below is a
highly abbreviated version of the table sent to the SEAS modeler to map the
communications.
Table 5 Modified Information Exchange Matrix (AOC Architecture, 2003:OV-3 with SV-2)
Needline: CAOC to ASOC
    Information Exchange: Dynamic Targeting Execution Orders
    Sending Activity: CAOC-4.5.2.9-Execute engagement option (Engage); CAOC-4.5.2.9.2-Issue execution orders
    Receiving Activity: ASOC-3-Execute Direct Air Support Missions
    Orders/Location/Other: Orders
    Comm Overview: Theater Deployable Comm
    Comm Details: SHF or EHF SATCOM
    BW or other info: 256 kbs

Needline: JSOTF to CAOC
    Information Exchange: Battle Damage Assessment
    Sending Activity: JSOTF-3-Manage Joint Special Ops Assessment
    Receiving Activity: CAOC-5.3.3.2-Assess weapon effectiveness; CAOC-5.3.3.2.1.1-Perform Non-lethal Weapons Assessment (NWA); CAOC-5.3.3.1-Assess battle damage; CAOC-3.1.4-Review BDA; CAOC-3.3.5.9-(see 3.3.3)
    Orders/Location/Other: Other
    Comm Details: UHF and/or SHF SATCOM
    BW or other info: unk
This raises the following question: is it permissible to assume that every
information exchange under a particular Needline uses the same communication channel?
Probably not, given that some SV-2 diagrams show multiple communications routes,
some better suited for data and some more likely to be used for voice. Unfortunately, there
is nothing in the architecture to indicate which IEXs use which path. For the purpose of
this case study, we used common sense and sometimes outside research to supplement
the architecture in these cases.
4.1.3 Orders
To model the eleven activities in the TCT thread, various styles of reports were
evaluated in Popkin System Architect. The report that proved most relevant to the task at
hand was the Activity Model Report under the C4ISR Reports function (under the
“Tools” menu). This allowed the creation of a Microsoft Word document, complete with
table of contents, which listed all 11 activities, their ICOM arrows, and any
accompanying text descriptions. Unfortunately, the activities did not come out in the
proper order, and had to be rearranged by hand. Despite this, the report served its
purpose, which really was to provide a place to interpret the rules model near its parent
activity (as described in the next paragraph). Standing alone, this report would suffice to
make the SEAS modeler aware of the activities conducted in the TCT thread and what
information is passed between them. Still missing are the rules that dictate completion of
an activity, which should be obtained from the OV-6a. Figure 25 shows the report for
Activity 6: Conduct Dynamic Assessment of Target, excerpted from the full report.
6 Operational Activity: TCT-Determine target significance/urgency (Track)
    [Within OV-5 Diagram 'TCT-Level 1']
    Glossary Text: Utilizing track data and other target information, C2 Warriors determine if the target/target set is threatening and/or fleeting, and estimate target availability, i.e., how long the target will remain susceptible to attack. From 2005 C2 Constellation 3.2.5.2 and CAOC-4.5.2.7
    ICOM line: Air Track (J3.2) Output: going to TCT-Validate target/target set (Target) as input
    ICOM line: Current Intelligence - Dynamic Assessment/Target Status Input: coming from <offpage>
    ICOM line: Current Intelligence - Target Classification Input: coming from TCT-Define target/target set (Fix) as output
    ICOM line: Current Intelligence - Target Identification Input: coming from <offpage>
    ICOM line: Doctrine, Policy, LOAC, SROE, ROE Control: coming from <offpage>
    ICOM line: Dynamic Target Nomination Output: going to <offpage>
    ICOM line: Dynamic Targeting Execution Direction and Guidance Control: coming from <offpage>
    ICOM line: JMSNSTAT Input: coming from <offpage>
    ICOM line: Land (Ground) Point/Track (J3.5) Output: going to TCT-Validate target/target set (Target) as input
    ICOM line: Reattack Recommendation Output: going to TCT-Nominate engagement option (Target) as input
    ICOM line: TRKREP Output: going to TCT-Validate target/target set (Target) as input
    ICOM line: Track Management (J7.0) Input: coming from <offpage>
Figure 25 OV-5 Activity Report Generated by Popkin SA for TCT Activity 6
Translating the Operational Rules set was a much more time-consuming and, unfortunately, highly manual step. Had the rules been presented in Structured English (basically IF-THEN-ELSE pseudocode), it would have been a one-step process in which the SEAS modeler simply translated the given OV-6a into SEAS. Popkin, however, has
chosen the IDEF3 format as its sole OV-6 diagram (and has labeled it “OV-6a Operational Rules Model”). As described in Chapter 2, the IDEF3 format allows the
viewer to deduce sequencing information from the diagram and is useful in this respect.
Consequently, elements of the rules model (OV-6a) and the Event-Trace description
(OV-6c) are shown in this diagram. This aggregation is done at the expense of both
products – probably more so for the rules model, which is especially unfortunate for this
research. The challenge, therefore, is to interpret the operational rules for each activity
based on these diagrams. As a result, the translation from Popkin to SEAS would have to
involve at least two steps. As discussed in Chapter 3, our solution was to use a
pseudocode translation of the OV-6a as an intermediate stage.
The TCT architecture has an OV-6a diagram for each of its 11 activities. This
makes sense because, as we have learned, each leaf (detailed level) activity should have
its own rules set. Therefore, for each activity the corresponding IDEF3 diagram was
translated into pseudocode. Included here is an example of how this is done, drawn from Activity 6: Conduct Dynamic Assessment of Target. The IDEF3 diagram is shown below in Figure 26 (it is split into two pieces to better fit the format of the page). The context of Activity 6 is immediately after a target (or target set) has been found and fixed. The top diagram (first half) shows an XOR junction (depicted by an X; refer to Section 2.2.1 for a quick IDEF3 tutorial, if needed) indicating that one of the two paths can be taken, but not both.
The resulting “target update” is then put through four simultaneous analyses – we know
all four must be conducted because of the AND junction, and we know it is simultaneous
due to the parallel arrangement of the process blocks. The following XOR junction
indicates another decision point – in this case, “Is the target time critical?” If it passes the
TCT test, another decision point is presented, asking the question “Is this the initial attack on the target?” If so, the target needs to be validated. If not, we proceed with the engagement.
Figure 26 IDEF3 diagram for "Conduct Dynamic Assessment of Target" (TCT 2005 Architecture, 2003:OV-6a)
Most of the IDEF3 diagrams are of a similar level of complexity and typically
about twice as long as this one. It can be seen then that IDEF3 notation is fairly easy to
understand and can convey important information regarding the dynamic nature of the
system. When trying to extract precise rules from this diagram, however, many of the details are left to the imagination. The text below is the pseudocode translation of the
above IDEF3 diagram. To aid the SEAS modeler, it was written into the Activity Model
report directly after the automatically generated text for each activity.
PSEUDOCODE FOR ACTIVITY 6 – BASED ON THE IDEF3 DIAGRAM:

IF Significant Movement of target THEN
    Monitor Target/Target Status
    Project Target Movement
    Target Vector = . . . .. ?
ELSE
    Monitor for Movement
Analyze Threat from Target (is the target closing on Friendlies or fleeing?)
Analyze Dynamic Targeting Ex Direction and Guidance (does this agree with the commander’s requirements?)
Determine target window of vulnerability (urgency)
Determine target significance – partly based on the above findings
IF it is determined to be a TCT based on the above info THEN
    IF this is the first strike attempt on this target THEN
        GOTO Activity 7 (Validate Target/Target set)
    ELSE
        GOTO Activity 8 (Nominate engagement option)
ELSE
    Pass target to ATO Planners
    Monitor Target of Interest for Status Change
While the IDEF3 did a good job of showing where the decision points and other junctions
were, it generally lacked the information to say which path should be followed or what
the entry/exit conditions were for the processes. For example, what constitutes
significant movement of the target, or how do you determine target value? These are
ambiguous for a reason – they are qualitative decisions being made by commanders on
the spot and are not easily quantifiable. Another challenge was how to deal with the OR
junction (not really used in the above example). If an OR junction precedes several
parallel processes, at least one of these processes needs to be completed prior to moving
on. Therefore, it is not known for certain which processes will happen and which will not. In a real AOC, these paths are determined on a case-by-case basis according to the individual situation. The rules that dictate these decisions in real life are not shown here, however, and are difficult to speculate on.
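The junction semantics discussed above can be made concrete with a short sketch. This is purely illustrative; the function and its conventions are ours, not part of IDEF3, Popkin, or SEAS.

```python
# Minimal evaluator for IDEF3 junction semantics. Illustrative only;
# the names here are hypothetical and not part of any SEAS API.

def junction_satisfied(kind, completed):
    """Return True if a junction's condition is met.

    kind      -- "AND", "OR", or "XOR"
    completed -- one boolean per preceding parallel process
    """
    done = sum(1 for c in completed if c)
    if kind == "AND":     # all parallel processes must finish
        return done == len(completed)
    if kind == "OR":      # at least one must finish -- but the diagram
        return done >= 1  # never says which, hence the ambiguity above
    if kind == "XOR":     # exactly one path is taken
        return done == 1
    raise ValueError("unknown junction kind: " + kind)
```

Note that the OR case encodes exactly the ambiguity described above: the evaluator can tell when it is legal to proceed, but not which branch a real AOC would have chosen.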
Another problem had to do with modeling overall system performance. As
explained in Chapter 3, the SV-7 would be the correct source to find this information.
Unfortunately, none of the information was completed in the current version of the AOC
architecture. According to the lead developer for the architecture, collecting this detailed
data may be the next step taken toward creating their own executable model for an AOC
(Surer, 2003). Their model will be a detailed representation of the internal business
processes that go on in an AOC. Our model, on the other hand, is only interested in
performance at the highest level. It would have been advantageous to have some general
performance numbers from SV-7 to use for this case study.
4.1.4 The Modified SEAS War File
Using the process outlined above, the SEAS war file was successfully modified to
incorporate data from the AOC architecture. The existing AOC function already in the
war file was essentially overhauled to reflect details in the architecture, especially the 11
activities outlined in the Time Critical Targeting key thread. While successful, the
process was by no means seamless or automatic. Furthermore, some of the information
required by SEAS was not available in the AOC architecture. These and other issues will
be laid out more specifically in the following pages.
The most important changes to the war file were made in the “Orders” section for
the AOC and its subordinate units. The goal was to make sure that each of the 11
activities in the TCT activity model were being properly modeled. When sequencing
issues were encountered, the OV-6a diagram was referenced. When questions arose
regarding operational rules, the pseudocode was referenced. The original intent was for the report created by merging elements of the OV-5 and OV-6a to be a stand-alone document upon which the orders in SEAS could be based. This would have been a good first step toward automating the process. The reality is that a text report, no matter how well done, cannot fully replace the diagram it was based on. This is not to say the pseudocode was not useful: the fact that it was consulted alongside the diagrams it was based upon demonstrates its value in the process, albeit not as a stand-alone source.
Two important changes made to the war file to achieve concurrence with the TCT architecture were the modeling of Activity 1, the generation of a Dynamic Target Watch List (DTWL), and of Activity 4, the adjustment of theater ISR (Intelligence, Surveillance, and Reconnaissance) assets to support operations. For Activity 1, Capt DeStefano made
changes to the way target priorities were assigned by the AOC. The changes better
reflected the process that was modeled in the TCT architecture, which was based on the
DTWL. The new method based the targeting priorities in the DTWL on specific types of
targets, rather than broad categories of targets.
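A minimal sketch of the DTWL idea: priorities keyed to specific target types, with broad categories only as a fallback. All names and numbers here are hypothetical; the actual TPL implementation appears in DeStefano's thesis.

```python
# Hypothetical priority tables -- illustrative values only, not taken
# from the TCT architecture or DeStefano's war file.

BROAD_CATEGORY_PRIORITY = {"vehicle": 5, "structure": 3}

DTWL_PRIORITY = {  # specific target types override their category
    "SA-6 launcher": 9,
    "mobile C2 van": 8,
    "supply truck": 4,
}

def target_priority(target_type, category):
    """Look up a specific type first, then fall back to the category."""
    return DTWL_PRIORITY.get(
        target_type, BROAD_CATEGORY_PRIORITY.get(category, 1))
```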
Changes made to account for Activity 4 were also significant. The original war
file did not provide for the dynamic tasking of ISR assets. ISR assets like the Global
Hawk UAV would only patrol in a user specified orbit. In accordance with the
architecture, changes were made that allow the Global Hawk to be tasked outside of its
initial area of operations. For example, if another ISR asset (like a reconnaissance
satellite) identifies a potential target with a “Probability of Identification” (PId) that is
less than the criteria needed to attack, the AOC now sends the Global Hawk to that
location. The Global Hawk will then attempt to make a positive identification on the
target for at least one orbit around the new site, after which it may be reassigned. The
TPL code written to accomplish this is printed in DeStefano’s thesis, along with a
complete accounting of the 11 activities in the model (DeStefano, 2004:3.3.2). For each
activity, the method of implementation is outlined; whether it was coded in for the
purpose of the study, was already in the war file, was already inherent to SEAS, or was
not implemented. In the cases where an activity could not be implemented in this case
study, a solution is proposed.
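The retasking rule just described can be sketched as follows. The threshold, names, and data structure are ours for illustration; the actual SEAS TPL code is printed in DeStefano's thesis.

```python
from dataclasses import dataclass

# Hypothetical attack criterion -- a real study would take this from
# the war file, not hard-code it.
ATTACK_PID_CRITERION = 0.9

@dataclass
class Detection:
    location: tuple   # (lat, lon) of the potential target
    pid: float        # probability of identification

def task_global_hawk(detection, hawk_busy):
    """Return the location to send the Global Hawk to, or None."""
    if hawk_busy:
        return None   # let it finish at least one orbit first
    if detection.pid < ATTACK_PID_CRITERION:
        return detection.location   # go confirm the identification
    return None       # PId already meets the attack criterion
```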
In the case of the communications, the existing war file already captured the
information accurately. If the war file had to be created from scratch, the spreadsheet
would have supplied enough information to create a communications structure from the
ground up, with one exception (DeStefano, 2004:3.3.1). Capt DeStefano used the
spreadsheet provided based on the OV-3 and SV-2 to verify this. The format of the
spreadsheet was improved by grouping reciprocal needlines (like “CAOC to Bomber”
and “Bomber to CAOC”) under the same worksheet so the SEAS programmer can determine whether a message is send-only, receive-only, or both. With this change made, the
procedure proved to be a sound way to map the communications. The exception noted
above refers to the fact that the architecture does not specify which communications path
(if there is more than one) is used by each information exchange or system data exchange.
This issue was explained at the end of section 4.1.2. This is critical information when
attempting to model the effects of a communications failure. It is important to know
what information exchanges would cease during the communications disruption, but
there is currently no way to determine that.
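The reciprocal-needline grouping can be sketched in a few lines; the row format and node names below are hypothetical stand-ins for the OV-3 spreadsheet columns.

```python
from collections import defaultdict

# Hypothetical OV-3 style rows: (source node, destination node, exchange)
needlines = [
    ("CAOC", "Bomber", "ATO tasking"),
    ("Bomber", "CAOC", "Mission report"),
    ("CAOC", "GlobalHawk", "Collection tasking"),
]

def group_reciprocal(rows):
    """Group needlines by the unordered pair of nodes they connect."""
    groups = defaultdict(list)
    for src, dst, exchange in rows:
        groups[frozenset((src, dst))].append((src, dst, exchange))
    return groups

def direction(groups, node, other):
    """Classify node's role on its needline pair with other."""
    pair = groups.get(frozenset((node, other)), [])
    sends = any(s == node for s, _, _ in pair)
    recvs = any(d == node for _, d, _ in pair)
    if sends and recvs:
        return "both"
    return "send" if sends else "receive" if recvs else "none"
```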
As explained previously, very little information was used from the architecture for
the general attributes. Where information could not be provided, the existing attributes in
the war file were used.
4.1.5 Using the New War File
With the Kosovo war file freshly modified to reflect the AOC and TCT
Architectures, Capt DeStefano then performed a comparison study using the ability to reduce civilian casualties (from “ethnic cleansing”) as his primary Measure of Effectiveness
(MOE). The purpose was to determine if the war file that was modified based on the
architecture produced different results than the baseline war file provided by SMC’s
Transformation Directorate (SMC/TD). This is not to say that favorable results for the
Blue side using the modified war file indicated success – our intent is not to create a
“new and improved” AOC. The objective of the research is to investigate architectures
as an information source for agent based simulation. The two models are both looking at
the same “real world” AOC, but have been built using different sources. An AOC
modeled from a different source than the baseline could more realistically depict adverse
effects and actually reduce the Blue force’s overall effectiveness. The reason for the
study conducted below is simply to identify the effects of changes made as a result of this
research.
To reduce variation and provide a means for statistical comparison of the results, 100 simulation runs were conducted. The length of simulation time evaluated was one
week, as early exploratory studies showed that little combat occurred after that period of
time. Because most of the changes in the war file came from the TCT architecture, the
results for the modified war file are labeled as “TCT” (DeStefano, 2004:4.2).
The results indicated a small reduction in the number of Kosovar civilian casualties compared to the baseline. Figure 27 shows how this reduction is distributed across the total simulated time. There is no shift in the distribution, but the overall amount was
reduced with statistical significance (using a 90% Confidence Interval).
Figure 27 Comparison of Baseline vs. Modified War File - Kosovar Casualties (DeStefano, 2004:4.3)
A statistically significant change occurs in the number of deaths of other Red and
Blue agent types, as well. This is shown by DeStefano by means of a “paired t-test,” the
results of which are outlined in Table 6. Note that the 100 runs are broken into 5 groups
of 20 replications (DeStefano, 2004:4.3). In it’s last column, the table states which war
file’s results would be favorable to the Blue side (for example, less Blue F-15’s died in
84
the TCT war file, so the TCT war file is listed as favorable). If the difference was not
statistically significant, “N/A” is listed.
Table 6 Paired T-Test Results for "All Agents Killed" (DeStefano, 2004:4.3)
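The mechanics of the paired comparison can be sketched in a few lines. The group totals below are invented for illustration (the real values are in DeStefano's thesis); only the test procedure mirrors the analysis described above.

```python
import math
from statistics import mean, stdev

# Invented per-group totals for 5 groups of 20 replications each.
baseline = [52, 48, 55, 50, 49]
tct      = [47, 45, 50, 46, 44]

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for matched samples."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

t, df = paired_t(baseline, tct)
# With df = 4, |t| is compared against the two-sided critical value at
# the 90% confidence level (about 2.13) to declare significance.
```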
The most significant difference between the baseline and TCT results is in the number of Blue sorties required to achieve the above results. While the number of Red
casualties increased, the number of Blue sorties decreased dramatically in the TCT war
file. This indicates that the changes made to the war file based on the TCT Architecture
significantly improved the sortie effectiveness for the Blue side. Figure 28 shows the
reduction in sorties as a function of simulation time.
Figure 28 Sorties Flown - TCT vs. Baseline War File (DeStefano, 2004:4.3)
Accordingly, the above results lend credence to the conclusion that the methodology laid out in Figure 17, Figure 19, and Figure 23 is an effective way to create SEAS TPL code based on a DoDAF architecture. The TCT war file showed significant and meaningful improvements in Blue force effectiveness, suggesting that the modifications based on the architecture did make an impact.
4.2 Analysis
The case study has shown that our methodology can be used for its intended
purpose. Given that, we now examine some challenges inherent in the process,
as well as compare these results to DoDAF recommendations.
4.2.1 Missing Pieces
In this case study, the AOC and TCT architectures were unable to provide 100%
of the information required by SEAS. When these “missing pieces” were encountered, we had to default to existing data in the war file or find other sources. In the case of the communications mapping, it was disappointing to find that information regarding bandwidth and protocols was unavailable. A column is provided for this information, however, and it may be filled in at a later date. This would be quite beneficial, as the
information is crucial to modeling realistic communications in SEAS.
There was also some guesswork involved in creating the orders. The inherent
problem is that sometimes when a decision or judgment is to be made by a human in the
AOC, the OV-6a simply states “commander’s discretion,” or something similar. This is
of little benefit to SEAS, which really needs to reproduce the logical sequence that the
commander goes through to make the decision, and then needs to make that decision
itself. This is a key benefit of agent based simulation, and to fully reach its potential the
details about the decision process are required. Another key shortfall was inherent to the
way the IDEF3 model was created. When simultaneous processes exist, the IDEF3 does
a good job of showing their parallel nature. The problem arises after an OR split, where one or more of the following paths can be taken: there is no indication of the rules that dictate which path is taken in which situation. Furthermore, the entry and exit criteria for each
process are loosely defined, at best. Overall, the IDEF3 does what it was intended to do
by providing the viewer a good idea of the sequence of processes that go on in an AOC
for Time Critical Targeting. Nevertheless, more information is needed to provide a
realistic depiction in agent based simulation and minimize the guesswork.
Although we were unable to use data from the architecture for the general
attributes, it is possible to identify potential “missing pieces.” It is anticipated that top
level parameters (like the ones required by SEAS) may be hard to find in a typical SV-7,
which often focuses on detailed level subsystem performance. Another potential trouble
area may be the OV-4, for the same reason. For example, had an AOC OV-4 focused
only on the internal command structure of an AOC, rather than showing a top level
command structure, it would not be of much value to SEAS.
4.2.2 Potential for Automation
One of the goals of this research is to find ways to make modeling a weapon
system in SEAS a more streamlined process. Even if we were to show that 100% of the information needed to do this was available in a DoD architecture, it would not be worthwhile if the process took an excessive amount of time. Thankfully, the process
does not take very long, especially if the modeler already knows how to interpret DoDAF
products (like understanding IDEF0, for instance). The process used in this research is
highly manual, however, as shown above. Two of the main reasons for this are that
oftentimes diagrams need to be interpreted by the modeler and information from several
products must be fused. The former reason was certainly the case for the IDEF3 diagram,
which needed to be translated into pseudocode for the orders mapping. Therefore, the
orders are the one area that would be virtually impossible to automate without a major
change in the way Popkin depicts its OV-6a.
Automation of the general attributes mapping is definitely feasible, since most of these are simple numeric parameters. The difficulty will be matching the right
parameters in the architecture to the top level parameters SEAS expects. A spot check by
the modeler will be essential to ensure an “apples to apples” comparison. Furthermore, a typical SV-7 is unlikely to contain top-level parameters like “speed” or “ceiling” and will most likely focus on the detailed parameters of subsystems. To deal with
this, one may have to build a more traditional executable model to output these higher
level parameters (like the EAMA work being conducted by MITRE). This would be time
consuming and add another layer of complexity, however, and is probably overkill for the
purposes of most studies done using SEAS. When not explicitly available in the
architecture, obtaining the parameters by traditional means is probably the most efficient
solution.
The communications mapping also has potential for automation. The method
used here, which is based on the OV-3 needline, is already in spreadsheet form. The
difficult step would be transferring communications channel data from the SV-2, which is
in diagram form. Extracting the data should be an easy task for someone familiar with
reporting in Popkin. Like the general attributes, however, it will be important to give the
results a sanity check to ensure the right communications channel is matched to the right
needline.
4.2.3 Comparison to DoDAF’s “Architectural Uses” Figure 2-2
Knowing now which DoDAF products are most applicable to agent based
simulation, a comparison is warranted between these findings and Figure 2-2 in DoDAF
Volume II, “Architectural Products by Use” – provided here in Appendix B. The figure breaks the potential uses into four categories: Planning, Programming, Budgeting, and Execution System (PPBES); JCIDS; Acquisition; and Operations. Most would argue that
M&S should be used in all these categories; for the purpose of this analysis, however,
only the Analysis of Alternatives subsection will be considered. There should be little
doubt about the importance of M&S at this stage, especially top-level simulation with quick turnaround times like SEAS.
Overall, the predicted products required for an AoA in the DoDAF figure match up well with the findings of this research. Products like the OV-5 and OV-6,
which are essential for describing agent behavior in SEAS, are listed as “highly
applicable.” The SV-1, a key diagram for identifying weapons and sensors attached to
the platform, is also listed as “highly applicable.” OV-3 and SV-2, which are each
partially responsible for mapping the communications, are listed as “partially applicable.”
The one major discrepancy is the OV-4, which should be used to construct the command structure in M&S. The DoDAF lists the OV-4 as usually not applicable. The OV-4 may
not apply to detailed level executable models that focus on the inner workings of an
isolated system, but it is important for a campaign model or a mission level model like
SEAS.
V. Conclusions and Recommendations
5.1 Conclusions
When fully implemented, the products defined by the DoD Architectural
Framework (DoDAF) should provide most, if not all, of the information necessary to
model a weapon system in an agent based simulation. Most architectures will not contain
every product defined in the DoDAF, however. This research has identified eight
important products, shown in Figure 24, needed to describe a weapons system for agent
based modeling.
Though the process used in this case study was primarily manual, potential exists
for automation. If the CADM is implemented as planned and is accepted by architectural tool vendors, we can look forward to using XML as a common language between
architectures and simulation. In the meantime, minor changes to the way architectures
are built may make the process much friendlier to automation (see Recommendations
below).
Analysis using Modeling and Simulation is important at all stages of system
development. Furthermore, it has been shown that agent based simulation improves the
ability to model the vital effects of C4ISR systems. Architectures have been permanently
adopted by the DoD to help develop and implement C4ISR intensive systems. Despite
their common goals, the development of systems architectures and the modeling of the
same system in simulation remain, for the most part, completely separate processes
accomplished by different organizations. As a remedy, this research has shown that
architectures have the potential to provide all the information needed for an agent based
model, and a methodology has been outlined. This should encourage a closer
collaboration between the system architect and the analyst. The result will be a better
simulation model that is consistent with the system’s actual architecture. It should also
reduce redundant work and result in faster turnaround times for analysis at all stages of
development and implementation. This, in turn, will allow the system architects to
improve designs up front, and even allow operators to refine CONOPS prior to delivery.
This cycle that feeds back into the architecture will greatly enhance the DoD’s ability to
acquire and utilize C4ISR systems in the future.
5.2 Recommendations
While the DoDAF provides ample products to document system details, it is
unclear where broad, general information on the system should be stored. We
encountered this problem when looking for information like “number of people,” and
“deploy time.” The DoDAF should provide guidance on where this information should
be stored, such as in a subsection of the OV-4. This was the only real issue encountered
with the Framework – the rest of the needed information was accounted for, and this was
largely verified by the case study. As shown in Chapters 3 and 4, the case study has
uncovered many ways in which the DoDAF might be used more efficiently to support
agent based simulation. These recommendations are explained below. For
recommendations on how the simulation world might better interface with DoDAF
architectures, see DeStefano’s thesis (DeStefano, 2004:5.4).
5.2.1 Recommendations for Architecture Developers
Most of the architectures evaluated for this research aspire to be executable and
support military worth analysis, at least as a secondary purpose. Based on the experience
gained from doing this in the case study, we have identified a number of things system
architects can do during the development of the architecture that will greatly enhance the
architecture’s executability in agent based simulation.
While all eight products shown in Figure 24 are important, the OV-5 and OV-6a are critical for exploiting the advantages of agent based simulation. These are the
products that allowed us to model the logical decision making processes that occur in an
AOC. This then allows key advantages like information superiority to play a role in the
simulation. Therefore, we recommend having an OV-6a written in Structured English
that defines the rules for each leaf activity in OV-5 as a prerequisite to making the
architecture executable. Whenever possible, the architect should attempt to model the
rules used by decision makers in the AOC, rather than just saying “commander’s
discretion.” It is important to know exactly what guidelines the commander uses for his
discretion when modeling in an agent based simulation.
The construction of an SV-7 System Performance Parameters Matrix is often a
dreaded task for a systems architect as it involves tracking down (or defining, in the case
of a to-be architecture) hundreds, if not thousands, of detailed level sub-system
performance specifications. While these may be useful for other reasons, SEAS is only
concerned with top level performance information like speed, ceiling, average
communications delay, etc. Consequently, having an abbreviated, high level SV-7 in
place before attempting to map the architecture into agent based simulation will save the analyst a great deal of legwork.
Many architectures contain products that are only partially completed. For
example, the AOC Architecture had an OV-3 and SV-6 that contained no information on
the frequency or protocol of the information and data exchanged. In order to accurately
map communications in SEAS, this information is vital. We recommend that it be
completed for every needline in the OV-3. In addition to this, the communications path
(as illustrated in SV-2) taken by each system data exchange (and in some cases,
information exchange) should be clearly enunciated. This is essential if the effects of
communications disruptions are to be modeled.
5.2.2 Recommendations for Tool Vendors
While the method Popkin chose to depict the OV-6a is beneficial in some respects,
it seems to neglect the main point of the product: enunciating the operational rules for
each leaf activity. We would recommend a simple text dialog, available only for leaf activities, that allows the rules to be entered in Structured English. The OV-6a could
then be generated as a report, or even overlaid on the leaf activities in the OV-5 diagram.
We also recommend that tool vendors pursue CADM compliance in future versions of their products. Having a common data model will greatly increase the versatility of any tool that aids in the transfer of data from architectures to M&S.
5.2.3 Recommendations for Further Research
Due to limitations in the architectures we used for our case study, and in the scope of the Kosovo SEAS model, we were not able to validate the use of all the products in Figure 24. By repeating this case study with a different architecture and war file,
information from the SV-7, OV-4, and SV-1 could be used, thus validating their role in
the process. This follow on work could also focus on the automation of the process. It is
highly recommended that whoever undertakes this have some background in computer
programming, as much of the automation would involve manipulating report generation
scripts.
While this research focused on the transition from architecture to M&S, it is
important that changes made to the model are reflected back into the architecture.
Research could be done on the transition from M&S back to architectures, or simply on improving the synchronization between the two.
If the DoDAF is approved and the CADM is utilized by tool vendors,
architectures in the future will be shared in XML format. Theoretically, the XML
architecture would be in a much more convenient form for importing into simulation than
the current standards. Future work could focus on an automated XML to SEAS interface.
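As a rough illustration of what such an interface might look like, the sketch below parses a small hand-written XML fragment into a dictionary of activities. The element and attribute names are invented for this example; a CADM-compliant schema would define its own.

```python
import xml.etree.ElementTree as ET

# Hand-written sample; a real exchange file would follow the CADM schema.
SAMPLE = """
<architecture>
  <activity id="6" name="Conduct Dynamic Assessment of Target">
    <input>Track Data</input>
    <output>TCT Nomination</output>
  </activity>
</architecture>
"""

def load_activities(xml_text):
    """Index each <activity> element by its id attribute."""
    root = ET.fromstring(xml_text)
    return {
        node.get("id"): {
            "name": node.get("name"),
            "inputs": [e.text for e in node.findall("input")],
            "outputs": [e.text for e in node.findall("output")],
        }
        for node in root.iter("activity")
    }
```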
Appendix A: DoDAF Products list (from section 2.2 of Volume II)
Appendix B: DoDAF’s Architecture Products by Use (from section 2.2 of Volume II)
Bibliography
Air Force Institute of Technology (AFIT). Course Catalog 2003-2004. http://www.afit.edu/information/catalogs/catalog%2003-04.pdf
AOC Weapon System Block 10.1 Architecture, Version 2.0. The MITRE Corporation,
Hampton, VA. 1 October 2003.
Cares, Jeffrey. The Use of Agent-Based Models in Military Concept Development, 2002 Winter Simulation Conference.
Dam, Steven; Willis, James. Defensible C4ISR Architectures Require the Application of Good System Engineering Practice. Systems and Proposal Engineering Company, Marshall VA, 2002.
Dandashi, Fatma. DoD Architecture Framework Overview. Briefing slides, ASD (NII) DoD CIO, October 2003.
Davis, Paul K. Analytic Architecture for Capabilities-Based Planning, Mission-System
Analysis, and Transformation. RAND, 2000.
Department of the Air Force. Air Force Policy on Enterprise Architecting. Washington: HQ USAF, 6 August 2002.
Department of the Air Force. Air Force Enterprise Architecture Framework. AF Chief Architect’s Office, 6 June 2003.
Dickerson, Charles; Soules, Steve. Using Architecture Analysis for Mission Capability Acquisition, 2000.
Department of Defense. DoD Business Transformation Brief. Washington: 2003. http://dod5000.dau.mil/DOCS/V5DoDBusinessTransformationBrief.ppt.
Department of Defense, USD (A&T) Memorandum: Strategic Direction for a DoD Architecture Framework. 23 February 1998.
Department of Defense. C4ISR Architecture Framework Version 2.0. Washington: December 1997
Department of Defense. Procedures for Interoperability and Supportability of Information Technology (IT) and National Security Systems (NSS). DoD Instruction 4630.8. Washington: 2 May 2002
Department of Defense. The Defense Acquisition System. DoD Directive 5000.1. Washington: May 12, 2003
Department of Defense. Operation of the Defense Acquisition System. DoD Directive
5000.2. Washington: May 12, 2003
DeStefano, Gregory V. Agent Based Simulation SEAS Evaluation of DoDAF
Architecture. MS thesis, AFIT/GOR/ENS/04M-05. School of Engineering and Management, Air Force Institute of Technology (AU), Wright-Patterson AFB OH, March 2004
Fishwick, Paul A. Using XML for Simulation Modeling, Proceedings of the 2002 Winter
Simulation Conference.
Federal Information Processing Standards (FIPS) Publications. http://www.itl.nist.gov/fipspubs/by-num.htm
Griffin, Allison; Lacetera, Joe; Tolk, Andreas. C4ISR/Sim Technical Reference Model Study Group Final Report, 2003
Gonzales, Dan; Moore, Lou; Pernin, Chris; Matonick, David; Dreyer, Paul. Assessing the Value of Information Superiority for Ground Forces – Proof of Concept, RAND, 2001
Headquarters US Air Force. Architecture Course. Class handouts. Arlington, VA, August 2003
Handley, Holly; Zaidi, Zainab; Levis, Alexander. The Use of Simulation Models in Model Driven Experimentation, System Architectures Laboratory, C3I Center, George Mason University, 2000
IEEE STD 1471-2000
Levis, Alexander; Wagenhals, Lee. C4ISR Architectures: I. Developing a Process for C4ISR Architecture Design, System Architectures Laboratory, C3I Center, MSN 4D2, George Mason University, July 2000
Levis, Alexander. Class handouts, SENG 640, Systems Architecture. Center For Systems Engineering, Air Force Institute of Technology, Wright-Patterson AFB OH, Winter Quarter, 2003.
Popkin Software. Class handouts, System Architect C4ISR Course. Instructed by Preston
Rogers, 9-13 September, 2003.
Maier, Mark; Rechtin, Eberhardt. The Art of Systems Architecting, Second Edition, CRC Press, Boca Raton, 2002
99
Mayer, Richard; et al. Information Integration for Concurrent Engineering (IICE)
IDEF3 Process Description Capture Method Report. Knowledge Based Systems Incorporated, College Station, TX, September 1995.
Nimz, Arthur. The C4ISR Architecture Framework and its Impact on the System Engineering Process. Presentation Slides – INCOSE Delaware Valley Chapter, November 2000.
Pawlowski, Tom, Paul Barr, and Steve Ring. Executable Architecture Methodology for Analysis. The MITRE Corporation, September, 2003.
Rechtin, Eberhardt. Systems Architecting, Creating and Building Complex Systems, Prentice-Hall, Inc., New Jersey, 1991
Ring, Steve. Activity-Based Methodology for Integrated Architectures. Presentation Slides – the MITRE Corporation, September 2003 Sowell, Kathie. The C4ISR Architecture Framework: History, Status, and Plans for
Evolution. The MITRE Corporation, McLean, Virginia, 2000 Surer, Scott. The MITRE Corporation, Bedford MA. Personal Correspondence.
December 2003. TCT 2005 Architecture, Version 1.0. The MITRE Corporation, Hampton, VA. June
2003. Tolk, Andreas; Hieb, Michael. Building & Integrating M&S Components into C4ISR
Systems for Suporting Future Military Operations, Position paper for the 2003 International Conference on Grand Challenges for Modeling and Simulation, August, 2002
Tighe, Thomas R. Strategic Effects of Airpower and Complex Adaptive Agents: An
Initial Investigation. MS thesis, AFIT/GOA/ENS/99M-09. School of Engineering and Management, Air Force Institute of Technology (AU), Wright-Patterson AFB OH, March 1999
Klügl, Franziska; Oechslein, Christoph; Puppe, Frank; Dornhaus, Anna. Multi-Agent
Modeling in Comparison to Standard Modelling, AIS’2002, F.J Barros and N. Giambiasi (Eds.) SCS Publishing House, pp 105-110, 2002
Kuck, Inara. Warfare Simulation: Status and Issues for Space – An addendum, Air
Force Research Laboratory Directed Energy Directorate, April 2003
100
Vittori, Jay. The MITRE Corporation, Hampton VA. Personal Correspondence. December 2003.
Wagenhals, Lee; Shin, Insub; Kim, Daesik; Levis, Alexander. C4ISR Architectures: II.
A Structured Analysis Approach for Architecture Design, System Architectures Laboratory, C3I Center, MSN 4D2, George Mason University, July 2000
Weber, Robert H. The Aerospace Corporation, El Segundo CA. Personal
Correspondence. June 2003 – March 2004. Wohlstetter, Albert. Theory and Opposed-System Design, RAND D(L)-16001-1, January
1968 Zachman, John. Concepts of the Framework for Enterprise Architecture, Zachman
International, 1997
Vita
Captain Andrew Zinn graduated from the University of Portland in 1999 with a
B.S. in Mechanical Engineering. Lt Zinn participated in AFROTC and competed in the
SAE Mini Baja competition while at UP. Upon his commissioning, Lt Zinn was assigned
to the Aeronautical Systems Center at WPAFB as a developmental engineer. While at
ASC, he oversaw the testing, delivery, and sustainment of six types of aircrew training
devices (flight simulators) for the T-6A Texan II primary trainer. In August 2002, he
entered the Graduate School of Engineering and Management, Air Force Institute of
Technology. Upon graduation, he will be assigned to the Space and Missile Center’s
Transformation Directorate at Los Angeles AFB, CA.