Background
• DISCOM is a tool to record, play back, and print Distributed Interactive Simulation (DIS) PDUs -- UDP datagrams broadcast on the network
• DISCOM was rewritten from a structured approach to an Object-Oriented approach a year ago.
• DISCOM presents a unique opportunity to analyze quality attributes from both a structured approach and an object-oriented approach
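Since the slides describe PDUs as raw UDP datagrams, a minimal decoding sketch may help ground the discussion. It assumes the standard 12-byte IEEE 1278.1 PDU header layout; the struct and function names are illustrative, not DISCOM's actual code.

```cpp
#include <cstdint>

// Minimal sketch of the 12-byte DIS PDU header (IEEE 1278.1).
// Field names are illustrative, not DISCOM's actual structures.
struct PduHeader {
    uint8_t  protocolVersion;
    uint8_t  exerciseId;
    uint8_t  pduType;        // e.g. 1 = Entity State, 11 = Create Entity
    uint8_t  protocolFamily;
    uint32_t timestamp;
    uint16_t length;         // total PDU length in bytes
    uint16_t padding;
};

// Decode a header from a raw UDP payload. Multi-byte fields are
// big-endian on the wire; a hand-rolled read avoids ntohl/ntohs.
PduHeader decodeHeader(const uint8_t* buf) {
    PduHeader h;
    h.protocolVersion = buf[0];
    h.exerciseId      = buf[1];
    h.pduType         = buf[2];
    h.protocolFamily  = buf[3];
    h.timestamp = (uint32_t(buf[4]) << 24) | (uint32_t(buf[5]) << 16) |
                  (uint32_t(buf[6]) << 8)  |  uint32_t(buf[7]);
    h.length  = uint16_t((buf[8] << 8) | buf[9]);
    h.padding = uint16_t((buf[10] << 8) | buf[11]);
    return h;
}
```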
DISCOM Explanation
Quality Attributes to Analyze
• Performance/Efficiency
• Maintainability
• Reliability
• Testability
• Reusability
Performance/Efficiency
Definition:
• The response time, utilization, and throughput behavior of the system, as well as fulfilling its purpose without waste of resources
Hypothesis:
• Structural approach is marginally faster
Measurement Approach:
• Profile both copies with the VTune Performance Analyzer using a log file
• Analyze cost of dynamic allocation
• Observe memory usage
VTune Performance Analyzer
VTune Results
Module              Procedural Time (μs)   OO Time (μs)   Speedup
Packet Processing   1,479,967              5,842,532      -3.95X
File Writing        277,978                573,210        -2.06X
NetworkListener     261,850                425,221        -1.62X
Graphics            13,168,507             20,231,334     -1.54X
Other               7,681,627              11,563,422     -1.51X
Sleep Time          33,322,979             17,887,733     N/A
Main                56,192,908             56,523,452     N/A
Total Exec          22,869,929             38,635,719     -1.69X

Table 1. Performance Metrics
1-minute playback file consisting of ~10,000 packets / 59,000 PDUs
With more effort…
The cost of dynamic allocation, C, is given by:

C = N × (s_c + s_d) × DEPTH

where N is the number of calls, DEPTH is the number of contained objects that must be created when the class is created, and s_c and s_d are the service times to create and to destroy the object, respectively.
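As a sanity check on the allocation-cost formula, the arithmetic C = N × (s_c + s_d) × DEPTH can be expressed directly; all input values below are illustrative, not measured DISCOM times.

```cpp
// C = N * (s_c + s_d) * DEPTH -- cost of dynamic allocation.
// All inputs are illustrative, not measured DISCOM values.
double allocationCost(long n, double sCreate, double sDestroy, int depth) {
    return n * (sCreate + sDestroy) * depth;
}
```

For example, 59,000 PDU allocations at hypothetical service times of 0.5 μs to create and 0.25 μs to destroy, with 4 contained objects per class, would cost 177,000 μs.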
The cost of context switching, C, is given by:

C = N × (t_w + t_u) × DEPTH

where N is the number of calls, DEPTH is the number of contained objects that must be created when the class is created, and t_w and t_u are the winding and unwinding times to push to and pop from the call stack, respectively.
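The call-overhead formula C = N × (t_w + t_u) × DEPTH has the same shape and can be checked the same way; again, the numbers are illustrative, not measured.

```cpp
// C = N * (t_w + t_u) * DEPTH -- cost of call-stack winding/unwinding.
// All inputs are illustrative, not measured DISCOM values.
double callCost(long n, double tWind, double tUnwind, int depth) {
    return n * (tWind + tUnwind) * depth;
}
```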
Efficiency Metrics
Statistic                        Procedural   OO         Factor
Memory Median                    10,402K      13,650K    -1.31X
Extended Duration Memory Usage   10,402K      24,232K*   -2.33X

Table 2. Efficiency Metrics
* May be indicative of a memory leak
1-minute playback file consisting of ~10,000 packets / 59,000 PDUs; Extended = 1-minute playback looped 10x
Performance/Efficiency Conclusion
• Performance:
– Close to a 2X slow-down in the OO code
• Efficiency:
– OO requires more memory
– Probably even more so in a pure-OO language (like Java), where everything must be instantiated and there is no concept of pointers
– More prone to hazards (excessive dynamic allocation of objects)
Maintainability/Modifiability
Definition:
• The extent to which software facilitates updates to satisfy new requirements
Hypothesis:
• Object-oriented approach is more easily maintainable
Measurement Approach:
• Track resolution time for Discrepancy Reports
• Track change time for Change Requests
• Track prevalence of changes from affected-file listings
• Measure cyclomatic complexity
Cyclomatic Complexity
• Cyclomatic Complexity directly measures the number of linearly independent paths through a program's source code.
• Equivalent to CFG paths (ifs, loops, etc.)
• WMC is an OO metric that rolls up the CCs within a class
• Used a tool called Code Analyzer Pro to measure CC, WMC, and SLOC
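As a reminder of how these numbers are counted: CC is one plus the number of decision points in a function, and WMC sums the CCs of a class's methods. The function below is illustrative, not DISCOM code.

```cpp
// Cyclomatic complexity: CC = number of decision points + 1.
// This function has 3 decisions (one loop, two ifs), so CC = 4.
int classify(const int* values, int n) {
    int score = 0;
    for (int i = 0; i < n; ++i) {   // decision 1
        if (values[i] > 0)          // decision 2
            ++score;
        else if (values[i] < 0)     // decision 3
            --score;
    }
    return score;
}

// WMC (Weighted Methods per Class) is the sum of the CCs of a class's
// methods; for a class holding only classify(), WMC = 4.
```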
Code Analyzer Pro
Modifiability Metrics

Module   Paradigm     Max CC   Max WMC   SLOC
UI       Procedural   92       N/A       8,643
UI       OO           14       93        5,235
DIS      Procedural   147      N/A       42,423
DIS      OO           17       185       23,563
Total    Procedural   147      N/A       51,066
Total    OO           17       185       28,798

Table 3. Modifiability Metrics
Modifiability Metrics (Cont.)
Paradigm     # DRs   Total Time (Hrs.)   Avg Resolution Time
Procedural   24      94.5                3.94 Hours
OO           11      29.2                2.65 Hours

Table 4. Discrepancy Metrics

Paradigm     # CRs   Total Time (Hrs.)   Avg Change Time
Procedural   29      204.0               7.03 Hours
OO           19      98.2                5.17 Hours

Table 5. Change Metrics
Modifiability Conclusion
• The OO paradigm appears more understandable, by way of lower method cyclomatic complexity and SLOC count
• Change Request / Discrepancy Report tracking data seems to support this conclusion
Reliability
Definition:
• It can be expected to perform its intended functions satisfactorily
Hypothesis:
• Reliability growth is steeper with the OO design methodology
Measurement Approach:
• Track Discrepancy Reports
• Reliability modeling of event-simulation log files using CASRE to generate reliability profiles
Reliability Growth Modeling Setbacks
• Reliability is generally very high, with many hidden problems that go unrecorded.
• Furthermore, I could not find a meaningful unclassified data set for the procedural paradigm; while I could have evaluated OO reliability growth, there would have been nothing to compare it to.
Reliability Metrics
Paradigm     # DRs   # Runs   Avg Run Time (Min.)   # DRs/Minute
Procedural   8       15       52.4                  0.01018
OO           5       17       53.9                  0.00055

Table 6. Reliability Metrics

Paradigm     # DRs   Avg Severity (1-5)   Avg Resolution Time (Hrs. from DR Entry)
Procedural   8       3.2                  73.5
OO           5       2.2                  34.2

Table 7. Reliability Growth Metrics
Reliability Conclusion
• OO exhibited higher reliability growth after integration deployment
• Not conclusive enough to support a clear determination on the Reliability attribute
• Exception handling (try/catch) may help with both discrepancy resolution and severity
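To make the try/catch point concrete, a hedged sketch of how exception handling can turn a malformed datagram from a crash into a skippable, loggable event; the parsing logic and names are hypothetical, not DISCOM's.

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative: a malformed datagram raises an exception instead of
// corrupting downstream state.
int parsePduType(const std::string& datagram) {
    if (datagram.size() < 12)
        throw std::runtime_error("datagram shorter than a PDU header");
    return static_cast<unsigned char>(datagram[2]);
}

// The playback loop survives bad input: a bad datagram is skipped,
// not fatal, which lowers discrepancy severity.
int playback(const std::vector<std::string>& datagrams) {
    int processed = 0;
    for (const auto& d : datagrams) {
        try {
            parsePduType(d);
            ++processed;
        } catch (const std::runtime_error&) {
            // would be logged as a low-severity discrepancy, not a crash
        }
    }
    return processed;
}
```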
Testability
Definition:
• The extent to which software facilitates the establishment of acceptance criteria and supports evaluation of its performance
Hypothesis:
• From a class-level unit-testing standpoint, the OO design prevails; from a system level or for debugging issues, the structural design does
Measurement Approach:
• Measure cyclomatic complexity
• Perform unit testing and document the ease qualitatively
Cyclomatic Complexity Revisited
Module   Paradigm     Max CC   Max WMC   SLOC
UI       Procedural   92       N/A       8,643
UI       OO           14       93        5,235
DIS      Procedural   147      N/A       42,423
DIS      OO           17       185       23,563
Total    Procedural   147      N/A       51,066
Total    OO           17       185       28,798
Testability Qualitative Assessment
Paradigm     Unit Test Hooks                                    Debugging Ease
Procedural   HARD [4 unit-test hooks inserted -- challenging]   Moderate [difficult with a multiplicity of objects, easy with 1 object]
OO           EASY [10 unit-test hooks quickly inserted]         Moderate [due to several paths]
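What a class-level "unit test hook" can look like in the OO version, as a hypothetical sketch: because packet handling sits behind a class interface, a test can drive it directly without the network or the UI. The class and its members are illustrative, not DISCOM's actual code.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical packet-handling class; a test exercises it in isolation.
class PacketProcessor {
public:
    // Feed one raw datagram, as the network listener would.
    void onDatagram(const std::vector<uint8_t>& d) {
        if (d.size() >= 12) ++accepted_;  // at least a full PDU header
        else                ++rejected_;
    }
    int accepted() const { return accepted_; }
    int rejected() const { return rejected_; }
private:
    int accepted_ = 0;
    int rejected_ = 0;
};
```

The equivalent procedural hook typically requires stubbing the global state the function reads and writes, which is what made insertion "HARD" above.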
Testability Conclusion
• The object-oriented paradigm is better suited for testability:
– More logical coherence
– Fewer paths/more functions, for easier unit-test hook insertion
– Less cyclomatic complexity within functions
– Debugging ease is similar across paradigms
Reusability
Definition:
• The likelihood a segment of source code can be used again to add new functionality with slight or no modification
Hypothesis:
• The OO approach is more reusable
Measurement Approach:
• Observe opportunities available and opportunities taken for reuse at both the object level and the system level
Reusability Approach
• The first step of the evaluation was to survey both the reuse opportunities available and the reuse opportunities taken in the labs.
• Next, reusability was gauged by integrating a simplistic Create Entity PDU into both an application with an existing DIS interface and one without. This was done using the code from both the procedural application and the object-oriented application.
• Measurement was qualitative, primarily from a time perspective, as SLOC modified is not necessarily indicative of the ease of reuse.
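A hedged sketch of the new-application reuse case: an encapsulated PDU class carries its encoding knowledge with it, so it can be dropped into an application without an existing DIS interface. All names are hypothetical; only the big-endian site/application/entity identifier layout follows the DIS convention.

```cpp
#include <cstdint>
#include <initializer_list>
#include <vector>

// Illustrative reusable PDU class, not DISCOM's actual code.
class CreateEntityPdu {
public:
    CreateEntityPdu(uint16_t site, uint16_t app, uint16_t entity)
        : site_(site), app_(app), entity_(entity) {}

    // Serialize the entity identifier fields big-endian, as DIS requires;
    // a reusing application never needs to know the byte layout.
    std::vector<uint8_t> encodeEntityId() const {
        std::vector<uint8_t> out;
        for (uint16_t v : {site_, app_, entity_}) {
            out.push_back(uint8_t(v >> 8));
            out.push_back(uint8_t(v & 0xFF));
        }
        return out;
    }
private:
    uint16_t site_, app_, entity_;
};
```

Reusing the procedural equivalent means extracting the encoding function plus every global it touches, which is where the time difference showed up.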
Reusability Metrics
Paradigm     Reuse Ops Realized/Year   Difficulty (Existing App)   Difficulty (New App)
Procedural   0.2                       MED                         MED/EASY
OO           1                         HARD                        EASY

Table 8. Reusability Metrics
Reusability Conclusion
Existing Applications:
For applications with existing interfaces, reuse is moderately smoother if one begins with code from the procedural paradigm. This is due primarily, in my opinion, to ease of understandability: with object-oriented code, one has to understand two designs, both the source and the target, in order to adapt it to fit the needs.
New Applications:
For applications without existing interfaces, reuse is smoother if one begins with the object-oriented paradigm, due to the enhanced modularity and portability of the approach.
References
• Software Performance Antipatterns -- Connie U. Smith, Lloyd G. Williams
• Indicators of Structural Stability of Object-Oriented Designs: A Case Study -- Mahmoud O. Elish, David Rine
• Has Object-Oriented Programming Delivered? -- Greg Goth
• A Controlled Experiment in Maintenance Comparing Design Patterns to Simpler Solutions -- Lutz Prechelt, Walter F. Tichy
Questions/Comments