system testing in the avionics domain - elib.suub.uni...

434
System Testing in the Avionics Domain von Aliki Ott Dissertation zur Erlangung des Grades einer Doktorin der Ingenieurwissenschaften – Dr.-Ing. – Vorgelegt im Fachbereich 3 (Mathematik & Informatik) der Universit¨ at Bremen im Oktober 2007

Upload: others

Post on 26-Oct-2019

29 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

System Testing in the Avionics Domain

von Aliki Ott

Dissertation

zur Erlangung des Grades einer Doktorin derIngenieurwissenschaften

– Dr.-Ing. –

Vorgelegt im Fachbereich 3 (Mathematik & Informatik)der Universitat Bremenim Oktober 2007

Page 2: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Datum des Promotionskolloquiums: 18.12.2007

Gutachter: Prof. Dr. Jan Peleska (Universitat Bremen)Prof. Dr. Bernd Krieg-Bruckner (Universitat Bremen)

Page 3: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Zusammenfassung

In der Vergangenheit bestanden Flugzeugsysteme aus (weitgehend) unabhangigen Steuerelemen-ten und dazugehorigen Aktuatoren und Sensoren, die jeweils fur ihre spezifische Aufgabe im Sys-tem entwickelt wurden. Um bei neuen Flugzeugentwicklungen den gestiegenen Anforderungenund dem erweiterten Funktionsumfang gerecht werden zu konnen, basieren moderne Flugzeug-systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das eine Kombinationaus verschiedenen Komponenten zulasst: fur den nicht sicherheitskritischen Bereich konnen vor-handene, nicht fur die Luftfahrtindustrie entwickelte Komponenten verwendet werden; fur sicher-heitskritische Flugzeugfunktionen wird Integrated Modular Avionics (IMA)-Technologie einge-setzt. Um die IMA-Architektur modular, offen und sehr flexibel zu halten, basiert sie auf gemein-sam nutzbaren, standardisierten Plattformen (den IMA-Modulen) und standardisierten Netztech-nologien, die beide speziell fur die Anforderungen in der Luftfahrtelektronik – aber unabhangigvon den Aufgaben in spezifischen Flugzeugsystemen – entwickelt wurden. Fur das jeweilige Flug-zeugsystem bedeutet dies, dass vormals aufgabenspezifische Steuerelemente durch mit anderenSystemen gemeinsam genutzte IMA-Module ersetzt werden. Da diese Veranderungen auch Aus-wirkungen auf den gesamten Entwicklungs- und Prufprozess haben, mussen neue Ablaufe defi-niert und passende Methoden und Werkzeuge entwickelt werden.

Die vorliegende Arbeit betrachtet die oben skizzierte Entwicklung aus zwei Blickwinkeln: Zumeinen gibt der Systems-Engineering-Teil eine Ubersicht uber die Ablaufe und Methoden im Allge-meinen, wie sie bei der Entwicklung und dem Testen von Flugzeugsystemen Anwendung finden,und beschreibt zudem, wie sich die Technologien und die Prozesse – speziell im Bereich System-testen von Flugzeugen – verandert haben. Zum anderen vertiefen zwei Fallstudien die zuvor be-schriebenen Testaktivitaten in zwei neu identifizierten Testbereichen.

Im Systems-Engineering-Teil dieser Arbeit wird beschrieben, wie heutige Flugzeugsysteme ent-wickelt werden und wie sich dies mit der Entwicklung hin zu IMA-Technologie verandert hat. Da-bei werden jeweils zwei Perspektiven (die ”Systemsicht“ und die ”Prozesssicht“) gewahlt, um eineumfassende Betrachtung des Themas Systemtesten in Flugzeugen zu ermoglichen und außerdemdie konzeptuelle Basis fur die Fallstudien zu liefern. Fur die ”Systemsicht“ fasst die vorliegen-de Arbeit das Wissen uber Flugzeugsysteme bezuglich Architekturmodelle, Redundanzkonzepte,verwendete Typen von Plattformen und Netztechnologien zusammen und beschreibt, wie sich die-se verandert haben. Der Schwerpunkt liegt dabei auf IMA-basierten Systemen und den speziellenEigenschaften von IMA-Modulen. Fur die ”Prozesssicht“ werden die Entwicklungs- und Testakti-vitaten auf allen Ebenen (von einzelnen Komponenten bis hin zum Flugzeug als Ganzes) betrachtetund die Unterschiede aufgezeigt, die sich aus der Verwendung von unterschiedlichen Architektur-modellen oder der Nutzung verschiedenen Plattform ergeben. Fur IMA-basierte Systeme werdendabei zwei neue Testbereiche diskutiert, die beide unabhangig von spezifischen Flugzeugsyste-men sind: das Testen von einzelnen IMA-Plattformen und das Testen des Kommunikationsflussesin einem Netz aus IMA-Modulen.

Die Fallstudien im zweiten Teil dieser Arbeit vertiefen jeweils einen moglichen Ansatz furdiese beiden Testbereiche. Die erste Fallstudie beschreibt eine Testsuite fur das automatisier-te Testen von einzelnen IMA-Modulen ohne Anwendungssoftware. Die Testsuite ermoglicht es,konfigurations- und anwendungsunabhangig die Einhaltung des in der ARINC 653-Spezifikationerlaubten Verhaltens zu prufen und die Konfigurierbarkeit des IMA-Betriebssystems zu verifizie-ren. Besonders eingegangen wird in der vorliegenden Arbeit auf die Verwendung von generischenVorlagen fur Testspezifikationen, die nach der Instantiierung fur verschiedene (Test-)Konfigura-tionen als separate Testfalle ausgefuhrt werden konnen. Außerdem werden weitere Details zur

iii

Page 4: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

iv

praktischen Umsetzung diskutiert. Die Anwendung der Testsuite im Forschungsprojekt VICTO-RIA sowie die in der Arbeit vorgenommenen Untersuchungen bestatigen, dass das in der Testsuiteumgesetzte Vorgehen gut fur automatisierteHardware-in-the-loop-Tests von IMA-Plattformen ge-eignet ist und zudem signifikant dazu beitragt, eine – im Vergleich zum manuellen Testen – hohereTestabdeckung zu ermoglichen; ein Ergebnis von dem insbesondere auch bei Regressionstests pro-fitiert werden kann.

Die zweite Fallstudie stellt einen Ansatz zum Testen des Kommunikationsflusses in einem Netzaus konfigurierten IMA-Plattformen vor, der es gestattet, alle moglichen Anwendungsinteraktio-nen zu testen, die von der Spezifikation des IMA-Moduls sowie der Netzkonfiguration (d.h. ei-ner bestimmten Kombination von Modulkonfigurationen) erlaubt werden. Im Rahmen dieser Ar-beit wird dabei insbesondere ein Algorithmus entwickelt, der all diese Testfalle generieren kann.Es wird weiterhin untersucht, wie diese Testfalle (nach vorgegebenen Kriterien) priorisiert wer-den konnen, so dass die zur Verfugung stehende Testausfuhrungszeit fur die wichtigen Testfalleverwendet werden kann. Die in der Arbeit dargelegte Bewertung dieses Ansatzes bestatigt dieNotwendigkeit eines automatisierbaren Generierungsalgorithmus und zeigt dabei auch, dass nocheffektivere Priorisierungsfunktionen erforderlich sind – insbesondere fur realistische (d.h. rechtkomplexe) Netzkonfigurationen –, fur deren Ausgestaltung sie Hinweise gibt.

Page 5: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Abstract

Traditionally, avionics systems consisted of several independent or loosely coupled special-to-purpose controllers which were each developed by the respective system supplier. To deal withthe more demanding requirements and additional feature requests for new aircrafts, current avion-ics systems are based on the framework of integrated modular electronics which allows to usecommercial-of-the-shelf products for less critical functions and uses integrated modular avionics(IMA) technology for safety-critical avionics functions. This integrated avionics architecture ismodular, open and highly-flexible. It is based on common standardized platforms (the IMA mod-ules) and standardized aircraft networking technology. For the avionics systems, this means thatmost special-to-purpose controllers are replaced by shared IMA platforms which are developedindependently of the respective systems. Since this evolution affects the entire development, newprocesses and means for the development and the verification and validation of avionics systemsare required.This thesis addresses this evolution from two angles: Firstly, a systems engineering part introducesthe general processes and approaches to be considered when developing and testing avionics sys-tems and describes how the technology and the development processes have evolved, particularlywith respect to system testing in avionics. Secondly, two case studies detail the described generaltesting activities during system integration for two newly identified testing areas.In the systems engineering part, this thesis describes how avionics systems are currently developedand discusses the effects of evolving towards integrated modular avionics. To provide an encom-passing reflection of system testing in avionics and deliver the conceptual foundation for the casestudies, both are addressed from a “system point of view” and a “process point of view”. For theformer, this thesis assembles the knowledge about avionics systems with respect to architecturemodels, redundancy concepts, used platform types, and common aircraft networking technologiesand describes how these have evolved. The focus is on IMA-based systems and the specific char-acteristics of IMA platforms. For the latter, this thesis introduces the development and testingactivities from aircraft level to equipment level and elaborates on the differences arising from thechosen avionics architecture model and the used platform types. For IMA-based systems, it re-veals two new testing areas which are both independent of specific avionics systems: Testing ofsingle IMA platforms and testing the communication flow in a network of IMA platforms.One possible approach for each of these two testing areas is addressed by case studies in the secondpart of this thesis. The first case study describes a test suite for automated testing of bare IMAplatforms which allows to verify the behavioral compliance with the ARINC 653 specification andwith the configurability of the IMAmodule’s operating system, independently of a specific moduleconfiguration or the behavior of specific system applications. In particular, this thesis elaborates onthe test suite’s approach of providing generic test specification templates which can be instantiatedfor many different module configurations and discusses the further implementation details. 
Theimplementation experience and the final assessment confirm that the approach is well suited forautomated hardware-in-the-loop testing of bare IMA platforms and contributes to a significantlyhigher test coverage (compared to manual testing within the same execution time) – also for eachround of regression testing.The second case study presents an approach for testing the communication flow in a network ofconfigured IMA platforms in order to verify that all interactions allowed by the IMA platformspecification and the respective combination of IMA platform configurations are possible. Thethesis elaborates on an algorithm for test data generation and investigates means for prioritizingthose test cases to limit the test execution time. The final assessment of the approach confirmsthe necessity of an automatable test case generation algorithm but also shows that more effectivepriorization functions are essential for realistic, i.e., very complex, networks of IMA platforms.

v

Page 6: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

vi

Page 7: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Preface

When developing a new aircraft, the airframer encounters various – partly contradictory – demandsand requirements: The airlines expect new features and other state-of-the-art equipment for im-proved passenger comfort and increased flight crew support but also to reduce life cycle cost bygetting an economic and maintainable aircraft for reasonable acquisition cost. The regulation au-thorities expect that the aircraft and all its systems and components are developed and tested incompliance with the certification requirements. The airframer wants to reduce development costand to shorten the time to market without affecting the aircraft’s quality and safety. In addition,there are many different, mostly safety-critical and complex systems on-board an aircraft whichare required to perform and control the functionality needed for a safe start, flight and landing aswell as for passenger comfort and leisure. While some of these requirements can be dealt with byimproving existing components (e.g., more efficient engines, reduced component weight), otherscan only be handled by a combination of means. This includes using new types of components andnetworking technology, benefiting from technologies and products developed for other markets,introducing a new aircraft architecture, and improving and adapting the processes and methods forsystem development as well as verification, validation, and certification.

Although this evolution can be observed for each new aircraft development, three major evolutionphases can be distinguished for the development of avionics systems: In the beginning, aircraftsused a network of several separate and independent controllers which were each only connected tothe respective system’s sensors and actuators. The controllers were special-to-purpose platformswhich used analog communication links for connecting to the peripheral equipment. This indepen-dent avionics architecture model evolved to a so-called federated avionics architecture model werethe controllers were loosely coupled for information exchange and some analog communicationlinks were replaced with digital ones. In the latest phase, to support a more flexible architecture,aircraft development is currently based on the framework of integrated modular electronics (IME)which allows to use commercial-off-the-shelf (COTS) products for less critical functions and oth-erwise uses Integrated Modular Avionics (IMA) technology for safety-critical avionics functions.In the IMA architecture model, most special-to-purpose controllers are replaced by common stan-dardized platforms (so-called IMA modules) that usually host applications of several systems. Forinter-system communication, either IMA module-internal communication or standardized aircraftnetworking technology with guaranteed bandwidth and high availability is used. The advantagesof IMA are manifold: Shared computing resources need less space allocation, reduce the overallweight and might even lower the power consumption. State-of-the-art standardized communi-cation technology can reduce the required wiring while still ensuring guaranteed bandwidth andhigh availability and providing a well-defined application-layer communication protocol. Stan-dardized shared platforms can be developed independently of the systems which avoids redundantplatform development effort and costs for each system. In addition, maintenance for the airlines issimplified because only few different types of platforms have to be held available.

Despite these advantages, the use of IMA also has a major impact on the development and verifica-tion and validation processes and means: For the development process, this includes new aircraftarchitecture concepts, new approaches for functional grouping of aircraft functions into systems,

vii

Page 8: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

viii PREFACE

new responsibilities for developing the avionics platforms and the system software, and new meth-ods applied for design and requirements definition. For the verification and validation process, themain impact comes from the use of shared computing resources and the new allocation of respon-sibilities within the development process. The former requires the development of new means forensuring the partitioning (i.e., the isolation of the systems hosted on the same platform) and theavailability of the specified communication capacity. The latter requires to develop new processes,methods, and tools for handling the partial shift of the hardware/software integration testing foreach subsystem from the supplier side to the integrator (i.e., the aircraft manufacturer). Moreover,in case of problems and system malfunction, new processes and methods have to be available tocomplement the fault localization because problems can be caused by “components” delivered bydifferent suppliers: For example, by the shared resources’ hardware or software, the allocation ofresources, the system software, the system design, or the network configuration.

Objective

This thesis addresses the aforementioned topics in two ways: Firstly, it describes – from a sys-tems engineering point of view – the development and the verification and validation (V&V) ofavionics systems and addresses in more detail the interdependencies of IMA-based avionics sys-tems. Moreover, it introduces the new processes and methods to be considered when developingand testing systems based on integrated modular avionics. Secondly, the thesis details two testapproaches that support fault localization and increase the confidence in IMA-based systems bytesting the IMA platform and the configured communication network, respectively, independentlyof any system-specific application software. This means that this thesis combines one system en-gineering part which compiles information about processes and avionics systems and one part withdetailed considerations concerning the system test approach for avionics systems using integratedmodular avionics. Thus, the systems engineering part with its overview provides the requiredfoundation to understand and evaluate the pros and cons of the introduced new testing approaches.The work presented in this thesis has partly been carried out in the research projects VICTORIAand KATO.

Overview

This thesis is divided into four parts as it is depicted in Fig. 1: Part I provides an introductionto avionics systems and presents the processes and methods for system development and testing.This introduction is then further detailed in Part II for avionics systems using integrated modularavionics. Part III presents as case studies the details of the two testing approaches, the approachfor automated bare IMA module testing and the approach for testing a network of IMA platforms.Part IV discusses the lessons learned, evaluates the presented approaches and concludes with anoverall summary. The first two parts pertain mainly to the systems engineering part of this thesiswhich creates a consistent terminology, coherent process descriptions and the conceptual basis forfurther detailing testing approaches in avionics. Thus, Part I and II provide the required foundationto understand the pros and cons of the new testing approaches addressed in the latter two parts.

In the introduction to avionics systems (Part I), Chapter 1 addresses the general developmentprocess for avionics systems and also discusses the impact of the chosen architecture and theused platforms. For this purpose, the chapter introduces the characteristics of avionics systems,compares the main architecture models and basic redundancy concepts, provides a comparisonof different platform types, and briefly discusses different aircraft networking technologies. Thischapter is complemented by the verification and validation considerations in Chapter 2. Afterbriefly subsuming methods for verification and validation, this chapter focuses mainly on testing

Page 9: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

ix

Chapter 8Evaluation of the Test Suite forfor Automated IMA Platform Testing

Part II – Avionics Systems using Integrated Modular Avionics

Part I – Introduction to Avionics Systems

Chapter 1 Chapter 2

Chapter 5Chapter 4Chapter 3

Chapter 9

Chapter 10

Testing a Network of IMA PlatformsEvaluation of the Approach for

Approach for Testing a Network ofAutomated Bare IMA Module Testing

Conclusion

Introduction to IMA Platforms System Architectures using IMA Testing of Systems based on IMA

Testing of Avionics SystemsDevelopment of Avionics Systems

Part IV – Lessons Learned

Part III – Case Studies

Chapter 6 Chapter 7

IMA Platforms

Figure 1: Overview of the Thesis

and introduces the general approach for aircraft integration and the respective testing activities.For testing, the test design documents and the selection of test cases and test procedures are anessential part. In particular, the specification techniques used in the test design documents lead todifferent restrictions regarding test data generation, test execution, and means for test evaluationand error diagnosis. Since this chapter focuses on testing as one of the most promising implemen-tation verification methods that allow extensive automation, different specification techniques areassessed with respect to automated testing.Based on the introduction in the first part, Part II provides further details for avionics systemsusing integrated modular avionics (IMA). Chapter 3 introduces IMA platforms as the key part ofIMA-based system architectures and describes their hardware characteristics as well as the IMAoperating system API specified in the ARINC 653 specification. Since IMA platforms shall beusable for many different purposes and, therefore, have to be highly configurable by nature, thischapter also presents the IMA module configuration tables as they are used in this thesis anddiscusses configuration responsibilities and configuration change management and their impacton developing application software for IMA platforms. To provide a deeper understanding onthe usage of IMA platforms, Chapter 4 then presents the characteristics of IMA-based systemarchitectures. With these details at hand, the required change of the system engineering processes– and particular of the system integration and testing approach – is further detailed in Chapter 5.In addition, the chapter elaborates on two newly introduced testing areas, testing single IMAplatforms and testing a network of IMA platforms, and discusses the requirements of test benchesfor such tests.Part III comprises two case studies that address the aforementioned newly introduced testing areas.Chapter 6 describes a test suite for automated testing of bare IMA platforms which allows to verifythe behavioral compliance with the ARINC 653 specification as well as the configurability of theIMA module’s operating system, independently of a specific module configuration or the behaviorof specific system applications. In particular, the chapter elaborates on the test suite’s approach ofproviding generic test specification templates which can be instantiated for many different moduleconfigurations and discusses the details for implementing the approach. For this purpose, it alsodescribes the environment for automated test execution and test evaluation.Chapter 7 presents an approach for testing the communication flow in a network of IMA plat-

Page 10: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

x PREFACE

forms in order to verify that all interactions (as allowed by the IMA platform specification and therespective combination of IMA platform configurations) are possible. The chapter elaborates onan algorithm for test data generation and also investigates means for handling (i.e., reducing orsorting) the generated test cases if the generation results in too many test cases to be executed inan automated way within the provided testing time. For assessing the characteristics of the gener-ation algorithm, a prototype implementation and a set of example networks have been developed.Furthermore, the chapter discusses implementation aspects of the approach concerning automatedtest execution and test evaluation.Part IV elaborates on the lessons learned and evaluates the case studies presented in the previouspart. Chapter 8 critically analyzes the approach for automated bare IMA platform testing andassesses the presented test suite implementation. It also compares the approach with the ARINC653 Conformity Test Specification. For the second case study, Chapter 9 assesses the describedapproach for testing a network of IMA platforms and discusses possible improvements. Finally,Chapter 10 contains the conclusion from the findings obtained in this thesis and discusses futurework.This thesis is based on an extensive literature study. Throughout the thesis, the main documentsused within each section and suggestions for further reading are summarized in a references para-graph at the end of the respective section.

Contributions

This thesis – particularly the systems engineering part – creates a consistent terminology and co-herent process descriptions and, where necessary, adapts the system and process descriptions tothe IMA concept. To support the understanding of processes and workflows, graphical abstrac-tions accompany the textual descriptions. This thesis is compiled using various publications abouttesting, avionic systems, systems engineering, and software engineering for real-time systems,among others. The respective documents include standards, guidelines, recommended practices,books, papers and conference material. Each of them usually focuses mainly on one single aspect– but not necessarily in a coherent way with respect to terminology and process descriptions inother publications. Moreover, many publications are focusing on the most popular examples (e.g.,the flight control system) or describe avionics systems and development and V&V processes be-fore IMA was in use. This thesis combines the coherent documentation of existing work with owncontributions (besides the summary) to approaches for testing IMA-based systems.The first case study about automated bare IMA module testing describes the (implementation)concepts of a test suite that has been designed and implemented to verify the compliance of theIMA platform in an automated way. Using an (almost) fully automated test suite allows to achievea higher degree of test coverage (compared to manual testing within the same execution duration),eliminates many error-prone and time-consuming tasks, and enables to repeat the test execution(e.g., for regression testing) with only minor manual interactions. The designed test suite usesgeneric parts wherever possible. This comprises a generic test application which behaves as com-manded by external test specifications, test specification templates that abstract from concrete con-figuration data and can be instantiated for several (appropriate) test configurations, the rule-basedgeneration of IMA platform test configurations, and a tool chain that instantiates the particular testprocedures of the test suite and supports preparation of the IMA platform under test. The work pre-sented in this thesis has partly been carried out in the research project VICTORIA. In this project,the author and her colleagues at the research group Operating Systems and Distributed Systemsat Universitat Bremen (Department for Mathematics /Computer Science) have cooperated withthe other project partners, particularly, Verified Systems GmbH and Airbus Deutschland GmbH.Thus, several people have contributed to the respective findings and results. Concerning this the-sis, the author has significantly contributed to the design and implementation of the described test

Page 11: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xi

suite which can be embedded in the available testing environment (i.e., the tool chain and the testsystem used within the context of the VICTORIA project). This thesis compiles a comprehensivedescription of the implementation concepts, problem solutions and achievements. Moreover, itassesses the approach and the implemented test suite and compares it with other approaches.

This thesis also includes a second case study that describes an approach for testing the communi-cation flow in a network of configured IMA platforms. This approach addresses a test step that hasbeen newly introduced for IMA-based system architectures to support the integration of severalsystems which share the same set of IMA platforms. Performing this network integration testing,the multi-system integrator is able to show – independently of the real systems behavior – thatall inter-system communication is possible (within the range of the module configurations and themodule’s specification) and that the network provides its characteristics under any legal operationconditions (also in case of maximum load). The approach developed in this thesis has a strongfocus on automation: In addition to discussing considerations for automated test preparation, testexecution, and test evaluation, it introduces an algorithm that generates for the specific networkconfiguration all possible communication behavior schedules, i.e., all possible test cases. More-over, this thesis also suggests means for focusing the test range in an automated way. As a result,the otherwise too many test cases are reduced and the remaining ones prioritized according towell-defined rules so that their execution within the given testing time becomes more realistic.

Page 12: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xii

Page 13: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xiii

Acknowledgments

This thesis reflects the research I have carried out from May 2001 to October 2005 in the researchgroup Operating Systems and Distributed Systems headed by Prof. Dr. Jan Peleska at UniversitatBremen (Department for Mathematics /Computer Science) and has been finished in the last yearswhile I was already working in Helsinki (Finland). This thesis would not have been possiblewithout the support and (direct and indirect) contributions from many people and I would liketo express my gratitude to all of them for their help, their advice, their encouragement and theirpatience.

First and foremost, I would like to thank my supervisor Prof. Dr. Jan Peleska for his guidanceand good feedback. The research carried out at his research group and the teaching courses that Icould actively participate in provided an excellent environment for my professional and personaldevelopment. In particular, I am thankful for the possibility to gain detailed insight into the fieldof safety critical system development, verification, validation and testing (with focus on testingof safety-critical systems in the avionics domain). I also appreciate that I got the chance to applymy acquired knowledge in “real world” projects at Verified Systems International GmbH. Manythanks also to Dr. Cornelia Zahlten for this opportunity.

Furthermore, I would like to thank Prof. Dr. Bernd Krieg-Bruckner for taking the role of the secondreviewer, quite a challenge given the volume of this thesis.

I am glad that I could hold my defense in December 2007, thanks to both reviewers providing theirstatements very quickly and finding a time slot on short notice even in the always-busy end-of-yearperiod.

I also would like to thank all my former colleagues within the research group Operating Systemsand Distributed Systems for the good collaboration and inspiring conceptual discussions. In par-ticular, I owe many thanks to my office mate and friend Dr. Stefan Bisanz for the countless hourswe have spent together – not only in our nightly sessions – in discussing of concepts, algorithms,and approaches, designing practical assignments, sharing other teaching-related responsibilities,and sitting next to each other writing our theses.

The work presented in this thesis – particularly in the first case study – has partly been car-ried out in the research projects VICTORIA and KATO. In these projects, my colleagues and Ihave closely cooperated with the other project partners, particularly, Verified Systems GmbH andAirbus Deutschland GmbH. I have appreciated the good collaboration in conceptual discussionsand in our implementation efforts which is both reflected in the resulting test suite implementa-tion. In particular, I would like to thank Christof Efkemann, Hans-Jurgen Ficker, Dr. Oliver Meyer,Stefan Walsen (Verified Systems International GmbH) and Dirk Meyer (Universitat Bremen) andthe colleagues at Airbus Deutschland GmbH which participated in the VICTORIA project.

I also would like to thank Prof. Dr. Ina Schieferdecker (TU Berlin) for her comments on thegeneration algorithm in the second case study.

The work on this thesis – as it has been started in Bremen and continued in the last years herein Finland – had influenced my life for quite a long time and I am grateful for the support of myhusband. I would like to thank Jorg Ott for (proof-)reading and commenting on my thesis and forhis advice and encouragement.

Finally, I would like to thank my parents for encouraging me over the last years and, particularly,for supporting my decision to study computer science at a time when this was not so “hip”.

Espoo, December 2007

Page 14: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xiv

Page 15: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Contents

Preface vii

Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

I Introduction to Avionics Systems 1

1 Development of Avionics Systems 3

1.1 Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2 Characterization of Avionics Systems . . . . . . . . . . . . . . . . . . . . . . . 7

1.3 Avionics Architecture Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.4 Redundancy Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

1.5 Categorization of Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.5.1 COTS Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

1.5.2 Special-to-purpose Platforms . . . . . . . . . . . . . . . . . . . . . . . . 21

1.5.3 IMA Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

1.5.4 Peripheral Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

1.5.5 Operating System Requirements . . . . . . . . . . . . . . . . . . . . . . 23

1.6 Categorization of Aircraft Networking Technology . . . . . . . . . . . . . . . . 24

1.6.1 ARINC 429 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

1.6.2 ARINC 629 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

1.6.3 ARINC 664 and AFDX . . . . . . . . . . . . . . . . . . . . . . . . . . 28

1.6.4 Controller Area Network (CAN) . . . . . . . . . . . . . . . . . . . . . . 31

1.6.5 Application-specific Busses and Protocols . . . . . . . . . . . . . . . . . 32

1.7 Development Process – Architecture and Platform Dependance . . . . . . . . . . 33

xv

Page 16: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xvi CONTENTS

2 Testing of Avionics Systems 35

2.1 Overview of Verification and Validation Methods . . . . . . . . . . . . . . . . . 36

2.2 System Test Approach for Avionics Systems . . . . . . . . . . . . . . . . . . . . 39

2.2.1 Platform Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

2.2.2 Application Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

2.2.3 System Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

2.2.4 Multi-System Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 43

2.2.5 System Integration Testing . . . . . . . . . . . . . . . . . . . . . . . . . 43

2.2.6 Ground Tests and Flight Tests . . . . . . . . . . . . . . . . . . . . . . . 43

2.3 Test Design Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

2.3.1 Test Case and Test Procedure Selection . . . . . . . . . . . . . . . . . . 44

2.3.2 Specification Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . 46

2.4 Test Data Generation, Test Execution and Test Evaluation . . . . . . . . . . . . . 54

2.4.1 Test Data Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

2.4.2 Test Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

2.4.3 Test Evaluation and Error Diagnosis . . . . . . . . . . . . . . . . . . . . 57

II Avionics Systems using Integrated Modular Avionics 59

3 Introduction to IMA Platforms 61

3.1 IMA Platform Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.1.1 IMA Platform Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.1.2 IMA Operating System and the ARINC 653 API . . . . . . . . . . . . . 63

3.2 Configurability of IMA Platforms . . . . . . . . . . . . . . . . . . . . . . . . . 70

3.2.1 Module-level Configuration Tables . . . . . . . . . . . . . . . . . . . . . 72

3.2.2 Partition-level Configuration Tables . . . . . . . . . . . . . . . . . . . . 75

3.2.3 Configuration Responsibilities and Change Management . . . . . . . . . 79

3.3 Developing Application Software for IMA Platforms . . . . . . . . . . . . . . . 81

4 System Architectures using Integrated Modular Avionics 85

4.1 IMA Architecture at the Aircraft Level . . . . . . . . . . . . . . . . . . . . . . . 85

4.1.1 Analysis of a Sample Aircraft Architecture . . . . . . . . . . . . . . . . 87

4.2 IMA Architecture on System Level . . . . . . . . . . . . . . . . . . . . . . . . . 92

Page 17: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

CONTENTS xvii

5 Testing of Systems based on Integrated Modular Avionics 955.1 System Test Approach for IMA . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

5.1.1 Platform Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965.1.2 Application Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975.1.3 Platform Tests with Application Software . . . . . . . . . . . . . . . . . 985.1.4 System Integration /Multi-System Integration . . . . . . . . . . . . . . . 985.1.5 System Integration Testing . . . . . . . . . . . . . . . . . . . . . . . . . 995.1.6 Ground Tests and Flight Tests . . . . . . . . . . . . . . . . . . . . . . . 99

5.2 Testing Single IMA Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . 995.2.1 Testing Bare IMA Platforms . . . . . . . . . . . . . . . . . . . . . . . . 1005.2.2 Testing Configured IMA Platforms . . . . . . . . . . . . . . . . . . . . 102

5.3 Testing a Network of IMA Platforms . . . . . . . . . . . . . . . . . . . . . . . . 1035.3.1 Testing a Network of Configured IMA Platforms with Test Applications . 104

5.4 Test Bench for IMA Platform Testing . . . . . . . . . . . . . . . . . . . . . . . 1075.4.1 Test Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085.4.2 Test System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

III Case Studies 117

6 Automated Bare IMA Platform Testing 1196.1 Test Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

6.1.1 Minimum Standard Behavior of the TA Processes . . . . . . . . . . . . . 1236.1.2 Communication Protocol between TA and Test Specifications . . . . . . 1256.1.3 Minimum Configuration Requirements . . . . . . . . . . . . . . . . . . 127

6.2 IMA Test Execution Environment . . . . . . . . . . . . . . . . . . . . . . . . . 1306.2.1 CSP Environment for Commanding . . . . . . . . . . . . . . . . . . . . 1306.2.2 CSP Environment for Semi-Automated Testing . . . . . . . . . . . . . . 1396.2.3 CSP Environment for AFDX, ARINC 429, CAN, Analog and Discrete I/O 1406.2.4 CSP Environment for the Communication Flow Scenario . . . . . . . . . 1406.2.5 General CSP Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

6.3 IMA Configuration Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426.3.1 Configuration Data Generator . . . . . . . . . . . . . . . . . . . . . . . 1446.3.2 Configuration Data Parser . . . . . . . . . . . . . . . . . . . . . . . . . 154

6.4 IMA Test Specification Template Library . . . . . . . . . . . . . . . . . . . . . 1646.4.1 Test Procedure Templates . . . . . . . . . . . . . . . . . . . . . . . . . 1646.4.2 Example for Partitioning Testing . . . . . . . . . . . . . . . . . . . . . . 1686.4.3 Example for Intra-Partition Communication Testing . . . . . . . . . . . . 172

6.5 Test Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1766.5.1 Load Generation Environment . . . . . . . . . . . . . . . . . . . . . . . 1766.5.2 Test Instantiation Environment . . . . . . . . . . . . . . . . . . . . . . . 180

6.6 Test Execution and Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836.7 Classification of the Observed Errors . . . . . . . . . . . . . . . . . . . . . . . . 184

Page 18: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xviii CONTENTS

7 Approach for Testing a Network of IMA Platforms 1877.1 Communication Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

7.1.1 Manual Generation of an Example . . . . . . . . . . . . . . . . . . . . . 1927.1.2 Characteristics of Communication Schedules . . . . . . . . . . . . . . . 1977.1.3 Representation of Communication Schedules . . . . . . . . . . . . . . . 198

7.2 Generation Algorithm for Communication Schedules . . . . . . . . . . . . . . . 2027.2.1 Description of the Generation Algorithm . . . . . . . . . . . . . . . . . 2027.2.2 Influencing and Handling the Generation Algorithm’s Results . . . . . . 204

7.3 Considerations for Implementing the Approach . . . . . . . . . . . . . . . . . . 2107.3.1 Approaches for Test Execution using Communication Schedules . . . . . 2107.3.2 Approaches for Test Evaluation . . . . . . . . . . . . . . . . . . . . . . 217

7.4 Assessment of the Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2207.4.1 Prototype Implementation of the Generation Algorithm . . . . . . . . . . 2207.4.2 Example Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2217.4.3 Assessment Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

7.5 Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

IV Lessons Learned 241

8 Evaluation of the Test Suite for Automated Bare IMA Platform Testing 2438.1 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2438.2 Comparison with Other Approaches . . . . . . . . . . . . . . . . . . . . . . . . 249

8.2.1 Comparison with ARINC 653 Conformity Test Specification . . . . . . . 249

9 Evaluation of the Approach for Testing a Network of IMA Platforms 2539.1 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2539.2 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

10 Conclusion 261Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263

V Appendices 267

A CSP 269A.1 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269A.2 Channels and Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269A.3 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

A.3.1 Modeling Sequences of Events . . . . . . . . . . . . . . . . . . . . . . . 271A.3.2 Modeling Branching . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271A.3.3 Modeling Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272A.3.4 Modeling Concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . 272A.3.5 Modeling Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272A.3.6 Modeling Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . 272A.3.7 Additional Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273

A.4 Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273A.5 Sequences and Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273

A.5.1 Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273A.5.2 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274

Page 19: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

CONTENTS xix

B IMA Test Execution Environment – Examples 275

B.1 CSP Type Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275

B.1.1 CSP Types for Commanding of API Calls . . . . . . . . . . . . . . . . . 275

B.1.2 CSP Types for Communication Flow Scenario . . . . . . . . . . . . . . 280

B.2 CSP Channel Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

B.2.1 CSP Channels for Commanding of API Calls and Scenarios . . . . . . . 281

B.2.2 CSP Channels for Communication Flow Scenario . . . . . . . . . . . . . 295

B.3 CSP Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296

B.3.1 CSP Macros for Commanding of API Calls . . . . . . . . . . . . . . . . 296

B.3.2 CSP Macros for Communication Flow Scenario . . . . . . . . . . . . . . 318

B.3.3 General CSP Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

C IMA Configuration Library – Examples 325

C.1 Configuration TEMPLATE01 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

C.2 Configuration Config0001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

C.2.1 Configuration Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

C.2.2 Resulting Configuration Tables . . . . . . . . . . . . . . . . . . . . . . . 332

C.2.3 Resulting Test Relevant Configuration Data Extracts . . . . . . . . . . . 336

C.3 Configuration Config0040 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344

C.3.1 Configuration Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344

C.3.2 Resulting Configuration Tables . . . . . . . . . . . . . . . . . . . . . . . 347

D IMA Test Specification Template Library – Examples 361

D.1 Partitioning Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361

D.1.1 TO PART 003/test procedure template 03 . . . . . . . . . . . . . . 361

D.2 Intra-Partition Communication Tests . . . . . . . . . . . . . . . . . . . . . . . . 381

D.2.1 TO PARCOM 002/test procedure template 01 . . . . . . . . . . . . 381

E Communication Schedule Examples 387

E.1 Example of a Communication Schedule Set . . . . . . . . . . . . . . . . . . . . 387

E.2 Example of a Sorted Communication Schedule Sequence . . . . . . . . . . . . . 391

F Results of the Generation Algorithm’s Prototype Implementation 395

G Acronyms for Systems and System Components 401

Bibliography 403

Page 20: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

xx

Page 21: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Part I

Introduction to Avionics Systems

1

Page 22: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das
Page 23: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

Chapter 1

Development of Avionics Systems

The development process for an aircraft begins at aircraft level and the specification and designbecomes more detailed and comprehensive in each step. Finally, it results in the development ofmany different hardware and software items which have to be assembled in the following integra-tion steps. The development process is accompanied by a verification and validation process usedto check the correctness of each refinement step and to show that the decisions of the developmentprocess have been implemented correctly. Different life cycle models are used to describe an ap-proximation of the actual systems engineering process – each emphasizing different aspects of theprocess. When selecting one or a small subset of possible ones, it has to be considered that (a)in systems engineering and especially in aircraft systems engineering, many different engineeringdisciplines are involved which use different terminology and (b) many thousand engineers poten-tially even from different countries are assigned to the different task which have to be carried out.One particular aim of the life cycle model representing the aircraft development process is there-fore to support the engineers in reducing the likelihood of misunderstanding because of differentterminology by setting up a glossary and by explicitly documenting the requirements, design deci-sions, interfaces, and test plans. Of course these documents are also necessary for the certificationactivities to be passed before the aircraft can enter service.

One well-known graphical representation of the systems engineering process applied for aircraftdevelopment is depicted in Fig. 1.1. This representation follows the basic ideas of the so-calledV-Model ([V-Modell-XT]) which essentially addresses software development which is just onepart of a typical systems engineering process. The ‘V’ is used to emphasize the relationshipbetween the development and the verification and validation (V&V) process by relating specificdevelopment documents with their associated verification and validation activities. Further, the‘V’ stresses the top-down approach of the development process and the bottom-up approach ofthe V&V process. Its major disadvantage is that it may lead people to understand the life-cycle asfirst development and then – with the final implementation at hand – the verification and validationactivities. This separation is obviously not favorable since late validation and verification of high-level requirements and design can result in late and expensive changes to these documents andtheir associated lower-level documents and thus finally to the product itself. However, this is notintended by the graphical representation as a ‘V’. Other life-cycle models may overcome this lackof temporal order of activities but focus less on the level of detail and particular documents andactivities.

The basic life cycle model shown in Fig. 1.1 identifies the different levels of detail (aircraft, do-main, system and equipment level) and assigns the typical requirement and design documents (onthe left hand side) and the associated V&V activities (on the right hand side). Although iterative(feed-back) cycles within one level and between different levels are not depicted, the V shall notnecessarily imply a strict sequential order of development steps. Particularly, some V&V activitieslike test planing, writing of test specifications, and generation of verification-specific hardware or

3

Page 24: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

4 CHAPTER 1. DEVELOPMENT OF AVIONICS SYSTEMS

software have to be performed in parallel to the development of the systems and their hardwareand software to avoid unnecessary delays. Also the close connection between the requirement anddesign documents of the development process and the test plans and test specifications used for theverification activities are not particularly emphasized, but will be addressed later when discussingtesting of avionics systems (Chap. 2) which is one important part of the V&V activities.

Aircraft Functional RequirementsSystem Top-Level Aircraft Requirements

Purchaser Technical Specification (PTS)

System Description Document (SDD)

System Requirements Document (SRD)

Functional Requirement Document (FRD)Top-Level System Requirements Document

Equipment PTSSystem/Equipment Development

Production

Equipment TestsQualification Tests

System Tests

Multi-System Tests

System Integration TestsGround TestsFlight TestsRoute ProvingAircraft Certification

Domain Level

System Level

Equipment Level

Aircraft Level

Developm

ent Process

Verification&ValidationProcess

Figure 1.1: Systems engineering model focusing on the different detail levels and listing the asso-ciated documents and activities.

In this chapter, the development of avionic systems shall be considered focusing on the left handside of the ‘V’ and the respective life cycle model sketched above. The general system devel-opment process is discussed in the following section (Sect. 1.1). The characteristics of avionicssystems are considered thereafter (Sect. 1.2). Avionics systems can follow different architecturemodels and redundancy concepts. Therefore, the most-known architecture models are comparedin Sect. 1.3 and the basic redundancy concepts are considered in Sect. 1.4. Different architecturemodels also imply the use of different types of platforms and the necessity of different network-ing technology. A comparison of different platform types and their characteristics is provided inSect. 1.5. A categorization of aircraft networking technologies is then considered in Sect. 1.6. Theinformation gathered in these sections applied to the system development process described in thefollowing subsection (Sect. 1.1) is then shown more clearly in the end of this chapter (Sect. 1.7).The right hand side of the ‘V’ – in particular testing as one of the important parts of the V&Vactivities – is then addressed in the next chapter (Chap. 2).

1.1 Development Process

The starting point for the development process is the identification of aircraft-level functions, func-tional requirements and functional interfaces and the specification of the physical aircraft architec-ture. Typical aircraft functions (as for example listed in [ARP4754], p. 69) include flight control,ground steering, aircraft aspects of air traffic control, automatic flight control, cargo handling, en-gine control, collision avoidance, ground deceleration, environmental control, passenger comfort,communication, guidance, navigation, and passenger safety. Based on the aircraft-level docu-ments, the aircraft functions are functionally grouped into so-called domains and the related sys-tems are identified. The process for selecting an appropriate functional grouping out of the rangeof possible groupings is often complex and controversial because implementation constraints andfailure effects have to be considered. In the following, we consider a typical generic functionalgrouping used for civil aircraft programs based on the domain structure chosen in the VICTORIAproject ([VICTORIA]):


• The flight control domain is composed of systems concerned with flight control and guidance and slat/flap control, which get inputs directly from the pilot's sidestick operations.

• The engine domain comprises the systems which are concerned with electronic engine control and engine health management.

• The cockpit domain comprises the systems concerned with the management and control of the cockpit displays used for in-flight and airport navigation.

• The cabin domain subsumes the systems controlling the cabin status with respect to air conditioning, ventilation, pressurization, fire and smoke detection as well as slides and doors management.

• The utilities domain includes the systems controlling the fuel flow, landing gears, steering and braking.

• The energy domain is concerned with the management and control of the power generation and distribution functions on-board the aircraft.

• The on-board information systems (OIS) domain is concerned with on-board maintenance and aircraft condition monitoring using information from the cockpit and cabin domain and provides information (e.g., the airport navigation database) to systems in other domains.

• The passenger and crew member communication and entertainment services (PCES) domain is concerned with communication and entertainment services for the passengers and crew members. This includes crew-related services (e.g., support for passenger comfort and information, crew communication, cabin management) as well as passenger-related services (e.g., in-flight entertainment, e-mail and Internet access).

Based on the documents subsuming the aircraft- and domain-level design decisions and requirements, the architecture of each system is determined and the respective system requirements are explicitly linked to the aircraft-level requirements. The system architecture establishes the system's structure and boundaries and defines the interfaces between the system's items and to the other systems. As before, more than one candidate architecture may be considered. The selection of an appropriate system architecture is supported by assessing the implications with respect to safety requirements, possible error propagation, and business requirements (e.g., resulting weight, maintainability). Within the system-level documents, the equipment of the respective system is identified, which comprises the hardware (controllers, sensors, actuators) as well as the necessary software (operating systems, applications, firmware). For each such item, detailed technical specifications and design documents are developed on equipment level which form the basis for software and hardware development.

From closer examination of the development activities described above, it is apparent that the aircraft conceptually comprises several domains which each (conceptually and physically) consist of several systems which may be partitioned into subsystems. These (sub)systems are again composed of hardware equipment – the physical items installed in the aircraft – and the associated software. This conceptual structure is depicted in Fig. 1.2, where the aircraft is functionally divided into two domains D1 and D2. Each domain again is divided into two systems (S11 and S12 for domain D1, S21 and S22 for domain D2). Each system again consists of two equipment items, for example, a controller hardware platform and accompanying system software. Each system and each piece of equipment is specified in separate documents, with each document interacting with the respective higher-level document. This means, for example, that in Fig. 1.2 there are 4 system-level requirements and design documents (i.e., 4 SRDs and 4 SDDs) and 8 Equipment PTS documents (a small counting sketch follows Fig. 1.2). Note that the aircraft level and the domain level are separated conceptually while the other levels also represent physical separation (i.e., different hardware or different software).


Therefore, aircraft and domain level are depicted in the 'V' of Figures 1.1 and 1.2 with the same background color, while system level and equipment level are represented by different background colors.

Figure 1.2: Systems engineering model detailing Fig. 1.1 by showing the separation into different domains, systems and equipment components. (The figure shows the aircraft divided into domains D1 and D2, the systems S11, S12, S21 and S22, and the equipment items E111–E222, arranged on the aircraft, domain, system and equipment levels.)
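To make this document bookkeeping concrete, the following minimal sketch recomputes the document counts for the structure of Fig. 1.2 under the convention stated above (one SRD and one SDD per system, one Equipment PTS per equipment item). It is purely illustrative; the constants simply mirror the example.

    /* Illustrative only: recompute the document counts for the example of
     * Fig. 1.2 (2 domains x 2 systems x 2 equipment items). */
    #include <stdio.h>

    int main(void) {
        const int domains = 2;               /* D1, D2 */
        const int systems_per_domain = 2;    /* S11, S12 / S21, S22 */
        const int equipment_per_system = 2;  /* E111 ... E222 */

        int systems   = domains * systems_per_domain;    /* 4 */
        int equipment = systems * equipment_per_system;  /* 8 */

        /* Convention from the text: one SRD and one SDD per system,
         * one Equipment PTS per equipment item. */
        printf("%d SRDs, %d SDDs, %d Equipment PTS documents\n",
               systems, systems, equipment);
        return 0;
    }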

Note that the aforementioned functional grouping means that each system can be assigned only to one domain although parts of its functionality may also be provided by other systems, which both may contribute to the same aircraft function. The aim of each partitioning step is to reduce complexity in such a way that the documents remain manageable despite more details being added. The activities are supported by explicit traceability links between high- and low-level requirements which are also necessary for measuring the test coverage during the later V&V activities.

Furthermore, the description of the development process is simplified in such a way that it mandates a federated architecture with strict assignment of equipment items to systems and domains. In fact, the development process is influenced by architectural considerations, the required / chosen redundancy concept, platform characteristics and political decisions (e.g., this domain is developed here and the others there [1]). To understand the impact of architecture, redundancy concept and platform type, these topics are examined in the following sections: Section 1.3 examines different avionics architecture models, Sect. 1.4 considers different redundancy concepts, and Sect. 1.5 compares different platform types. The effects are then summarized in Sect. 1.7 at the end of this chapter.

References
[ARP4754] deals with the system development processes of highly-integrated or complex aviation systems and elaborates on how to show compliance to a regulator. For the development of the software part, [DO-178B] describes the guidelines to be followed for development and certification. The requirements and guidelines for the system design are also addressed in [ABD200]. [VICTORIA] describes the domain structure followed in this thesis as well as the processes and guidelines followed in the European research project VICTORIA.

[1] For the Airbus A380, major parts (wings, cockpit, cabin, landing gears) are built in France, the United Kingdom, Germany and Spain. The systems of the respective domains are developed and integrated under supervision of Airbus Germany, Airbus France and Airbus UK. The transport of the aircraft parts to the final assembly location in Toulouse is one of the logistical problems which had to be solved.


1.2 Characterization of Avionics Systems

The development process described in the previous section did not focus in detail on a specific type of system. Nevertheless, avionics systems are typically safety-related hard real-time reactive systems with different levels of criticality. In the following, the characteristics of avionics systems will be defined.

Safety-related systems. Safety-related systems are characterized as systems whose failure can result in direct, and possibly very serious, harm to one or more people. Obvious examples of safety-related systems are nuclear reactor control systems, automotive engine management or braking systems, and most avionics systems (e.g., the flight management system). Depending on the consequences of a system failure, the level of safety criticality is related to the severity of the potential accident. For civil aircraft systems, RTCA/DO-178B ([DO-178B], p. 7) suggests the following categories for the software part of avionics systems, which can be applied similarly to describe failures of the systems themselves: catastrophic, hazardous/severe-major, major, minor, no (safety) effect. Based upon the contribution of a system to a potential failure condition, it is expected that the development and V&V activities comply with the corresponding failure condition category. Therefore, each system is often assigned a so-called development assurance level (DAL) ranging from DAL A to DAL E, where level A corresponds to most failure critical and level E to least failure critical. This means that a level A system failure would cause or contribute to a catastrophic failure of the aircraft system.

Note that the terms safety-related system and safety-critical system are often used synonymously. In this thesis, safety-critical system usually implies a system of high criticality in contrast to safety-related system, which may also refer to systems of low criticality.
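The level-to-category correspondence just described can be written down as a simple lookup. The following sketch is illustrative only – the enum and function names are hypothetical – while the mapping itself follows the DO-178B categories quoted above.

    /* Illustrative lookup (names are hypothetical): the DO-178B failure
     * condition category corresponding to each development assurance level. */
    #include <stdio.h>

    typedef enum { DAL_A, DAL_B, DAL_C, DAL_D, DAL_E } dal_t;

    static const char *worst_failure_condition(dal_t level) {
        switch (level) {
        case DAL_A: return "catastrophic";
        case DAL_B: return "hazardous / severe-major";
        case DAL_C: return "major";
        case DAL_D: return "minor";
        default:    return "no (safety) effect";  /* DAL_E */
        }
    }

    int main(void) {
        for (int l = DAL_A; l <= DAL_E; l++)
            printf("DAL %c -> %s\n", 'A' + l, worst_failure_condition((dal_t)l));
        return 0;
    }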

Real-time systems. Real-time systems can be categorized as hard and soft depending on their time constraints. Using the definition of Storey ([Sto96], p. 330), hard real-time systems have time constraints such that the response of the system to a given stimulus must always occur within a given amount of time. Soft real-time systems have time constraints such that the mean response time, over a defined period, is less than some specified maximum value.

Most avionics systems are hard real-time systems. The requirement to validate all hard timing constraints imposes many restrictions on the design and the implementation of the application software as well as on the architecture of the platform.
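The difference between the two definitions can be made concrete with a small check over a set of measured response times: a hard constraint fails on a single deadline miss, while a soft constraint is judged on the mean over the period. This sketch is illustrative only; all numbers are made up.

    /* Illustrative check of the two definitions above: a hard constraint is
     * violated by a single deadline miss, a soft constraint is judged on the
     * mean response time over a period. All numbers are made up. */
    #include <stdio.h>

    #define N 5

    int main(void) {
        double response_ms[N] = { 8.0, 9.5, 7.2, 12.1, 8.8 };
        double deadline_ms = 10.0;

        int hard_ok = 1;
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            if (response_ms[i] > deadline_ms)
                hard_ok = 0;   /* one miss violates the hard constraint */
            sum += response_ms[i];
        }
        double mean = sum / N;

        printf("hard constraint met: %s\n", hard_ok ? "yes" : "no"); /* no  */
        printf("soft constraint met: %s (mean %.2f ms)\n",
               mean <= deadline_ms ? "yes" : "no", mean);            /* yes */
        return 0;
    }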

Reactive systems. Reactive systems continuously interact with and control (parts of) their environment in order to perform user-required services. The interaction partners are other systems or directly controlled equipment like sensors and actuators. Typically, deadlines for reactions are derived from the required responsiveness of the sensors and actuators monitored and controlled by the system, which shows the close relation of reactive systems to real-time systems.

Fault Tolerance, Fault Avoidance, Fault Removal, Fault Detection. The objective of fault tolerance is to design a system in such a way that it allows the continuation of service in presence of faults and prevents faults from resulting in system failure. Fault tolerance can be achieved by choosing an appropriate system architecture and redundancy concept. Thus it can provide some protection against random hardware faults and some forms of systematic errors, but usually does not reduce the impact of specification errors. The number of specification errors can be reduced by applying fault avoidance techniques. To achieve high dependability, it is essential that the development process is followed in a way that ensures the best possible interaction between the chosen system architecture (and redundancy concept), the various platforms used within the respective architecture, the application software using the services provided by the respective platforms, and the environment, which provides certain information and behaves in a specified way. The aim is to build 'perfect' (i.e., faultless) hardware and software components. Fault avoidance is coupled with fault removal methods, i.e., the verification and validation techniques applied to detect possible faults. Some systems also provide techniques for on-the-fly fault detection in order to locate the source of any error and thus to minimize the effects of hardware and software faults. This can be achieved by fault tolerant systems. In general, it is not possible to avoid all hardware and software faults. Nevertheless, it is desirable to design fault tolerant hardware platforms and to apply adequate fault avoidance and fault removal methods while specifying hardware and software.

In the following sections, factors influencing the development of safety-critical hard real-time systems by means of a specific architecture (Sect. 1.3), a redundancy concept (Sect. 1.4) or a specific platform (Sect. 1.5) are discussed in more detail. Fault removal by verification and validation techniques with a focus on testing is discussed in the next chapter (Chap. 2). Fault detection mechanisms in general are not discussed in detail, but are considered in Chap. 3 when providing a detailed overview of one specific platform type and its operating system services.

References
Avionics systems are characterized in all guideline and standard documents referring to the development and certification of avionics systems (e.g., [ARP4754], [DO-178B], [ABD200]). Real-time and safety-critical systems in avionics and other areas are also defined in [Pel96], [Liu00], [Coo03], [Sto96], [Sto99], among others.

1.3 Avionics Architecture Models

The system architecture defines how the system assembles its subsystems and the related equipment, and thus the internal and external interfaces. As we have seen when discussing the characteristics of hard real-time safety-critical systems (i.e., of most avionics systems), the system architecture has a major impact on

• the real-time and fault tolerance capabilities (together with the chosen redundancy concept),

• the applicability of fault avoidance and fault removal techniques (subject to the chosen platform), and

• non-functional factors like maintainability, resulting weight of hardware and wiring, and the consequences for power consumption, cooling, etc. (considering as well the chosen platform types and the redundancy concept).

The selection process for an appropriate system architecture has to consider all these factors. Before evaluating the aforementioned characteristics in detail, we discuss the evolution of architecture models (as represented in the literature, for example, by [Fil03] and [Rus99]).

Different generations of avionics architecture models can be distinguished:

1. Independent avionics architectures are composed of separate and independent controllers [2] which are each connected by point-to-point wiring to their dedicated sensors and other peripheral equipment. Each control system (i.e., controller with its sensors) realizes one function. The different control systems are not interconnected and thus cannot share sensor information. As a consequence, fault containment is inherent since fault propagation from one system to another is not possible. This architecture model represents the first generation of avionics architectures. An example of an independent architecture model is depicted in Fig. 1.3.

[2] The term controller is used to refer to (hardware) platforms with application software executed on the platform. The term platform refers to a computing system (i.e., hardware) and – if applicable – associated operating system software. Sect. 1.5 provides a categorization of different platforms.

2. Federated avionics architectures are those in which each system has a dedicated controller [3] and the controller hardware is not shared by different systems. The controllers are loosely coupled to the controllers of other functions; thus fault propagation from function to function is only possible by interaction, which can be detected and tolerated by the software. This architecture model is the second generation of avionics architectures. It has evolved due to the need to combine information from different functions in order to provide the expected functionality. Figure 1.4 shows an example of a federated architecture model.

3. Integrated avionics architectures (also known as Integrated Modular Avionics) have been developed to create a modular, open, fault-tolerant and highly-flexible architecture for digital avionics. Here, standardized computer systems provide a common platform for hosting several functions. These so-called IMA modules are connected by data busses. Since all functions hosted on one module share the computing resource and the memory of the respective platform, fault propagation has to be avoided by partitioning mechanisms. By now, different generations of integrated avionics architectures can be distinguished: the first ones were applied in the Boeing 777 for certain selected functions (see [Mor01], [But05], and [MS03], p. 334f, among others) and the latest ones in the currently developed Airbus A380 on aircraft level (i.e., for most functions) (see [MS03], p. 336, among others). Apart from the different application levels, further differences exist. The first generations of IMA used non-standard or proprietary backplanes or data busses and closed architectures with single suppliers for the modules, while the newest generation focuses on open architectures [4], i.e., standardized data busses, and standardized modules from different suppliers. Furthermore, the first solutions were backplane-based integrated racks (also called cabinets) that integrated distinct so-called line replaceable modules (LRM) which either provide data processing, I/O or power supply. Today's IMA modules substitute the cabinet solution by providing a general-purpose avionic controller called Core Processing and I/O Module (CPIOM) (see e.g., [Die04b], [Die04a]) which uses an external power supply unit. Fig. 1.5 depicts an integrated modular avionics architecture.

The three different architecture models are depicted in Fig. 1.3 (independent architecture), Fig. 1.4 (federated architecture) and Fig. 1.5 (integrated modular avionics). Each figure shows an aircraft architecture consisting of two domains, domain 1 and domain 2. Domain 1 assembles three systems (system 1 is green, system 2 is pink, system 3 is light blue), domain 2 one system (system 4 is orange). Each system is connected to its specific peripheral equipment. Interconnection between the systems is depicted in blue: in the federated architecture model the systems are connected by point-to-point communication, whereas in the integrated modular avionics architecture the IMA modules are connected by a switched data bus. Not shown is the communication between applications hosted on the same IMA module. Naturally, the systems are not connected in the independent architecture model. The platforms hosting the application software within the independent architecture (Fig. 1.3) and the federated architecture (Fig. 1.4) may be different (e.g., special-to-purpose, standardized commercial, commercial-off-the-shelf), while the integrated modular avionics architecture (Fig. 1.5) ideally uses only one type of platform to host the application software – standardized IMA modules. This representation shall not restrict the use of different types of IMA modules to take into account the different needs with respect to I/O and computing capabilities. However, it shall emphasize the use of standardized platforms – aiming at repeated use of each type. As a matter of fact, these figures do not represent real aircraft architectures since for many different reasons hybrid variants of architecture models are used (e.g., non-IMA modules in an IMA architecture).

Figure 1.3: Example for an Independent Avionics Architecture

Figure 1.4: Example for a Federated Avionics Architecture

[3] Note that for achieving fault tolerance this controller can be redundant.
[4] Open architecture in this context means (see [MS03], p. 332) "a system composed of components with well-defined interfaces between the components conforming to standard interface specifications".

We observe that, conceptually, there are two separate evolution processes: the evolution of the architecture models and the technological evolution. Typical avionics architecture models have evolved from an architecture of almost completely independent systems via an architecture of distributed systems to a centralized architecture. At the same time, the platforms and the peripheral equipment have been further developed, resulting in enhanced processor capabilities and increased memory capacities, in combination with advances in operating systems and computer languages, digital data busses and high-bandwidth communication links, and major advances in microelectronics. The resulting advanced peripheral equipment has also contributed to more distributed architectures since more functionality (e.g., additional transformation from analog signals to digital messages) can be provided by these smart components.

Figure 1.5: Example for Integrated Modular Avionics

These two development processes – advanced technology encouraging decentralization on the one hand, and conceptually more centralized architectures on the other hand – seem to contradict each other [5]: instead of distributing the functions across more platforms (e.g., to take advantage of the improved communication means), several systems share computing and memory resources of centralized modules (using an IMA architecture). To understand this evolution, we have to evaluate the pros and cons of the technological development against consideration of functional, safety and non-functional requirements. For the remainder of this section, we want to look at the different architecture models from this aspect. This comparison is a preparation for the second part of this thesis focusing on IMA platforms (Chap. 3) and IMA architectures (Chap. 4). It shall help to substantiate the advances of IMA architectures but shall also show what new and sometimes critical factors have to be considered. Although a comparison of different generations of architecture models applied to modern aircraft which use current technology platforms is somewhat artificial, the pros and cons of each architecture type become clear. Moreover, new aircraft do not necessarily rely on the newest architecture models (i.e., integrated modular avionics) for highly safety-critical systems since the safety implications may outweigh the advantages.

[5] On closer inspection, the conceptual differences between a more federated (i.e., more distributed) architecture and the integrated avionics architecture (i.e., the more centralized one) are not significant, as stressed in [Rus99], p. 3.

Note that the given analysis of the system architectures focuses on the architectural aspects and neither on the redundancy concept nor on the specific characteristics of the platforms or the application software [6].

[6] These aspects are discussed in more detail in the following sections (Sect. 1.4 and Sect. 1.5).

In the beginning of this section, we have briefly mentioned the many different factors for evaluating the different architecture models. In detail, these are mostly related to the following safety and non-functional requirements:

1. The type of platforms which are typically used for this architectural model and – in addition – the type of platforms which can also be considered.

2. The type of communication links which are and can be used for this architectural model.

3. The provisions of the architecture for fault tolerance and avoidance of fault propagation.

4. The time, cost and responsibilities for development of the systems (including certification actions) and system and multi-system integration.

5. The maintenance costs and the maintainability (including the relation to the factor mean-time-to-repair).

6. The upgradeability in case of technological advances.

7. The extensibility for future enhancements with respect to the increasing complexity of the systems (including adaptation to new communication requirements and to additional computing power).

8. The allocated space and the power consumption of the system's platforms.

9. The environmental requirements (e.g., cooling, humidity) of the platforms for proper functioning.

10. The weight for wiring and for the platforms themselves.

The evaluation of these criteria is compiled in Table 1.1, Table 1.2, Table 1.3, and Table 1.4. The representation allows, on the one hand, to compare the different architecture models for each criterion and, on the other hand, to assess all criteria for a specific architecture model.


Types of used/usable platforms (see Sect. 1.5 for general pros and cons):
• Independent / federated avionics architecture: Typically, bespoke special-to-purpose platforms are used. It is also possible to use commercial-off-the-shelf (COTS) or standardized platforms if they comply with the system's functional and safety requirements.
• Integrated modular avionics: Typically standardized special-to-purpose platforms like IMA modules for safety-critical functions; COTS platforms are usable for less critical functions; bespoke special-to-purpose platforms may also be used.

Types of used/usable communication links (see also Sect. 1.6):
• All architectures: Generally, all types of analog and digital communication links and data busses can be used. For historical reasons, the first generation of architecture models focused on analog communication links while nowadays fast digital communication links (with different protocols) are available.

Provisions for fault tolerance:
• Independent avionics architecture: Fault propagation from system to system is impossible because the systems are not connected. Faulty interaction between platform and sensor/actuator can be detected and tolerated by the application software.
• Federated avionics architecture: The systems are executed on separate platforms and thus fault propagation from system to system is only possible by communication of data, which can be detected (and then tolerated) by the application software.
• Integrated modular avionics: Systems of different criticality may share the computing and memory resources of a single platform and thus fault propagation has to be avoided by temporal and spatial partitioning mechanisms. In addition, fault propagation is possible by interaction via the communication links, which requires fault detection mechanisms on application software level.
• All architectures: Hardware redundancy provides protection against random hardware faults. Software redundancy by means of diverse development may reduce systematic software faults (see also Sect. 1.4).

Table 1.1: Evaluation of different architecture models (part 1)


Time, cost and responsibilities for development and certification (including verification and validation) and system and multi-system integration:
• Independent / federated avionics architecture: Bespoke platforms of the different systems are typically developed independently from the others by the respective system supplier, who is usually responsible for developing the platform, the application software, and the sensors and actuators. When using COTS or standardized platforms, the platforms, the application software, and the peripheral equipment are typically developed by various commercial suppliers. Synergetic effects between different system suppliers are unlikely, i.e., development, validation and verification, and certification of the platforms have to be performed by each supplier independently and thus cost or time cannot be saved for platform development. System integration is performed at the system supplier's side; multi-system integration is the responsibility of the domain integrator or the aircraft manufacturer.
• Integrated modular avionics: IMA modules are developed by suppliers typically different from the system suppliers (which provide the application software and sensors/actuators). COTS or standardized platforms are usually developed by commercial suppliers producing also for the non-avionics market. Testing and certifying each type of IMA module (ideally only a few different types) is the responsibility of the platform supplier and is performed for each type once for the whole aircraft (although it might be used for different systems and domains). Application testing and certification is independent and performed by the system supplier for its respective application software. System integration and multi-system integration are strongly coupled because application software of different systems is integrated on the same platform. The integration can only be carried out by the system or domain integrator – usually the aircraft manufacturer – in cooperation with the application suppliers. Cost and time for developing, maintaining, and configuring the test rigs have thus become the responsibility of the aircraft manufacturer.

Table 1.2: Evaluation of different architecture models (part 2)


Maintainability:
• Independent / federated avionics architecture: If different specialized hardware is used for each system, it has to be in stock for maintenance at each maintenance station, which generates enormous cost and may incur logistic problems since around 100 different platforms are hosted in an aircraft.
• Integrated modular avionics: The use of standardized platforms and additional on-board maintenance systems simplifies maintenance. At each maintenance station only very few different types of platforms have to be in stock.

Upgradeability in case of technological advances:
• Independent avionics architecture: Upgrades of a platform or the application software only affect the respective system. Interference with other systems is impossible (except for, e.g., electromagnetic interference, of course). Usually, certification of the whole system has to be repeated.
• Federated avionics architecture: Upgrades may affect the respective system and the interfaced systems. Re-certification for the whole system and potentially for some other systems has to be performed.
• Integrated modular avionics: Upgrades of IMA modules shall have no effect on the application software and thus only IMA module testing and certification and hardware/software integration testing have to be performed.

Extensibility for future enhancements:
• Independent avionics architecture: Enhancements may increase the complexity of the application software and/or the number of interfaced equipment. The first may result in additional platforms leading to a more distributed architecture. Both can affect only the design of the respective system's architecture. Interdependence only occurs with respect to power consumption, size, weight, etc. Re-certification for the whole system may be required. If a new system is added, a separate architecture design is required and the new platforms, sensors and actuators are installed into the aircraft (interdependence with respect to power consumption, size, weight, etc. has to be considered).
• Federated avionics architecture: Enhancements may increase the complexity of the application software and/or the number of interfaced equipment and/or the interfaces to other systems. The first may result in additional platforms leading to a more distributed architecture. The others affect the intra-system and the inter-system communication network, resulting in additional wires and further space allocation. Re-certification for the whole system and the affected systems may be required. If a new system is added, the new platform, sensors, actuators and the new communication links have to be considered and thus affect all connected systems (interdependence with respect to power consumption, size, weight, etc. has to be considered).
• Integrated modular avionics: An additional system's application software might be added to a platform by configuring an additional partition without affecting the spatial or temporal characteristics of other systems hosted by the platform (provided that appropriate spatial and temporal spares have been planned in advance). No re-certification of the other systems is required. Adding new platforms has the same effect as for the other architectures: possible interference with respect to power consumption, size, weight, etc.

Table 1.3: Evaluation of different architecture models (part 3)


Size, power consumption:
• Independent avionics architecture: Each system requires separate space and power for its platforms and sensors/actuators, independent of the average utilization of the processor or the sensors. Supply with normal and essential power is only necessary for highly safety-critical systems.
• Federated avionics architecture: Each system requires separate space and power for its platforms and sensors/actuators, independent of the average utilization of the processor or the sensors. Sharing sensor information may reduce the overall number of sensors and thus the allocated space and the power consumption. Supply with normal and essential power is only necessary for highly safety-critical systems and those providing information for more critical systems.
• Integrated modular avionics: Several systems share a common platform, which reduces the needs for space and power consumption. IMA allows better resource utilization and uses space and power more economically. Supply with normal and essential power may be necessary for most platforms since the platform may host systems of different criticality.

Environmental requirements of the platform:
• All architectures: Each platform contributes to the global warming of the respective compartment and thus needs air conditioning/cooling. When comparing the architectures for systems of the same complexity, IMA needs a smaller number of platforms. Permanent compliance with the environmental requirements is extremely important for platforms executing highly safety-critical systems.
• Integrated modular avionics: Since IMA allows systems of different criticality to share the same platform, keeping the environmental requirements is even more stringent to avoid the simultaneous loss of critical and uncritical systems.

Weight of the system:
• All architectures: The weight depends on the number of platforms; thus, when comparing the architectures for systems of the same complexity, IMA needs fewer platforms and thus reduces the overall weight.
• Independent avionics architecture: Point-to-point communication between the platforms and the sensors/actuators results in much wiring and thus much weight for the wiring itself.
• Federated avionics architecture: Sharing of sensor information and (probably) the use of advanced data busses allow to reduce the wiring and thus the resulting weight.
• Integrated modular avionics: Sharing of sensor information and (probably) the use of advanced data busses allow to reduce the wiring and thus the resulting weight. In addition, communication between systems hosted on the same platform is performed via the backplane (further reduction of wiring).

Table 1.4: Evaluation of different architecture models (part 4)


References
The evolution of the avionics architecture models described in this thesis follows mainly [Fil03], [Rus99], and [SPC03]. Details about the different architecture models in general are also provided in [MS03] and – with a focus on integrated modular avionics architectures – in [LR03] and [Mor01]. The characteristics of system architectures for all types of safety-critical systems are also considered in [Sto96].

1.4 Redundancy Concepts

Redundancy is an important concept to achieve fault tolerance and thus to meet the associated safety requirements. Each form of redundancy requires that additional elements are introduced into the system design to tolerate failures of other elements. Four basic types of redundancy are distinguished:

• Hardware redundancy means using additional hardware. It is the most common method of achieving fault tolerance against random hardware faults. Hardware redundancy can be static, dynamic or hybrid. Static redundancy concepts mask faults by some form of voting mechanism which compares the outputs of the parallel redundant components (see the voting sketch after this list). Dynamic redundancy approaches focus on fault detection (instead of fault masking) and provide standby or backup systems which become active after detecting a hardware fault in the currently active component. The reconfiguration causes a disruption of the system, which is shorter when the standby systems are running in parallel to the 'active' one (hot standby). Hybrid redundancy combines the fault masking of static redundancy systems with the fault detection of dynamic redundancy and thereby avoids transient faults which might be unacceptable.
The additional hardware used for redundancy can be duplicated identical components or diverse components. The latter reduces the probability of systematic hardware design faults (so-called common-mode faults) but requires independent development processes [7] for the different components, which may be very time consuming and costly (depending on the type of components used, diversity of COTS may be cheaper). Thus, development of diverse hardware is only applied for highly critical systems.
From an architectural point of view, any form of hardware redundancy means that additional weight, increased power consumption, extra cooling, etc. have to be considered – and weighed up against the expected benefits [8].
A more detailed overview of these forms of hardware redundancy (including different voting techniques and fault detection techniques) is provided in [Sto96], p. 131–144.

• Software redundancy means executing additional software to detect or tolerate faults. Since execution of duplicated (i.e., identical) software modules only reduces the likelihood of transient faults but does not protect against software design faults, software redundancy is usually associated with separate design and development of the additional software component. Again, the resulting costs are only acceptable for very critical systems.
The execution of the additional software does not necessarily mean that additional hardware has to be added. Nevertheless, the additional computing and memory needs have to be considered and often result in additional hardware components.
Software fault tolerance techniques are discussed in more detail in [Sto96], p. 144–148.

[7] Note that each such development process is based on the same system specification. As a consequence, errors in these specifications will affect each diversely developed design.
[8] For the Airbus A380, the weight of the aircraft and the power consumption are major factors in guaranteeing the advertised range and kerosene consumption per flight mile to the prospective customers.

• Information redundancy means providing additional information to detect or tolerate faults. Examples are parity bits, checksums, error detection or correction codes. Another form is the use of algorithms to transform data and create multiple copies.
Information redundancy can be implemented using hardware or software techniques.

• Temporal (time) redundancy means spending time in addition to that required for the non-redundant execution. In particular, this can include repeating calculations and then comparing the results to detect transient faults.
Hardware and software techniques can be used to implement temporal redundancy.
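To make the voting idea referenced above concrete, the following minimal sketch shows a 2-out-of-3 majority voter as used in static hardware redundancy, together with a recompute-and-compare step in the spirit of temporal redundancy. It is an illustration only, not taken from any avionics system; all values and names are made up.

    /* Minimal sketch of static hardware redundancy, assuming three identical
     * channels: a 2-out-of-3 majority voter masks one faulty channel. The
     * recompute-and-compare step below illustrates temporal redundancy. */
    #include <stdio.h>

    /* Return the majority value of three redundant inputs; two healthy
     * channels outvote a single faulty one. */
    static int vote_2oo3(int a, int b, int c) {
        if (a == b || a == c) return a;
        if (b == c) return b;
        return a;  /* no majority: a real design must flag this condition */
    }

    int main(void) {
        /* channel 2 delivers a corrupted value (random hardware fault) */
        int ch0 = 42, ch1 = 42, ch2 = -1;
        printf("voted output: %d\n", vote_2oo3(ch0, ch1, ch2));  /* 42 */

        /* temporal redundancy: repeat the calculation and compare the
         * results to detect a transient fault */
        int first  = ch0 + ch1;
        int second = ch0 + ch1;
        printf("transient fault detected: %s\n",
               first == second ? "no" : "yes");
        return 0;
    }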

Summarizing, each form of redundancy allows specific faults to be tolerated or detected but may involve high costs for development and verification (especially when requiring dissimilar components) and during operation (in particular considering weight and power consumption). For saving development costs, it is preferred to tolerate faults, particularly random faults, on hardware level using non-diverse additional hardware, but to apply fault avoidance and fault removal methods extensively at software level to avoid and detect software design and implementation errors. Development or operational costs are less important matters for most (highly) safety-critical systems. As a consequence, these use mixtures of the above-described redundancy techniques to provide fault tolerance against a range of possible faults. Some examples are listed in the following:

• Hardware redundancy with duplicated platforms and duplicated wiring to tolerate and detect hardware faults. For example, the IMA modules and the AFDX network for inter-module communication are set up redundantly within the Airbus A380. Also the Fire and Smoke Detection System in the Airbus A380 is designed as a hot-standby hardware redundant system. Usually, the redundant systems are also physically distributed on-board the aircraft to avoid total loss of the redundant components in case of fire or water in one of the avionics compartments.

• Hardware redundancy with duplicated input sources – either by using several identical sensors or by using diverse sensors – to protect against single-point failures (see [Sto96], p. 132). For example, it is common practice to have more than one fire and smoke sensor in each compartment and mechanisms to deal with diverse status messages of sensors located in the same compartment.

• Software redundancy – in particular in the form of N-version programming – is applied for highly critical applications to achieve software fault tolerance against software design (and implementation) errors. Such an application is the primary flight control system in the Airbus A330/340 ([Sto96], p. 145).

• Combinations of hardware and software redundancy are used to combine the benefits of both concepts. A prominent example is described in [Sto96], p. 152: the computer system of the NASA space shuttle, where redundant COTS computers with hardware fault detection mechanisms have been combined with 2-version programming of the flight control system.

• Information that is critical for continuing safe flight may be duplicated using information redundancy techniques. For example, detected smoke or fire in the Airbus A380 is communicated to the cockpit by a specific discrete signal as well as via AFDX (the latter also allows to communicate additional information). This is also a form of hardware redundancy (redundant hardware for the communication link) although different communication means are used.

References
[Sto96] and [Sto99] elaborate in detail on the different redundancy concepts. [AIR5428] addresses redundancy considerations in avionics systems, particularly integrated avionics systems.


1.5 Categorization of Platforms

In this thesis, the term platform refers to computer hardware and its operating system (OS) software and drivers. A platform can be used to execute different pieces of application software. This combination of application software and a specific platform is often called a controller. The components of a platform – on the one hand the hardware and, on the other hand, the OS and the drivers – can be either standardized or unstandardized and either special-to-purpose or commercial off-the-shelf (COTS). These terms are defined as follows:

• Standardized means that a reference specification exists which describes the hardware characteristics or the operating system API. Standards are prepared by a specific working group and released by a standardization authority, e.g., ARINC, IEEE, RTCA. Standards can be implemented by different suppliers.
Unstandardized means that the component has not been developed according to such a standard. Many so-called standard operating systems belong to this group, e.g., Linux, Unix, Windows [9].

• Commercial-off-the-shelf (COTS) is hardware or software which is readily available on the market and has not been developed for a particular application area.
Special-to-purpose or bespoke components, on the other hand, are developed for a specific application (tailored development). The application area (i.e., the purpose) can be a specific aircraft type (e.g., a specific fire and smoke detector only to be applied in the Airbus A380), execution of real-time applications (e.g., a real-time operating system (RTOS)) or systems with specific requirements (e.g., avionics systems and automobile systems with specific safety and environmental requirements, industrial PCs with specific environmental requirements).

In the previous sections – in particular in Sect. 1.3 – the following types of platforms and equipment have been mentioned:

• COTS platforms,
• special-to-purpose platforms,
• IMA modules (i.e., standardized special-to-purpose platforms), and
• peripheral components, i.e., sensors, actuators, so-called smart components.

In the following, we will discuss these different platform types, provide examples and evaluate the respective characteristics. The evaluation shall consider the following criteria:

• development of the platform (time and costs);

• verification, validation and certification (including answering the question who is responsible for these tasks);

• consideration of fault avoidance and fault removal methods;

• consideration of functional requirements;

• consideration of reliability, availability, system integrity, data integrity, maintainability;

• conformance to environmental aspects (i.e., to temperature, shock, vibration, humidity, water penetration, electromagnetic compatibility, etc.);

• conformance to size and weight restrictions for hardware components, and memory restrictions for software components; and

• conformance to interface restrictions.

[9] Although different implementations exist for Unix operating systems, they are not all alike and in particular do not implement a common reference specification which has been agreed by some sort of committee. POSIX provides specifications for parts of the operating system but does not comprise all parts of the OS.

1.5.1 COTS Platforms

COTS platforms are composed of a commercially available hardware component and an operating system (e.g., Unix, Linux, Windows, real-time Linux). In general, COTS components are developed for the mass market without considering the particular requirements of real-time safety-critical systems. This means, on the one hand, that COTS components are relatively inexpensive and typically available on demand, which reduces the time and cost risks with respect to the development of the component. On the other hand, COTS components are typically not developed following a development process needed for fault avoidance and are further not prepared for verification, validation and certification as requested by the respective avionics standards. In particular, this information is often regarded as proprietary by the company developing the component. As a consequence, verification and certification remains with the component supplier. In addition, properties like reliability, availability, system integrity, data integrity and maintainability are often not considered adequately for the use in avionics. In particular, maintainability raises many important questions with respect to longevity and stability of COTS components: How long will the hardware be available unchanged for maintenance? How is it possible to control the change of a COTS component? Will the component be available for the typical life time of an aircraft (i.e., for 20–50 years)?

Considering the functional requirements, COTS components are not developed to fulfill the specific functional requirements of the given system and consequently have fewer or more features, some of which may surprise and may not even be described in the respective specification documents. Unstandardized components in particular have this problem.

Also, COTS hardware may not be designed to cope with the environmental demands of avionics systems, which impacts the reliability and availability of the component. For example, the temperature ranges, shock and vibration, or electromagnetic compatibility (EMC) that a desktop PC has to cope with are much less demanding than the environment on board an airplane.

Furthermore, COTS components have been developed without any particular restrictions regarding size and weight of the hardware component or memory for the software component. For avionics, aerospace, automobile and telecommunication, these factors may be crucial.

Finally, the interfaces needed for avionics may not be available in COTS components or the connector may not comply with the avionics standard (e.g., ARINC 600).

Examples for COTS hardware components are (unstandardized) standard PCs or PC-based components, standardized PC-based components like VME or PC/104, or programmable logic controllers (PLCs). In particular, the standardized components are meant for deployment in avionics or aerospace (see [Coo03], p. 12, [Sto96], p. 253–268): they are robust, wide-temperature-range boards and more reliable and dependable than other COTS. For PLCs, the TÜV Germany offers a certification service for hardware and OS (i.e., the firmware) ([Sto96], p. 255) based on details provided by the vendor.
Although COTS hardware components have the disadvantages listed above, they have been used, e.g., in the space shuttle ([Sto96], p. 152) and US military aircraft ([Tal98], p. 50). COTS hardware is also used on board the Airbus A380 for less critical systems. An example for the relatively short life cycle of COTS components compared to that of an aircraft system is described in [Fil03] (p. 4), showing that the production of the chosen processor was stopped prior to first operational use of the aircraft.
In general, COTS hardware components will be mainly used at device level, e.g., processing elements, memories, and less at board level. For example, the PowerPC processor is widely used in safety-critical embedded systems.

Examples of COTS operating system components. For standard PCs and VME, operating systems like Linux, Unix, Windows and also more special-to-purpose real-time operating systems are available. The latter include – but are not limited to – RTLinux (by Finite State Machine Labs, Inc.), LynxOS (by LynuxWorks), VxWorks (by Wind River), VRTX (by Mentor Graphics), CsLEOS RTOS (by BAE SYSTEMS Controls), INTEGRITY-178B (by Green Hills Software), and Valid-653 (by Validated Software Corporation). The latter three also support the ARINC 653-1 API as used for IMA modules. For PLCs, the operating system kernel is provided as firmware. Application examples for PLCs in safety-critical systems are given in [Sto96] (p. 261–267). VxWorks is used in the Mars exploration rovers and several other spacecraft and is intended to be used in Boeing's 7E7. VRTX has been used for the Hubble space telescope.

1.5.2 Special-to-purpose Platforms

Special-to-purpose or bespoke platforms have been developed and optimized for the requirements and needs of a particular system. This means that the component (either hardware or operating system) contains only the required functional features as documented in the related specification and design documents. Further, it is possible to define exactly which standards have to be adhered to for the development and verification and validation process, which simplifies the certification of the component (compared to the difficulties when certifying COTS components). The tailored development also allows to consider and conform to environmental aspects, size and weight restrictions, interface requirements, etc.

Nevertheless, consideration of all these aspects is time-consuming and costly, and the high risk of not developing the component in time cannot be ignored and may lead to costly delays in the overall aircraft development.

Furthermore, the airline's dependence on the platform supplier is significant since, due to the lack of standardization, it is not possible to get the platform from another supplier. Appropriate contracts can contain this problem.

Examples for special-to-purpose platforms are manifold since in the past – using federated architectures – this type of platform has been preferred. The CIDS board in most of the newer Airbus airplanes is a bespoke platform.

1.5.3 IMA Modules

Integrated Modular Avionics platforms are standardized special-to-purpose platforms which comply with the respective ARINC standards: the IMA module's connectors and hardware dimensions comply with ARINC 600, it provides communication interfaces according to ARINC 664, ARINC 429, or ARINC 629, and the operating system API to be used by the application software complies with ARINC 653-1. However, other hardware characteristics (i.e., number of processors, memory size, etc.) are not standardized since technological progress is assumed to be faster than the standardization process and what seems reasonable today is probably outdated tomorrow.


IMA modules combine the advantages of COTS platforms and unstandardized bespoke platforms: different suppliers can offer certified IMA modules complying with the respective standards which have been developed according to the requirements of the respective standards. For example, compliance of the operating system with ARINC 653-1 can be shown by the test suite defined in ARINC 653-3. Furthermore, the IMA approach with its clearly specified API allows concurrent development of the platforms and the applications to be running on them.

Avionic data busses like ARINC 664, ARINC 429 and ARINC 629 are discussed in the following section (Sect. 1.6). IMA platforms and their operating system API are considered in more detail in Chap. 3. Testing of IMA modules is investigated in Chap. 5 and detailed by a case study in part 3 of this thesis.

Examples. Different generations of IMA modules have been used so far (see [MS03], p. 334–336): the first IMA modules applied in the Boeing 777 were developed before the first version of the ARINC standards was released. As a consequence, the modules followed a closed architecture and were obtained from a single supplier. The second generation of IMA modules was partially standardized and deployed in the Boeing 777 and Honeywell EPIC. The current generation of IMA modules complies with the aforementioned standards and is used in the Airbus A380.

1.5.4 Peripheral Equipment

Peripheral equipment comprises sensors and actuators which are connected to the data bus by a data concentrator. Some of them – so-called smart components – can have a degree of intelligence and interface directly to the digital bus. Peripheral equipment in general can be commercial-off-the-shelf, special-to-purpose or a combination of both and thus combines the pros and cons as described above. In the following, we will briefly discuss the advantages of smart peripherals since the trend goes from conventional actuators and sensors to their smart successors. A more detailed discussion of peripheral equipment, however, is outside the scope of this thesis.

Smart Peripherals are system peripheral devices which provide considerably more functionality than traditional ones and are connected to the digital data busses. Therefore, they are also called remote data concentrators (RDCs). The advantages of smart components in contrast to conventional ones are manifold (see also [Moo01], p. 33-8): smart components provide considerably more functionality, but by using advanced microcircuits they occupy almost the same space as conventional ones. Further, the interface to the peripheral's environment is simplified because the data are processed at the 'point of action', which allows to aggregate information and thus reduce the communication flow between peripheral equipment and controller (a small sketch of this pre-processing follows below). This pre-processing of the smart components includes conversion from analog data to digital data, which makes the communication less error prone. Fault localization is also simplified since the smart component can use on-the-fly built-in fault detection mechanisms. Further, smart components support architectural flexibility because the data is available everywhere in the aircraft via the data busses. The installation complexity can additionally be reduced by using the existing data bus and its respective (probably standardized) connectors (instead of direct wiring between the peripheral and controller). Thereby, the smart components contribute significantly to the reduction of wiring and weight. Additionally, the interface of the smart components can be standardized without posing restrictions on the internal technology of the peripheral. But one should also take into account that this leads to more complex components, which probably increases the cost and time effort for development, verification and certification. Furthermore, the resulting architecture depends on the reliability of the data bus connecting several peripherals with the controller – loss of one data bus means loss of all connected peripherals. This risk can be reduced by using redundant busses (with all consequences on wiring and weight).
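The pre-processing mentioned above can be sketched as follows: several analog readings are digitized at the 'point of action' and aggregated into a single digital bus message. This is a hypothetical illustration only, not an actual RDC interface; all names, scaling factors and values are assumptions.

    /* Hypothetical sketch of the pre-processing done by a smart peripheral /
     * remote data concentrator: digitize several analog readings locally and
     * aggregate them into one digital bus message. */
    #include <stdint.h>
    #include <stdio.h>

    #define SENSORS 4

    /* One aggregated bus message replaces four analog point-to-point wires. */
    typedef struct {
        uint8_t  rdc_id;
        uint8_t  status;       /* result of built-in fault detection, 0 = OK */
        uint16_t mean_centik;  /* mean temperature in units of 0.01 K */
    } rdc_message_t;

    static uint16_t digitize(double analog_volts) {
        /* assumed transfer function: 0..5 V maps linearly to 250.00..350.00 K */
        return (uint16_t)((250.0 + 20.0 * analog_volts) * 100.0);
    }

    int main(void) {
        double analog[SENSORS] = { 2.50, 2.52, 2.48, 2.51 }; /* made-up inputs */
        uint32_t sum = 0;
        for (int i = 0; i < SENSORS; i++)
            sum += digitize(analog[i]);

        rdc_message_t msg = { 7, 0, (uint16_t)(sum / SENSORS) };
        printf("RDC %u -> mean %u.%02u K (status %u)\n",
               (unsigned)msg.rdc_id, (unsigned)(msg.mean_centik / 100),
               (unsigned)(msg.mean_centik % 100), (unsigned)msg.status);
        return 0;
    }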


Examples. Smart peripherals are used especially in systems where sensors and actuators are distributed over large parts of the aircraft: doors and slides management, fuel gauging, flight attendant panels, etc. For example, the doors and slides management system described in [Diehl04] consists of a central management controller and distributed smart peripherals which are located at the passenger doors, emergency exits, cargo doors, bulk cargo doors, avionics bay hatches and mobile crew rest hatches. The central management controller (which runs as an application on an IMA module) is connected to the smart components via a triple-redundant CAN bus. In the fuel gauging system described in [Moo01], p. 33-9, a central gauging controller is located in the avionics bay and connected to smart fuel gauging modules (which are in fact integral tank-wall connectors) via ARINC 629 or ARINC 429.

1.5.5 Operating System Requirements

In the previous sections, we have discussed (real-time) operating systems in general without considering the specific requirements for application in the avionics domain. In accordance with [Kro04] and [Rus99], operating systems to be used in avionics systems have to meet the following criteria:

• Real-time capabilities shall ensure predictable timing behavior by providing real-time clocks and timers for the scheduler and the application software.

• Partitioning shall provide protection against fault propagation from one function to another. This mechanism is necessary for the separation of functions (e.g., on standard platforms) as well as for multilevel criticality isolation (e.g., on IMA platforms).

– Time partitioning considers deterministic scheduling of different partitions and deterministic scheduling of different threads within one partition.

– Space partitioning refers to memory management and protection, which includes static allocation of code, maximum stack sizes, and fixed sizes and locations for the memory heap. Its memory protection includes memory access rights (e.g., to treat code differently from the stack area). Memory protection can also be supported by the processor hardware.

• Communication mechanisms shall allow data flow between partitions and communication with external platforms or peripheral equipment.

• Health monitoring shall detect illegal access to memory, ports, and the computing resource by working as a watchdog in the time and space domain. In case of errors, configured actions can be taken.

• Configurability shall allow the scheduling of the partitions, the memory allocation for each partition, and the handling of specific errors, among others, to be defined for the RTOS (a sketch of such a statically configured partition schedule is given after this list).
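To make the time-partitioning and configurability criteria more concrete, the following C sketch shows a statically configured major frame with fixed partition windows and a dispatcher that cycles through them. In a real ARINC 653 system, this table is part of the tool-generated module configuration (see Chap. 3); the kernel primitives assumed here (switch_to_partition, wait_until) are hypothetical.

```c
/* Minimal sketch of static time partitioning: a major frame is divided
 * into fixed windows, each granting the CPU to one partition.  The
 * schedule is defined at configuration time and never changed at run
 * time, which yields deterministic timing behavior. */
#include <stdint.h>

typedef struct {
    uint8_t  partition_id;
    uint32_t offset_us;    /* window start within the major frame */
    uint32_t duration_us;  /* window length */
} sched_window_t;

/* example: a 100 ms major frame with three partition windows */
static const sched_window_t schedule[] = {
    { .partition_id = 1, .offset_us =     0, .duration_us = 40000 },
    { .partition_id = 2, .offset_us = 40000, .duration_us = 30000 },
    { .partition_id = 3, .offset_us = 70000, .duration_us = 30000 },
};
#define MAJOR_FRAME_US 100000u

extern void switch_to_partition(uint8_t id); /* assumed: make partition
                                                runnable               */
extern void wait_until(uint64_t t_us);       /* assumed: dispatcher
                                                sleeps until time t    */

void partition_dispatcher(void)
{
    uint64_t frame_start = 0;
    for (;;) {
        for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
            switch_to_partition(schedule[i].partition_id);
            /* the partition runs until its window expires; the timer
               interrupt then returns control to the dispatcher */
            wait_until(frame_start + schedule[i].offset_us
                                   + schedule[i].duration_us);
        }
        frame_start += MAJOR_FRAME_US;
    }
}
```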

The selection of an appropriate operating system – whether COTS or special-to-purpose – for a given hardware platform is important for the overall characteristics of the platform. Further, it determines the possibility to verify and certify the platform in conformance with the applicable standards (e.g., DO-178B). One factor which additionally has to be considered is the processor of the hardware platform: many of today's complex embedded computer systems use a PowerPC processor, whose memory management unit (MMU) can support the implementation of the memory protection mechanism.

Different approaches for partition-supporting RTOSs are listed in [Kro04] (p. 13):

• The POSIX specification defines the basic set of services to implement portable applications, which in the current version includes real-time capabilities, threads, and networking. POSIX provides C and Ada bindings.


• The ARINC 653 specification defines an API for implementing avionics applications to be executed concurrently on the same platform. In addition to the API, a model is provided which defines the general behavior of RTOSs conforming to ARINC 653. ARINC 653 defines C and Ada language bindings.

• The Ravenscar profile is an Ada profile which restricts the set of available Ada services to those necessary to implement high-integrity, efficient real-time systems without endangering the overall safety. The Ravenscar profile is a de-facto standard which has been widely accepted – also by the ISO standards body. Further information about the Ravenscar profile can be found in [Bur].

For the rest of this thesis, we will focus exclusively on operating systems conforming to ARINC 653. ARINC 653 has been inspired by the ideas of POSIX, though ARINC 653 and POSIX are not compatible, i.e., applications are not portable without changes. Likewise, the ARINC 653 Ada binding and the Ravenscar profile are incompatible. Nevertheless, portability between POSIX and the ARINC 653 C binding, or between the Ravenscar profile and the ARINC 653 Ada binding, is not necessary for avionics applications since they are typically developed based on the specific requirements of an aircraft and not for the open market.

ARINC 653, the API services to be provided by conforming operating systems, the configuration aspects, and the approach for developing application software for ARINC 653 platforms are described in detail in Chap. 3.

References

COTS platforms and their usage in avionics are discussed in [Sto96], [Tal98], and [Coo03], among others. Application examples for a COTS real-time operating system are addressed in [Avi04]. The hardware and software of IMA platforms are described, for example, in [VICTORIA], [MS03], [ARINC653P1-2], and [ARINC653P3]. Application examples of remote data concentrators in avionics systems can be found in [AV04f]. [Rus99] and [Kro04] address the requirements of operating systems to be used in avionics.

1.6 Categorization of Aircraft Networking Technology

The network on board an aircraft connects the controllers with their peripheral devices and also enables inter-controller communication. For federated architectures, the network can be divided into two layers which use different networking technologies: the upper layer in the networking hierarchy refers to controller-to-controller communication networks, whereas the lower layer considers the communication network between controller and peripherals. Typically, there exists one lower-layer network per system since the sensor information is not shared directly (but the system's main controller may provide the information in an abstracted way). In integrated modular architectures, the networking hierarchy is partially dissolved. Different tasks of one system as well as different systems can be co-located on the same IMA module, i.e., intra-system and inter-system communication cannot be distinguished if the internal communication means of an IMA module are used.

This mixing of networking layers introduces new authentication, performance, safety, and security risks. When choosing an appropriate networking technology for an IMA architecture, one has to consider both the architectural level and the platform level. With respect to the networking architecture, it is recommended to divide the upper layer into different networks which are connected and separated by appropriate means. Considering the platform level, it has to be


shown that the communication of one partition of an IMA module^10 does not affect the configured characteristics of the communication links belonging to the other partitions of the same IMA module. The architectural considerations will be discussed briefly in the following. Verification of platform characteristics is addressed in detail in Chap. 5.

^10 The term partition refers to one task of an IMA module. Two partitions on the same IMA module can either belong to the same system (in this context usually called application) or to different systems. The terms partition and application are defined in Chap. 3.

Standards like [ARINC664P1-1] recommend that the aircraft computing network be divided into four sub-networks. These sub-networks are called aircraft domains in [ARINC664P1-1]. This definition of the term aircraft domain differs slightly from the term domain used in Sect. 1.1, where a domain was the functional grouping of systems. Aircraft domains, in contrast, are supersets of networks with the same requirements for performance, safety and security. In [ARINC664P1-1] and [ARINC664P5], it is suggested to group the aircraft computing network into four aircraft domains – the aircraft control domain, the airline information services domain, the passenger information and entertainment services domain, and the passenger owned devices domain – which may be further divided into sub-domains. Figure 1.6 depicts the suggested grouping. Some aircraft domains may provide services to the other domains, which requires appropriate mechanisms at the borders of each domain to ensure the required security level of each aircraft domain. Note that in Fig. 1.6 the level of criticality decreases from left to right, while the level of protection mechanisms needed to achieve the required safety depends only on the characteristics of each domain.

[Figure 1.6: Grouping of the aircraft computing network into four aircraft domains – the aircraft control domain (flight and embedded control system sub-domain, cabin core sub-domain), the airline information services domain (administrative sub-domain, passenger support sub-domain), the passenger information and entertainment services domain, and the passenger owned devices domain]

Aircraft domains. The aircraft control domain is divided into two sub-domains: the flight and embedded control system sub-domain and the cabin core sub-domain. The flight and embedded control system sub-domain comprises the systems which control the aircraft from the flight deck. This aircraft domain is related to the flight control domain, the engine domain, the cockpit domain, the utilities domain and the energy domain (all described in Sect. 1.1). The cabin core sub-domain subsumes the systems for environmental control, smoke detection, and slides and doors management. It is related to the cabin domain in Sect. 1.1.

The airline information services domain can be subdivided into two sub-domains: the administrative sub-domain and the passenger support sub-domain. The administrative sub-domain provides operational and airline administrative information to the flight deck and the cabin and maintenance services. It is related to the OIS domain in Sect. 1.1. The passenger support sub-domain provides information to support the passengers. It is related to the OIS domain and the PCES domain.

The passenger information and entertainment services domain provides the in-flight entertainment (i.e., video, audio, gaming), passenger flight information, and access to the Intranet and Internet using built-in terminals, including related services like Voice over IP, Short Message Service (SMS), and Email. It is related to the PCES domain.

The passenger owned devices domain is a network of those devices that passengers may bring on board to connect to the passenger information and entertainment services or to one another.

These networks and sub-networks differ with respect to the platform types connected by the network, the performance requirements, the access rights, and the other configuration, security and certification characteristics. The networks in the aircraft control domain are static networks which connect safety-critical systems; thus, failure to meet the real-time or availability requirements may have catastrophic effects. The requirements of the airline information services domain and the passenger information and entertainment services domain are less stringent and the effects of potential failures less critical. Nevertheless, it is important that these networks are protected against viruses, worms, denial-of-service attacks, etc. which may come from malicious passenger owned devices.

In this thesis, the focus is on systems, platforms and networking technology applied in the aircraft control domain. The above division into aircraft domains is given to provide a complete overview of the networks aboard an aircraft and of the many different requirements and characteristics to be considered when designing the aircraft network and specifying the requirements documents. For a detailed and comprehensive comparison of the network characteristics, the reader is referred to [ARINC664P5], in particular Appendix I.

In the following, we will focus on networking technology used where real-time communication with bounded jitter and latency as well as high availability and integrity of the network are required, but the necessary communication bandwidth is low and the network configuration is static, i.e., all networking nodes are known at configuration time. We will consider networking technology both for controller-to-controller and for controller-to/from-peripheral communication. Inter-controller networks typically conform to MIL-STD-1553B, ARINC 429, ARINC 629 or ARINC 664/AFDX, but proprietary solutions are also possible. For communication between controllers and peripheral devices, analog and digital signals, commercial RS232 and RS422, digital data busses like ARINC 429, ARINC 629 and the Controller Area Network (CAN), as well as a variety of proprietary solutions are used. For the less critical parts, Ethernet-based application-specific protocols may also be used, e.g., for communication with intelligent graphical user interfaces in the cabin. As discussed in previous sections (e.g., Sect. 1.3), digital bus solutions are preferred to dedicated wires carrying analog or digital signals in order to reduce the amount of wiring needed to connect a controller with many peripheral devices. In the next sections, the main characteristics of ARINC 429, ARINC 629, ARINC 664/AFDX, and CAN are discussed along with some examples. MIL-STD-1553B is a military standard and its further consideration is outside the scope of this thesis since it is typically not applied in civil aircraft.

1.6.1 ARINC 429

The ARINC 429 digital data bus ([ARINC429]) became the first standard data bus to be applied in civil aircraft, e.g., in the Airbus A300/310, A318 and A340-500/600, and the Boeing 757 and 767. Before the development of ARINC 629 and ARINC 664/AFDX, it was the most widely used means for communication between controllers or between controllers and smart peripheral devices (as noted in Sect. 1.5.4). ARINC 429 is intended for exchanging very small payload units (24 bits per 32-bit package) at a data rate of 100 kbit/s. The payload is identified by the 8-bit label contained in each package. Many fixed labels and thus payload formats are defined in the standard, which


allows each receiver to access the payload without additional coordination effort. Typically, the payload contains discrete state information and switching commands. For example, one label is specified to represent the state of the fasten-seat-belt and no-smoking signs; others carry temperature control commands for the air conditioning system.

The ARINC 429 data bus operates in a single-source-to-multiple-sinks unidirectional mode, so that each sender can transmit a message to different receivers, but for replying to the sender separate data busses are required (one for each sender). Thus, collision avoidance is inherent.

Security issues and access rights have not been considered in the ARINC 429 standard because the data bus is only intended for static networks of mechanically integrated controllers and peripheral equipment.
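As an illustration of the word format, the following C function packs the standard fields of an ARINC 429 word (8-bit label, 2 source/destination identifier (SDI) bits, 19 data bits, 2 sign/status matrix (SSM) bits, and 1 odd-parity bit); the bit reversal of the label applied on the wire and the label-specific data encodings are deliberately omitted here.

```c
/* Sketch of packing an ARINC 429 word into its 32-bit frame. */
#include <stdint.h>

/* returns the parity bit that makes the total number of 1-bits odd */
static int odd_parity(uint32_t w)
{
    int ones = 0;
    for (int i = 0; i < 31; i++)
        ones += (w >> i) & 1u;
    return !(ones & 1);
}

uint32_t a429_pack(uint8_t label, uint8_t sdi, uint32_t data19, uint8_t ssm)
{
    uint32_t w = 0;
    w |= (uint32_t)label;                   /* bits  1..8  : label  */
    w |= ((uint32_t)(sdi & 0x3u))   << 8;   /* bits  9..10 : SDI    */
    w |= (data19 & 0x7FFFFu)        << 10;  /* bits 11..29 : data   */
    w |= ((uint32_t)(ssm & 0x3u))   << 29;  /* bits 30..31 : SSM    */
    w |= (uint32_t)odd_parity(w)    << 31;  /* bit  32     : parity */
    return w;
}
```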

Summarizing, the pros and cons of ARINC 429 are:

+ standardized labels

+ digital data bus helps to reduce the wiring needed to connect different controllers

+ single-source-to-multiple-sinks mode means that it is well suited for inter-controller and controller-to-actuator communication that often distributes one piece of information to many receivers

– single-source-to-multiple-sinks mode also means that it is not well suited for sensor-to-controller communication because the wiring reduction there is negligible (although only one data bus is needed to transmit the status information to redundant controllers)

– single-source data bus means that each piece of equipment needs as many input ports as the number of sources it expects to receive data from

± unidirectional mode determines the roles of sender and receivers but can result in additional data busses for acknowledgments or replies

+ small ARINC data packages support guaranteed real-time behavior

– small payload may lead to low throughput, i.e., ARINC 429 cannot be used for audio or video transmission

± not for dynamic open networks but very reliable for static networks

1.6.2 ARINC 629

ARINC 629 was developed to overcome the drawbacks of ARINC 429 – in particular with respect to the number of senders and receivers per data bus and the data rate. The development was driven by Boeing, who invented the Digital Autonomous Terminal Access Data Communication (DATAC) data bus that has been used in the 777 only. Nevertheless, it later became an ARINC standard. ARINC 629 operates in multiple-sources-to-multiple-sinks mode, which allows much more freedom in the exchange of data between communication nodes and thus reduces the required amount of wiring. The data bus operates at 2 Mbit/s. The protocol used by ARINC 629 follows a time-based collision-avoidance concept in which each communication node is allocated a particular time slot to access the bus and transmit data onto the bus. Bus access control is distributed between the nodes, which autonomously decide when the appropriate time slot is available through the use of several control timers embedded in the bus interfaces.
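The following C fragment gives a strongly simplified model of this distributed, timer-based access scheme: each terminal owns a unique terminal gap (TG) and may seize the bus once the bus has been idle for its TG, while a common transmit interval (TI) bounds its transmission rate. The model abstracts from the synchronization gap and all electrical details; the names and values are ours, not the standard's.

```c
/* Illustrative model of ARINC 629's timer-based collision avoidance:
 * among terminals with pending messages, the one whose unique terminal
 * gap (TG) expires first after the bus falls silent transmits next,
 * and the common transmit interval (TI) limits each terminal's rate. */
#include <limits.h>

#define N_TERMINALS 3
#define TI_US 10000u            /* common transmit interval */

typedef struct {
    unsigned tg_us;             /* unique terminal gap              */
    unsigned next_allowed_us;   /* earliest next transmission (TI)  */
} terminal_t;

/* given the time the bus became idle, return the index of the
 * terminal that transmits next */
int next_sender(const terminal_t t[], unsigned bus_idle_us)
{
    int winner = -1;
    unsigned best = UINT_MAX;
    for (int i = 0; i < N_TERMINALS; i++) {
        unsigned ready = bus_idle_us + t[i].tg_us;
        if (ready < t[i].next_allowed_us)  /* TI not yet elapsed */
            ready = t[i].next_allowed_us;
        if (ready < best) { best = ready; winner = i; }
    }
    return winner;
}
```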

The advantages and disadvantages of ARINC 629 are:

+ multiple transmitters are allowed, which helps to further reduce wiring


+ multiple-sources-to-multiple-sinks mode means that a communication node can receive and transmit via a single communication port (addressing redundancy or other architectural considerations may lead to additional data busses)

+ multiple-sources-to-multiple-sinks mode means that it is well suited for communication on the upper and lower layers of the networking hierarchy

+ the high data rate allows higher throughput than is achievable with ARINC 429 networks, i.e., ARINC 629 can generally also be used for audio and (low-quality) video communication

– throughput is not sufficient to allow parallel audio and video communication with acceptable quality

1.6.3 ARINC 664 and AFDX

ARINC 664 is a fast networking data bus based on commercial, state-of-the-art Ethernet standards from IEEE and IETF (for details see the introduction section of [ARINC664P3-1]). The aim of ARINC 664 is to provide secure and reliable communications for the exchange of both critical and non-critical data. The standard ARINC 664 is specified in seven separate document parts (see the references list at the end of the chapter). Parts 1 to 5 of the standard describe the general concepts and implementation options for an aircraft data network but leave room for many implementation decisions with respect to the operational mode (e.g., half- or full-duplex operation), the protocols and services used, address structures, and network interconnection services. One variant for a deterministic network is defined in Part 7 ([ARINC664P7]) – the so-called Avionics Full Duplex Switched Ethernet (AFDX). This variant is used in the Airbus A380, where it replaces the ARINC 429 network for most inter-controller communication. For the rest of this thesis, we will consider AFDX only.

ARINC 664 – and thus also AFDX – is based on standard Ethernet, which is basically a non-deterministic transmission medium without guaranteed bandwidth, since collisions cannot be avoided if hubs are used for connecting the network nodes, and switches may need to buffer data. To achieve deterministic behavior, the point-to-point connectivity used for ARINC 429 data busses is emulated using a star-topology switched Ethernet. In such a topology, collisions can never occur on the wire but may be resolved within the switches within bounded time intervals. This is achieved by controlling latency and jitter of each so-called Virtual Link (VL), resulting in a bounded bandwidth and a bounded frame delivery interval for each VL. This yields a calculable maximum latency in the network rather than a probabilistic latency depending on the amount of traffic. Nevertheless, this solution forbids the use of transport layer protocols which require retransmission of lost messages (e.g., TCP), since the occasional loss of messages cannot be determined in advance and thus cannot be included in the bandwidth calculation of a VL. In general, occasional loss can be tolerable if state information is transferred repeatedly. Otherwise, appropriate protocols have to be defined at application layer to overcome this limitation, e.g., by adding a sequence number to each message.^11 In avionics applications, timely transmission of messages is much more important than reliable data delivery. For many applications using conventional networks, usually the opposite holds and short delays are acceptable. Note that the likelihood of packet loss in an AFDX network is very low owing to the topology and network characteristics, and loss is expected only in case of rare bit errors (which can be detected using CRC mechanisms). Fault tolerance and availability of the network can be provided by redundant links via physically separated networks (and thus different wires). Duplicated messages are then checked for integrity and filtered in the receiver's redundancy management unit.

^11 Note that application layer here means on top of ARINC 653 ports and thus on top of AFDX ports; such protocols are outside the scope of this thesis.
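The deterministic bandwidth guarantee per VL can be illustrated with a back-of-the-envelope check: each VL is characterized by its Bandwidth Allocation Gap (BAG, the minimum spacing between two consecutive frames of that VL) and its maximum frame length, so its worst-case load is Lmax/BAG, and a legal configuration keeps the sum over all VLs of a link below the link rate. The VL values in the following C sketch are invented for illustration.

```c
/* Back-of-the-envelope check of a static AFDX configuration: the
 * worst-case bandwidth of a VL is Lmax / BAG, and the aggregate over
 * all VLs of a physical link must stay below the link rate. */
#include <stdio.h>

typedef struct {
    unsigned bag_ms;      /* bandwidth allocation gap in ms */
    unsigned lmax_bytes;  /* maximum frame length           */
} virtual_link_t;

/* worst-case bandwidth of one VL in bit/s */
static double vl_bandwidth(virtual_link_t vl)
{
    return (vl.lmax_bytes * 8.0) / (vl.bag_ms / 1000.0);
}

int main(void)
{
    const double link_rate = 100e6;  /* 100 Mbit/s AFDX link */
    virtual_link_t cfg[] = {
        { .bag_ms =   2, .lmax_bytes = 1518 },
        { .bag_ms =  16, .lmax_bytes =  256 },
        { .bag_ms = 128, .lmax_bytes =   64 },
    };

    double total = 0.0;
    for (unsigned i = 0; i < sizeof cfg / sizeof cfg[0]; i++)
        total += vl_bandwidth(cfg[i]);

    printf("aggregate worst-case load: %.2f Mbit/s (%s)\n",
           total / 1e6, total <= link_rate ? "legal" : "over-subscribed");
    return 0;
}
```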


Data flow can be further isolated by providing a cascaded star topology, i.e., switched networks connected by a switch or router. This also provides the necessary scalability for large networks, which may be needed when AFDX is used for controller-to/from-peripheral communication. The switches used within AFDX networks are statically configured by configuration tables, as are the VLs of the communicating nodes (which are called end systems within the context of ARINC 664). This is necessary to achieve deterministic performance and to avoid latencies when generating new entries in dynamic routing tables. Unlike in open-world networks, all communication links and end systems are pre-defined and deterministic in avionics systems.

Since ARINC 664/AFDX is based on Ethernet technology, it offers higher bandwidth than ARINC 429 and ARINC 629. Additionally, the length of logical frames is up to 8 KBytes, which allows complex payloads containing structured data, text, audio and video streams. Thus, it is also well suited for application in the airline information services domain and the passenger information and entertainment services domain. Connection to these domains and to the passenger owned devices domain is easy to realize using appropriate switching and firewall techniques since the basic means are the same (namely Ethernet, UDP, etc.).

The payload of AFDX messages is not standardized in the ARINC 664 documents, i.e., labels like the ARINC 429 labels have not yet been defined. Nevertheless, AFDX messages can be used to transport ARINC 429 labels – either by sending a single ARINC 429 label per AFDX message (single label message format) or by combining several ARINC 429 labels in one AFDX message (multiple label message format). Corresponding examples are provided in [ARINC664P7d], p. 126–129. In addition, it is possible to group primitive data types (i.e., integer, float, boolean, etc.) together in a message using a sequence of Functional Data Sets (FDSs). Figure 1.7 depicts the message structure of a UDP packet containing one or more FDSs (figure according to [ARINC664P7d], p. 86).

[Figure 1.7: Sequence of functional data sets in a UDP packet – the UDP payload carries a sequence of one or more Functional Data Sets (FDS 1, FDS 2, ...); each FDS starts with a 4-byte Functional Status Set (status bytes 1–4) followed by up to four Data Sets (DS 1–DS 4), each containing data primitives]

Each functional data set consists of two parts: a 4-byte Functional Status Set (FSS), which comprises in each byte the health and status of one of the following four Data Sets (DSs), and the data sets themselves. The length of the sequence of FDSs is only limited by the payload of the underlying transport mechanism (e.g., by the UDP payload size). Each DS can comprise many data primitives whose health and status are represented by one common field in the corresponding FSS.


This means that if one data primitive of a DS is invalid, the entire DS is marked invalid. As a consequence, a DS usually groups data which originates from the same source, to avoid the form of "fault propagation" which may occur when grouping data from different sources. For example, if a sensor sends different readings, all at the same rate, to its corresponding controller – which was previously done by sending ARINC 429 messages with different labels – all information can be sent in one AFDX message, but each piece of information is contained in a different DS.
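Structurally, the FDS layout described above can be sketched in C as follows; the concrete number of data sets, their contents, and the status encodings are application-defined, so the fixed layout below is purely illustrative (and ignores wire-format packing and byte-order issues).

```c
/* Structural sketch of one Functional Data Set: a 4-byte Functional
 * Status Set whose bytes give the health of up to four following
 * Data Sets.  Contents and status encodings are application-defined. */
#include <stdint.h>

typedef struct {
    uint8_t ds_status[4];   /* one status byte per data set (FSS) */
} functional_status_set_t;

typedef struct {
    float reading[2];       /* example: two data primitives from the
                               same source, so that one invalid
                               primitive only invalidates this data
                               set, not its neighbours */
} data_set_t;

typedef struct {
    functional_status_set_t fss;
    data_set_t              ds[4]; /* up to four data sets per FDS */
} functional_data_set_t;

/* a UDP payload then carries one or more such FDSs in sequence,
 * bounded only by the UDP payload size */
```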

ARINC 664/AFDX also allows multicasting, i.e., multiple sinks for one message. This is possible by using MAC multicast addresses (as in standard Ethernet), which provide routing and duplication of the messages as required.

The API for the application layer of AFDX end systems is defined in ARINC 653, which is described in detail in Chap. 3. The basic idea with respect to inter-module communication is that ARINC 653 provides two types of communication ports: queuing ports for the transmission of individual, sequential messages and commands, and sampling ports for state information which shall be available for repeated reading until the reception of newer values. Both types of ports shall be usable with AFDX. This requires an AFDX layer on top of the OSI transport layer, since queuing or sampling port functionality for receive ports is not provided by UDP ports.
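The following fragment sketches how a partition might publish state information via a sampling port, written in the style of the ARINC 653 (APEX) C binding discussed in Chap. 3. The header name is implementation-specific, the port name and payload are invented, and error handling is reduced to a single return-code check.

```c
/* Sketch of sampling-port usage in the style of the ARINC 653 (APEX)
 * C binding; exact type spellings and the header vary between
 * implementations. */
#include <apex.h>   /* implementation-specific APEX header (assumed) */

void publish_state(void)
{
    SAMPLING_PORT_ID_TYPE port;
    RETURN_CODE_TYPE      rc;
    float                 cabin_temp = 21.5f;  /* illustrative payload */

    /* ports are created during partition initialization against the
       static configuration (name, size, direction, refresh period) */
    CREATE_SAMPLING_PORT("CABIN_TEMP", sizeof cabin_temp, SOURCE,
                         1000000000LL /* 1 s refresh period, in ns */,
                         &port, &rc);
    if (rc != NO_ERROR)
        return;

    /* a sampling port keeps only the latest value: every write simply
       overwrites the previous message, and readers may read it
       repeatedly until a newer value arrives */
    WRITE_SAMPLING_MESSAGE(port, (MESSAGE_ADDR_TYPE)&cabin_temp,
                           sizeof cabin_temp, &rc);
}
```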

The pros and cons of AFDX can be summarized as follows:

+ protocols and basic transmission technology are based on open-world technology and standards

+ well suited for all types of network domains since the maximum frame length is 8 KBytes

+ high data rate of 100 Mbit/s for the data bus and guaranteed bandwidth and latency per VL (if a legal configuration is used)

– switches and end systems have to be configured to achieve deterministic performance; consistency and correctness (according to the available and required bandwidth) of the configuration tables have to be analyzed, which adds verification tasks that are not necessary for ARINC 429 and ARINC 629 networks

+ use of COTS components for switches and end system technology is possible (if they comply with the specific requirements of AFDX), which helps to reduce development and purchase costs

+ multiple transmitters are inherently allowed

+ multiple receivers are possible using the means of link layer multicast

– format and content of payloads are not standardized by ARINC 664 and have to be defined by the aircraft manufacturers (probably because the novel technology applied throughout the aircraft will introduce many new signal types which – at the present point in time – cannot be foreseen and properly standardized)

+ the AFDX payload can be used to format ARINC 429 labels as AFDX messages (thereby allowing single label messages and multiple label messages)

+ AFDX payload formatting as Functional Data Sets is possible, which allows data to be grouped in data sets with status information about each data set; the grouping is carried out according to functional and application considerations almost without regard to the resulting message size

+ connection to open-world networks (i.e., the passenger owned devices domain, the Internet) and less safety-critical network domains (i.e., the airline information services domain and the passenger information and entertainment services domain) is easy to realize since the implementation of AFDX is based on Ethernet and UDP/IP


1.6.4 Controller Area Network (CAN)

Controller Area Network (CAN) is a bus system that was developed by Bosch in the early 1980s for use in automobile networks to reduce the wiring within the automobile and thus its weight. The specification has been internationally standardized as ISO 11898 ([CAN-P1], [CAN-P2], [CAN-P3d], [CAN-P4]). Besides its original application area in automobiles, it is nowadays also used in off-highway and off-road vehicles, trucks and busses, in passenger and cargo trains, maritime electronics, the automation industry, and aircraft and aerospace electronics, where it has often replaced proprietary solutions. In avionics systems it is typically used by controllers to communicate with smart peripherals.

The CAN data bus operates in multiple-sources-to-multiple-sinks mode, and bus access is regulated by the communication nodes themselves using bit-wise arbitration of the CAN message identifier. Each message identifier is uniquely assigned to one sender, serves as a priority measure, and also denotes the message content. Message identifiers are statically assigned during system design and can only be used by one sender, to avoid priority conflicts. One sender can use different message identifiers for different content, since the CAN message identifiers are content-oriented and do not address a specific set of receivers. Instead, each receiving node can filter the CAN messages using the identifier to decide which messages shall be handled. This means that new receivers can easily be added to the data bus without any modifications to the existing communication nodes. Message processing can then occur simultaneously in the nodes.
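Identifier-based filtering at a receiver can be sketched as a simple filter/mask comparison, as in the following C fragment; the register-style filter representation is illustrative and not prescribed by ISO 11898.

```c
/* Sketch of identifier-based acceptance filtering at a receiving CAN
 * node: the node compares each incoming identifier against a local
 * filter/mask pair and simply ignores frames it is not interested in,
 * so new receivers can be added without touching existing nodes. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t filter;  /* identifier bits this node listens for      */
    uint32_t mask;    /* 1-bits select which identifier bits matter */
} can_filter_t;

static bool can_accept(can_filter_t f, uint32_t msg_id)
{
    return (msg_id & f.mask) == (f.filter & f.mask);
}

/* example: accept the whole identifier group 0x120..0x12F, e.g. all
   readings of one zone (such grouping is a system design choice) */
static const can_filter_t zone1_group = { .filter = 0x120, .mask = 0x7F0 };
```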

If two or more communication nodes try to send a new CAN message simultaneously, the bus access conflict is resolved by these nodes themselves by bit-wise arbitration of the identifiers. The "winner" can transmit its message while the others wait until the bus is available again. This approach helps to guarantee low latencies for high-priority or time-critical messages and builds a hierarchy of messages. In addition, it guarantees that neither information nor time is lost, since the winner has already started transmission without time-consuming negotiation and the others know that their message has not been sent (no information loss). The disadvantage of this approach is that latency increases with load and – in case of faults that cause some nodes to make excessive demands – other nodes with a lower priority may be prevented from sending their messages.
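The arbitration mechanism itself can be modeled in a few lines of C: identifiers are transmitted bit by bit from the most significant bit, a dominant 0 overrides a recessive 1 on the wire, and senders that placed a recessive bit while the bus carries a dominant one back off. The function below is an illustrative model, not driver code.

```c
/* Illustrative model of CAN bit-wise arbitration over 11-bit
 * (standard) identifiers; identifiers are assumed unique and the
 * number of competing senders n is assumed to be at most 32. */
#include <stdint.h>

int can_arbitrate(const uint16_t id[], int n)
{
    uint32_t contending = (1u << n) - 1;   /* still-active senders    */

    for (int bit = 10; bit >= 0; bit--) {  /* MSB first               */
        uint32_t dominant = 0;
        for (int i = 0; i < n; i++)
            if (((contending >> i) & 1u) && !((id[i] >> bit) & 1u))
                dominant |= 1u << i;       /* these senders drive a 0 */
        if (dominant)                      /* a dominant bit wins the
                                              slot: recessive senders
                                              back off                */
            contending = dominant;
    }
    /* exactly one sender remains because identifiers are unique */
    for (int i = 0; i < n; i++)
        if ((contending >> i) & 1u)
            return i;
    return -1;
}
```

Since a dominant bit always prevails, the overall winner is the sender with the numerically smallest, i.e., highest-priority, identifier.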

Different types of message frames are possible: data frames, remote frames, error frames, and overload frames. Data frames contain an identifier and the data itself, whereas remote frames are requests from one node to another to send a data frame with the respective information. If a data frame and a corresponding remote frame are transmitted simultaneously, the data frame prevails over the remote frame although both messages contain the same identifier, i.e., have the same priority. Error frames are transmitted if a node has detected a bus error. Overload frames are used to provide an extra delay between frames. Data frames and remote frames are separated from the preceding frame by an interframe space.

CAN messages can have a payload of up to 8 Bytes, not including the message identifier which defines the content of the payload. Thus, the concept of the CAN message identifier corresponds to the ARINC 429 labels. Comparing CAN with ARINC 429, CAN allows multiple senders on the bus and provides up to 2.5 times the payload of an ARINC 429 message. On the other hand, the protocol overhead in a CAN message is more than five times the overhead of an ARINC 429 message. Also, the number of identifiers/labels is different: CAN allows up to 2^11 different identifiers (standard CAN) or up to 2^29 different identifiers in the extended version, whereas ARINC 429 only provides up to 2^8 different labels.

The number of communication nodes on one CAN bus is theoretically not limited. During system design, the expected/possible delay times and the electrical loads on the bus line are calculated to avoid too many nodes on one bus. To avoid total loss of communication with all peripheral equipment if a CAN bus fails (either by loss of power or by interruption of the wiring), typically redundant busses are used, of which at least one is connected to essential power.


The possible length of the bus line depends on the transfer rate: up to 40 m at a transfer rate of 1 Mbit/s (the maximum transfer rate), 620 m at 100 kbit/s, and 10 km at 5 kbit/s. Note that aboard an aircraft, a bus length of only 40 m is unrealistic for most controller-to-peripherals networks, since the controllers are typically installed in centrally located compartments whereas the sensors/actuators may be distributed over the whole aircraft (e.g., smoke detectors).

Summarizing, the advantages and disadvantages of CAN are as follows:

± definition of message identifiers at system design, but no standardized identifiers

+ multiple-sources-to-multiple-sinks mode means that it is well suited for controller-to/from-peripherals communication but can also be used for inter-controller communication (e.g., communication between PLCs)

– the arbitration method can lead to starvation of senders with low-priority messages if malicious senders flood the bus with high-priority messages; this is not a severe safety risk if CAN is used in static and controlled networks

+ built-in error detection mechanisms increase reliability of transmission

– the relatively small payload means that CAN is not applicable for audio/video communication

1.6.5 Application-specific Busses and Protocols

In many systems, proprietary bespoke data busses and protocols are used for communication with the peripheral equipment or for inter-controller communication. For example, the German corporation KID-Systeme has developed two proprietary busses for the Cabin Management System CIDS of the Airbus A318 and A340-500/600: the so-called Topline and Middleline. These are used for controlling external devices such as reading lights or Additional Attendant Panels (AAPs). A detailed analysis of the pros and cons and a comparison with the above bus systems is outside the scope of this thesis.

Additionally, it is possible to implement application-specific protocols on top of standard busses.

References

ARINC 429 is standardized in [ARINC429]. ARINC 629 is specified in two parts: [ARINC629P1-5] provides the technical description and [ARINC629P2-2] an application guide. ARINC 664 consists of seven separate documents: [ARINC664P1-1] addresses the system concepts and provides an overview, [ARINC664P2-1] describes the Ethernet physical and data link layer specification, [ARINC664P3-1] addresses Internet-based protocols and services, [ARINC664P4-1] contributes Internet-based address structures and assigned numbers, [ARINC664P5] deals with network domain characteristics and interconnection, [ARINC664P7] specifies the Avionics Full Duplex Switched Ethernet (AFDX) network, and [ARINC664P8] describes interoperation with non-IP protocols and services. CAN comprises four documents: [CAN-P1] addresses the data link layer and physical signaling, [CAN-P2] the high-speed medium access unit, [CAN-P3d] the low-speed, fault-tolerant, medium-dependent interface, and [CAN-P4] the time-triggered communication. The application of CAN in the automotive domain is described, for example, in [Sto96]. The Topline and Middleline as examples of application-specific busses and protocols are introduced in [Pel02b]. A comparison of different networking technologies is provided in [MS03] and [Rus02a], among others.


1.7 Development Process – Architecture and Platform Dependence

The V-Model introduced at the beginning of this chapter and depicted in Fig. 1.2 does not consider in detail different architecture models, different redundancy concepts, or the use of different platform types. Moreover, the development process as described in Sect. 1.1 presumes a federated architecture and no development of dissimilar hardware or software. In the following paragraphs, the differences in the development process regarding diverse development and the use of IMA modules are emphasized.

Figure 1.8 depicts the development and V&V activities as typically performed for federated architectures where each system uses unstandardized, special-to-purpose platforms. These different platforms are developed in parallel and independently of each other. An aircraft architecture using such a development process may use hardware or software redundancy with duplicated identical components. The figure shows that the development of the different pieces of HW and SW equipment proceeds in separate, parallel processes and that equipment duplication to achieve redundancy does not result in additional development activities.

[Figure 1.8: V-Model used for federated architectures with different specialized platforms for each system – each system S11, S12, S21, S22 of the domains D1, D2 develops its own specialized HW equipment E111–E222]

Figure 1.9 shows the development process for an architecture using hardware redundancy with diverse hardware equipment. In this example, system S11 uses an architecture with redundant and diverse hardware but with the same (duplicated) software on the different platforms. At equipment level, this results in two separate development processes – one for each diversely developed piece of equipment, E112a and E112b.

In Figure 1.10, the development process for aircraft using an IMA architecture is depicted. One of its main characteristics is that different systems share the same computing resource – the IMA platform. As a consequence, the IMA modules are developed independently of the systems' application software. In this example, the applications E111, E121, E211, and E221 are developed according to the IMA approach (i.e., they use the services provided by ARINC 653) and are then integrated on the separately developed IMA module during system integration, which is considered in Chap. 5.


[Figure 1.9: V-Model depicting the parallel development of diverse platforms (E112a, E112b) for system S11]

[Figure 1.10: V-Model showing the development process as used for IMA architectures where standardized IMA platforms are developed once and then used by different systems – the applications E111, E121, E211, E221 of the systems S11–S22 share the IMA platform, and no specialized HW equipment is developed]


Chapter 2

Testing of Avionics Systems

The life cycle model introduced in the previous chapter assigns the tasks needed for building an aircraft to two main processes: the development process and the verification and validation process. The development activities were the subject of the previous chapter – particularly of Sect. 1.1 and Sect. 1.7. The verification and validation (V&V) process shall be examined in this chapter. Verification and validation are defined as follows:

• Verification is the process of determining that a system or one of its parts meets its specified requirements. Thereby, the focus is on showing conformance with the requirements and design specification rather than on demonstrating actual correctness. This means that correct and complete specification documents are required, which is examined by validation activities. Two sub-processes are distinguished:

– Requirement verification shows the compliance of a requirement with its respective higher-level requirement.

– Implementation verification demonstrates that a hardware or software component, a subsystem or a system complies with its respective requirement and design documents (e.g., a system with SRD and SDD).

• Validation is the process of determining that a system and its requirements are appropriate, consistent and complete and reflect the customer requirements and expectations. There are the following validation sub-processes:

– Requirement validation is performed on requirements and design decisions and determines whether the requirements are adequate, clear, correct, consistent and complete. The results of this sub-process are validated requirements to be used for verification. Experience has shown that early validation of the requirements (ideally before further detailing them) can identify subtle errors or omissions and thus may reduce the costs and time for a late redesign of inadequate requirements or even systems.

– Implementation validation is performed on the implemented product and determines that it is appropriate for its purpose and meets the customer requirements.

The primary aim of the V&V activities is to increase confidence in the development process and the implementation. Further, they shall provide the necessary documents required for certification of the aircraft. Certification means convincing an external regulating body that the aircraft is safe and complies with the applicable requirements and standards. It is practically impossible to guarantee the correctness and safety of a complex system for all situations and the absence of any faults, but verification and validation shall demonstrate an acceptable level of correctness, completeness, reliability and safety. The selection of useful methods and the extent to which each


needs to be applied is the result of an agreement with the certification authorities with regard to the standards and regulation documents referred to by the certification authorities. A verification and validation plan subsumes the agreement and determines the organizational responsibilities, the selected methods, the environment, and guidelines for re-verification. The selection process depends essentially on the associated development assurance level. However, to simplify development and certification, a Functional Hazard Analysis (FHA) is performed at the beginning of the development process to identify potential functional failures and to assign a failure condition category to each function, system and subsystem. If multiple categories of failure conditions can be associated with the aircraft's different functions and the respective systems and subsystems, and if the interaction between the systems or subsystems is limited by appropriate means (e.g., by architectural separation of the systems), different development assurance levels (DALs) can be assigned to each part. For the certification of the aircraft, each platform type and application software part is certified independently according to its respective DAL and then combined. Thereby, the V&V effort for systems and components considered less critical can be reduced, which in consequence reduces the overall costs and time spent for verification and validation.

Note that verification and validation activities are also performed on V&V documents. In particular, the test design documents and the test results are analyzed and reviewed. The test design documents describe which features have to be tested and how, and relate these descriptions to the requirements of the respective development level (i.e., tests on system level are related to system-level requirements). The objective of analyzing the test design in relation to the requirements is to confirm that the tests satisfy the specified criteria, define the expected results, and are described clearly and accurately. Test result analysis shall ensure that the test results are correct and that discrepancies between actual and expected results are explained.

In this chapter, we will focus on testing as one of the most promising implementation verification methods, since it allows extensive automation and thus can help to reduce the time and costs for certification and re-certification. To provide an overview of the different parts to be considered during verification and validation, the following section (Sect. 2.1) briefly subsumes methods for verification and validation. Section 2.2 then discusses the approach for aircraft integration and the respective testing activities. An essential part of testing are the test design documents, which are addressed in Sect. 2.3. These documents define the tests to be carried out. Of particular interest is the question which formalisms are used for defining the tests (i.e., the test cases and the test procedures). The different specification techniques (which are also reviewed in Sect. 2.3) lead to different restrictions regarding test data generation and test execution, which are both discussed in Sect. 2.4. In addition, the formalisms may impose restrictions on the means for test evaluation and error diagnosis. These are also considered in Sect. 2.4.

2.1 Overview of Verification and Validation Methods

Verification and validation methods in general are applied to

• the requirement and design documents,

• the results of the development process (i.e., the hardware equipment, their system software, and the application software),

• the verification and validation documents (e.g., the test design documents and the verification and validation plans), and

• the results of the integration and V&V activities.


The applicable and useful methods are recommended in the respective standard documents (e.g., [DO-178B], [DO-160E]), recommended practices (e.g., [ARP4754]), and airframer directives (e.g., [ABD200]). Among others, requirement and implementation verification activities for software are considered in RTCA/DO-178B ([DO-178B]). RTCA/DO-160E ([DO-160E]) considers standard procedures and environmental test criteria for airborne equipment. Implementation verification is also regarded in [ARP4754]. Review and analysis methods for test case and test procedure verification are considered in RTCA/DO-178B ([DO-178B]) for software systems and can be applied similarly to aircraft systems. Means for requirement validation are discussed, among others, in [ARP4754]. Since all verification and validation activities are performed to achieve equipment or aircraft certification, the importance of these standards and recommended practices is demonstrated by the numerous references in the certification guidelines of the regulation authorities (i.e., of the European Aviation Safety Agency (EASA) and the Federal Aviation Administration (FAA)). This includes, for example, the IMA-related guidance documents [AC20-145] and [TSO-C153], which are both issued by the FAA.

The verification and validation plans are compiled for each level (from aircraft level to equipment level) and agreed with the certification authorities. They define in detail the combination and sequence of V&V activities, the documents to be produced, the verification environment (e.g., the test rig setup), the roles and responsibilities, and the means for maintaining the status of verification or validation after changes to the requirements or to the produced component. The plans are each accompanied by a schedule. Note that the effort (with respect to cost and time) for producing verification-specific hardware and software should not be underestimated, because it includes planning, developing and assembling a test rig, the selection of a test tool and the development of the required interfaces to the system under test, and the development of simulations, among others. Additionally, the schedule should anticipate that validation or verification activities may detect deficiencies and that some development and V&V activities will have to be repeated.

Verification Methods. Verification consists of the following complementary methods:

• Inspections and reviews typically use a checklist or similar aid and provide a qualitative assessment of correctness. Questions addressed by inspections and reviews are the clarity and completeness of the description (of requirements, test cases, test procedures, or test results, respectively), the compliance of the document with documentation standards, consistency of the terminology and with related documents, correctness with respect to the aim of the document, data usage regarding types and specific values, and the evaluation of interfaces, maintainability, performance, reliability, testability and traceability. Inspections and reviews can be applied to all documents of the development and verification process (i.e., specification documents, source code and equipment, V&V documents, and V&V results).

• Analyses examine in detail a functionality, a system's or component's performance, requirements traceability, or the coverage of the requirements by the implementation. Analyses can be applied to the results of the development process (e.g., code analysis) as well as to the results of the verification process (e.g., test case traceability analysis). In comparison with review methods, analyses provide repeatable evidence of correctness. Unlike testing, analysis methods look at the characteristics of a component that indicate or suggest the presence of some kind of fault. Analysis methods can evaluate the entire operating range of a component and are thus an important complement to testing if exhaustive testing is impossible due to an infinite number of test cases. A further difference to testing is that analysis methods examine the component without executing it, i.e., they can be applied to non-executable code fragments or modules. Typical analysis techniques are formal proofs, semantic analyses, control flow analyses, data flow analyses, symbolic execution, model checking, and metrics.


• Testing provides repeatable evidence of compliance with the requirements by exercising the respective component in an appropriate environment. Testing is applied to the results of the development process. It executes the respective component and is thus the only method to investigate the timing performance of a system or component (in particular by performance and stress testing). Testing is categorized by the chosen test case selection strategy, resulting in three types of testing: functional testing, structural testing, and random testing. Functional or requirement-based testing is a black-box approach, i.e., no detailed knowledge about the implementation is required. Structural or coverage-based testing uses detailed knowledge of the internal structure to investigate the characteristics and is thus a typical white-box approach. If the operational environment of a component or system is not available or cannot be used during testing, environment simulations or test stubs can partially or fully replace the real environment.

• Service Experience Analysis is applied to increase the confidence in the requirements and design and is based on documented experience reports of another, similar system that is already in use in a similar environment. The reliability of this verification method obviously depends mainly on the engineering judgment that the system or component is similar enough and is installed and used in an adequately similar environment. Note that most avionics systems are developed further from one application area to another to add functionality or to remove undesirable behavior. It is then difficult to guarantee adequate similarity for the respective system.
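
To illustrate how analyses provide repeatable evidence, the following sketch phrases a deadlock analysis and a refinement check as machine-checked assertions in CSPM, the machine-readable CSP dialect accepted by the model checker FDR (introduced in Sect. 2.3.2.3). The protocol and all channel and process names are invented for illustration only:

    -- Illustrative sketch: analyses as repeatable, machine-checked assertions.
    channel request, grant, release

    -- Abstract specification of a simple resource-handling protocol ...
    SPEC = request -> grant -> release -> SPEC

    -- ... and a more detailed design that additionally logs each release.
    channel log
    IMPL = request -> grant -> release -> log -> IMPL

    -- Hiding the internal log event makes the design comparable to SPEC.
    IMPL_EXT = IMPL \ {log}

    assert IMPL_EXT :[deadlock free [F]]  -- deadlock analysis
    assert SPEC [T= IMPL_EXT              -- traces refinement: design conforms to spec

Unlike a review, re-running such assertions yields the same verdict on every run, and the analysis covers the entire state space of the model rather than selected executions.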

It is outside the scope of this thesis to discuss in more detail which combination of verification methods is most commonly used in which life cycle phases. Storey ([Sto96], p. 325) provides a possible list of verification methods and their assignment to particular life cycle phases. ARP 4754 ([ARP4754], p. 55) proposes for each development assurance level a list of verification activities that are recommended, to be negotiated, or not required for certification. The appropriate selection for a given safety-critical system is guided by the respective standards and directives and then negotiated with the certification authorities. Typically, several methods are used in each phase to cover static and dynamic aspects.

Validation Methods. Validation particularly uses structured reviews, analyses and the execution of the component under validation to check the completeness and correctness of the requirements and design decisions at each hierarchical level and to determine that the regulatory standards and guidelines have been applied. Safety-related analyses are usually based on the results of a Functional Hazard Analysis (FHA), a Failure Modes and Effects Analysis (FMEA) and a Preliminary System Safety Assessment (PSSA). In addition, similarity with an in-service certified system or component can increase the confidence in the quality of the requirements and the design.

Before the final product is developed or available, the specification can be validated using a simulation, which is developed either in an automated way by animating a formal specification or by manually implementing a simulation or prototype from a possibly non-formal specification.

The selection of appropriate and necessary validation methods depends on the failure condition category of a system or component and on the techniques used for specifying the system. The validation plan comprising this selection has to be agreed with the certification authorities. Detailed instructions are outside the scope of this thesis, but, for example, ARP 4754 ([ARP4754], p. 49) provides for each development assurance level a list of recommended validation methods.

Focus on Testing. In the following, we will focus on testing, including test case specification, test data generation, test execution and test evaluation, although its main disadvantage is that it may only detect existing faults but cannot guarantee that a system is free of faults. Nevertheless, focusing on testing is motivated by the following reasons:

• Current standards and directives in avionics put much emphasis on testing as the essential verification method and suggest applying other verification methods to complement the verification activities (and thus to overcome the disadvantages of testing).

• Tests are the only way to check the performance of a component or system in normal and stress situations.

• Testing can become more cost-effective if adequate (i.e., usually automated) test data generation, test execution and test evaluation techniques are applied.

• Reviews, inspections and analyses are manual methods and require an extensive amount of engineering judgment; consequently, they are less amenable to automation and thus to cost reduction.

• In particular, formal analysis (e.g., model checking) requires much expertise from the user since it is often necessary to formulate the problem in a form which circumvents the shortcomings of the available tools (e.g., to avoid the so-called state explosion problem).

• Complete formal proofs (e.g., by model checking) are usually impractical for systems of the size and complexity that we are facing here.

• In general, it is not feasible to provide a complete formal specification for realistic industrial systems if the hardware specifications and the integration of real-time aspects are considered. Thus, a formal analysis of the complete system is often impossible.

• Approaches to support validation methods and to simplify test case generation and simulation are addressed by current and future research topics, e.g., in the research project KATO (see [OH05]) and in the research project HYBRIS (see [BBHP04] and [BZL04], among others).

References. The standard document [DO-178B] includes considerations for the development and verification of software in avionics systems. [DO-160E] focuses on the hardware part and considers standard procedures and environmental test criteria for airborne equipment. Recommended practices for development, verification, validation and certification of aircraft systems are compiled in [ARP4754], [AC20-145] and [TSO-C153], among others. Respective airframer directives can be found, for example, in the Airbus directive [ABD200]. A detailed discussion of verification, validation and testing of safety-critical systems is provided in [Sto96] (particularly in chapter 12).

2.2 System Test Approach for Avionics Systems

Generally, different strategies are possible for system integration and the associated testing. These are:

• a non-incremental or “big bang” integration strategy integrates all components and systems in one step

• incremental integration strategies integrate one or a few system components in each step, following a

– bottom-up approach: one integration process starting with the “smallest” components (e.g., at equipment level),

– top-down approach: one integration process starting at top level (e.g., at aircraft level),

– available-components-first approach: one integration process starting with the first available component and stepwise integration of further available components,

– function-oriented approach: one integration process per (aircraft) function which stepwise integrates the respective systems and components,

– business-process-oriented approach: one integration process per business process or use case, integrating step by step all systems and components contributing to this use case (e.g., from boarding to de-boarding of the passengers),

– outside-in approach: two integration processes starting simultaneously at bottom and top and moving towards each other,

– inside-out approach: two integration processes starting at medium level and moving simultaneously towards bottom and top level,

– hardest-first approach: one integration process starting with the most critical components.

All incremental integration strategies need test stubs, dummies or simulations of components which have not yet been integrated but are necessary for the already integrated components. Developing adequate test stubs or simulations is a costly process that can be avoided using a big-bang strategy. However, this non-incremental approach has many disadvantages: all components have to be available beforehand, error diagnosis and localization are difficult due to the collaboration of too many new and possibly erroneous components, and it often leads to an unstructured and unsystematic approach if problems occur and cannot be easily isolated and localized.

Integration strategies applied in avionics are typically combinations of different approaches prepared and applied concurrently. The main strategy is to follow a bottom-up approach starting at equipment level by assembling the platforms (i.e., by integrating the operating system software into the equipment's hardware), followed by the integration of the application software into the platform. The next step, on system level, integrates the components of each system, which are then assembled on domain level and finally on aircraft level. At each level, the current integration activities can focus on the respective details, e.g., on equipment level on hardware or software requirements, on system level on system requirements. These integration activities can be performed at different sites, e.g., the platform integration at the platform supplier's site, the system integration at the system supplier's site, the domain integration at the domain integrator's site, and the final aircraft level integration at the airframer's site.

The main integration process is accompanied by further integration processes at each level which can apply different integration approaches. For example, at system level, an available-components-first approach, a hardest-first approach or a bottom-up approach are the most meaningful strategies, while on domain and aircraft level function-oriented or business-process-oriented approaches additionally constitute promising strategies. In the future, the airframers envisage applying, on all levels, more test-objective-oriented integration strategies (i.e., function-oriented or business-process-oriented approaches) rather than adhering to the currently applied procedure-oriented integration strategies (i.e., bottom-up, top-down, inside-out, outside-in, or hardest-first approach). Possibly, this may lead to a set of integration processes that are mainly business-process-oriented and each consider aircraft to equipment level requirements. Each such integration process may follow a function-oriented approach focusing on aircraft or domain functions. By this new approach, airframers hope to improve travel quality by ensuring that the business processes are convenient, effective and coherent for passengers and crew members.

Even today, the previously described main integration process is not exclusively a bottom-up approach. In fact, it is an outside-in approach, i.e., the above described bottom-up approach is accompanied by a top-down approach starting at aircraft level. For example, the assembly of the so-called iron bird is started while functional tests on equipment and system level are still carried out. An iron bird is a full-size system integration test rig that integrates most avionics systems and their original equipment and wiring in a laboratory environment without scaling the size and number of hardware components.¹ This allows, for example, examining the effects of original cable lengths and the actual power consumption. Obviously, this second integration process focusing on assembling an iron bird or an aircraft mock-up is at first more hardware-oriented, while the main bottom-up approach initially focuses on software integration and HW/SW integration.

¹A smaller version of the iron bird is a so-called mini bird that uses only one or very few originals per component type and replaces the others with software simulations. The testing possibilities are limited with respect to an iron bird, but less laboratory space is required.

The integration process is accompanied by verification and validation activities, in particular by integration testing as depicted in Fig. 2.1. At equipment level, platform tests and application software tests are performed. At system level, system tests and multi-system tests are executed. After integrating the systems of all domains into the aircraft, ground tests and flight tests can be carried out. Figure 2.1 also emphasizes that at bottom level there is one test process for each platform and each application software, while after hardware/software integration there is one per integrated platform and so forth. Note that flight tests, ground tests and partially system integration tests are on-aircraft tests, while the others are laboratory tests that each use their specific integration testing environment and appropriate simulations for components and systems not yet integrated.

[Figure 2.1 omitted: it shows the integration and testing hierarchy from the equipment items (E111–E222) via the systems (S11–S22) and the domains (D1, D2) up to the aircraft, together with the associated test activities: platform tests, application tests and qualification tests at equipment level, system tests and multi-system tests at system and domain level, and system integration tests, ground tests and flight tests at aircraft level.]

Figure 2.1: Integration testing activities: (a) platform tests including hardware and operating system tests (green pattern), (b) application software tests (red pattern), (c) system tests (blue pattern), (d) multi-system tests within one domain (dark blue pattern)

2.2.1 Platform Testing

The process of platform testing considers testing of the physical hardware, testing of the specific operating system (OS) implementation, and testing of the integration of the OS onto the hardware.

Hardware Testing. Hardware equipment testing includes functional tests and environmental tests, whereby the latter include mechanical, electrical, electronic and other tests. Thereby, the compliance with functional and safety requirements is examined under normal and exceptional operational conditions with respect to temperature variation and in-flight loss of cooling, pressure drop, decompression, overpressure, humidity, shock and crash safety, vibrations, acoustic vibrations, explosion, waterproofness, sand and dust, magnetic effects, icing, fire, and power consumption. The respective environmental testing methods and approaches are defined in the standards and directives (e.g., [DO-160E], [ABD200]) but are not further detailed in this thesis. Note that the hardware characteristics of sensors, actuators and other equipment are verified similarly.

Operating System Testing. Operating system testing focuses on functional tests under different operational conditions, which also verify the robustness of the system under test. Thereby, the environment of a platform may need to be simulated: e.g., to test the interface driver part of an operating system, external communication partners have to be simulated to check the sending and receiving of data and the compliance with the respective standards.

Integration Testing. Testing of the platform's operating system can be performed on two levels: at software integration test level and at hardware/software integration test level. Software integration test level means that a software simulation of the target hardware is used to test the OS. At hardware/software integration test level, the OS is tested on the target hardware, which itself has been tested separately as described above. The result of the hardware/software integration is a tested and integrated platform that may be subject to certification. Note that for certain platform types, and if platform and application software are developed by the same supplier, certification can be postponed until the application software has been integrated.

The responsibility for platform testing varies depending on the type of platform or the combination of hardware and operating system, as described for each platform type in Sect. 1.5.

For shared platforms (e.g., if IMA modules are used), the test process is described in detail in Chap. 5.

2.2.2 Application Testing

Application testing focuses on testing the application software which shall later be executed on the respective platform. It can be performed at module test level, at software integration test level, or at hardware/software integration test level.

Module Testing. At module test level, units of the software, e.g., specific functions or algorithms, are tested. The respective module is executed in a simulated environment, i.e., simulations and test stubs are required. Because of this preparation effort, module testing is often performed only for highly critical parts. Testing at module test level may progress stepwise to testing the complete application software if an incremental integration strategy is applied on this level that replaces in each step a module's test stub with the respective module.

Software Integration Testing. At software integration test level, a simulation of the target platform is used to execute the application software, which may allow access to parts of the platform that are normally protected. For example, a platform simulation may allow access to the application's memory areas. Testing at this level can only be performed if a platform simulation is available.

HW/SW Integration Testing. At hardware/software integration level, the application software is executed on the (tested) platform. If a platform is not shared, the testing activities at this level result in a tested controller (i.e., a tested platform with tested and integrated application software) and are the basis for certification of the controller.

For applications to be integrated onto a shared platform (e.g., an IMA module), the V&V activities are considered in more detail in Chap. 5.

2.2.3 System Integration

System integration assembles the controllers and peripheral equipment of a system that have each been tested at equipment level. For federated and IMA architectures, the interacting systems are simulated. At this level, testing of the system architecture's redundancy concept can be completed.

2.2.4 Multi-System Integration

At this level, the systems belonging to the same domain are integrated and their interaction is tested for normal and exceptional operational environments. Systems belonging to other domains are still simulated. For an IMA architecture, this test level may be more important because different systems of the same domain may share an IMA module. Multi-system integration is typically performed at one site per domain (e.g., Toulouse for the cockpit domain and Hamburg for the cabin domain).

2.2.5 System Integration Testing

System integration physically assembles all domains (i.e., all systems) at one site. The aim of system integration testing is to ensure that they all operate correctly – individually and together – when installed in the aircraft. For the first integration steps, laboratory environments like an iron bird, a mini bird or an aircraft mock-up are often considered more meaningful and cost-effective than on-the-aircraft integration. Nevertheless, difficulties with fully modeling the aircraft environment may dictate that some integration activities are performed on the aircraft.

The system integration process may encounter various problems; for example, it may be physically impossible to fit the hardware into the laboratory where the integration is to take place. Further, physical integration means that at first the focus is on hardware integration and related problems (e.g., connectors or plugs do not fit) rather than on the interaction of the systems and the related problems (e.g., incomplete/incompatible interfaces, interface misinterpretation, configuration problems). To overcome some of these drawbacks, system integration testing can be performed in two separate phases: during the first phase, the domains are virtually integrated by connecting the multi-system integration test rigs, which are typically located at different sites; in the second phase, the systems are physically integrated at one site. For the first phase, it has to be considered that different test systems might have been used for each multi-system integration test rig, and these typically do not provide means for cooperation with other test systems. Therefore, it is necessary to have a coordinating component (e.g., a test management tool) that supports interoperation between different test systems by managing distributed test preparation, test execution and test archiving. Adequate communication links have to connect the test systems and the domains (i.e., the systems under test) to allow management-related communication as well as data exchange between the systems under test. This concept has been developed in the research project VICTORIA by the author and Terma A/S and has been implemented by Terma A/S and Airbus. It has successfully been applied to connect systems located in Hamburg, Toulouse and Filton (for the concept see [TPLB03]). Note that virtual integration may also be applied at earlier test levels to avoid the development of simulations for components which are available at other sites as simulations or original items but cannot be integrated in the local test rig.

2.2.6 Ground Tests and Flight Tests

These tests are in-aircraft tests, and the focus is on correct cooperation between the systems in their original and final environment under normal and exceptional conditions. Principally, these tests repeat some of the tests executed at previous levels in laboratory environments or with simulated components, but with special emphasis on the aircraft level requirements and functions. They are complemented by additional tests. For example, special vibration tests on ground are performed to ensure that plugs and connectors remain connected. For the development of a new aircraft, the first flight – the maiden flight – is of special importance and is not carried out until extensive ground testing and on-ground flight testing (e.g., taxiing, performance tests on ground, roll tests up to a speed close to take-off speed) has been executed successfully. Further details on these test phases are outside the scope of this thesis.

References. Integration strategies and their pros and cons are discussed in [Bal98]. The system test approach for avionics systems is addressed in [ABD100], [ABD200], [ARP4754], among others. More information about ground testing and first flight can be found, for example, in the A380-related articles [Sch04] and [Lab04].

2.3 Test Design Documents

2.3.1 Test Case and Test Procedure Selection

The test design documents are compiled during the development phase to prepare the verification and validation phase. They elaborate on the features to be tested by specifying test cases and test procedures.

A test case description consists of the purpose of the test case, a set of inputs, conditions and expected results to achieve the required coverage, and pass/fail criteria. Each test case is related to one or more requirements of the respective level (e.g., for system level tests to system requirements).

A test procedure describes in detail how each test case is to be set up and executed, how the test results are evaluated, and how the test environment has to be configured. Often, a test procedure combines several test cases.

A test specification is that part of a test procedure that considers the inputs to and outputs from the SUT and the timing requirements. Test specifications do not consider configuration and organization aspects like how to configure the test system or where to store the test execution log.

For automated test data generation or automated test execution (addressed in the next section), it is often additionally necessary to have an implemented test procedure that uses the specification formalism of the test tool for describing the test specification and contains appropriate configuration data (complying with the needs of the test tool and the test bench).

Different strategies can be applied for test case selection which ensure that all important situations are tested:

• functional or requirement-based test case selection
This strategy is a black-box approach that has the point of control and the point of observation outside the system under test and cannot use any internal information. It is often applied for higher test levels since it focuses on functional aspects. Black-box approaches may analyze equivalence classes and their boundaries and may select special values (e.g., zero values) in order to cover the state space and the possible transitions of the SUT (a small example of such a selection is sketched after this list).

• structural or coverage-based test case selection
This strategy needs detailed knowledge about the internal structure to control and to observe the behavior of the system under test and thus is a white-box test approach. It is typically used for lower level test cases, e.g., on equipment level. White-box approaches usually analyze the code statements of the SUT and the respective branch conditions in order to cover all statements at least once. More elaborate structural coverage criteria are also possible (e.g., branch condition combination coverage or path coverage).

• random test case selection
The aim of this selection strategy is to choose test cases randomly without generating them systematically or deterministically. The selection can be based on both requirements and structural information and thus cannot be categorized as a black-box or white-box approach. Random test case selection can complement a systematic approach by adding test cases which have not yet been considered by the test case designer. It should not be used as the primary strategy since it may take too long to achieve the required test coverage.

• intuitive test case selection
For this strategy, the test case designers use their intuition and experience to generate test cases which typically lead to erroneous situations. It can be used to complement systematic requirement- or coverage-based strategies, but should not be the only selection strategy.
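
As an illustration of the requirement-based selection strategy announced above, the following sketch chooses one representative per equivalence class plus the boundary values. It is written in CSPM (the machine-readable CSP dialect introduced in Sect. 2.3.2.3); the assumed requirement – stimuli in the range 1..100 are valid, all other values are invalid – and all channel and process names are invented for illustration:

    -- Illustrative sketch: equivalence-class and boundary-value selection.
    -- Assumed requirement: stimuli 1..100 are valid, all other values invalid.
    channel stimulus : {0..101}
    channel pass, fail

    -- One test case per selected value: valid values shall be accepted,
    -- invalid values shall be rejected.
    TC(x) = if x >= 1 and x <= 100
            then stimulus!x -> pass -> SKIP
            else stimulus!x -> fail -> SKIP

    -- The selected stimuli: 0 (below), 1 (lower bound), 50 (interior),
    -- 100 (upper bound), 101 (above). The suite executes them in sequence.
    SUITE = TC(0) ; TC(1) ; TC(50) ; TC(100) ; TC(101)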

Test coverage criteria determine if enough and adequate test cases have been defined, e.g., to cover all requirements, to structurally cover the requirement specifications, or to structurally cover the code structures. The same criterion is used after test execution to analyze the test coverage in order to find out if enough test cases have been executed.

Test procedure selection approaches select test cases with matching pre- and post-conditions, i.e., test cases where the execution of one test case establishes conditions (defined by the post-condition) complying with the pre-condition of the next test case (a small sketch follows below). Additionally, a test procedure often combines test cases with similar test objectives testing a specific feature of the SUT. Finally, appropriate test sequence selection approaches determine the sequence of test procedures, e.g., according to the configuration of the SUT or the test system, or according to the pre- and post-conditions of the test procedures.

For testing of avionics systems and software, RTCA/DO-178B emphasizes requirement-based test case selection “because this strategy has been found to be the most effective at revealing errors” ([DO-178B], p. 30) in combination with a two-part coverage analysis: a requirement-based one and a structural coverage analysis (see [DO-178B], p. 33). As a consequence, the specification documents defined during the development phase (see Chap. 1) are of special importance since they are the basis for the test design documents, i.e., the test cases and test procedures. In particular, the formalism used for specifying the structural and behavioral aspects of the component has a major impact on the possibilities for automated test case selection, or on the expressiveness and clearness of the specification necessary for manual test case selection. Many different formalisms can be used for requirement specification. Most of them can generally also be used for defining the test cases and the test procedures. Note that it is even possible to use the same specification model for the requirements and for the tests.

As a matter of fact, each formalism may impose specific restrictions on how test cases can be selected from requirement specifications, how test data can be generated from the test specifications, and how the test can be executed using the (implemented) test procedure. In particular, the specification technique determines whether a clear and unambiguous semantics is defined and whether tool support and automation are possible and available (and to which extent). As stated above, it is possible – and tempting – to use the same specification model for requirement and test specification (especially with tools at hand executing the specification model to generate test inputs and to evaluate the SUT outputs), since this eliminates one source of errors occurring when manually defining the test cases and test procedures. Nevertheless, usually separate specifications are used for the development and verification documents of safety-critical systems. Furthermore, development specifications are rarely compiled using a formalism that can be used for automated test case generation or even automated test data generation. One reason for this is the lack of adequate specification techniques which allow clear and unambiguous specifications but are also well suited to discuss the requirements on management level. It is the aim of current research projects to provide appropriate specification techniques and adequate means for simulation and model-based testing in order to eliminate or reduce the manual steps from requirement specification to test cases (see also [OH05], among others).
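
The pre-/post-condition matching used for test procedure selection can be made concrete with a small CSPM sketch (all event and process names are invented for illustration): sequential composition chains test cases such that each test case establishes exactly the conditions its successor assumes:

    -- Illustrative sketch: composing test cases into a test procedure.
    channel power_on, configure, stimulus, expected_response, shutdown

    -- Post-condition of TC_STARTUP: the SUT is powered and configured.
    TC_STARTUP  = power_on -> configure -> SKIP

    -- Pre-condition of TC_FUNCTION: a powered and configured SUT.
    TC_FUNCTION = stimulus -> expected_response -> SKIP

    -- TC_SHUTDOWN returns the SUT to its initial state.
    TC_SHUTDOWN = shutdown -> SKIP

    -- The procedure runs the test cases in an order in which each
    -- post-condition matches the pre-condition of its successor.
    PROCEDURE = TC_STARTUP ; TC_FUNCTION ; TC_SHUTDOWN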

In the following, we will briefly discuss different state-of-the-art specification techniques used for development as well as for V&V documents. Each subsection shall analyze the pros and cons of the specific set of techniques with respect to expressiveness and ambiguity, and give a brief explanation of the restrictions for test case selection, test data generation and test execution. Note that test data generation and test execution are addressed in more detail in Sect. 2.4.

2.3.2 Specification Techniques

The different state-of-the-art specification techniques can be categorized as informal, semi-formal and formal. Formal specification techniques have a well-defined syntax and a mathematically defined semantics, typically given as an operational semantics or a denotational semantics. Semi-formal specification techniques lack the mathematical rigor associated with formal methods, but use structured techniques to restrict the variability and ambiguity of statements given by informal specification techniques. The distinction between the categories – especially between informal and semi-formal specification techniques – is not strict, in particular since any combination of techniques can be used when describing the requirements and the design. The following sections deal with these three groups of specification techniques and provide sub-categories: Section 2.3.2.1 addresses informal specification techniques, Sect. 2.3.2.2 regards semi-formal specification techniques, and Sect. 2.3.2.3 considers formal specification techniques. Section 2.3.2.4 then elaborates on an example that compares three specifications for the same part of an ARINC 653 operating system – the operating modes of a process (see also Sect. 3.1.2.2).

2.3.2.1 Informal Specification Techniques

Informally specified documents comprise definitions in natural language text, as figures and graphical depictions, or as informal tabular notations using natural language.

Using natural language in a precise and unambiguous way without producing unreadable text is a difficult task. The understanding relies on the readers and writers using the same words for the same concept (lack of clarity). Furthermore, natural language is over-flexible since the same thing can be expressed in different ways and it is up to the readers and writers to identify the equality or difference (lack of consistency). Also, it is difficult or rather impossible to check the completeness of the specification.

Figures and graphical depictions include all forms of block diagrams and informal diagrams. They pretend to be an easy, intuitive formalism, but detailed understanding is often hindered by differing interpretations of the relations between the figure elements or by inadvertently mixing different specification levels. They are typically combined with natural language labels or descriptions, and the problems with respect to clarity, consistency and completeness remain.

Informal tabular notations try to structure the pure natural language text but exhibit the same problems (described above) as the natural text descriptions.

Requirements and design documents using informal specification techniques cannot be converted into source code, code fragments or test cases in an automated way. Moreover, identifying and combining the relevant information adequately is a purely manual process. Nevertheless, these drawbacks are often accepted or irrelevant for specifications on aircraft, domain or even system level. As a consequence, test cases and test procedures for the respective level have to be generated manually by test case designers in cooperation with domain experts.

Note that the above stated problems occur in all specifications using combinations of formal or semi-formal specification techniques with natural language supplements.

2.3.2.2 Semi-Formal Specification Techniques

To this group of specification techniques belong structured text with tables, graphical languages with text annotations, virtual reality environments, scripting and programming languages, and various special-to-purpose formalisms.

Structured text with tables structures natural language specifications using tables and “writing rules” for structured text and thus provides a clearer and more complete specification. The writing rules can impose syntactical and semantical restrictions, but still fail to provide a complete (or even formal) semantics.

To the class of graphical languages with text annotations belong formalisms like UML, which are often considered to belong to the group of formal specification techniques but lack a formal (i.e., mathematical) semantics (exceptions are described below). In fact, the respective diagrams provide more clearness and precision than natural language descriptions but may still be ambiguous.

Another possibility for creating test specifications are specialized virtual reality environments that provide means for developing test specifications by interaction with a virtual reality (VR) representation of the system under test. From these interactions, a formal test specification is generated; the interacting domain experts have detailed knowledge of the system under test and its environment but need not be familiar with formal methods. Nevertheless, the semantics of interactions within the VR environment, the formalism of the generated test specifications, as well as the transformation function might not be fully formally described.

Scripting and programming languages have a well-defined syntax and an (implemented) semantics but usually lack a formal semantics. A well-known example is the Testing and Test Control Notation (TTCN-3), a standardized “test specification and test implementation language that supports all kinds of black-box testing of distributed systems” ([GHR+03], p. 1). Other formalisms are less focused on testing (e.g., shell scripts, Windows batch files, Python, Perl, C, C++). Pseudo code can also be used for specifying the requirements or tests, but its syntax is usually only informally described and an implemented semantics is not available.

Furthermore, various special-to-purpose specification techniques are used that may have a semantics known to the domain experts (i.e., the writers of the requirements and probably the test case designers) but which may not be formally defined. The drawback is that the specifications are not commonly understandable to other experts without more or less detailed domain knowledge (e.g., test tool developers). Examples are forms of event-state matrices, among others.

Summarizing, requirements and design documents defined in semi-formal specification techniques provide more clearness and precision – especially to the domain experts – but may lack a precise and unambiguous formal semantics. Nevertheless, there are some techniques designed for defining test cases and test procedures (e.g., TTCN-3) that have a standardized syntax and semantics and allow automated test data generation, test execution and test evaluation by appropriate tools. Other tools may support the use of special-to-purpose specification techniques by having an implemented but not necessarily clearly defined semantics for interpreting the test specifications. If the expected and the implemented semantics differ, this may lead to unexpected and surprising tests.

2.3.2.3 Formal Specification Techniques

Formal specification techniques include mathematical formulas, sets, statecharts, and many others. Also, there are several approaches to define a precise and formal semantics for semi-formal graphical notations like UML – in particular, precise UML and HybridUML, among others. HybridUML is reviewed briefly in one of the next paragraphs.

Mathematical formulas and sets are inherently formal but usually have no graphical representation to simplify communication about the specification and lack a formal notion of time to deal with real-time features of the system. Further features that are missing but should be provided to specify real-time and reactive avionics systems are a concept of hierarchy (to simplify abstraction of views and to concentrate focus on relevant properties), a notion of parallel composition (to support a modular development), and separation of concerns (to distinguish between architectural and behavioral issues).

There are several semantics of statecharts described in the literature (see references below) that are well applicable for defining the behavior of a system, but not for specifying specific architectural aspects. Furthermore, statecharts cannot be used to define time-continuous aspects.

The list of other formal specification techniques is extensive, for example, Communicating Sequential Processes (CSP), Timed CSP, the Z notation, timed automata, duration calculus, hybrid automata, CHARON, among others. It is outside the scope of this thesis to provide further details on all these specification techniques and the respective tools, or to compare their expressiveness for specifying reactive real-time systems and their usability for automated test data generation and test execution. In the following paragraph, we will very briefly introduce CSP since this formalism is used in Chap. 6 to define test specifications. Afterwards, the basic information about HybridUML is presented since the example in Sect. 2.3.2.4 contains a HybridUML diagram. For all other formalisms, the references paragraph at the end of this section provides a list of publications describing their respective fundamentals and providing further links.

The use of formal – mathematically based – methods for design and specification of the requirements and for design and implementation of tests increases. The resulting advantage is that the specifications have a well-defined and precise semantics, and most formalisms can be used for automated test case generation or test execution. Nevertheless, it is common practice to define the requirements in natural language and to transform them into formal specifications in order to make use of automated test case selection algorithms and analysis methods (e.g., deadlock and livelock analysis). Thus, the main bottleneck of current testing methodologies, which rely on many manual steps, shall be avoided.

However, using formal specification techniques does not necessarily provide better and more readable specifications, since not every formalism suits every problem and non-functional properties can usually not be addressed by most formalisms. Although a formal specification is unambiguous and precise (and thus assumed to be “better”), understandability for all human actors in the development and verification process (i.e., managers, systems and software engineers, etc.) is an important factor for acceptance – especially in the earlier phases. Typically, graphical notations simplify the communication about the specification, especially if hierarchy and abstraction are supported, too. Also, appropriate tool support is important.

CSP. The formal specification language CSP has been developed by C. A. R. Hoare and provides means for specifying communicating, concurrent systems based on synchronous communication. The basic elements of a CSP specification are the processes and the events used by the processes to interact with each other and the environment. This paragraph shall provide an informal description of CSP and its extension for real-time aspects, called Timed CSP. The syntax and further examples are additionally described in Appendix A. For further (more formal) details regarding the formal semantics of CSP and Timed CSP, the reader is referred to the references at the end of this section.

Events can represent an arbitrarily complex sequence of steps of a real-world system. An event can be used on every level of abstraction as a programming-language independent synchronization point that may trigger an associated sequence of steps. For example, an event create_process can represent the function call create_process(), which may include determining beforehand the possible parameter values in order to perform the function call within its defined range of normal behavior. In contrast, the event return_value_ok can denote that the return value is as expected (without stating the return value explicitly). A process can then use these events to represent a sequence of steps that first triggers the function call and then checks the return value, e.g., the process CHECK_CREATE can be defined as CHECK_CREATE = create_process -> return_value_ok -> SKIP.

To specify the interface between interacting processes explicitly and to define the set of possible communication events, channels are defined.² A channel declares either a simple, unstructured signal (e.g., channel return_value_ok as described above) or a structured data item. In general, structured channels have a common prefix (the channel name) and a sequence of data components where each is a set of values or a pre-defined data type. For example, channel create_process : {1..10} can be used to trigger the creation of specific processes, e.g., the event create_process.10 can denote the function call create_process(10). To improve the readability of the CSP specifications, it is additionally possible to distinguish (syntactically) between events produced by another process (i.e., input to the respective process) and events generated by the respective process (output events). For example, the above used process CHECK_CREATE can be enhanced by using structured events and by explicitly stating which events are considered as input or output. Thus CHECK_CREATE = create_process!10 -> return_value_ok?10 -> SKIP means that an event create_process.10 is generated and then it is expected that another process generates the event return_value_ok.10. Further details are provided in Appendix A.2.

The system modeled by the CSP specification is typically composed of different processes. This modular approach helps to write compact and reusable specifications. The processes are composed by CSP operators which include operators for modeling sequences of events, branching, loops, concurrency, parallelism, and abstraction of certain events. The operators are described in Appendix A.

CSP is typically used to write specifications on an abstract level (which is often called design or high-level specification) and – later on in the development process – on a more detailed level closer to implementation. The model-checker tool FDR can then be used to reason about the different specifications, e.g., to check for deadlocks and livelocks or to examine if the detailed specification is a refinement of the less detailed specification. Moreover, CSP test specifications can be used for testing with the test system RT-Tester. Further details about the RT-Tester are provided in Sect. 5.4.2.1.

CSP – as considered so far – is an untimed formalism. Timed CSP is the extension of CSP necessary to deal with real-time aspects. The language is extended by adding timing-related operators: a timeout operator, a delay operator, and a timed prefix. As shown by [Mey01], Timed CSP processes can be structurally decomposed into three parts: an “untimed” CSP part, simple (mutually independent) timer processes, and auxiliary events for synchronization between the “untimed” CSP processes and the timer processes. The test system RT-Tester uses such decomposed Timed CSP specifications and internally provides timer processes (realized by operating system timers). Examples of these “untimed” CSP specifications with timer events are provided in Sect. 6.4.

The main drawback of using CSP to model large systems is that tools have to deal with the state explosion problem that occurs when the CSP specification is internally transformed into a transition system and the model-checking or test tools cannot cope with too many states. The problem can be reduced by considering the effects of a particular channel or process definition and by tuning the CSP specifications appropriately. Some ways are described in [FDR], p. 34–36. Other solutions are introduced in Sect. 6.2 when describing the CSP types and channel concept for the automated test suite described in Chap. 6. Section 8.1 summarizes the problems encountered and the solutions chosen.

²Note that this explicit declaration is especially important for tools like FDR that aim to reason about the state space of a process. The set of possible events is intrinsic in formal CSP specifications as used by [Hoa85].
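
Collecting the inline fragments above into one self-contained CSPM script, and adding a timer process in the decomposed Timed CSP style just described, a sketch might look as follows. The channel types, the auxiliary timer events set_timer and elapsed, and the failure behavior are illustrative assumptions, not definitions taken from the cited standards or tools:

    -- Self-contained CSPM version of the CHECK_CREATE example from the text.
    datatype Status = ok | failed
    channel create_process : {1..10}  -- structured channel: process IDs 1..10
    channel return_value : Status     -- observed return status of the call

    -- Trigger the creation of process 10, then check the returned status.
    CHECK_CREATE = create_process!10 -> return_value?s ->
                   (if s == ok then SKIP else STOP)

    -- Decomposed Timed CSP style: the "untimed" part synchronizes with a
    -- separate timer process via the auxiliary events set_timer and elapsed.
    channel set_timer, elapsed
    TIMER = set_timer -> elapsed -> TIMER  -- elapsed fires when the timer expires

    -- The check fails (blocks as STOP) if the timer elapses before the status arrives.
    TIMED_CHECK = set_timer -> create_process!10 ->
                  ((return_value?s -> SKIP) [] (elapsed -> STOP))

Loaded into FDR, assertions such as assert CHECK_CREATE :[deadlock free [F]] can then be checked mechanically; in the RT-Tester, the timer events would be bound to operating system timers as described above.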

HybridUML. HybridUML is a UML 2.0 profile and thus can use the syntactic framework of UML, which contains well-accepted graphical constructs and different approaches for textual representations. HybridUML separates structural concerns from behavioral aspects: structure and architecture are represented by class diagrams and composite structure diagrams, behavior by statechart diagrams. Furthermore, HybridUML extends UML 2.0 by providing means to specify time-discrete and time-continuous behavior. Inherent to UML, and thus also to HybridUML, are the concepts of hierarchy, parallelism and separation of concerns. The formal semantics is defined by a formal transformation into an executable system conforming to the Hybrid Low-Level Framework. A HybridUML statechart is depicted in Fig. 2.4.

2.3.2.4 Example Comparing Different Specification Techniques

This section elaborates on the differences between informal and formal specification techniques using an example specification which defines the process mode transition model, i.e., the operating modes of an ARINC 653 process and when, why and how operating modes are changed. The example consists of the following parts:

1. the original specification as provided by [ARINC653] (p. 10–11),

2. the enhancements of this specification as provided by [ARINC653P1] (p. 19–22),

3. a HybridUML statechart diagram depicting the same information, and

4. a brief comparison of the three specifications and their pros and cons.

Original Specification ([ARINC653], p. 10–11). The standard ARINC 653 [ARINC653] provides the API of an operating system to be used for IMA platforms. For describing the entities, their behavior and the requirements, [ARINC653] uses informal natural language descriptions and graphical depictions as well as pseudo code fragments.

The internal processing entities of each partition are processes that are created by a special initialization process and which perform different API calls during the normal operating mode. For describing the transitions from one operating mode to another, [ARINC653] uses a graphical depiction of the modes and the possible transitions in a statechart-like figure and an informal natural language description. The depiction is provided in Fig. 2.2. The accompanying description (see [ARINC653], p. 10–11) is as follows:

Process States

The process states as viewed by the O/S are as follows:

a. Dormant - ineligible to receive resources. A process is in the dormant state before it is started and after it is terminated (or stopped).

b. Ready - eligible for scheduling. A process is in the ready state if it is able to be executed.

c. Running - currently executing on the processor. A process is in the running state if it is the current process in execution. Only one process can be executing at any time.

d. Waiting - not allowed to receive resources until a particular event occurs. A process is in the waiting state if it is:
- waiting on a delay,
- or waiting on a semaphore,
- or waiting on a period,
- or waiting on an event,
- or waiting on a message,
- and/or suspended (waiting for a resume)

[Figure 2.2 omitted: a statechart-like depiction of the states dormant, ready, running and waiting and the possible transitions between them.]

Figure 2.2: Change between process states according to [ARINC653], p. 11

State Transitions

State transitions occur as follows:

a. Dormant - Ready, when the process is started by another process within the partition.

b. Ready - Dormant, when the process is stopped by another process within the partition.

c. Ready - Running, when the process is selected for execution.

d. Ready - Waiting, when the process is suspended by another process within the partition.

e. Running - Dormant, when the process stops itself.

f. Running - Ready, when the process waits on a TIMED_DELAY of zero or is preempted by another process within the partition.

g. Running - Waiting, when the process suspends itself, and also when the process attempts to access a resource (delay, semaphore, period, event, message) which is not currently available and the process accepts to wait.

h. Waiting - Ready, when the process is resumed, or the resource the process was waiting for becomes available, or the time-out expires.

i. Waiting - Dormant, when the process is stopped by another process within the partition.

j. Waiting - Waiting, when a process already waiting to access a resource (delay, semaphore, period, event, message) is suspended. Also when a process which is both waiting to access a resource and suspended is either resumed, or the resource becomes available, or the time-out expires.

State transitions may occur automatically as a result of certain APEX services called by the application software to perform its processing. They may also occur as a consequence of normal O/S processing due to time-outs, faults, etc.
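
Anticipating the formal alternatives compared below, even the untimed core of these transition rules can be written in a machine-readable form. The following CSPM sketch is an illustration only – the event names are invented, and the suspension-related transitions d. and j. are omitted for brevity:

    -- Illustrative sketch: a fragment of the process state model in CSPM.
    channel start, stop, schedule, preempt, stop_self, block, unblock

    DORMANT = start -> READY              -- a. started by another process
    READY   = schedule -> RUNNING         -- c. selected for execution
              [] stop -> DORMANT          -- b. stopped by another process
    RUNNING = preempt -> READY            -- f. preempted within the partition
              [] block -> WAITING         -- g. waits for an unavailable resource
              [] stop_self -> DORMANT     -- e. the process stops itself
    WAITING = unblock -> READY            -- h. resumed / resource available / time-out
              [] stop -> DORMANT          -- i. stopped by another process

    PROCESS_MODEL = DORMANT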

Enhanced Specification ([ARINC653P1], p. 19–22). In 2003, the previous ARINC 653 standard [ARINC653] was updated by [ARINC653P1], which included some minor corrections and enhancements. In general, [ARINC653P1] still uses natural language descriptions and statechart-like figures as shown in the previous paragraph. Regarding the description of the operating mode transition model, an additional figure describes the relation between the partition mode transitions and the process' operating mode transitions. Furthermore, this new depiction, which is provided in Fig. 2.3, has numbered transition labels which help to find the related description part. Apart from that, the description style and the specification formalism have not been changed (and thus are not presented here in more detail).

[Figure 2.3 omitted: a statechart-like depiction with the states dormant and waiting in the COLD_START or WARM_START partition mode and the states dormant, ready, running and waiting in the NORMAL partition mode; the transitions are numbered (1)–(12).]

Figure 2.3: Enhancement of Fig. 2.2 by considering also the different partition modes COLD_START/WARM_START and NORMAL (figure according to [ARINC653P1], p. 20)

HybridUML Specification. When the API of the ARINC 653 operating system is defined in HybridUML, class diagrams, composite structure diagrams and statechart diagrams are generated that together form a complete, consistent and unambiguous specification of the structure and behavior. In particular, the model contains an agent process that has an accompanying statechart diagram. The statechart diagram is depicted in Fig. 2.4 and defines the mode changes of a particular process with the process ID PID, also taking into consideration the partition mode changes. Process mode changes (i.e., transitions in this statechart) are triggered

• by the process running in initialization mode when calling SET PARTITION MODE (NORMAL)which results in a partition mode change,

• by other processes calling API services resulting in a state change of process PID (e.g., bystarting it using START(PID)),

• by the scheduler preempting and scheduling the process and when reporting that a resourcethat has been waited for becomes available (e.g., if a message has arrived for a port),

• by the process itself trying to access a resource (e.g., a queuing port message) by using a blocking API call (e.g., by calling RECEIVE QUEUING MESSAGE with a time-out greater than zero) or by calling STOP SELF, SUSPEND SELF, or TIMED WAIT(0), or

• when the time out of a blocking API call expires.

The transition labels reflect these different causes by different types of labels which conform to the allowed label syntax event [guard] / action:

• labels containing only events denote API calls and other incidents triggered by other processes, e.g., RESUME(PID),

• labels with actions only are used for API calls triggered by the process itself, for example, / SUSPEND SELF,

• labels consisting of a guard only are used for time-outs (e.g., [ t=0 ]) and to remember API calls in the initialization mode (e.g., [ not(initStarted) ]), and

• labels with combinations of events and actions are used if an external API call triggers the transition and also an action occurs, e.g., START(PID) / initStarted=true.

Further details about the syntax and semantics of HybridUML are provided by the referenced papers.

[Figure 2.4 (HybridUML statechart omitted): statemachine process (PID: Integer) with states dormant and waiting in Init Mode, and states dormant, ready, running, waiting, waitingForResource, waitingWhileSuspended and waitingForResourceWhileSuspended in Normal Mode; transitions are labeled with API calls such as START(PID), STOP(PID), SUSPEND(PID), RESUME(PID), / STOP SELF, / SUSPEND SELF, / TIMED WAIT(0) and SET PARTITION MODE (NORMAL), scheduler events schedule(PID) and preempt(PID), guards such as [ t = 0 ] and [ not(initStarted) ], and clock constraints [ flow: t = -1 ][ inv: t > 0 ].]

Figure 2.4: HybridUML statechart diagram depicting the process states and transitions considering also the different modes of the respective partition (API services stand out in typewriter font)

Comparison. All specifications reviewed in the previous paragraphs define the possible process states and state transitions, i.e., more or less the same information. The main differences are:

• clearness and unambiguity:
Specifications using natural language descriptions cannot be unambiguous and are – in this particular example – difficult to understand (see for example the description for transition waiting - waiting). Furthermore, the assignment of the textual description to the figures is often unclear as shown above for [ARINC653], but can be improved by numbering the transitions as done in [ARINC653P1] (see Fig. 2.3). In addition, the particular specifications in [ARINC653] and [ARINC653P1] do not name the API services but use textual transcriptions. For example, [ARINC653P1] contains the description "(1) dormant - ready: When the process is started by another process while the partition is in NORMAL mode." ([ARINC653P1], p. 21). The HybridUML specification instead uses a transition from mode dormant to ready (both in partition mode Normal Mode), and the label START(PID) references the API call explicitly.

• completeness of the specification:
The [ARINC653] specification does not consider the relation to the partition's mode, which is addressed by the other two.

• test case selection approaches:
The specifications in [ARINC653] and [ARINC653P1] cannot be used for automated test case selection. To overcome this drawback, ARINC 653 has been split into three parts3, with part 3 dedicated to defining test cases and test procedures for a conformance test suite of the ARINC 653 API services (again using structured natural language). In contrast, HybridUML has a well-defined syntax and formal semantics, and thus it is possible to have automated approaches for defining the test cases and test data for automated test execution (see [BBHP04]). In addition, a simulation environment is available for HybridUML models that allows simulated execution of the specification (see also [Bis05]).

References
The description of strategies for test case and test procedure selection is based, among others, on [SL03] (p. 104–146) and [Sto96] (p. 325ff). The pros and cons of the different specification techniques are addressed in various publications: [Sto96] addresses at a general level the advantages and problems of using informal, semi-formal and formal specification techniques. Testing using formal specifications is also discussed, for example, in [Pel02a].

The fundamentals of the various specification techniques addressed in this section can be found in the following references (this list of references naturally cannot be exhaustive): CSP has been introduced in [Hoa85] and a description of the formal semantics can be found in [Ros98] and [DS04], among others. The semantics of Timed CSP is addressed, for example, in [Sch95] and [Mey01]. Testing with CSP is discussed in [PAD+98], [Mey01], [DS04], among others. The specification of the Unified Modeling Language (UML) as well as various references can be found at [UML]. References for the precise UML approach are compiled at [pUML]. The fundamentals of HybridUML are addressed, for example, in [BBHP03], [BBHP04], and [Bis05]. The usage of semi-formal virtual reality environments for specifying test cases is discussed in [BT02b] and [BT02a]. An introduction to TTCN-3 is provided, for example, in [GHR+03]. Statecharts as a means to provide hierarchical state machines for discrete systems were introduced in [Har87] and [HPSS87]. The STATEMATE semantics of statecharts can be found in [HN96] and [DJHP98], among others. A (slightly outdated) survey comparing the different statechart semantics is provided in [vdB94]. Other formalisms that can be used for specifying real-time systems include (but are not limited to) the Z notation (e.g., [Spi92], [DW96]), timed automata (e.g., [LV92], [AD94], [CO00]), the duration calculus (e.g., [ZRH93], [Rav95]), hybrid automata (e.g., [ACH+95], [Hen96]), and CHARON (e.g., [AGLS01], [ADE+01]). Various model checking techniques and tools are also discussed in [BBF+01].

2.4 Test Data Generation, Test Execution and Test Evaluation

As described in the previous section, test design documents compile test cases and test procedures which can be defined using different specification formalisms – informal ones as well as formal specification techniques. One part of the test procedures – the test specifications – defines the inputs to the SUT, the expected outputs and the timing requirements. Since the test specifications may be written in specification techniques which cannot be executed by appropriate test tools, or the test procedure may not contain the necessary tool-related configuration files, it is often necessary to produce so-called implemented test procedures that contain executable test specifications and the required configuration data. For the following discussion about approaches for test data generation, test execution and test evaluation, we will consider only the test specifications and disregard the specific configuration data. Test specifications – in particular test specifications using (semi-)formal specification techniques – can be script-based or specification-based. A script-based test specification is typically one specific sequence of inputs to and outputs from the SUT, i.e.,

3 ARINC 653 part 1 [ARINC653P1-2] focuses on the required services (and is a revised version of [ARINC653P1]), ARINC 653 part 2 [ARINC653P2] describes the extended services, and ARINC 653 part 3 [ARINC653P3] defines a conformity test specification.


when executing this specification, each time the same sequence of test inputs is executed and exactly one sequence of outputs from the SUT is expected (typically without allowing outputs to arrive within a time interval). A specification-based test specification uses a specification that allows variation of inputs and outputs by defining branching and considering different sequences of outputs (which are all legal). Furthermore, specification-based test specifications usually define allowed time intervals in which the outputs are expected to occur. Since such specifications usually allow different sequences of inputs to the SUT, a test data generation algorithm has to generate all or a subset of the possible sequences.

Generally speaking, three forms of test data generation, test execution and test evaluation can be distinguished: manual, automated and semi-automated. When using automated approaches, test data generation and test evaluation can be performed on-the-fly, i.e., during testing, or offline, i.e., before and after test execution, respectively. The following table (Table 2.1) summarizes briefly how these characteristics are related to the type of test specification provided by the test procedure and implemented test procedure. The details are discussed thereafter in Sect. 2.4.1 with respect to test data generation, Sect. 2.4.2 regarding test execution, and Sect. 2.4.3 considering test evaluation and error diagnosis. Note that implemented test procedures using an informal test specification technique are usually based on test procedures using informal test specifications, whereas implemented test procedures using semi-formal and formal methods can be derived from all kinds of test procedure specifications.

Test Procedure                 | Implemented Test Procedure | Test Data Generation | Test Execution             | Test Evaluation
-------------------------------|----------------------------|----------------------|----------------------------|--------------------------
informal                       | informal                   | inherent             | manual                     | manual
informal, semi-formal, formal  | semi-formal or formal,     | inherent             | automated, semi-automated, | automated, semi-automated
                               | script-based               |                      | (manual)                   |
informal, semi-formal, formal  | semi-formal or formal,     | automated            | automated, semi-automated  | automated, semi-automated
                               | specification-based        |                      |                            |

Table 2.1: Relation of test data generation, test execution and test evaluation to the provided test specifications

2.4.1 Test Data Generation

Test data generation is the process that determines the sequence of inputs to the SUT and the exact delay between two inputs.

In informal and script-based test specifications, the test inputs and their sequence are described explicitly. Although informal specifications can provide branching, it is often avoided or expanded explicitly in the test specification of the implemented test procedure. Since only one sequence of inputs is defined, there is no possibility to adapt it on-the-fly while executing the test. As a consequence, the script execution often aborts if the SUT reacts with an unexpected output which may be perfectly legal but just not admitted as the next step of the script. Furthermore, it has to be considered that the length of a script grows linearly with the length of the execution to be performed, which may result in unreadable specifications.


For specification-based test specifications it is necessary to generate the sequence of test inputs for the SUT either dynamically during the test execution (i.e., on-the-fly or online) or beforehand (i.e., offline). On-the-fly approaches have the advantage that they can select the inputs depending on the preceding SUT outputs. In contrast, offline test data generation generates all or a subset of possible sequences in advance and thus cannot react to any occurring non-determinism in the SUT (e.g., caused by internal scheduling in the SUT which may result in minor changes of the sequence of outputs or their timing). There are also approaches that use offline techniques to restrict the test specification according to a particular test purpose and then use the resulting smaller specification for on-the-fly test data generation.

However, the drawback of on-the-fly test data generation approaches is that they have to be executed intertwined with the test driver which is responsible for communication with the SUT. Consequently, for execution in (hard) real-time, both the test driver and the test data generation algorithm have to meet hard real-time constraints.

2.4.2 Test Execution

Test execution is the actual execution of an implemented test procedure (which is equal to or derived from a test procedure contained in the test design document).

If the implemented test procedure contains informal test specifications, it is inherent that no test tool can support the test execution. Instead, it is necessary to perform manual steps for configuring the SUT and the environment and for executing the informal test specification. For manual test execution, the test inputs are provided manually at the interfaces of the SUT, for example, by pressing buttons, changing switch positions, cutting cables, or changing the SUT environment (e.g., temperature, pressure). The outputs of the SUT are observed by manually reading the measurement instruments, which can also be small lamps. For the test report, the tester usually has to log manually the given test inputs, the observed outputs and the respective time stamps. Inherent to manual test execution is that giving inputs, observing outputs and giving new inputs cannot be performed in hard real-time; it thus cannot be used for testing reactive hard real-time systems (assuming that reactions are expected within a few seconds or milliseconds).

Script-based test specifications in an implemented test procedure allow manual test execution as described in the previous paragraph but can also allow or require automated or semi-automated test execution. Automated test execution means that the test inputs determined by test data generation (see previous section) are physically sent to the corresponding interface of the SUT and the outputs of the SUT are observed automatically. Typically, this includes that the test driver transforms the abstract specification events into concrete interface signals before sending them to the SUT interface, and vice versa. Thereby, all events are logged with their time stamps. If required by the test environment, it is sometimes necessary to perform some steps manually, typically only those that have no hard real-time constraints. This semi-automated test execution can thus profit from real-time test execution and, simultaneously, cope with the deficiencies of the test environment that requires the manual steps.

Implemented test procedures containing specification-based test specifications are made for automated or semi-automated test execution, i.e., usually the objective to automate test execution is the reason for writing such elaborate test specifications. If the test data have been generated in advance using an offline approach, the test drivers for specification-based and script-based testing do not differ. In contrast, if the test driver is combined with an on-the-fly test data generator, both have to cooperate in real-time.

In general, automated test execution needs a test system or test tool running on a test engine. Different test systems are available supporting different specification techniques. [Pel03] and [Mey01] (p. 19ff) provide a list of references to papers about test automation tools based on formal methods.


In this thesis, we will focus on test specifications written in CSP (with a special timing-related semantics for the timer control events) and test automation using the test system RT-Tester. This test tool provides means for on-the-fly test data generation, automated test execution and on-the-fly test evaluation. The RT-Tester is introduced in Sect. 5.4.2.1.

The test engine is the platform for running the test system and providing the physical connection to the interfaces of the SUT. In order to allow hard real-time execution of on-the-fly test data generation algorithms and test drivers, it has to supply an appropriate operating system and computing hardware. An example of a test engine and the requirements for testing an IMA platform are discussed in Sect. 5.4.1.

In addition to the test system, further tools can be required to prepare or configure the SUT as required by the test procedure, e.g., to load the correct configuration. As the pace of integration testing increases, interoperability of these special tools with the testing tools and the overall testing process becomes significant. In particular, limitations with respect to interoperability or automation have a major impact on the overall test process. Interoperability can often be established by intermediate tools or scripts. Nevertheless, if such tools lack a command line interface for automation, it is not possible to fully automate the testing process and manual steps are required instead. This increases the overall costs and time for test execution since manual steps are less effective. Section 6.5.1 and Sect. 6.5.2 discuss the tool environment for testing an IMA platform.

2.4.3 Test Evaluation and Error Diagnosis

Test evaluation checks the correct sequence and timing of outputs from the SUT with respect to the given inputs and the intended behavior. It can be performed during the test execution, with a small delay, or afterwards. Checking the correctness while the test is performed makes it possible to stop the test execution in case of errors. Thus, test runs with errors in the beginning can be stopped, analyzed and corrected, and then restarted. Nevertheless, not all unexpected outputs from the SUT are errors by default; they can also show that the test specifications impose too many restrictions, e.g., by not allowing minor deviations with respect to time.

For informal test specifications, the test evaluation is typically performed manually based on the manually written test logs of the tester who has executed the test. If test execution and test evaluation are performed by the same tester, on-the-fly test evaluation is also possible to some extent.

Semi-formal and formal test specifications with adequate tool support usually allow automated test evaluation against the specification. Depending on the expressiveness of the specification technique, certain situations may be classified as errors although they are allowed deviations. Thus, all automatically detected erroneous situations have to be analyzed, which is often performed manually by the tester in cooperation with a domain expert (semi-automated test evaluation). Generally, most formal specification techniques allow elaborate test specifications which can consider different correct behavioral reactions (in particular with respect to timing).

Test evaluation is followed by error diagnosis if deviations are detected when comparing the test log with the expected behavior. Error diagnosis is the process of analyzing the deviation in order to locate the cause of the error. Most faults are due to an implementation error, i.e., located in the system under test, but they can also be located in the requirement specification or the derived test cases and test procedures. This means that backtracking of the errors via the test specification to the requirement specification is then necessary to locate the error. Of course, this task is simplified if one formal specification model is used for all documents. Fault diagnosis can also be supported by built-in test procedures that may help to exclude hardware faults (e.g., missing communication links). Furthermore, it is often necessary to narrow down the interfaces contributing to fault propagation and to generate the fault tree based on the involved interfaces and components.


References
Lists of publications discussing test automation (and test automation tools) based on formal methods can be found in [Pel03] and [Mey01] (p. 19ff), among others. The usage of formal specifications (particularly CSP specifications) for automated test data generation and test evaluation is also addressed, for example, in [PAD+98], [Pel02a], and [Pel02b]. Like this thesis (particularly Chap. 6), the examples in the latter two use CSP test specifications and the test system RT-Tester for on-the-fly test data generation, automated test execution, and on-the-fly test evaluation.


Part II

Avionics Systems using Integrated Modular Avionics



Chapter 3

Introduction to IMA Platforms

This chapter elaborates on IMA platforms, focusing on the standardized real-time operating system and the constraints for developing application software to run on IMA modules. IMA modules as considered in this thesis are so-called Core Processing and I/O Modules (CPIOM) which provide a common computing resource and different interfaces to several applications. As a shared resource, the IMA module can host multiple applications, and each application can use one or more partitions to perform its tasks. Partitions are conceptually the central program units of the IMA module. They are assigned to exactly one application and have separate memory areas and dedicated processing time slots. Furthermore, the operating system supports cooperation of different partitions hosted on the same or different modules by providing inter-partition and inter-module communication means. The consequence of the resource sharing (i.e., sharing of CPU time, memory, and communication means) is that spatial and temporal partitioning has to ensure that the applications remain functionally separated, i.e., that applications cannot interfere with each other. This robust partitioning helps to prevent the propagation of failures from one faulty application to another application on the same IMA module and is necessary to execute several avionics applications of different criticality levels in parallel.

For the operating system to provide partitioning, it is necessary to define which memory part shall be used by which partition, when each partition shall be scheduled, and which ports can be used to communicate with others. In addition, IMA platforms shall be used for many different purposes and shall allow efficient usage of memory and computing resources. Thus, configurability has to be addressed to provide the necessary parameters to the operating system. This means that each application determines its demands on the IMA platform, i.e., the number of required partitions, the necessary stack size, code area size, data area size, scheduling frequency, required communication ports, etc. The system integrator then collects the configuration needs of each application and distributes all applications of one domain across several IMA modules. In addition, the system integrator can use different types of IMA modules which may provide different numbers of interfaces, varying computing power, or different memory sizes. A possible system architecture using integrated modular avionics is described in the next chapter (Chap. 4).

In the following section (Sect. 3.1), the IMA platform architecture is introduced, i.e., the hardware characteristics, the possible interfaces, and operating system related properties are described. It also details which operating system services are provided to the avionics applications according to the ARINC 653 API. Configurability of the IMA platform is considered in Sect. 3.2. When developing avionics applications to run in one or several partitions of one or more IMA modules, the impact of the resource sharing and the characteristics of the operating system have to be considered. Section 3.3 discusses some general aspects.


3.1 IMA Platform Architecture

The IMA platform architecture comprises the following parts:

• The application software is contained in the avionics partitions of the respective IMA module and performs (parts of) the application program.

• The standardized ARINC 653 API of the operating system provides services to be used by the avionics applications for performing their tasks and for communicating with others.

• The operating system manages partitions and their communication. At partition level, the operating system manages processes within a partition. For communication, scheduling, memory management, timing, and health monitoring, the operating system interacts with the hardware interface system.

• The hardware interface system comprises the set of interface drivers for the contained hardware interfaces and additionally provides means to access the Memory Management Unit (MMU) and the clock.

• The hardware of an IMA module consists of the hardware interfaces (e.g., AFDX, CAN, ARINC 429, etc.), the processor(s), the hardware clock, the MMU, and the memory.

• The configuration tables are used by the operating system and the hardware interface system to configure memory access, scheduling, and communication.

Figure 3.1 depicts this platform architecture. It also shows that an application can use several partitions to perform its tasks, e.g., one for interacting and the other one for monitoring.

[Figure 3.1 (diagram omitted): an IMA module hosting avionics partitions 1 to M (with the processes of applications 1 to A), layered on top of the ARINC 653 API, the operating system, the hardware interface system, and the hardware interfaces AFDX, ARINC 429, CAN, discrete I/O and analog I/O; the configuration tables are connected to the operating system and the hardware interface system.]

Figure 3.1: IMA module architecture

The following subsections detail the hardware issues (Sect. 3.1.1) and operating system issues (Sect. 3.1.2). The latter also addresses the operating system API, i.e., the services provided by the operating system. The configuration tables are addressed thereafter.

3.1.1 IMA Platform Hardware

An IMA platform is a single controller providing computing and communication resources and is therefore also called Core Processing and I/O Module (CPIOM). The hardware characteristics of IMA platforms are not fully standardized, to allow different types of IMA modules and to permit technological progress without resulting in non-conforming hardware platforms. Therefore, the number and particular types of processors, memory size, etc. are not standardized (e.g., as


an ARINC standard). Nevertheless, the requirements on the hardware are quite stringent, e.g., with respect to reliability, availability and safety, and are also influenced by other factors like power consumption, specific hardware support for temporal and spatial partitioning, and hard real-time requirements. However, the IMA module's connector and hardware dimensions comply with ARINC 600 and thus ensure that IMA modules of different suppliers can easily be interchanged.

Example. For the VICTORIA project, IMA modules have been provided by different suppliers for different domains, e.g., by Diehl Avionik Systeme for the cabin domain (see [Die04a] and [Die04b]), by Thales Avionics for the utilities domain (see [Tha04c] and [Tha04b]), and by Smiths for the energy domain (see [Smi04]). Each IMA module contains several boards for processing, I/O and power supply. Common to all these IMA module types are a processor of the PowerPC family, an internal PCI bus, and an AFDX interface to be used for inter-module communication. The IMA module provided by Smiths is a Core Processing Module (CPM), which means that no further interfaces are provided. The other CPIOMs provide hardware interfaces for discrete in- and output, analog in- and output, CAN and ARINC 429 busses, and some special I/Os (for temperature sensors) in addition to the AFDX interface.

3.1.2 IMA Operating System and the ARINC 653 API

The operating system manages the defined partitions and the processes of the partitions and provides means for communication, scheduling, and health monitoring to ensure spatial and temporal partitioning.

The operating system operates at two levels: at module level and at partition level. At module level, the operating system manages partitions and their inter-partition communication including communication with external communication peers (e.g., partitions on other IMA modules or non-IMA controllers). At partition level, the operating system manages processes within each partition and their intra-partition (i.e., inter-process) communication. At both levels, the operating system provides scheduling, memory and time management, and health monitoring.

The ARINC 653 standard (part 1) defines a set of functions ("services", "system calls") which the operating system provides for application software to control scheduling of the partition's processes, to communicate with other partitions and processes, and to get status information about the partition's objects. Thus, the application software is decoupled from the actual hardware architecture and hardware changes are transparent to the application software. Further benefits are

• portability, which allows to use the application software also for other aircraft types (with minimal recertification efforts),

• reusability, which allows to reuse application code for other IMA applications, and
• modularity, which allows to modify the overall system architecture (e.g., by moving a partition to another IMA module).

The services and their behavior are specified by using a specific pseudo code notation that describes the post conditions of the services. The interface specification is language-independent, but an Ada and a C binding are provided in Appendix D and E of ARINC 653 part 1, respectively, which directly define the enumerations, parameter types, and the services in a programming language notation. In this thesis, all examples providing concrete application code use the C interface specification.1

The ARINC 653 services are described in the subsequent subsections based on [ARINC653P1 d4s2] (which is the draft document prior to adoption of the current standard document [ARINC653P1-2]). The services are typically grouped into the following categories:

1 There are minor differences in the Ada and C interface specifications due to the different handling of types and strings, but these are not considered in more detail since this thesis focuses on the C interface specification.


• partition management services
• process management services
• inter-partition management services
• intra-partition management services
• time management services
• memory management services
• health monitoring services

Almost all services provide a return code to denote successful completion of the service or the occurrence of an error (e.g., invalid parameters, invalid parameter range, invalid parameters with respect to the current configuration table, invalid operating mode of the partition). The allowed return codes are defined by an enumeration type (RETURN CODE TYPE).

The aim of the following description is to provide an overview of the OS services and some of their parameters. For detailed behavioral specifications, the reader is referred to [ARINC653P1 d4s2] (and the current standard document [ARINC653P1-2]).

3.1.2.1 Partition Management

Partitions are defined in the configuration table and – according to the configured scheduling – activated in fixed, deterministic cycles. This major time frame (MAF) is periodically repeated while the module is in operational mode. Depending on the configuration, each partition has one or more partition windows within the MAF, each defined by its offset from the beginning of the MAF and its duration. Thus, the partitions are activated in a predefined order and each partition has a predetermined amount of time to access the common resources. Temporal partitioning ensures that a partition has uninterrupted access during the assigned time periods. Each partition also has predetermined areas of memory to be used by its processes (with segregated code and data areas). Spatial partitioning ensures that each partition has write access to certain areas of its memory but prohibits access to other partitions' memory. Since the individual requirements vary from application to application, the scheduling, the memory size and the access rights are configured in the configuration tables.

After successful initialization of the module, the partitions are in operating mode COLD START and the respective initial processes (often called main processes) are running – one in each partition. Operating mode COLD START is the initialization phase of the partition where processes and communication objects are created by the main process as implemented by the application programmer. After completion of the partition's initialization tasks, the initial process can call SET PARTITION MODE (NORMAL) to switch to the partition's operating mode where the created (and started) processes are scheduled. The service can also be used for restarting the partition by calling SET PARTITION MODE with parameter COLD START or WARM START2 and for setting the partition to idle (SET PARTITION MODE (IDLE)) if serious faults have been detected. Nevertheless, changing the operating mode of one partition – in particular setting it to idle – affects only the respective partition and the others continue as before. Also, the module's partition scheduling remains unchanged.

For checking the current status of the partition, the service GET PARTITION STATUS is provided which returns some configuration parameters (e.g., partition identifier, period, duration) and status information like the start condition and the partition's current operating mode.
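As an illustration, the following minimal sketch shows how a partition's initial process might use these services via the C binding. It is a sketch only: the header name apex_api.h is an assumption (supplier distributions name their headers differently), and the creation of processes and communication objects is elided.

    #include "apex_api.h"  /* assumed header name; supplier-specific */

    /* Initial (main) process of a partition: runs while the partition
       is in operating mode COLD_START or WARM_START. */
    void main_process(void)
    {
        RETURN_CODE_TYPE rc;

        /* ... create processes and communication objects here
           (see Sect. 3.1.2.2 and 3.1.2.3) ... */

        /* initialization finished: start scheduling of the created
           and started processes */
        SET_PARTITION_MODE(NORMAL, &rc);

        if (rc != NO_ERROR) {
            /* a serious fault was detected: deactivate this partition;
               other partitions and the module scheduling continue */
            SET_PARTITION_MODE(IDLE, &rc);
        }
    }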

2 Operating mode WARM START is similar to COLD START; the differences depend mainly on the hardware capabilities and system specifications.


3.1.2.2 Process Management

Within each partition, the task to be accomplished by the application is performed by one or more concurrently executed processes that share the computing resources, the communication ports, and the execution time of the partition. The processes of each partition are scheduled using a priority preemptive scheduling algorithm which supports periodic and aperiodic scheduling. Several OS services support accurate process control, e.g., to change the priority, to allow or prevent rescheduling, or to stop and start processes.

The attributes of the initial process are specified during linking or in the configuration tables. The other processes are not configured in the configuration tables but created during the initialization phase of the partition using the service CREATE PROCESS. During process creation, the stack size and other process attributes regarding scheduling (e.g., base priority, time capacity and period) are determined. Thereby, the value of the attribute time capacity defines how long the process may run without rescheduling, which is crucial when the process has a hard deadline and the deadline is missed. Depending on the process attribute 'period', two types of processes are distinguished:

• Periodic processes are periodically activated as determined during process creation.
• Aperiodic processes have no period and are therefore always ready to run (if they are not suspended or waiting for a resource or a timer).

As part of the process management, the operating system administers the attributes of each process and evaluates their current values to determine which process has to be scheduled. This includes the process states and the process state transitions triggered by the respective API service calls (see figures in Sect. 2.3.2.4). To get the current process status, the API service GET PROCESS STATUS can be used. To get the process identifier (based on a process' name or for the running process), the services GET PROCESS ID and GET MY ID are provided.

Each process has a defined process state which changes as depicted in Fig. 2.4. However, after being created, each process needs to be started. Typically, the processes are started during the initialization phase when the initial process calls START for the respective process. Processes can also be started with a delay using DELAYED START. For switching to the partition operating mode NORMAL, at least one process has to be started. After switching to NORMAL mode, all started aperiodic processes are in state READY, all started periodic processes are in state WAITING, denoting that they are waiting for their next release point, and all processes that have not yet been started are in state DORMANT until being started by one of the other processes. The active process which has been selected by the scheduler according to its state and priority is in state RUNNING.

Processes can also stop other processes by calling STOP with the specific process ID. The resulting process state is DORMANT. A process can also stop itself using STOP SELF; then the scheduler has to select another process.

An aperiodic process can also be suspended (SUSPEND) or suspend itself (SUSPEND SELF). It then remains in the resulting state WAITING until it is resumed by another process (RESUME) or the timeout expires (for SUSPEND SELF).

The process scheduling depends on the priorities of the processes in state READY. To control scheduling, the priority of a process can be increased or decreased using SET PRIORITY. As a consequence, the currently running process can be preempted if it has set the priority of another process higher than its own. To avoid this preemption, the process can disable process rescheduling for the partition using LOCK PREEMPTION. Each call of LOCK PREEMPTION increases the lock level. After accessing the critical sections or resources, the process should enable rescheduling by calling UNLOCK PREEMPTION (until the lock level becomes zero).
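To make the creation and start of a process concrete, the following hedged sketch shows how an initial process might create and start a periodic process using the C binding. All attribute values (stack size, priority, period, time capacity) and the header name are illustrative assumptions, not values prescribed by the standard.

    #include <string.h>
    #include "apex_api.h"  /* assumed header name */

    static void cyclic_task(void);  /* process entry point, defined elsewhere */

    /* called from the initial process during the COLD_START phase */
    void create_cyclic_task(void)
    {
        PROCESS_ATTRIBUTE_TYPE attr;
        PROCESS_ID_TYPE        pid;
        RETURN_CODE_TYPE       rc;

        strncpy(attr.NAME, "CYCLIC_TASK", sizeof(attr.NAME));
        attr.ENTRY_POINT   = (SYSTEM_ADDRESS_TYPE)cyclic_task;
        attr.STACK_SIZE    = 4096;       /* illustrative value            */
        attr.BASE_PRIORITY = 10;         /* illustrative value            */
        attr.PERIOD        = 100000000;  /* 100 ms (times in nanoseconds) */
        attr.TIME_CAPACITY = 20000000;   /* must complete within 20 ms    */
        attr.DEADLINE      = HARD;

        CREATE_PROCESS(&attr, &pid, &rc);
        if (rc == NO_ERROR) {
            /* the periodic process enters state WAITING once the
               partition switches to NORMAL mode */
            START(pid, &rc);
        }
    }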


3.1.2.3 Communication

For interaction between applications and their equipment, the operating system provides means for communication with external communication peers, which is often referred to as inter-partition communication. For partition-internal communication between processes and their synchronization, the operating system offers appropriate services called intra-partition communication services. These groups of communication services are described in the following.

Inter-partition Communication. The processes of a partition can communicate with the processes of other partitions on the same module (intra-module or inter-partition communication) as well as with processes of partitions hosted by other IMA modules, with other non-IMA controllers, or with peripheral equipment (inter-module or external communication). For all inter-module communication, the operating system interacts with the respective interface drivers to send or receive messages. For inter-partition communication the operating system manages the respective memory areas that are mapped to the communication ports. Nevertheless, the operating system does not distinguish inter-module and inter-partition communication but provides a unified access mechanism.

The partitions (or, more precisely, their processes) communicate using messages which are sent and received using the services provided by the OS. A unified port concept supports portability of the applications by mapping the ports internally to the interface channels. Thus, the I/O interfaces used for inter-partition communication are transparent to the application and only determined in the configuration table. As a consequence, the messages at the API level contain only the payload (i.e., the data to be transmitted) and do not contain routing information like sender and receiver. If this information is required by a specific application, an application-level protocol has to ensure that the sender attaches this information.3

Each port can either be used for sending or receiving (port direction SOURCE or DESTINATION,respectively). Ports are also distinguished according to the transfer mode:

• Sampling ports are used for communicating continuously varying signals which are transmitted periodically and for which the loss of a single or a few messages is acceptable.

• Queuing ports are chosen for transmitting irregular events that may occur at random intervals (e.g., the change of a discrete value).

The information concerning source and destination of a communication link, the physical communication medium (e.g., AFDX, CAN), the transfer mode (i.e., sampling or queuing mode), the maximum message size, the queue length (for queuing ports), etc. are determined in the configuration tables.

Figure 3.2 depicts an IMA module with several partitions. Partition 1 has three queuing ports: one for receiving AFDX messages, one for sending AFDX messages, and one for communicating with partition M. Since partition M is hosted on the same IMA module, the messages are routed internally, which is transparent to the partition's processes. Partition M has one queuing port for receiving messages from partition 1, and two sampling ports – one for analog input and one for analog output.

The communication ports have to be created during partition initialization using the services CREATE SAMPLING PORT (for sampling ports) or CREATE QUEUING PORT (for queuing ports), which return a port identifier for each port to be used by subsequent API calls. Port creation allocates no further memory since all memory needed for communication management has already

3 An example of such an application-level protocol is the communication protocol between test applications and test specifications which is described in Sect. 6.1.2.


[Figure 3.2 (diagram omitted): an IMA module where avionics partition 1 owns three queuing ports (QP) and avionics partition M owns one queuing port and two sampling ports (SP); the ports are mapped via the ARINC 653 API, the operating system and the hardware interface system to the AFDX and analog I/O interfaces or routed module-internally, as determined by the configuration tables.]

Figure 3.2: IMA module with inter-partition and I/O communication using queuing and sampling ports

been accounted for at configuration time. For sending messages to the configured communication partner, WRITE SAMPLING MESSAGE and SEND QUEUING MESSAGE, respectively, are called with the message as one of the parameters. The operating system then checks that all parameters are valid (and that the queue of the queuing port is not yet full). For sampling ports, the previous message is overwritten. For queuing ports, the new message is appended to the message queue. The messages can then be received using READ SAMPLING MESSAGE and RECEIVE QUEUING MESSAGE, respectively. The operating system also provides services to get status information about the port (GET SAMPLING PORT STATUS and GET QUEUING PORT STATUS, respectively) and to get the port identifier (GET SAMPLING PORT ID and GET QUEUING PORT ID, respectively).
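The following sketch illustrates the port services for a queuing port using the C binding. The port name, message size, queue length and header name are illustrative assumptions that would have to match the configuration tables; in a real application the port is created in the initialization phase and used only after the switch to NORMAL mode.

    #include "apex_api.h"  /* assumed header name */

    #define MAX_MSG_SIZE 64  /* illustrative; must match the configuration */

    static QUEUING_PORT_ID_TYPE tx_port;

    /* initialization phase: create a sending queuing port; whether it
       is routed to AFDX or to a partition on the same module is
       determined solely by the configuration tables */
    void init_tx_port(void)
    {
        RETURN_CODE_TYPE rc;

        CREATE_QUEUING_PORT("TX_PORT_1",   /* port name as configured */
                            MAX_MSG_SIZE,  /* max. message size       */
                            10,            /* max. queue length       */
                            SOURCE,        /* port direction          */
                            FIFO,          /* queuing discipline      */
                            &tx_port, &rc);
    }

    /* NORMAL mode: send a payload-only message (no routing data) */
    void send_status_message(void)
    {
        char             msg[MAX_MSG_SIZE] = "status";
        RETURN_CODE_TYPE rc;

        SEND_QUEUING_MESSAGE(tx_port,
                             (MESSAGE_ADDR_TYPE)msg,
                             sizeof(msg),
                             0,   /* time-out 0: do not block if full */
                             &rc);
    }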

Intra-partition Communication. To reduce the overhead of inter-partition communication services when used for intra-partition communication, the operating system provides means for general inter-process communication and synchronization (buffers and blackboards) and means for inter-process synchronization (semaphores and events).

Intra-partition communication objects are created during the initialization phase using the provided API services (CREATE BUFFER, CREATE BLACKBOARD, CREATE SEMAPHORE, and CREATE EVENT). The objects are not predefined in the configuration tables, but the amount of memory required for the management of intra-partition communication objects is allocated from the partition's memory (which is defined in the configuration table), i.e., as many objects can be created as supported by the pre-allocated memory space.

Buffers and blackboards are the equivalents of queuing and sampling ports, respectively, but have no direction and thus can be used for writing and reading. This is depicted in Fig. 3.3. The respective API services are SEND BUFFER (for writing into the buffer), RECEIVE BUFFER (for reading from the buffer), DISPLAY BLACKBOARD (for overwriting the previous message), READ BLACKBOARD (for reading the current blackboard message), and CLEAR BLACKBOARD (for emptying the blackboard). Additionally, GET BUFFER STATUS and GET BLACKBOARD STATUS are provided for getting the current status, and GET BUFFER ID and GET BLACKBOARD ID for getting the respective object ID.

Semaphores are used to control access to the partition's resources. The OS provides the services WAIT SEMAPHORE (to request access to the resource), SIGNAL SEMAPHORE (to signal its end or to denote that the resource is available), GET SEMAPHORE ID (to get the semaphore ID), and GET SEMAPHORE STATUS (to get the semaphore's current status).

Events are used for synchronization between processes by notifying the occurrence of a specific condition (SET EVENT) to those processes which explicitly waited for the event (WAIT EVENT). For resetting the event, RESET EVENT can be used. The current status of the event and its event ID are returned by GET EVENT STATUS and GET EVENT ID, respectively.
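As a small illustration of the synchronization services, the following hedged C sketch guards a shared resource with a semaphore. The semaphore name, the initial and maximum values, and the header name are assumptions chosen for the example.

    #include "apex_api.h"  /* assumed header name */

    static SEMAPHORE_ID_TYPE res_sem;

    /* initialization phase: a binary semaphore (one token) guards
       a resource shared by the partition's processes */
    void init_resource_semaphore(void)
    {
        RETURN_CODE_TYPE rc;

        CREATE_SEMAPHORE("RESOURCE_SEM",
                         1,     /* current value      */
                         1,     /* maximum value      */
                         FIFO,  /* queuing discipline */
                         &res_sem, &rc);
    }

    /* NORMAL mode: bracket the critical section; the calling process
       enters state WAITING if the semaphore is not available */
    void use_shared_resource(void)
    {
        RETURN_CODE_TYPE rc;

        WAIT_SEMAPHORE(res_sem, INFINITE_TIME_VALUE, &rc);
        if (rc == NO_ERROR) {
            /* ... access the shared resource ... */
            SIGNAL_SEMAPHORE(res_sem, &rc);
        }
    }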


[Figure 3.3 (diagram omitted): an IMA module where the processes of each avionics partition communicate partition-internally via intra-partition communication objects instead of ports.]

Figure 3.3: IMA module with intra-partition communication

3.1.2.4 Time Management

Each IMA module has a single clock which is used by the operating system for scheduling and for treating deadlines or timeouts. The notion of time is local to a module; a global network time has to be received separately.

For the processes, the operating system provides services to get the current local time (GET TIME) and to wait until a timer expires (TIMED WAIT). Additionally, a periodic process can suspend its execution until the next release point using PERIODIC WAIT, which also postpones its deadline and avoids deadline violations. An aperiodic process can update its deadline with a specific amount of time provided as a parameter to the service REPLENISH.
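The typical structure of a periodic process body is sketched below in the C binding; the header name is an assumption. The period itself was fixed at process creation time, and PERIODIC WAIT suspends the process until its next release point.

    #include "apex_api.h"  /* assumed header name */

    /* entry point of a periodic process (period and time capacity
       were determined at CREATE_PROCESS time, see Sect. 3.1.2.2) */
    void cyclic_task(void)
    {
        SYSTEM_TIME_TYPE now;
        RETURN_CODE_TYPE rc;

        for (;;) {
            GET_TIME(&now, &rc);  /* module-local time */

            /* ... perform the periodic computation ... */

            /* suspend until the next release point; this also
               postpones the deadline by one period */
            PERIODIC_WAIT(&rc);
        }
    }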

3.1.2.5 Memory Management

All memory areas of the partitions are defined in the configuration tables, and allocation of additional memory at runtime is neither possible nor supported. In the configuration tables, memory areas for stack, code, and data are reserved for each partition; these are shared by all processes and communication objects (e.g., buffers) of the respective partition. In the initialization phase, when all processes and communication objects are created, each one gets its specific memory space (as requested by the attributes of the specific object) which remains reserved for the particular object until re-initialization of the partition. Thus, there is no need for specific memory management or memory allocation services.

Furthermore, the memory areas of the partitions are not accessible by processes of other partitions, which is checked by the Memory Management Unit (MMU). Any memory violation is detected by the Health Monitoring.

3.1.2.6 Health Monitoring

The operating system provides mechanisms for common maintenance including (a) monitoring and reporting of hardware, application and OS software faults and failures, (b) fault isolation mechanisms, and (c) prevention of fault propagation. For detecting errors, the health monitoring service uses the Built-In Tests (BIT) and the Built-In Test Equipment (BITE) provided by the platform's hardware and hardware interface system.

Errors may occur at module, partition or process level. Module-level errors are, for example, configuration errors, module initialization errors, or errors caused by OS-internal functions. Partition-level errors include partition configuration errors, partition initialization errors, and errors during


process management as well as errors occurring while the error handler process is running. Typical process-level errors are application errors (raised explicitly by one of the processes), illegal OS service requests, and process execution errors (e.g., overflow, memory violation, numeric errors).

Fault responses depend on the error level: For module- and partition-level errors, the fault responses are configured in the configuration tables – once for the module-level errors and per partition for the partition-level errors. At process level, the fault responses can be specified by a special highest-priority process called the error handler process. The error handler can identify the error and the faulty process and then take recovery actions at process level (e.g., stop or start processes) or at partition level (e.g., restart partition, stop partition). The error handler has to be created during the partition's initialization phase using the OS service CREATE ERROR HANDLER. If no error handler is created, recovery actions are taken at partition level.

The error handler (if created) is activated only during operating mode NORMAL. During partition initialization (even after the creation of the error handler) and for errors occurring while the error handler is active, partition-level recovery mechanisms are performed as configured.

The OS provides the following services: If a process wants to invoke the error handler, it can use RAISE APPLICATION ERROR with the specific error code APPLICATION ERROR and an error message as parameters. This starts the error handler of the partition (if created) which can then take the recovery actions.4 To do so, it can call GET ERROR STATUS to determine the error code, the identifier of the faulty process, and the associated error message. If a process or the error handler process needs to report erroneous behavior to be logged by the health monitoring function, it can use REPORT APPLICATION MESSAGE.
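A hedged sketch of such an error handler in the C binding is given below; the stack size and the recovery policy (restarting the faulty process) are assumptions chosen for illustration.

    #include "apex_api.h"  /* assumed header name */

    static void error_handler(void);

    /* initialization phase: install the partition's error handler */
    void init_error_handler(void)
    {
        RETURN_CODE_TYPE rc;

        CREATE_ERROR_HANDLER((SYSTEM_ADDRESS_TYPE)error_handler,
                             4096 /* stack size, illustrative */,
                             &rc);
    }

    /* highest-priority process, activated on process-level errors
       while the partition is in NORMAL mode */
    static void error_handler(void)
    {
        ERROR_STATUS_TYPE status;
        RETURN_CODE_TYPE  rc;

        GET_ERROR_STATUS(&status, &rc);
        if (rc == NO_ERROR && status.ERROR_CODE == APPLICATION_ERROR) {
            /* recovery at process level: restart the faulty process */
            STOP(status.FAILED_PROCESS_ID, &rc);
            START(status.FAILED_PROCESS_ID, &rc);
        }
    }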

3.1.2.7 Operating Modes of IMA Modules

ARINC 653 focuses on the interface between OS and application software and does not consider the operating modes of the module. During module initialization, i.e., after a power-on or after a hardware reset, the IMA module's core operating system software performs some basic checks

• to retrieve the current status of the IMA module (e.g., on ground, during flight),
• to check which software has already been loaded (i.e., is the OS software already loaded, have the application software and the configuration tables been loaded), and

• to ensure that the IMA hardware, the operating system software, the configuration tables, and the application software are compatible.

Depending on the results of these checks, the core OS software allows, requires, or denies loading of software and determines the next operating mode:

• If parts of the required software have not yet been loaded but all other tests have been successful, the module remains in a passive mode which allows uploading of data.5

• In case of any errors or serious problems, the IMA module is completely deactivated, i.e., halted.

• If all checks have been successful and the OS software, configuration tables, and the software of the different applications have already been loaded, the module is in so-called operational mode. The operating system then starts the scheduling of the configured partitions and each running partition performs its initialization tasks.

4 Note that recovery actions do not include correction of errors, e.g., limiting a value in case of overflow. In such cases, the possible actions are stopping and restarting of the faulty process and restarting or deactivating the respective partition.

5 The protocol for aircraft data loading is defined by ARINC 615A.


References
IMA platforms (i.e., CPIOMs and CPMs) as used within the VICTORIA project are described, for example, in [Die04a], [Die04b], [Tha04c], [Tha04b], and [Smi04].

The ARINC 653 API of the IMA operating system was first published in 1997 ([ARINC653]) and updated in 2003 ([ARINC653P1]). At this time, it was also decided to extend the ARINC 653 specification and split it into three parts: Part 1 describes the required services (i.e., all services already covered in the first standard), part 2 introduces optional extended services, and part 3 specifies procedures for conformity testing. The current standard documents are [ARINC653P1-2], [ARINC653P2] and [ARINC653P3]. The descriptions in this thesis are based on [ARINC653P1 d4s2], which is the draft document prior to adoption of [ARINC653P1-2].

3.2 Configurability of IMA Platforms

IMA modules are environments that allow full portability and reuse of applications by providing, on the one hand, an API that abstracts from hardware implementation details and, on the other hand, means for configuring the IMA module according to the functional requirements of the applications and according to the safety requirements of the system. The aims of the configurability are

• to provide flexibility for a large number of potential applications,
• to provide the OS with the information necessary for integrity checking during module initialization and for ensuring spatial and temporal partitioning, and

• to allow additionally that the available resources (i.e., processing time, memory, and communication interfaces) are efficiently shared between the partitions.

However, it is evident that the module's configuration has to be consistent and complete. The system integrator is responsible for the configurations of all modules of the system. This means the system integrator has to ensure that

• the requirements of each application are fulfilled,
• the partitions hosted by a specific module are (in sum) consistent with the hardware properties of the module (e.g., do not need more interfaces than provided),

• the scheduling, memory assignments, and communication ports of each IMA module's configuration are consistent and complete,

• the routing of messages and signals between partitions on the same IMA module, between different modules, and to/from other controllers or equipment is correct and complete, and

• the architectural requirements with respect to redundancy and selected communication techniques are considered adequately.

The role of the system integrator is usually performed by the airframer who is responsible for the various domains and the complete system.

The system integrator compiles the configuration data by collecting the requirements of the applications and capturing their communication partners. Then, the partitions of each application are assigned to the IMA modules of the respective domain by considering the partition's requirements (memory needs, scheduling period and duration, etc.), the redundancy requirements with respect to the system architecture and the criticality level of the applications, and the safety requirements


of the aircraft. The distribution can be optimized, for example, by increasing the efficient usage of modules (with respect to memory or scheduling) to reduce the number of required modules, by maximizing intra-module communication, or by distributing applications of high criticality to different modules. Moreover, the system integrator can provide spare memory and execution time for each partition or in each module to prevent that configuration requirement changes of one application have a global effect. The results of the distribution process are configurations for each module and routing tables which specify the inter-module communication and can be used to configure switches or other intermediate nodes. Appropriate tools can support the system integrator in checking for consistency and completeness or can provide means to detect and mark inconsistencies during data capturing.

For representing the configuration data, different formats are possible for defining the structure of the configuration data: an XML schema (see, e.g., [ARINC653P1 d4s2], p. 100–102), a class diagram (e.g., like the XML schema relationship class diagram in [ARINC653P1 d4s2], p. 101), a table format (e.g., as an Excel table), an ASCII representation of such tables (e.g., comma or semicolon separated value (CSV) tables), or a C structure composed of other structures, enumerations, and primitive types, among others. The configuration instances are then XML instance files, object diagrams, sets of Excel tables, sets of CSV tables, or C structures with constant, pre-assigned values. All formats can represent the same information.

In the following, we will focus on a configuration representation in the form of a set of tables which are internally represented as CSV tables. The tables are grouped according to their specification level (i.e., module level or partition level). At module level, 13 configuration tables define module-global information like, for example, the number of avionics partitions and the interface links and busses. At partition level, the partition configuration data such as partition name, temporal and spatial allocation, and communication ports are defined in 16 configuration tables for each partition. Consequently, a module with one avionics partition is configured by 29 tables, for two partitions 45 tables are needed, and so on. The number of rows of each table depends on the type of table. For example, global data at module level (GLOBAL DATA) and temporal and spatial allocation for one partition (TEMPORAL ALLOCATION, SPATIAL ALLOCATION) are each defined by a table with exactly one row. The number of rows for the health monitoring configuration tables (HM SYSTEM, HM MODULE, and HM PARTITION) depends on the number of error sources (i.e., fixed number of rows). For the other tables, the number of rows depends on the number of links, busses, messages, and signals to be defined for the module and the partitions. Figure 3.4 depicts how an IMA module configuration is composed of module-level and partition-level configuration tables.

In the following sections, the formats (i.e., the parameters) of the different configuration tables are described: Section 3.2.1 provides the column description for the module-level configuration tables, and Sect. 3.2.2 defines the partition-level configuration tables. The format is compiled according to the descriptions in [ARINC653P1 d4s2] with regard to the configuration tables used for the IMA modules tested in the VICTORIA project.
The format of the latter is confidential and the described tables are an extract containing the obvious and most relevant configuration parameters. Moreover, to limit the amount of implementation-specific information, certain sets of configuration parameters are abstracted by a single parameter – denoted by parameter names in italics.

An IMA module configuration thus consists of a set of configuration tables which assign concrete values to the configuration parameters. For abstracted parameters, no values are provided. All specific configuration values have to comply with the hardware characteristics of the IMA platform, e.g., with the memory or the number and properties of the interfaces provided by the module. Such technical data are provided by technical user guides and hardware mapping tables. In addition, the module-level configuration data have to comply with the needs of the partitions which are defined in the partitions' configuration tables. For example, for scheduling, the duration of the major time frame (parameter MAF DURATION in table GLOBAL DATA) has to be a multiple of all partition periods (for each partition: parameter PARTITION PERIOD in table TEMPORAL ALLOCATION).
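As a concrete illustration of such a cross-table consistency constraint, the following C sketch checks that MAF DURATION is an integral multiple of every partition period. It is a minimal sketch assuming a simple in-memory representation of the two tables after CSV parsing; the struct and field names mirror the configuration parameters but are otherwise hypothetical, as is the choice of microseconds as time unit.

#include <stdio.h>

/* Hypothetical in-memory representation of the relevant table rows. */
typedef struct {
    long maf_duration_us;          /* GLOBAL DATA: MAF DURATION */
} global_data_t;

typedef struct {
    int  partition_id;             /* TEMPORAL ALLOCATION: PARTITION ID */
    long partition_period_us;      /* TEMPORAL ALLOCATION: PARTITION PERIOD */
} temporal_allocation_t;

/* Returns 1 if MAF DURATION is a multiple of all partition periods. */
int check_maf_duration(const global_data_t *gd,
                       const temporal_allocation_t *ta, int partitions)
{
    int ok = 1;
    for (int i = 0; i < partitions; i++) {
        if (ta[i].partition_period_us <= 0 ||
            gd->maf_duration_us % ta[i].partition_period_us != 0) {
            fprintf(stderr, "partition %d: period %ld does not divide MAF %ld\n",
                    ta[i].partition_id, ta[i].partition_period_us,
                    gd->maf_duration_us);
            ok = 0;
        }
    }
    return ok;
}

A consistency checker run by the system integrator would apply such checks to every module configuration after parsing the CSV tables.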


[Figure 3.4: IMA module configuration consisting of several tables at module and partition level. The figure shows, in class-diagram notation, one IMA Module Configuration composed of exactly one set of module-level configuration tables (GLOBAL DATA, HM SYSTEM, HM MODULE, AFDX OUTPUT VL, AFDX INPUT VL, A429 OUTPUT BUS, A429 INPUT BUS, CAN OUTPUT BUS, CAN INPUT BUS, DISCRETE OUTPUT LINE, DISCRETE INPUT LINE, ANALOGUE OUTPUT LINE, ANALOGUE INPUT LINE) and, for each of the 1..PARTITION NB partitions, one set of partition-level configuration tables (GLOBAL PARTITION DATA, TEMPORAL ALLOCATION, SPATIAL ALLOCATION, HM PARTITION, AFDX OUTPUT MESSAGE, AFDX INPUT MESSAGE, A429 OUTPUT LABEL, A429 INPUT LABEL, CAN OUTPUT MESSAGE, CAN INPUT MESSAGE, DISCRETE OUTPUT SIGNAL, DISCRETE INPUT SIGNAL, ANALOGUE OUTPUT SIGNAL, ANALOGUE INPUT SIGNAL, OUTPUT DATA, INPUT DATA).]

Examples of IMA module configurations are included in Appendix C: Appendix C.1 provides the module- and partition-level configuration tables for a module with two avionics partitions (configuration tables for system partitions are not included). Appendix C.2 provides a set of configuration tables which have been changed with respect to the ones in Appendix C.1 by adding a pair of AFDX ports for each partition. Finally, Appendix C.3 provides the configuration tables for a module with four avionics partitions.

3.2.1 Module-level Configuration Tables

At the module level, the configuration tables define the number of partitions, global scheduling data (e.g., the duration of the major time frame), the different module-level memory areas, the health monitoring responsibilities and the respective module-level actions, and the interface busses and lines used by the partitions.

The module-level configuration tables can be grouped into three categories: general module configuration data (Sect. 3.2.1.1), health monitoring configuration data (Sect. 3.2.1.2), and I/O configuration data (Sect. 3.2.1.3).

3.2.1.1 General Module Configuration Table

This table defines the general configuration parameters of the module. For each IMA module configuration, the table contains exactly one row (see, for example, Table C.1).

GLOBAL DATA

PARTITION NB: number of avionics partitions
SYS PARTITION NB: number of system partitions
MAF DURATION: duration of the major time frame (scheduling data); has to be a multiple of all partition periods
CACHE CONFIG: cache configuration data
RAM BEGIN: start address of memory for the OS and drivers
RAM SIZE: memory size for the OS and drivers
CFG AREA BEGIN: start address of memory for the configuration data
CFG AREA SIZE: memory size for the configuration data
MAC ADDRESS: MAC addresses for sending and reception
MODULE LOCATION: identifier for location within the aircraft architecture
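For illustration, the C-structure representation mentioned in Sect. 3.2 could model a GLOBAL DATA row as follows. This is a sketch only: the field names follow the table above, but the types and the example values are assumptions.

#include <stdint.h>

/* Sketch of a GLOBAL DATA row as a C structure (one row per module). */
typedef struct {
    uint8_t  partition_nb;        /* PARTITION NB */
    uint8_t  sys_partition_nb;    /* SYS PARTITION NB */
    uint32_t maf_duration_us;     /* MAF DURATION (unit assumed: microseconds) */
    uint32_t cache_config;        /* CACHE CONFIG */
    uint32_t ram_begin;           /* RAM BEGIN */
    uint32_t ram_size;            /* RAM SIZE */
    uint32_t cfg_area_begin;      /* CFG AREA BEGIN */
    uint32_t cfg_area_size;       /* CFG AREA SIZE */
    uint8_t  mac_address[2][6];   /* MAC ADDRESS (sending and reception) */
    uint16_t module_location;     /* MODULE LOCATION */
} global_data_row_t;

/* A configuration instance is then a constant, pre-assigned value: */
static const global_data_row_t global_data = {
    .partition_nb     = 2,
    .sys_partition_nb = 1,
    .maf_duration_us  = 200000,      /* 200 ms MAF, example value */
    .ram_begin        = 0x10000000u,
    .ram_size         = 0x00800000u,
    .cfg_area_begin   = 0x18000000u,
    .cfg_area_size    = 0x00100000u,
    .module_location  = 42,
};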

3.2.1.2 Health Monitoring Configuration Tables

These tables define which error has to be handled at which level (table HM SYSTEM) and which recovery actions are to be taken if these errors shall be handled at module level (table HM MODULE). For each possible error source, the tables contain one row (see, for example, Table C.1).

HM SYSTEM

ERROR SOURCE: detected error (e.g., configuration error, divide by 0, overflow, initialization error, deadline missed, stack overflow, power interrupt, I/O access error)
RECOVERY LEVEL: defines the level at which the error is handled; possible levels: module level and partition level

HM MODULE

ERROR SOURCE: detected error
RECOVERY ACTION: defines the action when considered as module-level error; possible actions: reset, shutdown, ignore
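Together, the two tables define a two-stage dispatch: HM SYSTEM decides the level at which a detected error is handled, and HM MODULE supplies the action for module-level errors. A minimal C sketch of this lookup, with hypothetical enumerations derived from the rows above:

/* Hypothetical enumerations for the HM table entries. */
typedef enum { ERR_CONFIG, ERR_DIV_BY_0, ERR_OVERFLOW, ERR_INIT,
               ERR_DEADLINE_MISSED, ERR_STACK_OVERFLOW,
               ERR_POWER_INTERRUPT, ERR_IO_ACCESS,
               ERROR_SOURCE_NB } error_source_t;
typedef enum { LEVEL_MODULE, LEVEL_PARTITION } recovery_level_t;
typedef enum { ACTION_RESET, ACTION_SHUTDOWN, ACTION_IGNORE,
               ACTION_NONE /* handled at partition level instead */ } module_action_t;

typedef struct { recovery_level_t recovery_level; } hm_system_row_t;   /* HM SYSTEM */
typedef struct { module_action_t  recovery_action; } hm_module_row_t;  /* HM MODULE */

/* Determine the module-level action for a detected error; errors routed to
 * partition level are handled via HM PARTITION (not shown here). */
module_action_t dispatch_error(error_source_t err,
                               const hm_system_row_t hm_system[ERROR_SOURCE_NB],
                               const hm_module_row_t hm_module[ERROR_SOURCE_NB])
{
    if (hm_system[err].recovery_level == LEVEL_MODULE)
        return hm_module[err].recovery_action;
    return ACTION_NONE;
}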

3.2.1.3 I/O Configuration Tables

These tables define the AFDX virtual links, ARINC 429 and CAN busses, and the analog and discrete lines. For each communication medium, input and output are distinguished. The configuration data have to comply with the hardware interfaces provided by the module. For example, if the IMA platform provides N discrete inputs, it is only possible to configure up to N discrete input lines, i.e., the table DISCRETE INPUT LINE can have up to N rows with configuration values. Examples are provided in Table C.10 and Table C.11.

AFDX OUTPUT VL

VL NAME: name of the virtual link (VL)
VL ID: unique identifier for the VL
NETWORK: selected network (one, several, or all of the redundant AFDX networks)
PORT ID: unique identifier for the AFDX port
PORT CHARAC: type of the AFDX port (queuing or sampling)
PORT TRANS TYPE: port transmission type (unicast or multicast)
VL DATA: VL parameters such as bandwidth allocation gap (BAG), number of sub-VLs, buffer size used for the port in the module's RAM, maximum payload, etc.
IP ADDR: source (i.e., own) and destination IP address
UDP PORT: identifier of source (i.e., own) and destination UDP port

AFDX INPUT VL

VL NAME: name of the virtual link (VL)
VL ID: unique identifier for the VL
NETWORK: selected network (one, several, or all of the redundant AFDX networks)
PORT ID: unique identifier for the AFDX port
PORT CHARAC: type of the AFDX port (queuing or sampling)
VL DATA: VL parameters such as buffer size used for the port in the module's RAM, maximum payload, etc.
IP ADDR: destination (i.e., own) IP address
UDP PORT: identifier of destination (i.e., own) UDP port

A429 OUTPUT BUS / A429 INPUT BUS

BUS NAME: name of the ARINC 429 bus
CONNECTOR: cavity and pin location on the ARINC 600 connector

CAN OUTPUT BUS / CAN INPUT BUS

BUS NAME: name of the CAN bus
CONNECTOR: cavity and pin location on the ARINC 600 connector

DISCRETE OUTPUT LINE / DISCRETE INPUT LINE

LINE NAME: name of the discrete line
CONNECTOR: cavity and pin location on the ARINC 600 connector as well as its role (e.g., set switch, read status, etc.)

ANALOG OUTPUT LINE / ANALOG INPUT LINE

LINE NAME: name of the analog line
CONNECTOR: cavity and pin location on the ARINC 600 connector as well as its role (e.g., read voltage, read temperature, etc.)


3.2.2 Partition-level Configuration Tables

At the partition level, the configuration tables define the general partition data (e.g., partition name and identifier, the application to which the partition belongs, etc.), the temporal and spatial configuration data, the health monitoring actions at partition level, and the communication ports and messages and signals used by the partition. Each partition is configured by its set of partition-level tables and, therefore, the partition identifier is contained in each table.

The partition-level configuration tables can be grouped into three categories: general partition configuration data including the scheduling and memory parameters of the partition (Sect. 3.2.2.1), partition health monitoring configuration data (Sect. 3.2.2.2), and communication configuration data (Sect. 3.2.2.3).

3.2.2.1 General Partition Configuration Tables

These tables define the general partition configuration parameters such as partition name and identifier and configuration parameters for the main process (GLOBAL PARTITION DATA) as well as the partition-relevant scheduling and memory configuration parameters (TEMPORAL ALLOCATION and SPATIAL ALLOCATION). For each partition in an IMA module configuration, the tables contain exactly one row (see, for example, Table C.3).

GLOBAL PARTITION DATA

PARTITION ID: partition identifier (unique within the module)
PARTITION NAME: partition name (unique within the module)
APPLICATION NAME: name of the associated application (unique within the module, but another partition can belong to the same application)
APPLICATION ID: identifier of the associated application
CACHE CONFIG: cache configuration data (to restrict the module cache configuration)
MAIN STACK SIZE: stack size of the main process
MAIN ADDR: start address of the main process in the partition's code, i.e., the name of the main function
PROCESS STACK SIZE: size of the memory area to be used for the stacks of the other processes (each process gets a dedicated memory area located within this one); the process stack area is located in the data area of the partition
MMU CONFIG: configuration data for the MMU relevant to the partition

TEMPORAL ALLOCATION

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
PARTITION PERIOD: scheduling period for the partition
SCHED WINDOW POS: position of the scheduling window within the MAF (each position can be assigned to only one partition within the module)
SCHED WINDOW OFFSET: offset of the respective scheduling window from the beginning of the MAF
SCHED WINDOW DURATION: duration of the respective scheduling window
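The temporal allocation parameters have to describe a feasible static schedule: every scheduling window must lie within the MAF, and windows of different partitions must not overlap. The following C sketch checks these two properties under the simplifying (and here assumed) condition of one scheduling window per partition:

/* One scheduling window per partition (a simplifying assumption). */
typedef struct {
    int  partition_id;        /* PARTITION ID */
    long window_offset_us;    /* SCHED WINDOW OFFSET */
    long window_duration_us;  /* SCHED WINDOW DURATION */
} sched_window_t;

/* Returns 1 if all windows fit into the MAF and no two windows overlap. */
int check_schedule(long maf_duration_us, const sched_window_t *w, int n)
{
    for (int i = 0; i < n; i++) {
        if (w[i].window_offset_us < 0 ||
            w[i].window_offset_us + w[i].window_duration_us > maf_duration_us)
            return 0;                        /* window exceeds the MAF */
        for (int j = i + 1; j < n; j++) {
            long a0 = w[i].window_offset_us, a1 = a0 + w[i].window_duration_us;
            long b0 = w[j].window_offset_us, b1 = b0 + w[j].window_duration_us;
            if (a0 < b1 && b0 < a1)
                return 0;                    /* windows overlap */
        }
    }
    return 1;
}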


SPATIAL ALLOCATION

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
CODE AREA BEGIN: start address of the code area
CODE AREA SIZE: memory size of the code area
DATA AREA BEGIN: start address of the data area; the data area contains the process stacks, administrative blocks for each port type, for the buffers, blackboards, events, and semaphores, and for the global variables
DATA AREA SIZE: memory size of the data area
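Spatial partitioning requires that the configured memory areas of different partitions are pairwise disjoint. A consistency checker could verify this with an interval-overlap test like the following sketch (flat addresses and sizes in bytes are assumed):

typedef struct {
    int  partition_id;           /* PARTITION ID */
    long code_begin, code_size;  /* CODE AREA BEGIN / CODE AREA SIZE */
    long data_begin, data_size;  /* DATA AREA BEGIN / DATA AREA SIZE */
} spatial_allocation_t;

static int overlaps(long b0, long s0, long b1, long s1)
{
    return b0 < b1 + s1 && b1 < b0 + s0;
}

/* Returns 1 if no code or data area of any partition overlaps another. */
int check_spatial(const spatial_allocation_t *sa, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (overlaps(sa[i].code_begin, sa[i].code_size,
                         sa[j].code_begin, sa[j].code_size) ||
                overlaps(sa[i].data_begin, sa[i].data_size,
                         sa[j].data_begin, sa[j].data_size) ||
                overlaps(sa[i].code_begin, sa[i].code_size,
                         sa[j].data_begin, sa[j].data_size) ||
                overlaps(sa[i].data_begin, sa[i].data_size,
                         sa[j].code_begin, sa[j].code_size))
                return 0;
    return 1;
}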

3.2.2.2 Partition Health Monitoring Configuration Table

This table defines the recovery actions if an error is handled at the partition level. Additionally, it is configured whether the error is allowed to be handled by the error handler process (if created). For each possible error source, the table contains one row (see, for example, Table C.3).

HM PARTITION

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ERROR SOURCE: detected error
RECOVERY ACTION: action to be taken when the error is considered a partition-level error and the error handler is not allowed (see HANDLER RECOVERY), has not been created, or the error occurred within the handler; possible actions: partition idle, warm start, cold start, ignore
HANDLER RECOVERY: defines whether the respective error can be handled at process level by an error handler; possible values: yes or no
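The interplay of RECOVERY ACTION and HANDLER RECOVERY can be summarized in a small decision function: the configured partition-level action applies only when the error handler may not, or cannot, deal with the error. A hedged C sketch with hypothetical type names:

typedef enum { PART_IDLE, PART_WARM_START, PART_COLD_START, PART_IGNORE,
               DEFER_TO_ERROR_HANDLER } partition_action_t;

typedef struct {
    partition_action_t recovery_action;  /* RECOVERY ACTION */
    int handler_recovery;                /* HANDLER RECOVERY: 1 = yes, 0 = no */
} hm_partition_row_t;

/* Decide how a partition-level error is handled. */
partition_action_t decide_partition_action(const hm_partition_row_t *row,
                                           int handler_created,
                                           int error_inside_handler)
{
    if (row->handler_recovery && handler_created && !error_inside_handler)
        return DEFER_TO_ERROR_HANDLER;   /* process-level handling */
    return row->recovery_action;         /* configured partition-level action */
}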

3.2.2.3 Communication Configuration Tables

These tables define the messages, labels, and signals used by the partition as well as the inter-partition communication ports. The communication ports are associated with a message, label, signal, or other API port depending on the communication medium used for data transmission. Each message, label, signal, and port is defined in a separate row of the respective configuration table. Examples for AFDX messages and API ports using AFDX are provided in Table C.12 and Table C.13.

AFDX OUTPUT MESSAGE

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED VL NAME: name of the VL defined in the module configuration table AFDX OUTPUT VL
ASSOCIATED AFDX PORT ID: identifier of an AFDX port defined in the module configuration table AFDX OUTPUT VL; has to be an AFDX port defined for the VL with name ASSOCIATED VL NAME
TYPE DATA: type information


AFDX INPUT MESSAGE

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED VL NAME: name of the VL defined in the module configuration table AFDX INPUT VL
ASSOCIATED AFDX PORT ID: identifier of an AFDX port defined in the module configuration table AFDX INPUT VL; has to be an AFDX port defined for the VL with name ASSOCIATED VL NAME
TYPE DATA: type information

A429 OUTPUT LABEL

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED A429 BUS: name of the A429 bus defined in the module configuration table A429 OUTPUT BUS
A429 LABEL NAME: name of the A429 label
A429 LABEL NUMBER: number of the A429 label
SIGNAL LSB: position of the LSB in the A429 word
SIGNAL MSB: position of the MSB in the A429 word
TYPE DATA: type information including minimum and maximum values, resolution, etc.

A429 INPUT LABEL

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED A429 BUS: name of the A429 bus defined in the module configuration table A429 INPUT BUS
A429 LABEL NAME: name of the A429 label
A429 LABEL NUMBER: number of the A429 label
SIGNAL LSB: position of the LSB in the A429 word
SIGNAL MSB: position of the MSB in the A429 word
TYPE DATA: type information including minimum and maximum values, resolution, etc.

CAN OUTPUT MESSAGE

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED CAN BUS: name of the CAN bus defined in the module configuration table CAN OUTPUT BUS
CAN MSG NAME: name of the CAN message
CAN MSG ID: identifier of the CAN message
CAN MSG PAYLOAD: payload of the message (in bytes)
SIGNAL LSB: position of the LSB in the CAN frame
SIGNAL MSB: position of the MSB in the CAN frame
TYPE DATA: type information


CAN INPUT MESSAGE

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED CAN BUS: name of the CAN bus defined in the module configuration table CAN INPUT BUS
CAN MSG NAME: name of the CAN message
CAN MSG ID: identifier of the CAN message
CAN MSG PAYLOAD: payload of the message (in bytes)
SIGNAL LSB: position of the LSB in the CAN frame
SIGNAL MSB: position of the MSB in the CAN frame
TYPE DATA: type information

DISCRETE OUTPUT SIGNAL

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED LINE: name of the discrete line defined in the module configuration table DISCRETE OUTPUT LINE
SIGNAL NAME: name of the signal
DEFAULT VALUE: default value (GND, open, or 28V)

DISCRETE INPUT SIGNAL

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED LINE: name of the discrete line defined in the module configuration table DISCRETE INPUT LINE
SIGNAL NAME: name of the signal
LOGIC: interpretation logic of the value (positive or negative)

ANALOG OUTPUT SIGNAL

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED LINE: name of the analog line defined in the module configuration table ANALOG OUTPUT LINE
SIGNAL NAME: name of the signal
TYPE DATA: type information including minimum and maximum values, resolution, value conversion parameters, etc.

ANALOG INPUT SIGNAL

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
ASSOCIATED LINE: name of the analog line defined in the module configuration table ANALOG INPUT LINE
SIGNAL NAME: name of the signal
TYPE DATA: type information including minimum and maximum values, resolution, value conversion parameters, etc.


OUTPUT DATA

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
PORT NAME: name of the port
PORT CHARAC: type of the port (queuing or sampling)
PORT MAX MSG SIZE: maximum message size to be transmitted by the port
PORT MAX MSG NB: for queuing ports, the maximum number of queued messages; empty for sampling ports
MEDIUM TYPE: type of the communication medium (module-internal RAM, module-external AFDX, A429, CAN, DISCRETE, or ANALOG)
ASSOCIATED PORT: for RAM ports, contains the receiving API port name and its partition identifier; for the other communication media, defines the associated AFDX port identifier or the name of the label, message, or signal depending on the communication medium used (name or identifier as defined in AFDX OUTPUT VL, A429 OUTPUT LABEL, CAN OUTPUT MESSAGE, DISCRETE OUTPUT SIGNAL, or ANALOG OUTPUT SIGNAL, respectively)
TYPE DATA: type information for the payload like message type, FDS configuration data, etc.

INPUT DATA

PARTITION ID: partition identifier (as defined in the configuration table GLOBAL PARTITION DATA)
PORT NAME: name of the port
PORT CHARAC: type of the port (queuing or sampling)
PORT MAX MSG SIZE: maximum message size to be transmitted by the port
PORT MAX MSG NB: for queuing ports, the maximum number of queued messages; empty for sampling ports
MEDIUM TYPE: type of the communication medium (module-internal RAM, module-external AFDX, A429, CAN, DISCRETE, or ANALOG)
ASSOCIATED PORT: empty for RAM ports; for the other communication media, defines the associated AFDX port identifier or the name of the label, message, or signal depending on the communication medium used (name or identifier as defined in AFDX INPUT VL, A429 INPUT LABEL, CAN INPUT MESSAGE, DISCRETE INPUT SIGNAL, or ANALOG INPUT SIGNAL, respectively)
TYPE DATA: type information for the payload like message type, FDS configuration data, etc.

3.2.3 Configuration Responsibilities and Change Management

The module- and partition-level configuration tables contained in an IMA module configuration are assembled by a single authority – the system integrator. However, they include configuration information and requirements of the partitions which are provided by the application supplier. This information cannot be provided in the format needed for the partition-level configuration tables described above because

• the application supplier knows the application's communication partners but has no explicit information regarding the location (module and partition number) of the respective partitions (i.e., the supplier's own ones as well as those of the communication partner),


• the application supplier defines how many different AFDX VLs, ARINC 429 or CAN busses, or analog or discrete lines are required (to comply with the redundancy concept and safety requirements) but does not explicitly define on which links, busses, or lines these messages, labels, and signals are transmitted,

• the application supplier does not know on which module(s) the partitions of its application are running,

• the application supplier knows the memory requirements for code and data areas but does not need to know their exact location in the module's memory, and

• the application supplier knows the scheduling requirements of its partition(s) but is not aware of the sequence of partition activations within a module.

Nevertheless, other configuration data is defined by the application suppliers and has to be included by the system supplier in the partition-level configuration tables. For example, the application supplier defines the names and characteristics of its input and output communication ports.

Contract Model. To overcome the problem of different competences and responsibilities, a contract model between the system integrator and the application suppliers is suggested by Verocel (see [Ver05]) for consideration in future versions of ARINC 653. The aim of this contract model is to allow the application suppliers to define their requirements separately from the respective partition-level configuration tables and from the configuration requirements of the other applications, and to ensure that the system integrator provides adequate module configurations. The contracts are of additional value when considering configuration changes which are likely to occur during early development phases and when applications are enhanced with additional functionality. Then, the contracts ensure (a) that the system supplier changes the module configuration as requested, and (b) that changes requested by other applications hosted on the same module do not conflict with the requirements of the application.

Configuration Management. In general, the significance of configuration management is often underestimated, which becomes apparent when analyzing the respective processes and tools and the consideration of configuration management in the various standards and guidelines. For general safety-critical systems, this problem is discussed in detail by [Sto04], stating that configuration data is often assumed to be static and unchanging without considering that configuration changes require the same revalidation and recertification activities as changes to the software or hardware. When verifying and certifying IMA modules and systems using IMA, the use of different configurations is considered adequately as described in Chap. 5. Nevertheless, the approach described there expects to use consistent configurations, and adequate tool support for checking the consistency of IMA module configurations is rare. Configuration change management thus often remains a manual activity.

References
An XML configuration schema as well as some simple examples are included in [ARINC653P1 d4s2]. Configuration management for IMA modules and safety-critical systems in general is addressed by [Ver05] and [Sto04].


3.3 Developing Application Software for IMA Platforms

When developing application software to run on an IMA module, the application supplier has to consider the characteristics of the operating system and the hardware platform as well as the configuration possibilities and limitations. Of course, for a specific application, the application supplier also has to regard the application-specific requirements with respect to system behavior, system and software design, and other detailed information (e.g., on algorithms), but such considerations are independent of the target platform and are thus not considered in this section.

With respect to configuration, the application supplier mostly has to consider the partition-level configuration data provided by the system integrator based on the application supplier's initial requirements. This means for the initial configuration that each application supplier specifies its minimum requirements based on the application design and the system integrator then generates the configurations for all IMA modules of the domain and for each partition. However, during the coding phase and when enhancing the application with additional functionality, it has to be checked if the configuration is still sufficient or if configuration changes are necessary. Configuration changes of one partition may have a major impact on the overall configuration and are thus usually considered as the last resort when it is not possible to cope with the restrictions of the configuration.

However, considering the restrictions imposed by a particular configuration is not only relevant for application development but also for testing the IMA module's operating system. For example, when testing the memory allocation in the process stack area of a partition (defined by parameter PROCESS STACK SIZE), it has to be calculated how many processes N can be created with a given process stack size according to the OS specification in order to verify that N processes can be created and used, but N + 1 cannot. The same calculations have to be performed when changes in the application implementation require more processes: Then, it has to be checked if the configuration still allows creating all processes or if more memory has to be reserved for the application. The latter requires changing the partition-level configuration and, thus, has to be co-ordinated with the system integrator.
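As a worked example of such a calculation: if, according to the OS specification, each process stack occupies its requested stack size plus a fixed per-process management overhead within the process stack area, the number N of creatable processes follows directly from PROCESS STACK SIZE. The sketch below is illustrative only; the overhead constant is an assumed value, not one taken from the ARINC 653 specification.

/* Hypothetical per-process management overhead inside the stack area. */
#define PER_PROCESS_OVERHEAD 256L   /* bytes, assumed value */

/* Number of processes with identical stack sizes that fit into the
 * configured PROCESS STACK SIZE area. */
long max_processes(long process_stack_size, long stack_size_per_process)
{
    return process_stack_size / (stack_size_per_process + PER_PROCESS_OVERHEAD);
}

/* Example: with PROCESS STACK SIZE = 64 KiB and 4 KiB stacks, N = 15; a test
 * then verifies that 15 processes can be created while the 16th cannot. */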

Hence, the considerations for application software development and finding test objectives are related and include (but are not limited to):

Partitions. Each application can consist of one or more partitions which are either located on the same module or on different modules depending on their configuration requirements, safety requirements, and the system architecture defined by the system integrator. Similar to communication with other partitions, the partitions belonging to the same application have to use the common API services for intra-application communication.

Inter-Partition Resource Sharing. The resources of an IMA module are shared by several partitions which means that CPU, memory space, and HW interfaces are not uniquely assigned to one application. In particular, it has to be considered that there are predetermined periods when other partitions are scheduled which can also mean that the currently running process is interrupted at the end of its partition's scheduling window (which may result in missed deadlines). Furthermore, memory area is limited and therefore pre-allocated for particular purposes (e.g., to store the code, the global variables, partition-related OS control blocks, or the process stacks of a specific partition). In addition, further memory allocation at runtime is not supported to ensure a predictable execution time.

Intra-Partition Resource Sharing. The tasks of a partition can be accomplished by several processes which share the common resources allocated to the partition, i.e., all communication ports, the partition data, etc. However, it has to be considered that (a) writing into a sampling port or blackboard overwrites the previous message (potentially without its having been read before), and (b) reading from a queuing port or buffer is destructive. Moreover, it has to be considered that the sender and the receiver of a message are partitions and not specific processes running in these partitions, i.e., unless encoded in the message itself, the receiver has no information about the sender process and can thus not distinguish messages from different sender processes. Point (a) is illustrated by the sketch below.
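The following sketch uses the ARINC 653 APEX blackboard services to illustrate point (a): two consecutive writes without an intermediate read leave only the second message visible. Service names and signatures follow the ARINC 653 C binding; the header name and the blackboard name are placeholders for an implementation-specific environment.

#include "apex_api.h"   /* implementation-specific APEX header; name assumed */

void demo_blackboard_overwrite(void)
{
    BLACKBOARD_ID_TYPE bb;
    RETURN_CODE_TYPE rc;
    MESSAGE_SIZE_TYPE len;
    char msg[16];

    CREATE_BLACKBOARD("DEMO_BB", 16 /* max message size */, &bb, &rc);

    /* Two displays in a row: the second message replaces the first. */
    DISPLAY_BLACKBOARD(bb, (MESSAGE_ADDR_TYPE)"FIRST",  6, &rc);
    DISPLAY_BLACKBOARD(bb, (MESSAGE_ADDR_TYPE)"SECOND", 7, &rc);

    /* Any process reading now obtains "SECOND"; "FIRST" is lost. */
    READ_BLACKBOARD(bb, 0 /* no wait */, (MESSAGE_ADDR_TYPE)msg, &len, &rc);
}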

Pre-allocated Memory Areas. The memory areas to be used by a partition are defined in the configuration. The executable code of the partition has to fit in its code area (configured by the parameters CODE AREA BEGIN and CODE AREA SIZE). The data area (configured by the parameters DATA AREA BEGIN and DATA AREA SIZE) is used for the global variables defined in the source code, the stacks of the processes, and the control blocks required by the OS for managing the communication objects (e.g., ports, buffers). For the process stacks, the memory is already pre-allocated in the configuration by parameter PROCESS STACK SIZE. For communication objects, the required memory is allocated during object creation (e.g., port creation, buffer creation) and the different types of objects compete for it. If – by mistake – not enough data area is configured, the sequence of creation determines which objects cannot be created any more.

Portability. Application software shall be HW-independent and portable, meaning (a) that the partition uses the OS services to access the HW interfaces and to affect the process scheduling, (b) that the application software is written independently of its location and the location of its communication partners, and (c) that the application software makes no assumptions about the implementation of the OS other than given in the respective OS specification (e.g., ARINC 653 specification).

Inter-Partition Communication Objects. For creating sampling and queuing ports, the respective parameters (e.g., port name and maximum message size) are defined in the configuration; non-configured ports cannot be created (see the sketch below). If additional ports or different port characteristics are needed, the required configuration changes have to be co-ordinated with the system integrator and the respective communication partners.
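For illustration, a partition creating its configured ports during initialization might look like the following APEX sketch; attempting to create a port that is absent from the configuration is rejected. Service signatures follow the ARINC 653 C binding; the header name and the port names are hypothetical.

#include "apex_api.h"   /* implementation-specific APEX header; name assumed */

void create_configured_ports(void)
{
    SAMPLING_PORT_ID_TYPE sp;
    QUEUING_PORT_ID_TYPE  qp;
    RETURN_CODE_TYPE rc;

    /* Matches a row of the partition's OUTPUT DATA table (values hypothetical). */
    CREATE_SAMPLING_PORT("PRESSURE_OUT", 8 /* max msg size */, SOURCE,
                         50000000LL /* refresh period in ns */, &sp, &rc);
    /* rc == NO_ERROR if name, size, and direction match the configuration */

    CREATE_QUEUING_PORT("CMD_IN", 64 /* max msg size */, 16 /* max msg nb */,
                        DESTINATION, FIFO, &qp, &rc);

    /* A port that is not configured cannot be created: */
    CREATE_SAMPLING_PORT("NOT_CONFIGURED", 8, SOURCE, 50000000LL, &sp, &rc);
    /* rc == INVALID_CONFIG */
}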

Communication Objects. All inter-partition and intra-partition communication objects can be used directly after their creation in any operating mode (i.e., during COLD START, WARM START, and NORMAL). For example, a destination queuing port can be used for receiving messages directly after its creation in operating mode COLD START (provided that the communication partner has already sent a message).

Operating System Limitations. Regardless of the configuration restrictions, the operating system defines system limits for the number of processes, ports, etc. For example, although enough memory may be available to manage 300 blackboards, the operating system's limit is 256.

Summarizing, for developing an application, the configuration requirements have to be agreed upon with the system integrator who then compiles adequate configurations for the modules hosting the application's partitions. However, the configuration requirements often change because (a) the behavior has to be implemented in a different way than planned beforehand which, for example, may require more processes and intra-partition communication objects, (b) fault correction requires different algorithms, or (c) the implementation of new features requires additional variables, processes, or communication objects. In these cases, it has to be checked if the application's currently configured resources are still sufficient or if the implementation changes also require configuration changes.

Test applications have to consider the same configuration possibilities and limitations. In contrast to real applications, the test applications check the operating system's behavior and usage of the configuration. Therefore, in some test cases, the test applications fully exploit the configured resources in order to verify that, for example, the data area is shared as specified and can be allocated completely. Thus, it is guaranteed that real applications can be safely changed as long as this does not require more resources.
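A typical test application of this kind creates communication objects until the configured resources are exhausted and checks that the failure occurs exactly at the expected point. A minimal sketch, assuming the expected maximum has been pre-calculated from the configuration and the OS specification (header name assumed, as before):

#include <stdio.h>
#include "apex_api.h"   /* implementation-specific APEX header; name assumed */

/* Verify that exactly expected_max blackboards can be created before the
 * partition's configured data area (or an OS limit) is exhausted. */
int test_blackboard_allocation(long expected_max)
{
    BLACKBOARD_ID_TYPE id;
    RETURN_CODE_TYPE rc = NO_ERROR;
    char name[30];
    long created = 0;

    while (created < expected_max + 1) {
        snprintf(name, sizeof name, "BB_%04ld", created);
        CREATE_BLACKBOARD(name, 32 /* max message size */, &id, &rc);
        if (rc != NO_ERROR)
            break;                    /* creation failed: resources exhausted */
        created++;
    }
    /* PASS if creation succeeded expected_max times and then failed once. */
    return created == expected_max && rc != NO_ERROR;
}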

References
The characteristics of the operating system to be used for IMA platforms are defined in the ARINC 653 API specification [ARINC653P1 d4s2] that also directly addresses some of the above considerations, e.g., partitions and inter-partition resource sharing (p. 2, 3, 108), intra-partition resource sharing (p. 11, 27), memory allocation (p. 13, 26), portability (p. 6), and operating system limitations (p. 125, 148 for Ada, p. 170 for C). The configuration details are not discussed there, but follow the configuration description in the previous section.


Chapter 4

System Architectures using Integrated Modular Avionics

The concept of integrated modular avionics is to use IMA platforms as the main modular building blocks to create complex structures which shall, on the one hand, provide the execution platform for the aircraft functions and, on the other hand, support fault tolerance as required by the severity levels of the respective systems. Thereby, the specific characteristics of IMA platforms (as described in the previous chapter) affect the possible and required characteristics of system architectures using integrated modular avionics. Such system architectures usually require specific system engineering processes for development as well as verification and validation. While the latter is discussed in detail in the following chapter, this chapter aims at analyzing and discussing the architectural features and peculiarities required for understanding the changing system engineering processes for IMA architectures.

The system architecture of an aircraft can be considered at two levels: aircraft level and system level. While the aircraft-level architecture comprises the components of all systems and focuses on their interconnection and their physical location in the aircraft, the architecture at the system level describes in more detail the setup of the systems, i.e., their controllers and peripheral equipment. In the following, Sect. 4.1 analyzes the general characteristics of an aircraft-level architecture and Sect. 4.2 discusses the features of typical system-level architectures.

4.1 IMA Architecture at the Aircraft Level

An architecture specification at the aircraft level comprises the platforms for all avionics functions – from flight control to passenger entertainment – and describes their interconnection as well as their physical location in the aircraft. As mentioned above, a typical IMA architecture stands out by being composed of different types of a shared IMA module and several types of peripheral equipment. Where necessary, the architecture may also comprise special-to-purpose and COTS platforms as controllers. The controller platforms are extensively connected by fast digital data busses for inter-system communication and by analog or digital lines for communication with the respective peripheral equipment – either directly with the sensors and actuators or via intermediate so-called remote data concentrators. Within an IMA architecture, a typical choice for a controller-to-controller data bus is AFDX, whereas for controller-equipment communication others are preferred, e.g., CAN, ARINC 429, analog signals, and discrete signals.

As stated in [DP04] (p. 4), the physical architecture of an aircraft like the Airbus A380 contains the interconnected platforms and peripherals for about 100 aircraft functions. Although enormous reductions have been achieved by the use of shared platforms like IMA modules and digital data buses such as AFDX and CAN, the physical architecture still comprises very many network nodes and their connections and, therefore, it is advisable to further structure the aircraft-level architecture, e.g., into different domains. Usually, two complementary types of domains are distinguished: (1) so-called functional domains which provide a functional grouping and (2) so-called aircraft domains which separate the aircraft networks according to the criticality of the comprised systems. However, the enormous number of network nodes also results from implementing redundancy concepts that are necessary to achieve the required level of fault tolerance for each aircraft function. This means that identifying redundant platforms also supports the structuring approach. The following paragraphs at first discuss the architectural characteristics of the two domain types, then discuss redundancy concepts at aircraft level, and finally briefly address structuring according to the physical location of the components. The section concludes by analyzing a sample aircraft architecture in Sect. 4.1.1.

Functional Domains. To provide a structure in the overall aircraft architecture, the aircraft functions are functionally grouped into so-called domains. These typically comprise the flight control domain, engine domain, cockpit domain, cabin domain, utilities domain, energy domain, OIS domain, and PCES domain (see Sect. 1.1 for a short characterization of each domain). This approach of distributing the systems across different domains also reflects the diverse requirements with respect to safety and security which have to be considered when choosing the system architectures and the redundancy concepts, but also when choosing the tools and methods during the development process. For example, the systems in the flight control and the engine domain are obviously most safety critical and therefore require redundant and diverse controllers. On the other hand, the systems in the PCES domain are provided for comfort and entertainment and are not required for safe flying – thus, the demands for redundancy are less stringent. Even less critical with respect to safety are the passenger-owned devices which nowadays can access the on-board information systems or the Internet. However, with respect to security, functional grouping is not sufficient to avoid possible undesired intrusions, e.g., from passenger-owned devices to flight control. Therefore, the functional grouping has to be complemented by physically separating the systems of different criticality by assigning them to different aircraft domains.

Aircraft Domains. To ensure the appropriate level of safety and security for each system in an IMA architecture, it is recommended to assign the systems to different networks which are physically separated by appropriate means but still allow exchange of well-defined data. As described in Sect. 1.6, it is common to divide the aircraft computing network into four sub-networks (also called aircraft domains). Thus, aircraft control systems (belonging to sub-network 1) are separated from airline information services (sub-network 2), passenger information and entertainment services (sub-network 3), and passenger-owned devices (sub-network 4). However, the separation into these four aircraft domains is somewhat driven by theoretical considerations and, in practice, the architectures often consider only two or three separate networks: All systems which do not belong to the aircraft control network are considered to be open world and are separated from the networks in the aircraft control domain by a secured communication interface (SCI). Depending on which systems are considered to be in the open world, passenger-owned devices may be further separated. In the following section's analysis of a sample aircraft architecture, this different aircraft domain structure can be observed: Systems belonging to the OIS domain which provide or collect administrative data (and thus should be assigned to the separated airline information services domain) are connected directly to the aircraft control network. However, this different allocation of systems to separated networks does not necessarily result in a lower level of security and safety since it can be compensated by appropriate verification and validation means.

Redundancy Concepts at the Aircraft Level. Wherever required, appropriate redundancy concepts are implemented, ranging from duplicated software on duplicated platforms (e.g., for the cabin intercommunication data system) to dissimilar software running on platforms with different hardware architecture (e.g., for the flight controller). Additionally, redundancy is provided for the inter-controller network (usually by duplicated wiring) and the data busses and signals to the peripheral equipment (usually by different busses to sensors or actuators at the same location). These forms of redundancy also require a redundant power supply with important controllers connected to both supplies. Identifying redundant controllers is one additional means for structuring the aircraft architecture.

Physical Architecture. One main characteristic of integrated modular avionics is the use of shared platforms where several system controllers share the computing, memory, and interface resources of one IMA module. Thus, each functional domain has a set of IMA modules which host the controlling and monitoring functions of the systems and which are connected to the AFDX network for communication with systems hosted by other modules (either located in the same or in a different functional domain). For redundancy, the modules as well as the communication network are duplicated (or triplicated) and placed at different locations in the aircraft to support the ability to manage system loss caused by smoke or fire without losing functions whose loss would be catastrophic.

4.1.1 Analysis of a Sample Aircraft Architecture

There are only very few published IMA aircraft architectures which can be used for more detailed analysis. Examples of a potential A380 architecture are shown, for example, in [DP04] (p. 4) and quite similarly also in [ZB05]. In this thesis, we will further analyze an aircraft architecture shown in Fig. 4.1 which is based on these examples. The figure depicts the controller platforms of the systems (white boxes) and their interconnections (red and blue lines). The figure does not contain any peripheral equipment like actuators or sensors, but for a few systems remote data concentrators are also contained. All names of the systems and system components are abbreviated and the long forms of the acronyms are given in Appendix G. The following paragraphs analyze the depicted aircraft architecture with respect to redundancy and grouping as described above.

Network Redundancy. The network used in the sample aircraft architecture is a redundant switched network – probably an AFDX network. Figure 4.2 focuses on this detail by emphasizing only the fully redundant wiring (blue and red lines) and AFDX switches (blue and red nodes). The open world is connected to the SCI using a different networking technology – probably Ethernet – without further redundancy (single black line).

Assignment to Aircraft Domains. When assigning the systems to their probable aircraft domains, it becomes obvious that almost all explicitly mentioned systems belong to the aircraft control domain and only a few belong to the airline information services domain and provide administrative information or passenger information and entertainment support. All systems belonging to the open world may be assigned to one of the latter ones depending on their task. This assignment to the aircraft domains is emphasized in Fig. 4.3 which also shows that the redundant AFDX network is primarily used for controller-to-controller communication within the aircraft control domain. The passenger-owned devices aircraft domain is not shown because, at most, the respective gateway platform is part of the aircraft architecture.

Assignment to Functional Domains and Redundancy of Systems. The sample aircraft architecture in Fig. 4.1 already sketches a possible functional separation of the systems into the eight different functional domains distinguished in this thesis.


[Figure 4.1: Example of a possible A380 system architecture at the aircraft level. The figure shows the controller platforms (e.g., FCGC, FCSC, and SFCC COM/MON pairs, ADIRUs, FMs, EECs, EHMs, IOMs, IRDCs, CIDS, SPDBs, fuel and landing gear COM/MON pairs, and SCIs) grouped into the flight control, cockpit, engine, cabin, energy, PCES, utilities, and OIS domains as well as the open world, interconnected by the redundant network.]

Unlike the figure in [DP04] (p. 4) – where only four different groups are shown –, Figures 4.1 and 4.4 offer a more fine-grained subdivision that splits the utilities group in [DP04] (p. 4) into systems belonging to the OIS domain, the energy domain, the PCES domain, the cabin domain, and the utilities domain.

Further analysis of Fig. 4.1 shows that (a) the engine controllers are provided in quadruplicate to assign one engine controller to each engine (emphasized in Fig. 4.4 as four green engine domain boxes), (b) the systems in the flight control domain and the cockpit domain are mostly provided in triplicate (depicted as three rose and three pink areas), and (c) the others are usually duplicated (depicted as two boxes per domain).

In addition to this (hardware) redundancy, Figures 4.1 and 4.4 also point out the possible physical location of the platforms in the left-hand and right-hand side of an airplane. This is depicted by the vertical symmetry axis. This presentation in the figures is motivated by knowing that the redundant platforms are usually not located in the same area, to protect them from disturbances which occur only in one part of the aircraft (e.g., fire in one of the avionics compartments).

Assignment to Platforms. The architecture provided in Fig. 4.1 does not consider the assignment of systems to platforms. In particular, it is not shown which systems share a common IMA module. This is depicted, for example, in [AV04a] for the cabin domain systems air conditioning, fire and smoke detection system, doors and slides management system, and cabin pressure system. For the utilities domain, [AV04f] shows that the controlling and monitoring components of the fuel system, the landing gear system, the braking system, and the steering system share two IMA modules.

[Figure 4.2: Redundant switched network for interconnection of the platforms (based on Fig. 4.1)]

References
Brief design guidance for an IMA architecture at the aircraft level is given, among others, in [AC20-145], p. 21. Sample IMA aircraft architectures are shown in [DP04], p. 4 and [ZB05]. Less detailed architectures are also shown in [Rag04], p. 7 and [Tha04d]. Examples of aircraft architectures focusing on a single functional domain are also depicted in [AV04a] (cabin domain), [AV04c] (cockpit domain), [AV04d] (energy domain), [AV04e] (OIS domain), [AV04b] (PCES domain), and [AV04f] (utilities domain). The different components contained in an aircraft architecture have been described, compared, and categorized in the beginning of this thesis: different platform types in Sect. 1.5 and different aircraft networking technologies in Sect. 1.6. The concepts of hardware, software, and information redundancy have been described in Sect. 1.4.


[Figure 4.3: Grouping into aircraft domains (based on Fig. 4.1). The figure shows the aircraft control domain, the airline information services domain, and the passenger information and entertainment services domain, with the open world connected via the SCIs.]


[Figure 4.4: System redundancy and grouping into functional domains (based on Fig. 4.1). The figure highlights the four engine domain instances, the triplicated flight control and cockpit domains, and the duplicated cabin, energy, OIS, PCES, and utilities domains, together with the open world and its SCIs, arranged symmetrically about the aircraft's longitudinal axis.]


4.2 IMA Architecture on System Level

As described above, an IMA architecture comprises several interconnected platforms which are used by the system functions or applications for computing and system-to-system communication. The platforms are also connected to peripheral equipment because, for these computations, most systems require several inputs from directly connected sensors and also generate outputs for directly connected actuators. While the previous section has focused on the aircraft-level IMA architecture and discussed the characteristics of the complete network of platforms, this section briefly analyzes system-level IMA architectures and addresses their redundancy concepts.

Each system is usually composed of a controller part which gets inputs from sensors or other systems and provides outputs to actuators or other systems, i.e., the system is connected at the aircraft level to other systems, but also sets up a network of its own at the system level for communication with its peripheral equipment. For this controller-to/from-peripheral communication, typically other communication links are used than for inter-system communication (as already discussed in Sect. 1.6). The sensors and actuators can be connected to the computing platform either directly – typically using digital or analog signals – or via so-called smart components or remote data concentrators which convey the inputs and outputs of the connected sensors or actuators and communicate with the platform using CAN or ARINC 429. Nowadays, the latter solution is usually preferred if the sensors and actuators of the system are physically distributed over large areas of the aircraft because this allows a significant reduction of wiring weight and space.

A typical architecture at the system level is depicted in [Spi01] (p. 33-9) addressing the fuel gauging system, which is similarly also shown in [AV04f] and [MS03] (p. 302–308). The main characteristic of fuel gauging systems is that the main controller is connected via ARINC 429 (or ARINC 629) to so-called fuel remote data concentrators (FRDCs) which are located close to the tanks and which are connected to the sensors via analog or discrete signals. The task of the FRDCs is to collect and combine the information from the sensors (e.g., the fuel temperature and density as well as many other variables) and transmit them as data messages. The architectures of other systems like the doors and slides management system are similar but use CAN for connecting to the smart components.

The following paragraphs further discuss typical redundancy concepts implemented for system-level IMA architectures.

Redundancy Concepts at the System Level. The redundancy concepts at the system level depend on the considered system, i.e., highly safety-critical systems (e.g., the flight control system) implement more elaborate concepts. In the following, we will further analyze the elaborate redundancy concept of the flight control system.

Command and Monitor Elements. As shown in Fig. 4.1, highly safety-critical systems like the flight control system, the slats and flaps control system, the landing gear system, and the fuel system comprise command and monitor elements (white boxes labeled COM and MON, respectively). Such a configuration increases the redundancy and allows cross-checking and integrity checking. Usually, diversely developed software is used for the two elements. The utilities domain (aircraft-level) IMA architecture depicted in [AV04f] shows that the command and monitor elements of the fuel system, the braking system, and the steering system are assigned to different IMA modules.

Primary and Secondary Controllers. Another means of redundancy for highly safety-critical systems is to provide primary and secondary computers. The flight control system has a primary flight control and guidance computer (FCGC) and a flight control secondary computer (FCSC), which is a hot standby for the primary one. Both flight control computers are depicted in Fig. 4.1. As described in [MS03] (p. 260), this concept is usually accompanied by dissimilar hardware and software for the primary and secondary computers.


Redundant Controllers. In addition to the other redundancy concepts, the primary and secondary controllers are each triplicated, as also depicted in Fig. 4.4. In less safety-critical systems, the controllers are usually only duplicated.

Mechanical Reversion. The last means of redundancy for important computers is a mechanical reversion, e.g., for the flight control system computers, the reversionary flight control allows the aircraft to be flown and landed manually. However, new aircraft like the A380 use fly-by-wire without the mechanical backup and compensate for it by means of a direct electrical link and a considerable degree of dissimilar redundancy ([MS03], p. 260–261, 283–286).

Network Redundancy. The redundancy concept applied for the intra-system network depends on the criticality of the system and the type of communication link. For example, for most links between the system's main controllers and the local controllers or sensors/actuators located somewhere in the aircraft, duplication of the wires is not an option since this would double the weight and volume of wiring. However, most systems have several sensors/actuators at one location to protect the system against hardware failures of these components. Thus, redundant wiring is already provided, and the redundancy characteristics can be improved by connecting these redundant wires to different power supplies. For example, for the smoke detection function (SDF), several smoke detectors are located in each compartment, each connected to one of the redundant SDF controllers via different CAN busses (which are in turn connected to normal power or essential power).

References
System-level architectures – especially for highly safety-critical systems like the flight control system – are addressed in several books (e.g., [MS03], [Spi01]). For example, [MS03] describes the architecture of the flight control system for different architecture models and also addresses their various redundancy concepts. The (aircraft-level) IMA architecture of the utilities domain in [AV04f] also provides information about the implemented redundancy concept.


Chapter 5

Testing of Systems based on Integrated Modular Avionics

In the beginning of this thesis – particularly in Sect. 1.1 and Sect. 1.7 – the general development process for avionics systems has been described, and differences with respect to the chosen architecture or platform concept have been discussed. For systems based on integrated modular avionics, the main characteristic is that different systems can be hosted on a standardized IMA platform that is developed independently of the system's application software to be run on the IMA module (see Chap. 3). This means that if one type of IMA module is used for all applications, it is only developed once. This separation of application development and associated platform development is also depicted in Fig. 5.1 on the left-hand side of the V. However, the figure focuses on the further consequences for the right-hand side of the V – the system test approach for avionics systems using shared platforms like IMA modules. As pictured by the separate development and V&V process for the IMA platform, a shared resource has to be tested only once and can then be used by the applications.

[Figure 5.1 shows a V-model in which the IMA platform has its own development and V&V branch; on the right-hand side, platform tests and application tests are followed by system tests, multi-system tests, system integration tests, and ground and flight tests at aircraft level. The modules D1 and D2 host the applications S11, S12, S21, and S22 with their equipment E111, E121, E211, and E221.]

Figure 5.1: Verification and Validation Process using an IMA architecture: (a) IMA platform testing (green pattern), (b) application software tests (red pattern)

In the following section (Sect. 5.1), the system test approach for an IMA-based system architecture is described. Then, two of the described test steps are addressed in more detail: testing a single IMA module with different test configurations and test applications in Sect. 5.2, and testing the communication flow in a network of configured IMA modules which execute specific test applications in Sect. 5.3. This chapter concludes in Sect. 5.4 with a discussion of the general requirements for test benches used for such tests. Finally, examples of a test engine and a test system which can be used for testing IMA platforms are given.

5.1 System Test Approach for Integrated Modular Avionics

The test approach for systems based on integrated modular avionics is based on the general system test approach for avionics systems as described in Sect. 2.2. This means that the approach follows a bottom-up strategy starting at equipment level by testing each platform and each application and continuing with stepwise integration and testing. However, the details of the integration and testing steps differ due to the specific characteristics of IMA modules and the IMA approach. In particular, it has to be considered that (a) the modules are configurable, (b) IMA modules are shared resources, (c) IMA modules provide a standardized API to be used by the applications, (d) IMA modules provide standardized hardware interfaces, and (e) IMA modules and the application software for the different systems are developed by different suppliers.

Configurability mainly affects testing of the platform itself because the subsequent steps require a tailored (i.e., appropriately configured) and tested module. Nevertheless, the integration steps also use different configurations, on the one hand, because the system architecture contains several differently configured modules and, on the other hand, because certain integration steps require only partial configurations (e.g., only the partitions of the specific application are tested and others are therefore not configured). The effect of the other IMA characteristics – particularly that one IMA module usually hosts the applications of several systems which are all supplied by different companies – is that new integration steps have to be introduced which consider stepwise integration of the applications hosted by the same IMA module but also stepwise integration with the peripheral devices of all these systems.

In addition, the overall test approach has to divide the integration and testing activities in a way that, on the one hand, supports joint activities of IMA module manufacturers, avionics software developers, and system integrators while still defining clear responsibilities for each step and, on the other hand, simplifies certification and particularly re-certification of the IMA module or one of the applications. Moreover, the test approach shall support integration testing and fault localization in failure situations such that debugging activities can be restricted based on the results of previous test steps. For example, debugging activities can focus on the application itself if platform tests with the same configuration have not detected any problems.

For the following more detailed considerations, we will assume that (a) the system architecture comprises only systems to be integrated on IMA modules and their peripheral equipment and (b) the peripheral equipment has already been tested separately. For a mixed architecture with IMA modules as well as special-to-purpose and COTS platforms, the IMA test approach is combined with the conventional one.

5.1.1 Platform Testing

The approach for platform testing is affected, on the one hand, by using a configurable module and, on the other hand, by providing a shared platform to be used by different suppliers' applications. In particular, the responsibilities for platform development, application development, peripheral equipment development, and finally system integration are in most cases allocated to different companies. With regard to the IMA platform, its supplier is responsible for providing a tested and qualified hardware platform with the associated operating system but is not responsible for further integration steps. These are supervised by the system integrator, which therefore defines the module configurations, stepwise integrates the applications, and assembles the systems. As a consequence, the system integrator requires fundamental confidence in the module itself and the chosen configurations. Hence, platform testing is performed both by the supplier and by the system integrator. The latter performs its tests in close cooperation with the supplier.

Supplier Platform Verification. The objective of the supplier's activities is to show full compliance with the platform requirements by considering the platform's hardware, the operating system, as well as their integration. Different verification methods are applied: hardware tests as mentioned in Sect. 2.2, analysis methods like model checking and formal proofs or software tests for the source code of the operating system, and hardware/software integration tests. All activities are part of the platform certification to be performed by the supplier and usually take place at the supplier's site.

Integrator Platform Tests. The objective of the integrator's activities is to achieve confidence in the platform's functionalities and properties when configured as defined by the configuration tables and to complement the supplier's HW/SW integration tests. Thereby, the configurability of the modules plays a major role and the tests are performed with different configurations – ideally with all possible configurations such that all configurations used later as well as potential ones have already been tested. However, performing all functional tests with all possible configurations is unrealistic with respect to time and costs, and thus only a selection of relevant test configurations is specified and used for functional testing. As a consequence, it is then required to perform the tests also with the real configurations, i.e., with those which are used in subsequent integration steps. The integrator's tests are performed on the integration test bench at the integrator's site, which is also used for further module integration tests.

Further details on testing a single IMA module are provided in the next section (Sect. 5.2).

5.1.2 Application Testing

The applications to be hosted by an IMA module use the services provided by the ARINC 653-compliant operating system (see Sect. 3.1.2). Again, the distributed supplier chain requires that certain test activities are performed by the application supplier and others by the integrator (who closely cooperates with the suppliers).

Supplier Application Verification. For the application software, most verification activities comprise testing; only for a few very critical applications are other means like formal analysis applied. Application testing as described in Sect. 2.2 can be performed at module level, software integration test level, and HW/SW integration test level. For the integration test levels, an emulator and a simulator for the IMA module's operating system are provided as part of the development environment. These tools simplify testing of the application by allowing access to the resources used by the application; they provide the required operating system API and are configurable like IMA modules. As usual, such tools lack the hardware interfaces of the IMA module and do not comply with the performance requirements of the IMA module. However, running the application software in these platform simulations is a good preparation step for HW/SW integration testing with the IMA module. For this integration step, partially configured platforms are used, which means in this context that the configuration provides the application's partitions as in the final configuration but may omit other avionics partitions. For all tests, the communication partners of the application, i.e., other systems or peripheral devices, have to be simulated. The verification and testing activities are relevant for application software qualification but have to be complemented by later integration activities.


Integrator Application Testing. The integrator performs application testing only on target, i.e., using a configured module. This means that the IMA module is configured for its purpose of hosting several applications including the application under test. During integrator application testing, the respective application is functionally tested using simulations of the other applications which are stepwise replaced by the real applications. Additionally, simulations of the peripherals are used as required. However, by integrating the different applications onto one module, system integration and multi-system integration steps are partially anticipated.

5.1.3 Platform Tests with Application Software

After integrating all applications into the IMA module, the integrator verifies the compliance of the fully integrated module with the expected interface behavior, the expected module performance, the correct module operation (e.g., with respect to the module's operational modes), the correct configuration (e.g., with respect to memory usage, temporal partitioning, etc.), as well as the power consumption and the behavior in case of power interrupts. The objective of these tests is the qualification of the fully integrated module with the final configuration. These tests do not focus on the correct application behavior but on the module's functionality. Hence, they are in a sense "late platform tests" but can only be performed after all applications have been integrated (as done in the previous step).

5.1.4 System Integration / Multi-System Integration

A consequence of using integrated modular avionics is that system integration and multi-system integration cannot be separated as easily as before since a fully integrated module usually contains the application software of different systems. Nevertheless, the systems, i.e., the application software and the peripheral devices of each particular system, have to be qualified separately. Again, a two-step approach is applied for system integration:

Supplier System Integration. Based on the supplier's previous integration step, where the application was integrated into a partially configured IMA module, the simulations of the system's peripheral equipment are step-by-step replaced with the real devices. Nevertheless, the system's environment is still simulated. The aim of this step is to show compliance with the system requirements.

Integrator System Integration. The activities in this step are based on the result of the integrator's application integration – the fully integrated module, i.e., the IMA module with all applications to be hosted. As during supplier system integration, the peripheral devices are assembled in a step-by-step manner while still using simulations for systems to be hosted by other modules. The integrator's system integration testing is usually performed on specific system integration test rigs which are developed specifically for the respective system under test. The results of the system integration testing are relevant for system qualification.

Multi-system integration is performed at domain level by stepwise integrating the domain's IMA platforms and their peripheral equipment, resulting in an AFDX network of fully integrated modules which are each connected to the respective peripherals using other networking technology. As a preparation for this integration step, it is advisable to perform network integration tests with the configured modules to show their compliance and compatibility with respect to the network configuration.


Network Integration Testing. A domain's network consists of a set of configured modules which are interconnected using AFDX and connected to their peripherals using typical intra-system communication means like CAN, ARINC 429, etc. The respective communication links are defined in the domain configuration which consists of a set of module configurations. The network integration tests shall demonstrate that the modules and the network provide the configured routing by allowing simultaneous network access from different modules via the configured ports and comply with the specified performance constraints. Of particular interest are worst-case scenarios for the modules and the network which occur when, for example, intra-module or inter-module communication is intensified. Hence, these tests use configured modules which execute specific test applications instead of the real applications. All tests are performed by the domain integrator.

Further details on testing a network of IMA modules are given in Sect. 5.3.

Multi-System Integration. During multi-system integration, all fully integrated IMA modules and the peripheral equipment of the respective systems are stepwise integrated and then functionally tested as a whole. Thereby, the interaction between the avionics systems can be functionally verified and the network performance can be analyzed. In addition, the multi-system integration tests allow proving the domain's architecture and redundancy concepts, e.g., the redundant network for inter-module communication.

5.1.5 System Integration Testing

At aircraft level, the networks integrated in the previous step are stepwise assembled – usually starting in a laboratory environment like an iron bird and then continuing on the aircraft. The tests performed by the aircraft integrator comprise functional tests of the systems as well as certain tests which are required at aircraft level for qualification of the hardware (e.g., electromagnetic interference tests).

5.1.6 Ground Tests and Flight Tests

System integration testing is followed by ground tests and flight tests. Ground testing and flight testing in IMA-based architectures do not differ from the conventional approach described briefly in Sect. 2.2 because the aircraft-level architecture is usually a mixed architecture comprising IMA platforms as well as special-to-purpose and COTS components.

References
The strategy for verification, validation, and testing of IMA platforms is discussed, for example, in [Pel03]. This strategy is compliant with the approach followed in the research project VICTORIA as introduced, for example, in [Air04] and [TD04a], which describe the provided tool chain for system development and the respective verification and validation steps. The IMA-related impact on the guidelines for system acceptance is also addressed in other publications, e.g., [Ret05], [Gas05].

5.2 Testing Single IMA Platforms

The previous sections have described the testing and integration strategy for IMA-based architectures and identified, among them, the tests performed with a single IMA platform: platform tests, application tests, and integrated platform tests. For each such integration test step, the system under test (i.e., the IMA module with its configuration and its applications) is characterized by the combination of configuration and applications used for testing: the IMA module can be configured using an arbitrary but legal test configuration, using a part of one of the final configurations, or using one of the final configurations; the applications running in the configured partitions are either real applications or specific test applications. Nevertheless, only a subset of combinations is possible and executable because a configuration has to comply with the requirements of each application running in one of the partitions by providing, for example, enough memory, adequate scheduling, correct port names, etc. Thus, almost all combinations using arbitrary test configurations and real applications are very unlikely to comply unless the test configuration is similar to the respective real configuration. The combinations listed below are more likely and are thus applied in those tests considering a single IMA module:

Combination 1. IMA module with test configuration and test applications running in the configured partitions.

Combination 2. Configured IMA module with one of the final configurations and specific test applications running in the partitions.

Combination 3. Partially configured IMA module with the respective real applications.

Combination 4. Configured IMA module with one of the final configurations, with some integrated real applications and simulations or test applications for the others.

Combination 5. Fully integrated IMA module with one of the final configurations and the respective applications.

Thereby, platform testing uses combinations 1, 2, and 5 for testing the functionalities and properties of the platform (see Sect. 5.1.1 and Sect. 5.1.3). Application testing uses combinations 3, 4, and 5 with a focus on the application under test, which is one of the integrated applications (Sect. 5.1.2).

In the following, we will briefly discuss only those tests using test applications (i.e., combinations 1 and 2) because real application software can vary considerably. Moreover, the specification of real applications is usually confidential and thus testing their functionality cannot be discussed in this thesis. Section 5.2.1 considers testing a "bare" IMA module with test configurations and test applications (combination 1), and Sect. 5.2.2 discusses what has to be considered when using real configurations instead of test configurations (combination 2).

5.2.1 Testing Bare IMA Platforms

For bare IMA platform testing, an IMA module is functionally tested with many different test configurations and, therefore, specific test applications run in each configured partition. To show full compliance with the module's specification, the basic idea is that the applications execute all possible application behavior and are additionally combined with all possible configurations. Thus, confidence is increased that the module can be used for the envisaged purposes but also meets future demands.

Bare module tests are hardware-in-the-loop tests which stimulate and check all interfaces of the IMA module: for the internal interfaces, the test configurations address the module's configurability and the test applications use the operating system API. For the external interfaces, a test system is connected to the hardware interfaces (see Sect. 5.4.2) and executes appropriate test specifications which control and monitor all interfaces. Thus, the operating system of the IMA module as well as its hardware interface system (including the interface drivers) and the hardware itself are tested.


Test Objectives. The objectives for testing an IMA module can be grouped according to the verified property or service category; the resulting groups are defined in the test plan. Within this thesis, the following grouping is suggested (in compliance with the test objectives used within the VICTORIA project):

• Operating system test objectives shall verify that the operating system services perform as specified in the respective OS specification (e.g., in the ARINC 653 specification).

• Partitioning test objectives shall verify that robust partitioning is established for the shared resources (computing resources, memory, interfaces) and that no application can jeopardize the behavior of another application.

• Inter-partition communication test objectives shall verify that the I/O communication works as specified and configured and that the respective operating system services perform as specified when using different interface types.

• Intra-partition communication test objectives shall verify that partition-internal communication through buffers, blackboards, events, and semaphores works as specified.

• Data loading test objectives shall verify that it is possible to load different configurations and applications using appropriate data loading tools.

• Health monitoring test objectives shall verify that the health monitoring services detect problems and violations and handle them as configured or specified.

• Configuration test objectives shall verify that the resources are configurable within the allowed range and that valid configurations can be loaded onto the module.

• Operational modes test objectives shall verify that the startup checks detect invalid configurations, inconsistent data loads, etc.

• Power supply test objectives shall verify that power interrupts are correctly handled and power consumption is within the specified limits for various operating conditions.

A general test objective is to demonstrate that the IMA module provides the specified features for all valid module configurations and for nominal operating conditions as well as for failure modes and worst-case scenarios.

Test Approach. The basic idea is a two-part testing approach which consists, on the one hand, of test applications performing the API calls and, on the other hand, of test specifications controlling the external interfaces. For the test applications, it is possible (a) to have tailored applications which each execute a specific sequence of API calls or (b) to implement a generic test application which provides some kind of interface for remote procedure calls triggered by external test specifications. When implementing the first solution, it would be necessary to have a set of dedicated test applications for each test procedure, probably containing one specific application for each partition (which may also vary if the test procedure is tested with different configurations). Unfortunately, the second solution is not directly supported by the ARINC 653 API, which contains no services for remote procedure calls (RPC) or remote method invocation (RMI) – the only interfaces to external partners are ports and the specific API services to access them.1 To overcome this drawback while avoiding the implementation of very many tailored applications, a protocol is designed which emulates an RPC-like mechanism using ports connected to the external interfaces: the generic test applications implement a command interpreter for API calls and receive the commands via reserved command ports. If a new command has been received, the respective API call is executed and the return code and output parameters are returned using a specific return port (a code sketch of this interpreter is given at the end of this subsection). The commands – and thus the API calls – are triggered by the external test specifications. Hence, test specifications are used for two purposes: (1) for controlling the test applications and invoking remote API calls via reserved external interfaces and (2) for controlling and monitoring the other external interfaces.

1 The port concept and the respective API services are described in the paragraph on inter-partition communication in Sect. 3.1.2.3.

A difference between the implemented message-passing model and the conceptually desired remote procedure call mechanism is that triggering an API call and receiving its return code and output parameters are encoded as separate, asynchronous messages. This means that a blocking command protocol has to be implemented in the test system when fully emulating the RPC mechanism. However, the advantage of the message-passing model for our testing approach is that non-blocking commands facilitate pre-programming a sequence of API calls (provided that a queue stores them for the test application) and further allow distinguishing controller and checker test specifications (otherwise both would have to be implemented inseparably in one test specification). Another difference between the above-described command protocol mechanism and RPC is that the latter requires a relatively fast connection between the test application (i.e., the IMA module) and the test specification (i.e., the test engine executing the test system). In contrast, the command protocol mechanism allows pre-programming a sequence of commands which can be processed much faster than a sequence of remote API calls over a low or medium bandwidth network. However, for a few performance tests where specific API calls have to be performed in rapid succession, neither of the two mechanisms is sufficient and an additional scenario mechanism is needed, even though it dilutes the clarity of the approach. These scenarios are pre-defined test application functions which are triggered like API calls but result in a sequence of API calls instead of only a single one. Conceptually, this means that the ARINC 653 API is extended by the scenario services.

Despite the effort for implementing the command protocol and the scenarios, the approach of using a generic commandable test application allows bundling most of the test-relevant data for each test procedure within the test specifications. This reduces the complexity of each test procedure, in particular of the tailored parts within the test application. Furthermore, only one test application has to be implemented, which compensates for the implementation effort for the command protocol. As a consequence, the test suite contains one generic test application and an extensive set of test procedures.

As stated above, one aim of these bare module confidence tests is to perform each test procedure with as many different IMA module configurations as possible. This means, on the one hand, that a configuration library with possible test configurations is required and, on the other hand, that parameterized test procedures and test specifications have to be developed (so-called test procedure templates).
For test execution, each test procedure template is then instantiated with appropriate test configurations, resulting in many test procedures, each of which uses a different configuration but focuses on the same functionality.

Summarizing, a test suite for bare module testing contains a generic test application, a configuration library, and a test procedure template library. A case study for such a test suite is introduced in Chap. 6, which details the above test approach and provides examples of test specifications and test configurations. Furthermore, it addresses the environments and the tools required for test preparation and test execution. Note that the general requirements and characteristics of the test engine and the test system for hardware-in-the-loop IMA module testing as well as concrete examples are provided in Sect. 5.4.
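To make the command protocol more concrete, the following is a minimal sketch of the generic test application's command interpreter loop, assuming a C binding of the ARINC 653 (APEX) queuing-port services. The header name, the port names, the 256-byte command encoding, and the dispatch_api_call() helper are assumptions for illustration; only the APEX services and types themselves are standard.

#include <apex.h>   /* ARINC 653 C binding; the actual header name is implementation-specific */

#define CMD_MAX_SIZE 256

/* Hypothetical helper: decodes a command message, executes the requested
 * API call or scenario function, and encodes the return code plus output
 * parameters into the same buffer; returns the reply length. */
extern MESSAGE_SIZE_TYPE dispatch_api_call(APEX_BYTE *buf, MESSAGE_SIZE_TYPE len);

void test_application_main(void)
{
    QUEUING_PORT_ID_TYPE cmd_port, ret_port;
    RETURN_CODE_TYPE     rc;
    APEX_BYTE            buf[CMD_MAX_SIZE];
    MESSAGE_SIZE_TYPE    len;

    /* Reserved command and return ports (names are assumptions); in a real
     * partition, port creation takes place during partition initialization. */
    CREATE_QUEUING_PORT("TA_CMD_IN",  CMD_MAX_SIZE, 16, DESTINATION, FIFO, &cmd_port, &rc);
    CREATE_QUEUING_PORT("TA_RET_OUT", CMD_MAX_SIZE, 16, SOURCE,      FIFO, &ret_port, &rc);

    for (;;) {
        /* Block until the external test specification sends the next command;
         * the queuing port stores pre-programmed command sequences. */
        RECEIVE_QUEUING_MESSAGE(cmd_port, INFINITE_TIME_VALUE, buf, &len, &rc);
        if (rc != NO_ERROR)
            continue;
        len = dispatch_api_call(buf, len);
        /* Return code and output parameters go back asynchronously. */
        SEND_QUEUING_MESSAGE(ret_port, buf, len, INFINITE_TIME_VALUE, &rc);
    }
}

Because the reply travels over a separate return port, a test specification emulating a blocking RPC would simply wait for the matching return message before issuing the next command, exactly as discussed above.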

5.2.2 Testing Configured IMA Platforms

When testing configured IMA platforms, the module under test is configured with one of the final configurations, but test applications are still executed in the configured partitions because real applications usually show the requested behavior only under specific conditions, and erroneous behavior should occur only in case of major application errors. For example, memory and timing faults can usually not be triggered explicitly when using real applications, but such behavior is necessary for demonstrating robustness with respect to spatial and temporal partitioning of the IMA module.
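For illustration, a test application can contain a scenario function that deliberately provokes a spatial-partitioning violation, something a real application cannot be asked to do. A minimal sketch in C follows; the target address is a made-up assumption and would in practice be derived from the module's memory configuration tables.

/* Scenario function of a test application: provoke a spatial-partitioning
 * violation so that the module's health monitoring must detect it and react
 * as configured (instead of another partition being corrupted silently). */
static void scenario_spatial_violation(void)
{
    /* Address outside the partition's configured memory area (assumed value). */
    volatile unsigned char *outside = (volatile unsigned char *)0xA0000000u;

    *outside = 0xFFu;  /* expected outcome: memory fault caught by the
                          health monitor, never a successful write */
}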

Generally, the test approach and the test objectives for testing configured IMA platforms match those described for bare module testing in the previous subsection (Sect. 5.2.1). This means the tests are controlled and checked by external test specifications which send commands via reserved I/O links to the test applications, check the command results received via other reserved communication links, and control and monitor the module's other external interfaces. As described above, the test application implements an interpreter for the commands, which either trigger API calls or scenario functions (i.e., pre-defined sequences of API calls).

The main difference is that the test applications may have to be tailored to comply with the requirements of the configuration, resulting in slightly different test applications for each partition. Different kinds of reasons require custom tailoring of the test application, including (but not limited to): (a) specific port names which differ from those expected by the generic test application, (b) configuration restrictions which enforce changes to the command protocol for certain partitions, and (c) configuration restrictions which limit the available memory and require downsizing the test application. Customizations for (a) and (c) are very configuration-specific, but for (b) a set of standard solutions can be provided. For example, the command protocol used for sending commands from the test specifications to the test applications relies on the availability of appropriate ports which are connected to external interfaces, allow message transmission (i.e., are not connected to discrete or analog signals), can convey at least messages of the command message's size, support queuing of commands for pre-programming, etc. However, the real configurations are customized for the real applications and thus may not provide appropriate ports, e.g., an application may only require ports to another partition which is hosted by the same module. If no appropriate port is configured for a partition, problem-specific solutions have to be implemented, e.g., by also allowing commanding via sampling ports or by implementing a specific command relaying protocol which dispatches commands to partitions that have no inter-module communication ports (a sketch of such a relay is given below).
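A command relaying protocol of this kind could, hypothetically, be structured as follows: a partition that does have an external command port forwards commands to its sibling partitions via configured intra-module queuing ports. The one-byte target-partition prefix, the port names, and the APEX C binding are all assumptions of this sketch.

#include <apex.h>   /* same assumed ARINC 653 C binding as in Sect. 5.2.1 */

#define RELAY_MAX_CMD 256
#define NB_TARGETS    3

void relay_partition_main(void)
{
    QUEUING_PORT_ID_TYPE ext_cmd, fwd[NB_TARGETS];
    RETURN_CODE_TYPE     rc;
    APEX_BYTE            buf[RELAY_MAX_CMD];
    MESSAGE_SIZE_TYPE    len;

    /* Intra-module ports towards the partitions without external ports
     * (names assumed; creation happens during partition initialization). */
    static const char *fwd_names[NB_TARGETS] = { "FWD_P2", "FWD_P3", "FWD_P4" };

    CREATE_QUEUING_PORT("RELAY_CMD_IN", RELAY_MAX_CMD, 16, DESTINATION, FIFO, &ext_cmd, &rc);
    for (int i = 0; i < NB_TARGETS; i++)
        CREATE_QUEUING_PORT(fwd_names[i], RELAY_MAX_CMD, 16, SOURCE, FIFO, &fwd[i], &rc);

    for (;;) {
        RECEIVE_QUEUING_MESSAGE(ext_cmd, INFINITE_TIME_VALUE, buf, &len, &rc);
        if (rc != NO_ERROR || len < 1)
            continue;
        /* First byte selects the target partition (assumed encoding);
         * the rest is the command to be interpreted there. */
        unsigned target = (unsigned)buf[0];
        if (target < NB_TARGETS)
            SEND_QUEUING_MESSAGE(fwd[target], buf + 1, len - 1, INFINITE_TIME_VALUE, &rc);
    }
}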

Testing configured IMA platforms is not considered further in this thesis for two reasons: firstly, all real configurations are kept confidential and thus an analysis of how the test applications have to be tailored cannot be discussed in further detail; secondly, the test approach is very similar to testing bare IMA modules, which is further addressed in Chap. 6.

References
The approach for testing single IMA platforms is based on [Pel03], which introduces the usage of generic test applications and generic test specifications and describes the organizational model of a possible IMA system integration test bench. The test objectives as well as the test approach for bare IMA module testing are also addressed in [MTB+04]. For the design and analysis of the user-level communication protocol suggested for bare IMA module testing, [Tan01] (p. 534–540) elaborates on message passing and remote procedure calls (RPC). Testing configured IMA platforms is addressed in more detail in [MP04].

5.3 Testing a Network of IMA Platforms

When regarding inter-system communication, which is one characteristic of IMA architectures, one has to consider intra-module and inter-module communication depending on the assignment of systems to IMA modules. Intra-module communication takes place when the application software of two systems is integrated on the same IMA module but in different partitions (inter-partition communication). Inter-module communication, in contrast, refers to communicating systems which are integrated on different IMA modules and either belong to the same functional domain (intra-domain communication) or to different functional domains (inter-domain communication). This means that for ensuring that all inter-system communication is possible (within the range of the configurations and the module's specification), one has to deal, on the one hand, with the communication behavior of a single platform and, on the other hand, with the communication flow within a network of IMA platforms. Both consider module-internal communication as well as communication via the external interfaces, but the latter also addresses the network itself. Thus, testing the communication flow in a network of IMA modules can also show that the network provides its characteristics under all legal operating conditions – including maximum load.

For testing a network of IMA platforms, the following two aspects have to be considered: firstly, the IMA platforms in a network of IMA modules have to be configured in a consistent way with respect to communication links. For example, the characteristics of a source port and its associated communication link have to comply with the characteristics of the associated destination port. Final configurations of a specific domain or of the whole aircraft are expected to fulfill these requirements, but adequate test configurations can do so, too. Secondly, application aspects have to be considered in addition to the configuration ones, i.e., the configured partitions can either execute real application software or test application software (as long as the application is compliant with the configuration). As a consequence, different combinations of IMA module configurations and application types are possible; those which might be considered in more detail are listed in the following:

Combination 1. The IMA modules of the network are configured by consistent test configurations and the configured partitions execute specific test applications which can simulate all possible communication flow.

Combination 2. The network of IMA modules is configured using the final configurations, but specific test applications are still executed to allow simulation of all possible communication flow.

Combination 3. The final configurations are used for configuring the network's IMA modules and each partition executes its respective real application software.

Theoretically, it is important and possible to test the communication flow with all three combinations. However, combination 1 is impractical and somewhat utopian because it results in too many test cases when considering (a) the number of possible test configuration combinations and (b) the potentially high number of test cases for each such combination. Therefore, only one combination of configurations is regarded – the one which considers the final configurations for each IMA platform (combination 2). With respect to combination 3, the main disadvantage of using real applications is that they imply a pre-defined communication behavior which is only a (minor) part of the possible communication flow. Thus, testing all possible communication flow using real applications is not possible, although the communication behavior which can be shown by real applications should always be contained in those tests which address all possible communication flow.

As a consequence of the above considerations, only combination 2 is addressed in the following. The test objectives and the general approach are described in the following section. A case study in Chap. 7 details the test approach and provides an algorithm for test data generation.

5.3.1 Testing a Network of Configured IMA Platforms with Test Applications

When testing a network of IMA modules, the main focus is on testing the communication flow within each IMA module and between the modules with respect to what is possible when using the final configurations. Thereby, the possible communication flow depends on different configuration aspects as well as on specific module characteristics, which are (a data-structure sketch summarizing them is given after the following discussion):

• the assignment of application system partitions to IMA platforms of the network,

• the port configurations defining the type of each port as well as its attributes like, for example, the maximum message size or the queue length,

• the I/O mapping specifying which source and destination ports are connected,

• the scheduling configurations defining when and for how long each partition is scheduled, and

• the module and network performance information which defines the duration of API calls (i.e., the latencies) and the message transmission times over the network.

In particular, the partition assignment to IMA modules defines which partitions can theoretically access the network simultaneously (those which are assigned to different modules) or only sequentially. The scheduling also restricts when a specific port can be addressed from an application because each port belongs to exactly one partition and can thus only be used for message reception or transmission while the respective partition is scheduled.2 Thereby, the port characteristics additionally restrict when successful writing into or reading from a port is not possible because, for example, the port is full or no message has been sent to the respective port yet. Furthermore, the duration of the API calls restricts how fast different ports on the same single-CPU module can be triggered, while the network transmission time restricts how fast it is possible to read a message from a destination port after it has been sent via a source port of another module. Consequently, all of the above-mentioned aspects have to be considered when defining the test cases for normal behavior testing (i.e., those tests which are expected to successfully perform the respective sequence of communication API calls) and for robustness testing (i.e., when performing a sequence of API calls which are expected to fail). In addition, these aspects have to be regarded when defining the test objectives.
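The following is a minimal sketch of the information listed above, as a test data generator might hold it after parsing the (confidential, hence unspecified) configuration tables; all type and field names are hypothetical.

#include <stdint.h>

typedef enum { PORT_SAMPLING, PORT_QUEUING }   port_type_t;
typedef enum { PORT_SOURCE, PORT_DESTINATION } port_dir_t;

/* One configured communication port and its place in the I/O mapping. */
typedef struct {
    const char *name;           /* configured port name */
    uint16_t    module;         /* hosting IMA module */
    uint16_t    partition;      /* owning partition on that module */
    port_type_t type;
    port_dir_t  direction;
    uint32_t    max_msg_size;   /* bytes */
    uint32_t    queue_length;   /* queuing ports only */
    const char *linked_to;      /* connected destination port, or NULL if
                                   the port leads to non-IMA equipment */
} port_cfg_t;

/* One partition window of a module's schedule within the major time frame. */
typedef struct {
    uint16_t partition;
    uint64_t window_start_us;
    uint64_t window_length_us;
} sched_window_t;

/* Worst-case performance figures used to compute trigger delays. */
typedef struct {
    uint64_t api_call_wcet_us;    /* worst-case latency of one port access */
    uint64_t net_transmission_us; /* worst-case inter-module transfer time */
} perf_info_t;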

Test Objectives. The objectives when testing a network of configured IMA modules (i.e., a network of IMA modules with their final configurations) are:

• All configured ports can be accessed, i.e., created during initialization and afterwards used in communication API calls.

• The communication flow from one port to the other is as defined in the I/O mappings and configuration tables, and thus inter-system and intra-system communication (including intra- and inter-module communication) is provided for the application systems.

• Simultaneous network access from different modules is possible as configured.

• All interleaving of communication flow within each module and for the whole network is possible.

• Latencies within the module for executing the respective API calls and for transmitting the written or received data via the respective I/O driver do not exceed the guaranteed worst-case latencies.

• Network transmission times do not exceed the specified worst-case latencies.

All these test objectives shall be demonstrated for nominal operating conditions with average utilization of the communication interfaces as well as for worst-case scenarios where the communication load on the network or on specific types of interfaces is increased such that real applications would show this kind of behavior only in exceptional situations or failure modes.

2 However, this does not mean that, for example, the interface driver may not receive messages for the particular port while the partition is not scheduled.


Test Approach. For this communication flow test approach, the network under test consists of configured IMA modules with communication links interconnecting the partitions. The respective inter-partition communication ports and links are configured in the module configurations, and the communication technique used depends on the assignment of the communication partners to the modules: if the partitions are assigned to the same module, they use intra-module communication means, i.e., typically RAM communication. If they are assigned to different modules, inter-module communication techniques like AFDX or ARINC 429 are used (see Sect. 1.6). However, for accessing the ports, the same API services are used.

In addition to the intra- and inter-module communication, the module configurations may contain the systems' interfaces to peripheral devices and non-IMA controllers. Since equipment and controllers other than IMA modules are not part of the network under test, each module has connected ports (i.e., those which are used for intra- and inter-module communication) as well as unconnected ones (i.e., those which are used to communicate with all non-IMA module equipment). It is the aim of the approach to consider all communication ports, and hence the unconnected ports have to be connected to stub interfaces of the test system such that all ports of the IMA modules in the network are connected.

A two-part test approach has been chosen which consists of (1) an offline test data generation part and (2) test applications and test specifications executing the generated test cases. The test interfaces are the pairs of connected communication ports, the unconnected ports, and the external interfaces of the unconnected ports. All communication ports can be accessed by the application partition to which they belong; all external interfaces are accessed by specific test specifications running on the test system. During test execution, both trigger their test interfaces by sending or receiving messages (depending on the respective port's direction) at predefined points of time. For determining when to trigger which port, an approach has been chosen which controls the network's sequence of communication triggers and their exact time of occurrence by means of a so-called communication schedule which is interpreted for test execution. Each communication schedule is a timed trace of events which expresses the (communication) behavior of all IMA modules in the network. Only three types of events are used in the communication schedules: events for sending a message into a port, events for receiving a message from a port, and events for explicitly doing nothing (i.e., waiting a specific amount of time). When the test applications or test specifications interpret the communication schedules, these events are mapped to adequate function calls, i.e., to ARINC 653 API calls triggered by the test applications or to similar function calls generating or receiving messages triggered by the test specifications (a sketch of such a schedule is given at the end of this subsection).

Each communication schedule can be regarded as one test case. All test cases, i.e., all possible communication schedules, are generated and prepared offline by a test data generation algorithm (see part 1 of the test approach) which ensures that all combinations of simultaneous and sequential communication with respect to interleaving and temporal variance are prepared as separate test cases and together cover the aforementioned test objectives.
For generating the communication schedules, the algorithm examines the modules' configuration tables and the platform and network characteristics and extracts (a) the defined ports as well as the I/O mapping of the network, (b) the port characteristics like port type, port direction, maximum message size, and queue length, (c) the scheduling characteristics of each module, and (d) module and network performance information (i.e., the worst-case hardware and software latencies and the worst-case network transmission times). All this information is required to determine which ports can be accessed simultaneously (only those which are assigned to different modules), what the maximum and minimum delays between port triggers are (depending on the scheduling of the associated partitions and the respective ports' latencies), how long one has to wait after starting to send a message before it can be received at the destination port (depending on the ports' latencies and the network transmission time), and when sending or receiving a message is not possible because the queue is full or empty or no message has yet been sent into a destination sampling port (depending on the port characteristics and the previous communication triggers). Additionally, for varying the communication triggers with respect to message size and message type, these port characteristics can also be analyzed and the communication schedules can then consider these factors.
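As an illustration, a communication schedule might be represented as a timed trace like the following sketch in C; the event encoding, port names, and timing values are invented for illustration, but the three event types are exactly those named above.

#include <stdint.h>
#include <stddef.h>

typedef enum { EV_SEND, EV_RECEIVE, EV_WAIT } event_kind_t;

/* One entry of a communication schedule: what to do, when, and whether the
 * port access is expected to succeed (normal behavior) or fail (robustness). */
typedef struct {
    uint64_t     at_us;      /* trigger time relative to schedule start */
    event_kind_t kind;
    const char  *port;       /* port to trigger (NULL for EV_WAIT) */
    uint32_t     msg_size;   /* payload size in bytes for EV_SEND */
    int          expect_ok;  /* 1 = expected to succeed, 0 = expected to fail */
} sched_event_t;

/* One test case = one schedule. This fragment first performs a robustness
 * step (reading a still-empty queuing port), then exercises an inter-module
 * transfer, waiting long enough to cover the worst-case transmission time. */
static const sched_event_t schedule_example[] = {
    {    0, EV_RECEIVE, "NAV_DATA_IN",  0,  0 },  /* queue still empty: must fail */
    { 1000, EV_SEND,    "NAV_DATA_OUT", 64, 1 },  /* source port on module 1      */
    { 1001, EV_WAIT,    NULL,           0,  1 },  /* explicitly do nothing        */
    { 3500, EV_RECEIVE, "NAV_DATA_IN",  64, 1 },  /* destination port on module 2 */
};

During execution, a test application would map EV_SEND and EV_RECEIVE to the ARINC 653 send and receive services for the respective port type, while a test specification on the test engine would map them to its own interface functions.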

Chapter 7 addresses testing a network of IMA platforms and describes in more detail the test approach and particularly the test data generation algorithm.

References
The first ideas of the approach for communication flow testing in a network of configured IMA modules have been presented in [Tsi04] and are detailed in this thesis in Chap. 7.

5.4 Test Bench for IMA Platform Testing

A test bench provides the means for test preparation, test execution, and test evaluation. For testing single IMA platforms or a network of them, the integration test bench includes

• one or more IMA platforms – the systems under test –,

• a rack with ARINC 600-compliant trays to install the IMA platforms and connect them to the test engine, power supply for the IMA platforms, and an AFDX switch to connect several IMA platforms,

• break-out boxes to access the external interfaces of the IMA platforms in order to monitor, switch, or patch them,

• bus monitoring tools for AFDX, CAN bus, ARINC 429, and ARINC 661 which support data analysis as well as generation of input data,

• interface monitoring tools for discrete and analog I/O which also allow simulating (potentially faulty) behavior of peripheral devices on the monitored interfaces,

• a data loading tool to configure the IMA platforms and install application software,

• a test engine connected to the external interfaces of the IMA platforms, and

• a test system to perform the specified test procedures and support the evaluation and documentation of their results.

For most hardware equipment of the test bench (including the rack with the trays, power supplies, and AFDX switches as well as the break-out boxes), different commercial suppliers are available. Also, several monitoring tools and data loading tools can be obtained from the market. However, not all of these tools provide control interfaces that allow the test system to automatically control their behavior (e.g., for monitoring tools, to start and stop recording or data generation) and, therefore, the test system sometimes also integrates basic monitoring or analysis functionality.

The situation is different for test engines: the test engines provide the computing platform and the interfaces for the test system, which shall perform test execution in (hard) real-time, preferably with on-the-fly test data generation and evaluation of the responses from the system under test. Therefore, hardware and operating system support is required in order to guarantee precise timing of generated test data and measured responses. This means that test engine and test system can be considered separately but are tightly coupled because each has to consider the characteristics and requirements of the other. However, each test engine may be used by different test systems and each test system can run on different test engine types, which is especially important when considering that the same test system (and partially even the same tests) shall be used for different test levels.


In the following, the general requirements on a test engine and a test system used for IMA platform testing are discussed. Both sections also briefly introduce concrete examples which are used for testing bare IMA modules (see case study in Chap. 6) and which could also be used for testing a network of IMA modules (see Chap. 7). There are also many other test benches used for avionics testing (e.g., [Tec04]) which are, however, not addressed in this thesis.

5.4.1 Test Engine

When selecting a test engine for IMA platform testing, several general requirements have to be considered which are mainly independent of the test system but depend on the type of tests to be performed.

Test Engine Interfaces. For testing a single IMA module or a network of IMA modules as described in the previous sections, it is evident that the test engine has to provide interfaces which can be used to interconnect with the external interfaces of one or more IMA modules. Thus, depending on the concrete types of IMA modules to be tested, interfaces for AFDX, ARINC 429, CAN, discrete input and output, analog input and output, and some special I/Os should be provided. The number of interfaces of each type depends, on the one hand, on the number of interfaces provided by the modules under test and, on the other hand, on the type of test cases to be executed (particularly, how many different interfaces shall be triggered and/or monitored simultaneously). Preferably, the test engine provides at least the same types and number of interfaces as the module or network under test to ensure that every interface access – either correct or erroneous – can be monitored.

Computing Power. For avionics testing, all tests are usually hard real-time tests, meaning that the test engine has to provide enough computing power (a) to run the interface drivers in hard real-time and (b) to sustain the hard real-time execution of the test specifications. Usually, this requires that at least a multi-CPU computer system and an adequate real-time operating system are used.

Further Hardware Characteristics. When determining the dimensions of the test engine and the computing power, it also has to be considered that, for providing many different hardware interfaces, usually many different interface cards are required which have to be integrated into the test engine. In addition to the computing power and the number of interface card slots, it is required that the test engine provides enough memory for recording (and perhaps even archiving) the test logs.

The following subsection briefly introduces one test engine – the hard real-time test engine cluster – which complies with these requirements.

5.4.1.1 Example: Hard Real-Time Test Engine Cluster

As a result of the above considerations and general requirements, the concept of a hard real-time test engine has been developed at the research group Operating Systems and Distributed Systems at Universität Bremen (department for Mathematics / Computer Science) and at Verified Systems International GmbH. This research has partially been carried out within the European research project VICTORIA and the student project HaRTLinC. The concept considers the test engine's hardware and its operating system.


Hardware of the Test Engine Cluster. For the test engine's hardware, a cluster-based architecture has been chosen consisting of several standard multi-processor PCs which each provide a Myrinet interface for a fast interconnection of the cluster nodes via a Myrinet switch. The cluster size (i.e., the number of cluster node PCs) is scalable and thus ensures that enough computing power and sufficient interface card slots can be provided. Furthermore, a concept for cluster configuration and interface distribution provides the means for arbitrary interface board distribution for the purpose of load balancing and avoiding bottlenecks and performance problems.

Figure 5.2 shows a test engine cluster consisting of three cluster nodes which each provide a Myrinet interface for cluster node interconnection. Moreover, each cluster node has five available PCI card slots to be used by arbitrary interface cards. Using these interfaces, the test engine provides means to connect to all interfaces of the system under test. For special I/O where no PCI interface cards are available but VME bus cards exist, a PCI/VME bridge can be used as a reflective memory between the cluster nodes and a specific VME bus computer providing the interface card for the special I/O.

Figure 5.2: Cluster-based test engine providing different interfaces to be connected to the interfaces of the system under test

Operating System of the Test Engine Cluster. With respect to the test engine's operating system, a standard Linux is used with specific kernel patches and a Myrinet communication library. The modifications of the standard Linux kernel ensure the hard real-time capabilities of the cluster. The kernel patch for CPU reservation and interrupt routing allows some CPUs of each multi-processor PC to be reserved for exclusive use by the test system and thus directs all normal operating system tasks including interrupt handling to the unreserved CPUs of each PC. The Myrinet communication library provides Myrinet ring buffers and time synchronization, both of which are required for hard real-time communication between the cluster nodes.
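The CPU reservation concept can be approximated on an unpatched mainline Linux system with the standard affinity interface: operating system tasks are kept away from a set of CPUs (e.g., via the isolcpus boot parameter), and the test system then pins its hard real-time threads to the reserved CPUs. The following minimal sketch shows only the pinning step; it illustrates the idea and does not reproduce the HaRTLinC kernel patch (which additionally redirects interrupt handling).

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling thread to a CPU that has been reserved for the test
     * system (e.g., excluded from normal scheduling via "isolcpus=2,3"). */
    static int pin_to_reserved_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        return sched_setaffinity(0, sizeof(set), &set);  /* 0 = calling thread */
    }

    int main(void)
    {
        if (pin_to_reserved_cpu(2) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* From here on, the thread competes only with tasks explicitly
         * placed on CPU 2, which approximates exclusive CPU reservation. */
        return 0;
    }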

Costs and Extensibility. Common test engines are based on VME or VXI platforms or other specialized hardware which, on the one hand, leads to high hardware costs and, on the other hand, often causes performance problems due to limited VME and VXI bus capacities. In contrast, the cluster-based test engine offers the opportunity to utilize high-performance standard PC hardware and the services of the Linux operating system, which (a) helps to reduce test engine costs and (b) allows the cluster to be extended cost-effectively with additional cluster nodes when further computing power, more bus capacity, or additional interfaces are required.


5.4.2 Test System

When considering a test system for IMA module testing, several general requirements have to be met which mainly depend on the type of system under test (SUT) and the type of tests to be performed. With respect to the former, it has to be taken into account that the test system shall perform functional testing of a highly configurable, hard real-time, and safety-critical reactive system with several interfaces. With respect to the type of tests, it has to be considered that IMA platform testing (as described in the previous sections) is hardware-in-the-loop testing but may start with tests on a platform simulator (e.g., a software simulation of the target hardware but with the real operating system software). Such tests are executed either as long as the final hardware is not available or in order to focus on functional testing of the operating system software (i.e., for software integration testing). In addition, for IMA module testing, several tools have to interoperate in order to configure the modules and to load the application software. This leads to the following requirements:

• It shall easily be possible to re-execute the tests for regression testing or for differently configured IMA platforms.

• It shall be possible to give simultaneous test inputs to the system under test and cope with simultaneous outputs.

• It shall be possible to supply inputs with a high frequency.
• It shall be possible to perform precise time measurements which allow inputs to be given at specific points in time as well as the observed behavior to be evaluated with respect to timing.

• It shall easily be possible to evaluate the SUT outputs with respect to the given test inputs.
• The test system shall support easy re-use of test cases for different interfaces to the SUT and for different testing levels.

• It shall be possible to interoperate with the tool chain for configuring and loading an IMA module.

• It shall be possible to qualify the test system to comply with the relevant avionics standards (e.g., RTCA/DO-178B).

• Preferably, the test system should be able to consider the outputs of the system under test when determining the next test inputs in order to produce appropriate test data.

• Preferably, the test system should evaluate the SUT outputs on-the-fly in order to stop long test execution runs in case of major SUT errors (e.g., during the initialization phase).

Analyzing these requirements shows that a test system is required that provides a high degree of automated test execution with on-the-fly test data generation (also for complex input scenarios) and on-the-fly test evaluation, both of which can be performed within bounded time. Furthermore, adequate test specification means have to be supported.

In general, several test systems are available supporting testing based on formal methods (see references in [Pel03] and [Mey01], p. 19ff). For the test suite for bare IMA module testing which is described in Chap. 6, the test tool RT-Tester (sometimes also referred to as RT-Tester 5) has been chosen, which was the most recent version at that time. It is introduced in the next subsection. However, for testing a network of IMA modules as addressed in Chap. 7, the latest RT-Tester version (RT-Tester 6) appears to be more appropriate. It is briefly addressed in Sect. 5.4.2.2.


5.4.2.1 Test Tool RT-Tester

The RT-Tester test tool (version 5) is a generic system for automated hardware-in-the-loop and software integration tests for embedded real-time systems. It has been developed by Verified Systems International GmbH in co-operation with the research group Operating Systems and Distributed Systems at Universität Bremen (department for Mathematics / Computer Science). For tests of several controllers used in the Airbus aircraft family, the RT-Tester has been qualified according to RTCA/DO-178B which is the applicable standard for airborne software-based systems ([PZ03]). Several publications (see references at the end of the section) describe in detail its structure and the methodology elaborated for specification-based testing with the RT-Tester test tool. The basic concepts as required for this thesis are summarized in the following.

The RT-Tester is structured into four subsystems as depicted in Fig. 5.3. Its core component – the Real-Time Test Subsystem – is responsible for on-the-fly test data generation, execution of tests in real time, and on-the-fly test evaluation. For this purpose, it uses compiled test specifications which have been developed and validated by the Test Specification Subsystem. Various formalisms are possible but, most commonly, decomposed Timed CSP specifications are used which have been described in Sect. 2.3.2.3. Using the FDR tool, such specifications can be transformed into transition systems whose binary representation can then be interpreted in real-time. Testing is further supported by the Test Visualization Subsystem (which provides several means to visualize the test logs during test execution) and the Test Management Front-End (which supports test specification development, test starting and stopping, and test documentation via interfaces which can also be accessed remotely). The components of the Real-Time Test Subsystem – the Abstract Machine Layer (AML), the Communication Control Layer (CCL), and the Interface Module Layer (IFML) – are addressed in the following.

Figure 5.3: Structure of the RT-Tester components

Testing with Decomposed Timed CSP Test Specifications. Each test procedure is defined by a collection of test specifications which are executed in parallel during the test. For testing non-terminating systems, the decomposed Timed CSP specifications describe a network of state machines with timing conditions which are executed in parallel to allow simultaneous interactions with the SUT. The test specifications are used (a) to model the behavior of the environment to be simulated by the RT-Tester during the test execution, (b) to generate the test data to be passed from the testing environment to the system under test, and (c) to describe the expected behavior of the SUT in the test case. The inputs to be exercised on the system under test (SUT) and the expected SUT responses are described in an abstract way (without referring to concrete data formats). The mapping to concrete data (and vice versa) is performed by so-called Interface Modules (IFMs) in the Interface Module Layer. Using this interface abstraction, the test specifications can easily be re-used for different types of hardware interfaces or testing levels by using different interface modules as adapting devices between the test system and the SUT. For test execution, the compiled test specifications are each executed by so-called Abstract Machines (AMs) that generate abstract CSP events to be mapped by the interface modules. The events generated by the AMs or the IFMs are relayed automatically by the Communication Control Layer (CCL) to all other AMs or IFMs which have declared an input interface containing this event. For this purpose, the CCL uses mapping tables which are automatically produced by a compiler that interprets the interface definitions of the test specifications (i.e., the input and output CSP channel declarations) and the IFMs.

As described above, the compiled test specifications are executed in parallel and in real-time, each by a separate abstract machine. Each test specification interacts with its environment, i.e., it synchronizes its behavior with other test specifications and, in particular, with the system under test. The synchronization between test specifications and with the system under test is non-blocking because most SUTs use non-blocking interfaces.3 For practical testing, this means that a test specification generates events for the SUT whenever indicated by the test design. Then, either the same or another test specification waits for the expected SUT output event until a timer expires which has been set such that the test specification waits the maximum period of time in which the respective SUT output is not too late (according to the SUT's specifications). Whenever the correct SUT output event is not received within this time period or an unexpected (i.e., presumably incorrect) SUT output event is received, a warning or an error event is produced automatically by the abstract machine.

Test execution with the RT-Tester can be performed in a completely automatic way because all relevant information about the test specifications to be executed, the event mappings to be utilized, the interface modules, etc. is contained in a test procedure configuration file. During test execution, the abstract machines execute the compiled test specifications, i.e., each abstract machine generates test data on-the-fly by using the various specifications and adapts (or even stops) the generation according to the SUT responses and the test coverage strategy. The test evaluation is also performed by the AMs on-the-fly during test execution, i.e., they check the correctness of the SUT outputs with respect to data values, sequencing of inputs and outputs, and timing.

If hard real-time test data generation is not required, it is also possible to combine the fully automated test generation with manual interactions of the tester (e.g., manually given test inputs). In the test suite for bare IMA module testing, this mechanism is used whenever constraints (or limitations) of the test bench require manual interaction of the tester such as loading another module configuration or reading the output of a measuring instrument whose data are not automatically transmitted to the RT-Tester.

For tracking the requirements coverage achieved during a test execution, the RT-Tester uses the concept of requirement tags. Requirement tags defined in the test procedure design are represented as auxiliary events in the test specifications (named like the requirement tag) which are generated each time the corresponding test cases are checked during a test execution. Thus, the requirement tags are part of the recorded test execution logs and the automatic test documentation can list the requirement tags that have or have not been reached while executing the test procedure.

For documenting the test execution, the RT-Tester monitors the communication behavior of all abstract machines and the SUT and saves the test execution logs for documentation as well as for offline evaluation. To facilitate the manual detection of irregular behavior, the recorded data can be visualized by means of the Test Visualization Subsystem. More complex visualizations (e.g., by 3D animations in virtual reality) are also possible depending on the availability of adequate tools.

3 In contrast to the non-blocking synchronization of different test specifications (i.e., of their main CSP processes), parallel CSP processes belonging to the same test specification use blocking synchronization among each other.
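The timer-based evaluation scheme described above can be illustrated with a small sketch. The code below is not RT-Tester code; it is a hypothetical C fragment (with a simulated clock and event source) showing the principle of waiting for an expected SUT event until a deadline derived from the SUT's timing specification, and raising an error otherwise.

    #include <stdio.h>

    typedef enum { EV_NONE, EV_EXPECTED, EV_UNEXPECTED } event_t;

    static double sim_time = 0.0;

    /* Simulated test execution clock (seconds). */
    static double now(void) { return sim_time; }

    /* Simulated, non-blocking event source standing in for an interface
     * module: in this scenario the SUT answers at t = 0.3 s. */
    static event_t poll_sut_event(void)
    {
        sim_time += 0.1;                 /* polling advances the clock */
        return (sim_time >= 0.3) ? EV_EXPECTED : EV_NONE;
    }

    /* Wait for the expected SUT output until 'deadline' – the latest point
     * in time at which the output is not yet too late according to the
     * SUT's specification – and classify the observed behavior. */
    static void check_sut_output(double deadline)
    {
        while (now() < deadline) {
            event_t ev = poll_sut_event();
            if (ev == EV_EXPECTED)   { printf("PASS: output in time\n"); return; }
            if (ev == EV_UNEXPECTED) { printf("ERROR: unexpected output\n"); return; }
        }
        printf("ERROR: expected output missing at deadline\n");
    }

    int main(void) { check_sut_output(0.5); return 0; }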


Interfaces to Other Tools. The RT-Tester provides several means to realize interfaces to other tools in the test bench; which one is chosen depends on the purpose of the interface and on the interface provided by the other tool.

• For interaction with a data loader or analyzing tools, a specific interface module can be developed. For example, whenever the test specifications want to indicate that the other tool shall perform a specific task (e.g., the data loading tool to load another module configuration), they generate an abstract CSP event which is then mapped by the specific IFM to concrete triggers such as command-line function calls (see the sketch after this list). Similarly, to indicate the completion of the other tool's task and to indicate errors, the IFM maps the tool outputs to abstract CSP events. However, due to limitations in the interfaces of the data loader and some of the analyzers, the test suite for bare module testing (see Chap. 6) does not include such tailored IFMs for the data loader.

• For interaction with simulators, the RT-Tester can integrate pre-compiled simulations in various formats (including but not limited to C code and test scripts in CSV format).

• For generic interfaces to other tools (and generally for a better understanding of the event names in the test specifications), the events and signals as defined in a system database can be integrated. One option for their integration is to parse their global event and signal descriptions and define adequate CSP channels. An IFM can then transform CSP events to signals and vice versa based on a mapping table which has been generated while parsing the system database.

• For visualization of the test execution, the events monitored during test execution can be mapped on-the-fly to respective events and signals in the visualization model. A common signal and event database can help to generate the mapping table which is required by the specific interface module.
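As an illustration of the first item, the following hypothetical C fragment sketches how an IFM might map an abstract "load configuration" event to a command-line invocation of a data loading tool and map the tool's exit status back to abstract completion or error events. The tool name and the event identifiers are invented for the example.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical abstract events exchanged with the test specifications. */
    enum { EV_LOAD_CONFIG = 1, EV_LOAD_DONE = 2, EV_LOAD_ERROR = 3 };

    /* Map an abstract CSP-level event to a concrete command-line call of a
     * (fictitious) data loading tool and translate the result back. */
    static int ifm_handle_event(int event, int config_id)
    {
        if (event == EV_LOAD_CONFIG) {
            char cmd[128];
            snprintf(cmd, sizeof(cmd), "dataloader --config %04d", config_id);
            int status = system(cmd);          /* trigger the external tool */
            return (status == 0) ? EV_LOAD_DONE : EV_LOAD_ERROR;
        }
        return EV_LOAD_ERROR;                  /* unknown command event */
    }

    int main(void)
    {
        int reply = ifm_handle_event(EV_LOAD_CONFIG, 17);
        printf("abstract reply event: %d\n", reply);
        return 0;
    }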

RT-Tester running on a Test Engine Cluster. For executing the test specifications in real-time and for accessing all interfaces of an IMA module, the RT-Tester has to be executed on an adequate test engine such as the hard real-time test engine cluster (Sect. 5.4.1.1). Thus, the interface modules as well as the abstract machines can be distributed to the different cluster nodes. This is depicted in Fig. 5.4 for a test engine cluster with two cluster nodes which each provide all types of interfaces.

Figure 5.4: RT-Tester Test Engine Cluster connecting to the interfaces of the system under test

5.4.2.2 Test Tool RT-Tester 6

The test tool RT-Tester 6 is the latest version of the RT-Tester. It has been developed by Verified Systems International GmbH in co-operation with the research group Operating Systems and Distributed Systems at Universität Bremen (department for Mathematics / Computer Science). Similar to the RT-Tester 5, the core component of the RT-Tester 6 is structured into an abstract machine layer executing the test specifications, an interface module layer providing the mapping between abstract and concrete data, and a communication control layer (CCL) for data exchange between abstract machines and with the interface modules. In contrast to the RT-Tester 5, the core test specification formalism of the RT-Tester 6 is the real-time test language RTTL which has been developed by Verified Systems International GmbH. It is an embedded language using C/C++ as a host language that is equipped with specific test support commands for generating input data and checking SUT outputs against expected results. Additionally, RTTL provides means for communicating test-related data that are based on a channel and port concept. The channels represent an asynchronous transmission medium that can be accessed by every abstract machine or interface



module that declares an input or output port on the channel. Within the abstract machines, output ports are mainly used to send test data to the system under test whereas the input ports are essentially required to receive SUT outputs. These outputs can then be evaluated on-the-fly with respect to the given input data. To facilitate checking of the SUT responses, it is possible to define filters on the input ports which define pre-determined actions, evaluations, or even error handling whenever specific SUT responses (that match the filter conditions) are received.
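The port and filter concept can be sketched as follows. The fragment below is plain C with invented names, not RTTL syntax (which is documented in [RT-Tester 6.0]); it merely illustrates how a filter registered on an input port could trigger a pre-determined check whenever a matching SUT response arrives.

    #include <stdio.h>

    /* Hypothetical message as received on an RTTL-like input port. */
    typedef struct { int id; double value; } msg_t;

    typedef int  (*filter_cond_t)(const msg_t *);   /* match predicate */
    typedef void (*filter_action_t)(const msg_t *); /* triggered check */

    typedef struct {
        filter_cond_t   cond;
        filter_action_t action;
    } port_filter_t;

    /* Deliver a message to an input port: every filter whose condition
     * matches executes its pre-determined evaluation or error handling. */
    static void port_deliver(const msg_t *m, port_filter_t *f, int nfilters)
    {
        for (int i = 0; i < nfilters; i++)
            if (f[i].cond(m))
                f[i].action(m);
    }

    static int  is_altitude(const msg_t *m) { return m->id == 42; }
    static void check_altitude(const msg_t *m)
    {
        if (m->value < 0.0 || m->value > 45000.0)
            printf("ERROR: altitude %f out of range\n", m->value);
    }

    int main(void)
    {
        port_filter_t filters[] = { { is_altitude, check_altitude } };
        msg_t sut_response = { 42, 50000.0 };   /* simulated SUT output */
        port_deliver(&sut_response, filters, 1);
        return 0;
    }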


In addition to using RTTL for defining test specifications, the RT-Tester 6 test tool allows simulations, stimulators, and checkers to be defined in various high-level specification formalisms such as decision tables (for combinational systems) or HybridUML (for hybrid systems with discrete and time-continuous behavior, see Sect. 2.3.2.3). For specific application domains (e.g., avionics or railway), customized formalisms can also be supported. Particularly for hybrid systems and module testing of C/C++ programs, recent developments concerning static analysis and automated generation of test cases (cf. [BFPT06] and [PLK07]) have been exploited in the RT-Tester 6.2. With this enhancement, the test tool additionally supports test case generation for hybrid systems, structural module testing of C/C++ programs, and module testing against functional specifications containing complex constraints as pre- and post-conditions and intermediate assertions. For automated testing, the SUT and/or its specification are at first transformed into a so-called intermediate model representation which is independent of the SUT's code or the specification's syntax. This abstracted representation is then used by the core components of the test case generation tool (i.e., by the symbolic test case generator, the constraint generator, and the constraint solver) which select suitable test cases and calculate the specific set or sequence of test input data such that the given constraints are satisfied. The particular selection of test cases can be done using various test strategies (e.g., the structural coverage strategy modified condition/decision coverage or the randomized coverage strategy uniform statistical test case distribution). However, these latest enhancements (which are incorporated into the RT-Tester 6.2) have been made available while finalizing this thesis and have therefore not been considered when developing the test suite for bare IMA module testing and the approach for testing a network of IMA platforms. Nevertheless, this recent work can be leveraged for some optimizations and may also be considered in future work (see Chap. 10).

One of the main advantages of the RT-Tester 6 test tool (in particular when compared to the RT-Tester 5 test system) is its handling of large data structures, which is much easier using the means of RTTL or other high-level specification formalisms than using CSP. For this reason, the RT-Tester 6 appears to be the more appropriate test system for testing the communication flow in a network of IMA modules. Following the approach described in Chap. 7, the test evaluation is performed by the test checker specifications based on large test execution logs compiled by each test application.

References
The test bench for IMA module testing has been addressed by several publications. In particular, the architecture of the test engine as a hard real-time test engine cluster has been described (among others) in [Pel03] and [PZ03]. The descriptions incorporate the results of the student project HaRTLinC – Hard Real-Time Linux Clusters – and details with respect to the Linux kernel patch for CPU reservation and interrupt routing can be found in [HaRTLinC] (for Linux kernel 2.4) and in [Efk05] (for Linux kernel 2.6).

The requirements on a test system to be used for hard real-time testing are discussed in [Pel02a] and further detailed with respect to interface abstraction in [PT02] and [Pel02b]. Automated test data generation and evaluation is addressed by several publications including (but not limited to) [Pel02a], [DMP00], and [Pel96].

The structure of the RT-Tester 5 test tool and the methodology elaborated for specification-based testing with the RT-Tester is described in detail in several publications, for example, [RT-Tester 5], [Pel02b], [Pel02a], [Mey01], [DS04], and [PT02]. The details of the RT-Tester 6 are provided in [RT-Tester 6.0] and [PZ03]. The recently developed mechanisms for automated test generation and static analysis which are embedded in the RT-Tester 6.2 ([RT-Tester 6.2]) are introduced in [BFPT06] and [PLK07]. The test strategies mentioned – namely modified condition/decision coverage and uniform statistical test case distribution – are described in [DO-178B] (p. 83) and [DGG04], respectively.

For visualizing the test execution in a virtual reality model, [BT02a] and [BT02b] describe further details although the described method is primarily intended for intuitively defining test cases in a virtual reality model of the system under test's functional interfaces.


Part III

Case Studies


Chapter 6

Automated Bare IMA Platform Testing

The test approach for systems and domains using integrated modular avionics has been described in the previous chapter (in particular Sect. 5.1). Testing starts at the equipment level, which includes functional testing of the IMA platforms themselves, and results finally in a fully integrated and tested aircraft. With respect to the equipment-level functional IMA module tests, the tests to be performed have to consider the full range of services and performance characteristics for all possible configurations because it shall be possible to use the IMA platforms for different purposes and for many applications of different criticality. The approach outlined in Sect. 5.2.1 uses test applications instead of real avionics applications and a wide range of possible (i.e., legal) test configurations. The test applications together cover all possible behaviors and are not limited to the real applications' behavior. The set of legal test configurations considers all combinations of configuration parameters – including the final module configurations. A conventional approach is that each test procedure consists of one of the test configurations and a set of tailored test applications (one test application for each configured partition) and thus tests one particular functionality for the chosen module configuration. If another test procedure tests the same functionality with another (potentially only slightly changed) test configuration, the set of test applications is adapted as required. Considering, on the one hand, the number of possible configuration parameters and the possible range of each such parameter and, on the other hand, the number of different API services particularly when considering different parameters, it is obvious that the number of required test procedures is very large and would require an enormous effort for manually generating each test application and each configuration. Furthermore, it becomes clear that manual testing is not an option because it would be too time-consuming compared to automated test execution.

The approach introduced in detail in this chapter aims to overcome some of the difficulties mentioned above:

• By using a generic test application (TA) which implements an interpreter for commands, it is not necessary to implement many different tailored applications. Furthermore, the generic test applications can show very large behavioral variations. In fact, the real avionics applications are refinements of the generic test application since real applications show less variation of behavior (i.e., are more restrictive).

• By controlling the generic test applications from external test specifications using dedicated command links, it is possible to use the expressiveness of specific test languages or formal methods. These external test specifications can also check the correct behavior with respect to the current status of the whole IMA platform, i.e., regarding all avionics partitions. Unlike the test applications which can only view the memory, scheduling, and I/O communication of the respective partition, external test specifications can evaluate the replies of all test applications and the behavior at the interfaces of the IMA platform. External test specifications are thus an important step towards automated test evaluation.


• By using test specification templates, it is possible to automate the generation of many test specifications by instantiating the template for a set of adequate configurations. This helps to reduce the number of manually generated test specifications.

• By using a rule-based configuration data generator, it is possible to vary several parameters in an automated and controlled way without explicitly generating the configuration variations manually.

• By providing a higher degree of automation for test preparation, test execution, and test evaluation, it is possible to increase the probability of error detection and the efficiency of testing. Automated test execution and evaluation also helps to reduce the costs for testing (in particular for regression testing).

Figure 6.1 shows the separation between the test application and the external test specifications. An instance of the generic test application is executed in each configured avionics partition and the command interpreter behavior is available in each process. The behavior is controlled by external test control specifications running on the test engine which send commands to the test applications via reserved communication links. The commands trigger API calls with specific parameters or pre-programmed sequences of API calls, so-called scenarios. After receiving the command, the TA executes the desired API call or scenario and immediately sends the return values back to the external test checker specification which checks whether the results are correct with respect to the current status of the IMA platform and partition. Each test application (i.e., each partition) can be addressed separately, which makes it possible to check the timing and partitioning properties of concurrently executed applications. The test specifications can also simulate external communication partners


of the applications to test the I/O communication of the IMA module. Neither the test applications nor the test specifications access the hardware interfaces directly: the TAs use the API services of the operating system (in particular ports) and the test specifications use an abstraction provided by the Interface Modules (IFMs) of the test system (see Sect. 5.4.2.1).

Figure 6.1: Test bench for testing a single IMA platform consisting of the IMA module as the system under test and the test engine

Figure 6.2 depicts the test suite for bare IMA module testing. It consists of the four main parts:

• the generic test application,
• the IMA configuration library which is a set of configuration rules,
• the IMA test specification template library which contains all test specification templates and the respective generic configuration information, and

• the IMA test execution environment.

Figure 6.2: Testing environment for testing an IMA module in a hardware-in-the-loop approach

It also comprises the following tools and tool chains:

• a configuration data generator that generates module and partition configuration data based on the configuration rules in the configuration library,

• a load generation tool chain which generates data loads using the test application code and the generated configuration data,

• a configuration data parser which extracts the test-relevant configuration data in a format usable by the test specifications, and

• a test instantiator which instantiates the test specification templates using appropriate test-relevant configuration data extracts, and thus generates executable test procedures (that contain the specifications for test control, test checker, and simulator).

For testing, the data loads consisting of the configuration and the partition data are loaded into the module's configuration table area and the partitions' memory areas, respectively. Then the test procedures are executed on the test engine using the appropriate test system (see Sect. 5.4).


The remainder of this chapter describes the test suite for testing bare IMA platforms. It is structured as follows: Section 6.1 describes the internals of the test application and the communication protocol between the test application and the test specifications. It also considers the minimum configuration requirements for executing the test application in a configured partition. As a basis for the description of the test specification template library, the IMA test execution environment and the IMA configuration library are described in Sections 6.2 and 6.3. The IMA test execution environment defines the files and information associated with each test specification template, in particular the abstraction used to send commands, receive replies, and access the external interfaces of the IMA module. The IMA configuration library description includes the configuration rule syntax and the relation to the formats to be produced by the configuration data generator and the configuration data parser. Section 6.4 introduces the structure of the test specification template library and provides two examples of test specification templates. Thereafter, Sect. 6.5.1 describes the tool chain for preparing the data to be loaded onto the module and also details the necessary steps for generating a load. Section 6.5.2 discusses the test specification instantiation tool which generates executable test procedures by instantiating a test specification template for a specific configuration (using the extracts provided by the parser tool). Section 6.6 discusses test execution and test evaluation. During test execution, the test specifications of the instantiated test procedure are used for on-the-fly test data generation and on-the-fly test evaluation (see Sect. 2.4), which requires an appropriate test bench. Such a test bench, i.e., a test system and test engine, has already been outlined in Sect. 5.4.

The test suite described below has been developed and applied in the VICTORIA project using early prototypes of IMA modules and the tool environment. The observed errors and problems are classified in Sect. 6.7 but are not a representative list of problems found on later prototypes or the current IMA platforms. For obvious reasons, that information is confidential and cannot be included in this thesis.

The developed test suite contains more than 200 test procedure templates (which are each instantiated for one or more test configurations), almost 100 configuration rules (each resulting in several test configurations), and a comprehensive test execution environment and tool chain. However, this chapter aims at describing the concepts of the test application, the test execution environment, the configuration library, the test specification template library, the data load generation tool chain, and the test instantiation tool. It is not intended to provide a complete list of necessary configurations or test specification templates, or to provide a technical manual on how to execute the test procedures or how to use the respective tools.

Furthermore, the described test suite concept has been used for confidence testing to complement the module supplier's tests and had no relevance for certification of the IMA modules. Nevertheless, the concept can also be used for certification. Then it is required to validate all test specification templates and configurations, and to qualify the test system and all associated test preparation tools. Moreover, it might be necessary to add further test procedures, configurations, or even other verification means (e.g., model checking or code reviews). More detailed considerations with respect to test suite certification are beyond the scope of this thesis.

References
The concept of the approach for automated bare IMA module testing – as it is discussed in the following – has been introduced first in [Pel03]. For the implemented test suite, a technical manual describes the details [TMM+03]. The approach has also been addressed in [MTB+04].


6.1 Test Application

The test application (TA) is a generic (avionics) application to be executed in the avionics partitions1 of an IMA module and, thus, has to comply with the configuration possibilities and limitations as discussed in Sect. 3.3. The test application's behavior is mainly determined by commands which are either single API calls or pre-defined scenarios, i.e., shortcuts for a sequence of API calls. In addition, a minimum standard behavior is implemented within the TA to ensure the correct initialization of the necessary objects (e.g., the creation of the command ports) and to check for new commands to be executed by the active process. After executing the desired API call or scenario, the return values are immediately checked internally (if possible) and then sent back to the caller. The commands are messages with a fixed format received via a command port. The commands are not referenced by name in order to reduce the communication bandwidth between TA and test specification. Unique identifiers are used instead to refer to a specific API call or a specific scenario. Furthermore, commands may contain a set of parameters used to provide the necessary parameters to the API services or scenarios. The parameter values are determined by the test specifications and allow legal as well as illegal values. Thus, it is possible to perform normal behavior tests as well as API robustness tests (e.g., API calls with illegal parameters). For robustness testing, it is also possible to command each API call in each operating mode.

Each command is addressed to a specific process of a particular partition. Since the commands are generated outside the IMA module where, for example, partition-local process IDs are not known, abstract process indices are used for addressing the commands and for denoting the return address of the replies. These indices are mapped inside the test application to concrete IDs or names. The same index-identifier mapping mechanism is also used for other objects (e.g., ports, buffers). The mapping is stored in so-called mapping tables which can also store other parameters of the objects, for example, the priority for processes and the maximum message size for ports. The mapping tables are kept permanently up-to-date to represent the current status of the objects and can thus also be used to perform some partition-internal checking of API return values – in particular of values which are not known to the test specifications. The mapping tables used to translate indices to IDs and vice versa also contain illegal entries. This allows calling, for example, GET_SAMPLING_PORT_ID with an illegal port name (empty string or unconfigured name).

The following subsections detail these characteristics of the test application and conclude with the resulting minimum requirements regarding the module and partition configuration tables.

1 It is not intended to load the TA into the system partitions since these are considered as part of the system under test. Furthermore, system partitions can use an extended set of API services which are neither foreseen to be called by the TA nor to be commanded from test specifications.
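A mapping table of this kind can be pictured as a small indexed array per object type. The following C sketch (with hypothetical field names and a deliberately illegal entry) shows how an abstract process index could be translated to a concrete identifier and name, and how the uniqueness of a returned ID could be checked partition-internally.

    #include <stdio.h>
    #include <string.h>

    #define MAX_PROCESSES 16

    /* Hypothetical mapping table entry: abstract index -> concrete object. */
    typedef struct {
        int  in_use;              /* entry refers to a created object        */
        long id;                  /* concrete ID returned by the OS          */
        char name[32];            /* name as used in the configuration table */
        int  priority;            /* further object parameters               */
    } process_map_entry_t;

    static process_map_entry_t process_map[MAX_PROCESSES];

    /* Reserved index whose entry is deliberately illegal (empty name),
     * used to command API calls with invalid parameters. */
    #define IDX_ILLEGAL (MAX_PROCESSES - 1)

    static void map_init(void)
    {
        memset(process_map, 0, sizeof(process_map));
        strcpy(process_map[0].name, "TA_PROC_1");  /* pre-defined name */
        process_map[0].priority = 10;
        process_map[IDX_ILLEGAL].name[0] = '\0';   /* illegal: empty string */
    }

    /* Record the ID returned by, e.g., a process creation service and
     * check that it is unique within the partition. */
    static int map_store_id(int index, long id)
    {
        for (int i = 0; i < MAX_PROCESSES; i++)
            if (process_map[i].in_use && process_map[i].id == id)
                return -1;                         /* ID not unique: error */
        process_map[index].in_use = 1;
        process_map[index].id = id;
        return 0;
    }

    int main(void)
    {
        map_init();
        if (map_store_id(0, 4711) == 0)
            printf("index 0 -> id %ld (%s)\n",
                   process_map[0].id, process_map[0].name);
        return 0;
    }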

6.1.1 Minimum Standard Behavior of the TA Processes

After starting an IMA module (e.g., after loading the application code and the configuration tables or after a hardware reset), each partition is in the operating mode COLD_START and only the so-called main process is running to perform the partition's initialization (e.g., creation of processes and communication means). After switching to operating mode NORMAL, the created processes and the error handler (if created) are running. The test application implements a minimum standard behavior for the main process, for the periodic and aperiodic processes created during initialization, and for the error handler, which is described in the following paragraphs. In addition, all processes implement the communication protocol described in Sect. 6.1.2.

6.1.1.1 Standard Behavior of the Main Process

The main process is the only process running during the initialization phase (i.e., while the partition is in the operating modes COLD_START or WARM_START) which is entered after a power interrupt, a hardware reset, specific health-monitoring actions, and as the result of the API service calls SET_PARTITION_MODE (COLD_START) and SET_PARTITION_MODE (WARM_START).

At first, the main process initializes the mapping tables using pre-defined names for the objects which comply with those used in the configuration tables. Then it opens the ports for communication with the test specifications: the port used to receive commands from the test specifications (default name COMMAND_PORT) and the port used to send the return values back to the test specifications (default name RESULT_PORT). These ports have to be configured for each configuration with specified attributes (i.e., pre-defined names, queue length, and maximum message size). The main process will not open/create any other objects unless commanded by the test specifications. In order to receive commands, the main process polls the command port and executes the commands addressed to itself. The results are then returned using the result port. Commands addressed to other processes can be stored under certain circumstances (see Sect. 6.1.2).

The main process relinquishes control by the command SET_PARTITION_MODE (NORMAL), i.e., by switching to operating mode NORMAL, which is only possible if the creation of at least one aperiodic or periodic process has been commanded before. The main process is restarted by SET_PARTITION_MODE (COLD_START) or SET_PARTITION_MODE (WARM_START).
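Assuming the standard ARINC 653 APEX C bindings (declared here via an apex.h header, whose exact name depends on the operating system vendor), the main process' standard behavior might look as follows in outline. The port sizes and the command dispatch helpers are illustrative only.

    #include "apex.h"  /* ARINC 653 APEX bindings; header name is vendor-specific */

    #define CMD_MAX_SIZE   64   /* illustrative fixed command message size */
    #define CMD_QUEUE_LEN  10   /* illustrative queue length               */

    extern void init_mapping_tables(void);
    extern void execute_command(const char *cmd, QUEUING_PORT_ID_TYPE result);

    void ta_main_process(void)
    {
        QUEUING_PORT_ID_TYPE cmd_port, res_port;
        RETURN_CODE_TYPE     rc;
        char                 cmd[CMD_MAX_SIZE];
        MESSAGE_SIZE_TYPE    len;

        init_mapping_tables();

        /* Create the pair of ports shared by all TA processes of the partition. */
        CREATE_QUEUING_PORT("COMMAND_PORT", CMD_MAX_SIZE, CMD_QUEUE_LEN,
                            DESTINATION, FIFO, &cmd_port, &rc);
        CREATE_QUEUING_PORT("RESULT_PORT", CMD_MAX_SIZE, CMD_QUEUE_LEN,
                            SOURCE, FIFO, &res_port, &rc);

        /* Poll for commands; execute those addressed to the main process.
         * Control is relinquished by a commanded SET_PARTITION_MODE(NORMAL). */
        for (;;) {
            RECEIVE_QUEUING_MESSAGE(cmd_port, INFINITE_TIME_VALUE,
                                    (MESSAGE_ADDR_TYPE)cmd, &len, &rc);
            if (rc == NO_ERROR)
                execute_command(cmd, res_port);
        }
    }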

6.1.1.2 Standard Behavior of the Error Handler

The error handler is a specific process that – when created during initialization – can handle process-level errors during operating mode NORMAL.2 The error handler is started if process execution errors occur or by the API service RAISE_APPLICATION_ERROR. As part of its standard behavior, the error handler calls the API service GET_ERROR_STATUS to determine the error code, the identifier of the faulty process, and the associated message. It then sends the results to the test specifications using the port RESULT_PORT. This behavior ensures that the test specifications are aware that the error handler has been started and can thereby uncover unexpected behavior. Furthermore, the error handler polls the partition's command port COMMAND_PORT and executes all commands addressed to the error handler. All commands to other processes are discarded. This can occur in case of unexpected behavior when the test specifications expected other processes to be running. The error handler remains active until the command STOP_SELF addressed to the error handler has been given and the previous process (usually the process raising the error) becomes active again (otherwise a process rescheduling is triggered). The commands addressed to the error handler in combination with health monitoring mechanisms can also lead to a partition restart or the like (see Sect. 3.1.2.6 for a description of the API service behavior).

6.1.1.3 Standard Behavior of Aperiodic and Periodic Processes

The aperiodic and periodic processes are created and started by commands to the main process and are running during the operating mode NORMAL. By default, each process checks the partition's list of commands for those addressed to its process index and executes them in the sequence commanded by the test specifications. After each command, the return values are sent back using the result port RESULT_PORT. If no further commands addressed to the process have been received, it releases the CPU by either calling PERIODIC_WAIT (periodic processes) or REPLENISH (aperiodic processes). This means that, if several commands shall be executed by one process in rapid succession, the process type and the process' attributes PERIOD, TIME_CAPACITY, and DEADLINE as well as the expected duration of the respective commands should be considered when designing the tests in order to avoid missed deadlines. This is of particular importance if some of the commands are scenario triggers which usually execute a sequence of API calls without allowing commands in between.

2 While in operating mode COLD_START or WARM_START, the process-level errors are treated by the partition-level error response mechanisms as configured in the configuration tables.


The partition's list of commands is sent to a queuing port. It stores all commands addressed to the partition in the sequence of their reception. Since it is not possible to pick out only the commands addressed to a particular process without deleting commands to other processes, other means have been developed that are described in the next subsection (in particular in Sect. 6.1.2.2).

6.1.2 Communication Protocol between TA and Test Specifications

The protocol that has been defined for communication between the test specifications and the TA processes consists of two parts:

1. the message format encoding the commands and the result messages and the mechanisms necessary to interpret the messages, and

2. the message distribution rules inside the TA which are necessary since all processes of a partition use the same pair of ports.

Two types of messages can occur:

• command messages sent by the test specifications and interpreted by the TA processes, and
• result messages (which may also be error report messages) generated by the TA and interpreted by the test specifications.

In fact, the test specifications trigger the generation of the command messages and determine the messages' parameters. The messages are then generated and transmitted by the respective interface modules. Similarly, the result messages received by the test engine are interpreted by specific interface modules which provide abstract data (in the form of CSP events) to the test specifications. The details of the test execution environment are described in Sect. 6.2. Examples of test specifications are provided in Sect. 6.4.

6.1.2.1 Message Format

The command messages as well as the result messages or error reports comply with a fixed format. The aim of this format is to transport the necessary information without consuming too much communication bandwidth. Furthermore, interpretation of the messages shall be fast. As a consequence, the message format contains only integers. Whenever strings are necessary to identify an object (e.g., the process name for API services like GET_PROCESS_ID), the information is encoded by an appropriate index (e.g., the process index) and mapped to the string (e.g., the process name) inside the TA. The same indices are also used for object identifiers (e.g., process IDs) necessary for most API services (e.g., for GET_PROCESS_STATUS). The indices, IDs, names, and other object parameters are stored in the mapping tables.

Generally, using only a small and predefined range of indices instead of names and identifiers (i.e., strings and integers, respectively) helps to reduce the transmitted data volume and allows a fixed message size. Moreover, the names and identifiers need not be known on the test specification's side because all checks regarding names, identifiers, etc. can be performed internally by the test application process using the mapping tables. For example, after creation of a new process it is always checked that the returned ID is unique within the partition by comparing it with the IDs of the processes that have already been created and stored in the mapping tables. In addition, the CSP test specifications used by the RT-Tester can only cope with channel component data types of limited size to avoid the state explosion problem (as described in the paragraph about CSP in Sect. 2.3.2.3). For example, it is not possible (or rather should be avoided) to define a channel component allowing all possible IDs since this would mean using the CSP data type Int.

The message format contains the following integer parameters resulting in a fixed message size (a schematic declaration follows the list):


• a unique identifier for the module and the partition
This information is necessary for addressing the commands and for knowing the sender of a result message or error report.

• a process index
The process index is necessary to indicate the recipient process or to indicate the sender process. It is also used for partition-internal routing of the commands.

• a unique identifier number for the command
This parameter encodes the command itself. The command number is translated to the corresponding API call or scenario using static mapping tables. This static mapping function (and its reverse function) is used by the test application as well as by the respective interface modules of the test system.

• a parameter list
Each command has a fixed-length array for integer parameters to be used as parameters for the respective API call or scenario. As described above, indices are used instead of partition-internal IDs or strings. For each API call and each scenario, it is determined how to interpret the list of command parameters. Equally, the parameters to be encoded in the result message of an executed command are determined. However, the relation between command and API parameters is not necessarily one-to-one. The command parameters may also contain abstract information about the API call parameters. For example, to trigger the sending of a message using inter-partition or intra-partition services, the command contains two parameters defining the type and the size of the message. A (parameterized) helper function within the test application then creates the message to be used as one of the parameters of the commanded API call before executing the command. Similarly, to trigger the reception of a message, the command defines the size and type of the message to be received. After receiving the message, a (parameterized) helper function checks that the received message complies with the expected one. Using this approach, it is possible to trigger large messages (up to the system limit message size of 8192 bytes) to be sent, received, and checked without increasing the size of command messages and without generating much command traffic by sending a sequence of normal (i.e., small) commands.
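Putting these fields together, the fixed-size command message might be declared as follows. The concrete field names, the parameter array length, and the command identifiers are hypothetical; only the all-integer, fixed-length layout is taken from the description above.

    #include <stdint.h>
    #include <stdio.h>

    #define CMD_MAX_PARAMS 8            /* illustrative fixed parameter count */

    /* Fixed-format, integer-only command message (an identical layout can be
     * used for result messages, with 'params' carrying the return values). */
    typedef struct {
        int32_t module_id;              /* unique module identifier           */
        int32_t partition_id;           /* unique partition identifier        */
        int32_t process_index;          /* abstract index of recipient/sender */
        int32_t command_id;             /* encodes the API call or scenario   */
        int32_t params[CMD_MAX_PARAMS]; /* indices and abstract parameters    */
    } ta_message_t;

    /* Example: command the creation of a queuing port, identifying the port
     * by its abstract index rather than by its configured name. */
    enum { CMD_CREATE_QUEUING_PORT = 17 };   /* hypothetical command number */

    static ta_message_t example_command(void)
    {
        ta_message_t m = { 1, 3, 0, CMD_CREATE_QUEUING_PORT, { 0 } };
        m.params[0] = 5;    /* abstract port index  */
        m.params[1] = 64;   /* maximum message size */
        m.params[2] = 10;   /* queue length         */
        return m;
    }

    int main(void)
    {
        ta_message_t m = example_command();
        printf("fixed message size: %zu bytes (cmd %d)\n",
               sizeof(m), (int)m.command_id);
        return 0;
    }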

6.1.2.2 Communication Protocol Implemented within the TA Processes

For communication with the test specifications, all TA processes of one partition share a pair of queuing ports – one for receiving commands and another for sending the result messages or error reports. Queuing ports have been chosen instead of sampling ports because they ensure that no commands or results are lost, i.e., that a newer command cannot overwrite a previous command which has not yet been executed. Moreover, queuing ports with a queue length greater than one make it possible to "pre-program" a sequence of commands without waiting for the results of previous commands. The drawback of choosing queuing ports for commanding is that reading from a queuing port is destructive. This means that if a process reads a command from the command port and the read command is for another process, it is not possible to undo the read. But using a dedicated command port per possible process would mean having up to SYSTEM_LIMIT_NUMBER_OF_PROCESSES queuing ports for commanding in the minimum configuration requirements (and an additional one shared by all processes for sending the results). This kind of minimum configuration restriction would be unacceptable.

The chosen approach therefore uses one shared command port for all TA processes of the partition. In addition, each normal mode process has its own command list – a buffer. During the initialization phase, i.e., while the main process is running, the command for creating a process is accompanied by a command to create the process' command list buffer. For convenience when writing the test specifications, the same index is used for the process and the buffer (and mapped to concrete buffer or process IDs using the above described mapping tables). The main process and the error handler have no command list buffer, because (a) the main process has exclusive access to the command port while performing initialization commands, (b) pre-programming commands for the main process while in operating mode NORMAL is impossible (all objects including the command list buffers are deleted when switching back to operating mode COLD_START or WARM_START), and (c) pre-programming of error handler commands seems to be unnecessary since the error handler's standard behavior already covers a notification that the error handler is running, and while it is running no other processes are scheduled.

The command messages are distributed by the different process types as follows (an illustrative CSP sketch follows the list):

1. The main process reads the commands and checks the receiver field of the messages. If a command is addressed to another process that has already been created and has a command list buffer, the command is written into the recipient's buffer. Commands to the main process are executed immediately. Commands to the error handler and to non-existing processes are discarded because they have no command list buffer.

2. The error handler process reads the commands and discards all commands to other processes. Commands to the error handler process are executed immediately.

3. All other processes first read the commands from the shared command port and distribute them to the recipients' command buffers. Messages to the main process or error handler need to be discarded because these specific processes have no command buffers. The active process then reads its own command list buffer and executes the respective commands. After handling all commands in the buffer, the process releases the CPU (see the standard behavior described above).
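The routing performed by a normal-mode process (rule 3 above) can be modelled abstractly in CSP as follows. This is an illustrative model only – the actual distribution logic is implemented in the test application's C code, the channel names are assumptions, a command is abstracted to its receiver index, the error handler index 129 is an assumption (the main process index 130 follows the CREATE_PROCESS example in Sect. 6.2.1), and the "command port empty" condition is abstracted into an external choice:

MAIN_IDX = 130                    -- assumed index of the main process
ERR_IDX  = 129                    -- assumed index of the error handler
PROC_IDX = {1..5}                 -- normal-mode processes owning a buffer
channel cmd_port  : union(PROC_IDX, {ERR_IDX, MAIN_IDX})  -- shared command port
channel buf_write : PROC_IDX      -- route a command into a process' buffer
channel execute   : PROC_IDX      -- handle a command from the own buffer
channel release_cpu

DISTRIBUTE(me) =
  (cmd_port?rcv ->
     (if member(rcv, PROC_IDX)
      then buf_write.rcv -> DISTRIBUTE(me)   -- route to recipient's buffer
      else DISTRIBUTE(me)))                  -- main/error handler: discard
  [] (execute.me -> DISTRIBUTE(me))          -- handle own buffered commands
  [] (release_cpu -> SKIP)                   -- buffer empty: release the CPU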

Pre-programming means that the sequence of commands for each TA process is preserved, but not necessarily the sequence of commands sent to different processes. The latter depends entirely on the commanded API calls and the scheduling of the processes or partitions.

The communication protocol implemented within the TA processes has an impact on the execution time needed by each process: (a) if more commands are in the shared command port, more commands have to be distributed; (b) if more commands are in the command list, the sequence of commands may take longer than the process' defined time capacity. These effects have to be taken into account when writing the test specifications to avoid undesirable missed deadlines.

The command distribution protocol in operating mode NORMAL, i.e., when normal processes and the error handler are running, is depicted in Fig. 6.3 (a)-(c). For sending result messages and error reports, it is not necessary to define such an elaborate protocol because all processes of a partition can share the result port RESULT_PORT, since queuing ports can always be written by all processes of the respective partition. This is depicted in Fig. 6.3 (d).

6.1.3 Minimum Configuration Requirements

The test application requires a minimum of specific settings in the configuration table. Some restrictions and constraints have already been pointed out in the previous sections. Others are due to the fact that processes in avionics partitions usually have no read access to the configuration tables and thus cannot inquire certain configuration parameters (e.g., pre-defined parameters of the command port), while at the same time it shall be avoided to transmit these parameters as command parameters (e.g., the names of configured ports). Yet others are necessary to allow the minimum standard behavior of the main process. The following list summarizes the restrictions and constraints:


[Figure 6.3: Command distribution protocol implemented in the TA processes. Four panels show the same setup (test control/checker specifications and the communication control layer on the test engine, connected via IFMs for AFDX, ARINC 429, CAN, analog and discrete I/O to an IMA module running Test Application 1 with TA processes 1-3, their buffers BUF1-BUF3, the error handler, the configuration tables, and the queuing ports COMMAND_PORT and RESULT_PORT): (a) read commands from the partition's command port; (b) route commands to the receivers' buffers; (c) read the TA process' buffer and handle the command; (d) send results back using the partition's result port.]

• One pair of AFDX queuing ports per partition
  One of the ports is for receiving commands and the other for sending result messages and error reports. They need to have a defined queue length, a defined message size complying with the message format, and use the pre-defined names COMMAND_PORT and RESULT_PORT, respectively. Compliance with these requirements is important since these ports shall be opened by the main process during initialization, without the possibility to read the configuration tables or to receive commands stating the parameters.

• Minimum code area per partition
  The size of the code area depends on the size of the test application code.

• Minimum stack size per partition
  Stack size has to be reserved for the main process as well as for all processes to be running in operating mode NORMAL. The total stack size is the sum of the processes' stack sizes and depends on the size of the test application's executable. As a minimum, it is required to provide stack size for the main process.


• Minimum data area
  The minimum requirements depend on the number of commandable processes because each such process needs an accompanying command list buffer with memory reserved in the partition's data area. To some extent, this parameter depends on the test objective.

• Naming requirements for ports
  All configured ports have to comply with the naming rules defined for the mapping tables because the mapping tables are automatically initialized by the main process without the possibility to inquire the configuration parameters.

It is possible to determine these minimum configuration requirements since the configurations used for testing are part of the test suite. For testing with a final configuration (e.g., for configured IMA platform testing), it is necessary to tailor each partition's test application before preparing the data load by including configuration data extracts of the respective partition.


6.2 IMA Test Execution Environment

The test execution environment provides the CSP types and channels to be used by the test specifications for sending commands to and receiving replies from the test applications. Moreover, it provides CSP macros, which are parameterized CSP processes providing a function-like interface for the test specifications. Using the CSP macros, the test specifications can abstract from the concrete sequence of CSP events to be generated for triggering a command and checking the respective return code. The CSP events are abstract terms for inputs, outputs, errors, warnings, requirement tracing information, etc. The mapping to concrete interfaces of the SUT is provided by so-called Interface Modules (IFMs). The test execution environment provides a set of different IFMs which allow simultaneous triggering of the different IMA module interfaces.

The CSP types, channels, macros, and IFMs contained in the IMA test execution environment can be grouped according to their usage:

1. for sending commands to and receiving results and return codes from the TAs,

2. for manual interaction of the tester using the Test Visualization Subsystem,

3. for sending and receiving messages via AFDX,

4. for sending and receiving messages via ARINC 429,

5. for sending and receiving messages via CAN,

6. for sending and receiving messages via analog I/O, and

7. for sending and receiving messages transmitted via discrete I/O.

In addition, a set of test procedure templates can use a private set of CSP types, channels and macros which are based on those in the test execution environment. If these test procedure templates belong to the same test objective, the respective definitions can be part of the test specification template library. Otherwise, the definition files are contained in the test execution environment. In the latter case, it is also possible to introduce abstraction means which are not based on the CSP types and channels defined above but are directly mapped by additional IFMs. For example, for some test objectives, it is necessary to perform a so-called communication flow scenario (see also the test procedure template described in Sect. 6.4.2) for which the respective CSP types, channels and macros are defined in the test execution environment. Moreover, the event mapping for the communication flow channels is not provided by a separate IFM but has been integrated into IFM AFDX.

The following table (Table 6.1) summarizes the different definition files for types, channels and macros and denotes which IFM is used for which purpose. Further details are provided in subsequent sections and, for a subset of these definitions, the respective files are provided in Appendix B. Note that the test procedure templates can use different sets of interface modules and, therefore, each one specifies its required interface modules in the RT-Tester configuration template.

6.2.1 CSP Environment for Commanding

CSP Types. The CSP types for sending commands and receiving return codes and results provide the constants and enumeration types as defined in the ARINC 653 API (e.g., the system limits for the number of ports) and the CSP data types representing the types of the API function parameters. Using a specific CSP data type for each API function parameter type means that different parameter types which use the same type in the API specification (e.g., int) are represented by different CSP data types.


Usage                CSP Types            CSP Channels           CSP Macros              Interface Modules
---------------------------------------------------------------------------------------------------------
Commands to and      IMA_typesX           IMA_input_channels     IMA_API_handling        IFM TA X
results from TA                           IMA_output_channels    IMA_API_macros
                                                                 IMA_scenario_handling
Manual interaction   IMA_manual_types     IMA_manual_in          IMA_manual_handling     Test Visualization
                                          IMA_manual_out                                 Subsystem
AFDX                 IMA_AFDX_types       IMA_AFDX_in            IMA_AFDX_handling       IFM AFDX
                                          IMA_AFDX_out
ARINC 429            IMA_A429_types       IMA_A429_in            IMA_A429_handling       IFM A429
                                          IMA_A429_out
CAN                  IMA_CAN_types        IMA_CAN_in             IMA_CAN_handling        IFM CAN
                                          IMA_CAN_out
Analog I/O           IMA_ANALOG_types     IMA_ANALOG_in          IMA_ANALOG_handling     IFM ANALOG
                                          IMA_ANALOG_out
Discrete I/O         IMA_DISC_types       IMA_DISC_in            IMA_DISC_handling       IFM DISC
                                          IMA_DISC_out
Communication flow   IMA_com_flow_types   IMA_com_flow_in        IMA_com_flow_handling   part of IFM AFDX
                                          IMA_com_flow_out
General                                                          IMA_macros

Table 6.1: Overview of definitions for CSP types, channels, macros and interface modules

This allows each CSP data type to be tailored to contain only the legal parameter values and, for robustness testing, additionally some illegal ones. The definitions are compiled in IMA_types and are used for defining the CSP channels. Two simple examples are provided in the following.

Constant values defined in the API – in particular the system limits – are specified as CSP constants:

SYSTEM_LIMIT_NUMBER_OF_SAMPLING_PORTS = 512 -- partition scope

For addressing the processes, ports, buffers, etc., the type definitions contain sets of possible indices. The test application internally keeps a record of indices, called the mapping table, which allows it to obtain the respective pre-defined name or returned identifier for each index. The mapping tables also contain records for robustness testing.

-- set of sampling port indices
-- index 0     : mapped to sampling port with empty name (for
--               robustness testing)
-- index 1..512: mapped to respective sampling port in mapping table
-- index 513   : for robustness testing when creating more than
--               SYSTEM_LIMIT_NUMBER_OF_SAMPLING_PORTS
sampling_port_idx_t = {0..513}

CSP Channels. The CSP channels are grouped into input channels and output channels. The respective input events are generated by the test specifications as input for the IMA module. The events are mapped to concrete command messages by the IFM TA and sent to the IMA module using the interfaces provided by the hardware driver of the test engine. As described in Sect. 6.1.2.1, the command messages are then interpreted by the test application processes. The output values and return codes of the commands are returned as reply messages and then mapped to events by the IFM TA. These events are the outputs of the SUT and the respective channels are called output channels. Note that the so-called input channels are outputs of the test specifications and the so-called output channels are their inputs.

The natural idea for generating the command messages would be to have one channel that allows the generation of all possible command messages. For this, a structured channel would be used which provides one data component for each command message parameter, e.g., for a command message with four parameters one could use

channel generate_command_message : param1.param2.param3.param4

However, this would require common types for all parameters (usually integer sets) and would prohibit enumeration types, which would obviously not be very intuitive when writing and validating the test specifications. As a consequence, one could consider one channel for each command, i.e., for each API call to be triggered, which would use enumeration types and suitable sets where appropriate. For example, for generating command messages for two different API calls with 3 and 4 parameters, respectively, one could define two channels:

channel generate_command1 : param1_c1.param2_c1.param3_c1
channel generate_command2 : param1_c2.param2_c2.param3_c2.param4_c2

This would allow tailoring each parameter's type, i.e., using subsets of integers or specific enumeration data types. When implementing this approach, the problems of tools dealing with such specifications (in this case, the test system) become apparent immediately: the number of different events defined by a channel drives the number of possible states when considering operators providing a choice of events (e.g., the external choice operator), and the number of states in the transition system representation of the respective process is critical. This problem is known as the state explosion problem. One solution is to analyze the effects of each channel or process definition. The solution chosen in this thesis is to define separate channels for setting each parameter which, in total, reduces the number of events. For example, for the above channel generate_command1, three channels can be defined, each of which sets one particular parameter:

channel generate_command1_set_p1 : param1_c1
channel generate_command1_set_p2 : param2_c1
channel generate_command1_set_p3 : param3_c1

The generated command message is either sent when setting the third parameter or by a specific event defined, for example, as

channel send_command

The difference between this and the previous solutions is the resulting number of events: when considering one channel per command, the overall number of events is the product of the numbers of elements in the parameter types; when considering one channel per parameter, it is their sum. This means that if each parameter type contains 100 elements, the overall number of events is reduced from 1 000 000 to 300 in this example.

Additional parameters are added to each channel to denote the command receiver because each command shall be executed by a specific process in a specific partition. The command receiver field has three parameters MOD, PART and PROC which represent the possible modules, partitions and processes to be triggered to execute the command. According to the constants defined in the ARINC 653 API specification, a module can have up to 32 partitions and each partition can have up to 128 different processes running in normal mode. In addition, it is possible to send commands to the main process and the error handler process. Using the above channel set for triggering the artificial command command1, each channel contains the command receiver field (MOD.PART.PROC) as follows:

channel generate_command1_set_p1 : MOD.PART.PROC.param1_c1
channel generate_command1_set_p2 : MOD.PART.PROC.param2_c1
channel generate_command1_set_p3 : MOD.PART.PROC.param3_c1
channel send_command : MOD.PART.PROC


Similarly, output channels contain MOD.PART.PROC, which then denotes the result sender process. The channels defined in IMA_input_channels and IMA_output_channels follow the latter solution. The definition files are provided in Appendix B.2.1. In the following, some examples are given describing the design principle of these channels: example 1 considers the creation of a semaphore, example 2 triggers writing a message into a sampling port, example 3 describes receiving the return code and the output parameters of GET_QUEUING_PORT_ID, and example 4 deals with receiving a message via a queuing port.

Example 1:

------ API call: CREATE_SEMAPHORE
-- set API call parameters
channel CREATE_SEMAPHORE_set_semaphore_name : MOD.PART.PROC.semaphore_idx_t
channel CREATE_SEMAPHORE_set_current_value : MOD.PART.PROC.semaphore_value_t
channel CREATE_SEMAPHORE_set_maximum_value : MOD.PART.PROC.semaphore_value_t
channel CREATE_SEMAPHORE_set_queuing_discipline : MOD.PART.PROC.queuing_discipline_t
-- perform API call with previously given parameters
channel API_call_CREATE_SEMAPHORE : MOD.PART.PROC

For triggering the creation of a semaphore, five different channels are defined: four for setting the input parameters and one for triggering the command message transmission. The four input parameters of the API call CREATE_SEMAPHORE are the semaphore's name, its current value, its maximum value and the queuing discipline. As already described in Sect. 6.1, the test application internally uses pre-defined mapping tables to map object names to object indices and vice versa. This means that, instead of providing a semaphore name in the command message, a semaphore index is transmitted and then mapped to a name by the TA. The CSP type semaphore_idx_t thus contains 512 (i.e., the system limit) different values for normal range testing and two additional ones for robustness testing (i.e., 1 to 512, plus 0 and 513). For setting the parameters current value and maximum value, the CSP type semaphore_value_t is defined. Possible values according to the ARINC 653 API are 0 through 32 767 (i.e., the range from 0 to MAX_SEMAPHORE_VALUE). The enumeration type queuing_discipline_t defines the two different queuing disciplines. Finally, for sending the command message filled with the given input values, the channel API_call_CREATE_SEMAPHORE is defined. All channels contain the command receiver field MOD.PART.PROC, which allows different test specifications to simultaneously trigger similar commands to different processes.

For justifying the chosen approach of separate channels for each input parameter, consider the number of events: the sum of events defined by the above group of channels is 274 780 480, while the number of events for a single channel allowing to set all four input parameters is 4 591 835 435 499 520. This means the number of events has been reduced by a factor of 16 million. The number of events is further discussed after the examples.
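Both figures can be reproduced from the type sizes quoted above, assuming the full command receiver field of 1 module, 32 partitions and 130 processes per partition (128 normal-mode processes plus the main process and the error handler), i.e., 1 × 32 × 130 = 4160 receiver combinations:

4160 × (514 + 32 768 + 32 768 + 2) + 4160 = 4160 × 66 053 = 274 780 480

Here 514, 32 768 (used twice) and 2 are the sizes of semaphore_idx_t, semaphore_value_t and queuing_discipline_t, respectively, and the final summand counts the parameterless trigger channel API_call_CREATE_SEMAPHORE. The product 4160 × 514 × 32 768 × 32 768 × 2 likewise yields the quoted 4 591 835 435 499 520 events for the single-channel alternative.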

Example 2:

------ API call: WRITE_SAMPLING_MESSAGE
-- set API call parameters
-- Note: the parameters define only the message size and the encoded sequence ID,
--       the message to be sent is generated by a helper function in the TA.
channel WRITE_SAMPLING_MESSAGE_set_sampling_port_id : MOD.PART.PROC.sampling_port_idx_t
channel WRITE_SAMPLING_MESSAGE_set_msg_size : MOD.PART.PROC.sampling_port_msg_size_t
channel WRITE_SAMPLING_MESSAGE_set_msg_seq_id : MOD.PART.PROC.msg_seq_id_t
-- perform API call with previously given parameters
channel API_call_WRITE_SAMPLING_MESSAGE : MOD.PART.PROC

To trigger writing a message into an existing sampling port, four different channels are defined. They are not in one-to-one correspondence with the input parameters of the API service because the message to be sent is not transmitted as a command parameter. Instead, the respective message is described by its size and a sequence identifier. Before triggering the API service, the test application process generates a standard message using the parameters provided and, additionally, the sender's address MOD.PART.PROC and a CRC of the message. As described in example 1, each command message parameter is set by a separate channel and the sending of the command is triggered by a fourth channel.

The group of channels for WRITE_SAMPLING_MESSAGE defines 38 355 200 events when considering the size of the types as described in example 1 and as defined in the ARINC 653 specification.

Example 3:

------ GET_QUEUING_PORT_ID
-- get return code
channel API_out_GET_QUEUING_PORT_ID_ret_code : MOD.PART.PROC.retcode_t
-- get output parameter
channel API_out_GET_QUEUING_PORT_ID_queuing_port_id : MOD.PART.PROC.queuing_port_idx_t

When triggering the command GET_QUEUING_PORT_ID, the respective API service output parameters are provided by two types of channels: one for the return code, which is provided by almost every API service, and one for each output parameter. The possible return codes are subsumed in the data type retcode_t, which is used by all return code channels. The output parameter when requesting the port's identifier is its associated index.

The group of channels for receiving the return code and the output parameters of GET_QUEUING_PORT_ID defines 2 159 040 events when considering the size of the types as described in example 1 and as defined in the ARINC 653 specification.

Example 4:

------ RECEIVE_QUEUING_MESSAGE
-- get return code
channel API_out_RECEIVE_QUEUING_MESSAGE_ret_code : MOD.PART.PROC.retcode_t
-- get output parameter
channel API_out_RECEIVE_QUEUING_MESSAGE_msg_size : MOD.PART.PROC.queuing_port_msg_size_t
-- Note: The output message is not returned here. Instead, information provided
--       when sending the message is extracted and provided here.
channel API_out_RECEIVE_QUEUING_MESSAGE_msg_seq_id : MOD.PART.PROC.msg_seq_id_t
channel API_out_RECEIVE_QUEUING_MESSAGE_src_mod : MOD.PART.PROC.MOD
channel API_out_RECEIVE_QUEUING_MESSAGE_src_part : MOD.PART.PROC.avionics_part_t
channel API_out_RECEIVE_QUEUING_MESSAGE_src_proc : MOD.PART.PROC.process_number_idx_rob_t

For receiving a message via a queuing port, the API service RECEIVE_QUEUING_MESSAGE is triggered, which returns a result code, the received message and its size. As described in example 2, the messages are not transmitted as part of command or result messages. Instead, the test application checks the CRC of the message, extracts the encoded information (sequence identifier and sender address) and provides it to the test specification. The other output parameter, the message size, is returned as usual.

The group of channels for receiving the return code and the output parameters of RECEIVE_QUEUING_MESSAGE defines almost 37 million events when considering the size of the types as described in example 1 and as defined in the ARINC 653 specification.

Restricting the Parameter Types. Considering the above examples allows projecting the number of events required for commanding all possible API calls. The estimated result of several billion events for triggering the commands alone shows that this solution has reduced but not eliminated the problem. As a consequence, it is necessary to further reduce the number of events to be handled in each test specification by restricting the parameter types. This can be done by a combination of the following means (a small example compiling such restrictions follows the list):

• by selecting only relevant, interesting and standard values (e.g., according to an analysis of equivalence classes) instead of using large integer ranges, e.g., by choosing as possible queuing port message sizes queuing_port_msg_size_t = {0,1,4,8,12,16,20,64,128,512,1024,2048,4096,8192,8193} with 0 and 8193 for robustness testing;

• by reducing the number of partitions or processes per partition which can be commanded by the test specifications, since most test procedures require only one or two partitions and only a few processes per partition. This means restricting the data types PART and PROC, e.g., to PART = {1,2} and PROC = {1,2,3,4,5,129,130};

• by allowing different restrictions for different test procedures, since some tests have to check the correct implementation of the system limits, e.g., by allowing up to 128 different processes in one test procedure and the maximum number of possible partitions in another one.
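Written out as CSPM type definitions, the first two restrictions above would read as follows (a sketch; the exact constant names and file layout of the IMA_typesX files may differ, and the assumption that index 129 denotes the error handler follows from index 130 being the main process, cf. the CREATE_PROCESS example below):

-- restricted receiver field: two partitions, five normal-mode processes
-- plus the error handler (129, assumed) and the main process (130)
PART = {1, 2}
PROC = {1, 2, 3, 4, 5, 129, 130}
-- restricted queuing port message sizes; 0 and 8193 are illegal values
-- retained for robustness testing
queuing_port_msg_size_t = {0, 1, 4, 8, 12, 16, 20, 64, 128, 512, 1024,
                           2048, 4096, 8192, 8193}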

For the described test suite, eight different sets of restriction combinations are sufficient, each of which restricts the number of events to match the maximum supported by the FDR tool. Their main characteristics are summarized in Table 6.2. The table shows that, when using IMA_types1, the test specifications can address only five different processes (in addition to the main process and the error handler) in only two partitions, but can still create and use the maximum number of ports, buffers, blackboards, semaphores and events. It also shows that, for example, IMA_types7 allows addressing all possible processes in two different partitions but has to restrict most other parameters to a minimum (e.g., it supports only one message size). The CSP types definition file IMA_types1 is provided in Appendix B.1.1. The other type definitions vary as described in the table but are not included in this thesis. Of course, this approach limits the test possibilities of each test procedure, but it has been chosen such that all parameter combinations could still be tested by different test procedures. In practice, this would require additional sets of restriction combinations.

                          Number of maximum addressable different
CSP Type      Type of     modules   partitions   processes   ports   buffers   blackboards,   message
Definition    IFM                                                              etc.           sizes
------------------------------------------------------------------------------------------------------
IMA_types1    TA 1        1         2            5           max     max       max            10-15
IMA_types2    TA 2        1         1            128         5       133¹      5              10-15
IMA_types3    TA 3        1         3            3           max     max       max            10-15
IMA_types4    TA 4        1         4            3           max     max       max            10-15
IMA_types5    TA 5        1         8            3           max     max       max            10-15
IMA_types6    TA 6        1         max          3           max     max       max            10-15
IMA_types7    TA 7        1         2            128         5       133¹      5              1
IMA_types8    TA 8        1         1            64          66      133¹      5              5

¹ The number of allowed buffers is 128 + 5 because 128 buffers are needed for command routing inside the partition.

Table 6.2: Overview of different type restrictions and the related interface module type


CSP Macros. As described above, it is necessary to generate a sequence of events for triggering a command since each parameter of the command is set by a separate event. Also, whenever the same command is triggered again (either with the same or different parameters), the whole sequence has to be repeated. The aim of the macros is to simplify each command trigger by providing a function-like interface for setting the parameters, realized by a parameterized CSP process. These macros are compiled in IMA_API_handling (see Appendix B.3.1), which provides one CSP macro process for each API call. For example, to trigger the API call CREATE_PROCESS the following macro is provided:

-- set parameters and perform CREATE_PROCESS call
-- (TA mapping table: process index -> process name)
CREATE_PROCESS (tapid, process_idx, stack_size, base_priority,
                period, time_capacity, deadline) =
  CREATE_PROCESS_set_attribute_process_name.tapid.process_idx ->
  CREATE_PROCESS_set_attribute_stack_size.tapid.stack_size ->
  CREATE_PROCESS_set_attribute_base_priority.tapid.base_priority ->
  CREATE_PROCESS_set_attribute_period.tapid.period ->
  CREATE_PROCESS_set_attribute_time_capacity.tapid.time_capacity ->
  CREATE_PROCESS_set_attribute_deadline.tapid.deadline ->
  API_call_CREATE_PROCESS.tapid ->
  SKIP

The test specifications can use the macro similar to a C function call. For example, for commanding that the main process (index 130) shall create an aperiodic process with process name PROCESS_1, stack size 32768, priority 3, infinite period, time capacity 500 and a hard deadline, the CSP specification can contain

CREATE_PROCESS (1.1.130, 1, 32768, 3, -1, 500, deadline_HARD);

which results in the sequence of events denoted in the macro.

In addition to the macros in IMA_API_handling, the test execution environment provides macros for triggering a command and afterwards checking the return code and the output parameters. These additional macros use the abovementioned macros and are contained in IMA_API_macros (see Appendix B.3.1). The trigger-and-check macros can be quite simple if only the return code has to be checked – or fairly elaborate if the expected events depend on the expected return code and/or the expected return values. The following macro is one of the simple ones; it triggers writing a message into a sampling port and checks that the return code is as expected. The API call's input parameters as well as the expected return code are input parameters of the macro.

-- perform WRITE_SAMPLING_MESSAGE and check return values
check_WRITE_SAMPLING_MESSAGE (tapid, sampling_port_idx, msg_size, msg_seq_id,
                              ret_code) =
  -- trigger API call
  WRITE_SAMPLING_MESSAGE(tapid,
                         sampling_port_idx,
                         msg_size,
                         msg_seq_id);
  -- check that return code is as expected
  WAITFORSEQ(TM_RETVAL,
             <API_out_WRITE_SAMPLING_MESSAGE_ret_code.tapid.ret_code>);
  SKIP

For commanding that the process with process name PROCESS_3 in partition 2 shall write a message of size 512 with sequence identifier 4 into the sampling port with name SP_123, expecting that this API call will return with an error because the sampling port has not been created (i.e., the expected return code is INVALID_PARAM), the test specification can contain

check_WRITE_SAMPLING_MESSAGE (1.2.3, 123, 512, 4, ret_INVALID_PARAM);

A more elaborate macro is necessary, for example, for reading a message from a sampling port, as shown below. For this CSP macro example, the input parameter, the expected return code as well as the expected output parameters have to be provided.

-- perform READ_SAMPLING_MESSAGE and check return values
check_READ_SAMPLING_MESSAGE (tapid, sampling_port_idx, ret_code, msg_size,
                             msg_seq_id, src_mod.src_part.src_proc, validity) =
  -- trigger API call
  READ_SAMPLING_MESSAGE(tapid, sampling_port_idx);
  if (ret_code != ret_NO_ERROR)
  then -- wait for return code only, since the remaining parameters
       -- do not contain new values and are therefore not extracted
       -- by the IFM
       (WAITFORSEQ(TM_RETVAL,
                   <API_out_READ_SAMPLING_MESSAGE_ret_code.tapid.ret_code>);
        SKIP)
  else
       -- check that return code and output parameters are as expected
       (if (msg_size >= 4)
        then (-- the message is big enough to contain a sequence identifier and
              -- information about the sender
              WAITFORSEQ(TM_RETVAL,
                         <API_out_READ_SAMPLING_MESSAGE_ret_code.tapid.ret_code,
                          API_out_READ_SAMPLING_MESSAGE_msg_size.tapid.msg_size,
                          API_out_READ_SAMPLING_MESSAGE_validity.tapid.validity,
                          API_out_READ_SAMPLING_MESSAGE_msg_seq_id.tapid.msg_seq_id,
                          API_out_READ_SAMPLING_MESSAGE_src_mod.tapid.src_mod,
                          API_out_READ_SAMPLING_MESSAGE_src_part.tapid.src_part,
                          API_out_READ_SAMPLING_MESSAGE_src_proc.tapid.src_proc>);
              SKIP)
        else (-- if msg_size is less than 4 bytes it is not possible to encode
              -- the sender of the message and a sequence identifier
              WAITFORSEQ(TM_RETVAL,
                         <API_out_READ_SAMPLING_MESSAGE_ret_code.tapid.ret_code,
                          API_out_READ_SAMPLING_MESSAGE_msg_size.tapid.msg_size,
                          API_out_READ_SAMPLING_MESSAGE_validity.tapid.validity>);
              SKIP));
  SKIP

For commanding that the process with process name PROCESS_2 in partition 1 shall read a valid message of expected size 512, with expected sequence identifier 4, sent from process 3 in partition 2, from the sampling port with name SP_20 – and further expecting that this API call will succeed (i.e., the expected return code is NO_ERROR) and read the expected and correctly transmitted message (i.e., the CRC check also passes) – the test specification can contain

check_READ_SAMPLING_MESSAGE (1.1.2, 20, ret_NO_ERROR, 512, 4, 1.2.3, vt_VALID);

CSP Channels and Macros for Scenario Handling. In addition to performing commanded API calls, the test application provides a set of scenarios, which are pre-programmed sequences of API calls. The parameters for the API calls are either pre-defined (i.e., depend on the scenario only) or can be defined by parameters transmitted with the scenario trigger command. These command messages have the same format as those for triggering an API call but are generated by different channels. Since scenarios are usually triggered by one or two test procedures only, and thus their usage can be less intuitive, there is only one group of channels used for all scenarios. Similar to the approach for commanding API calls, there are channels for setting the scenario parameters and a channel to send out the generated command message. The main characteristics of these channels are:

• Each parameter setting channel requires the parameter number to be given explicitly. In contrast, for API channels, the parameter number is implicitly defined by the channel name (i.e., by the API parameter).

• There are two types of channels for setting parameters: one for setting the most common values (channel SCENARIO_set_parameter) and another for setting specific values which cannot be set with the first one (channel SCENARIO_set_ext_parameter). The former allows setting all integer values between 1 and 1000, which has proven to cover most parameter values. The latter supports only single values but, if necessary, can easily be extended to facilitate setting of additional values. Both sets of possible values are defined in IMA_types and are disjoint.

• Each scenario is identified by its scenario number (possible scenario numbers are defined in scen_number_t).

------ Scenario: Set scenario parameter
-- set scenario parameters (values between 1 and 1000)
channel SCENARIO_set_parameter : MOD.PART.PROC.scen_param_num_t.scen_param_value_t
-- set scenario parameters (values defined in a specific set of allowed values)
channel SCENARIO_set_ext_parameter : MOD.PART.PROC.scen_param_num_t.scen_ext_param_value_t
-- perform scenario call
channel SCENARIO_activate : MOD.PART.PROC.scen_number_t

For receiving the return code and the output parameters of a scenario, three channels are defined: one for receiving the return code, one for receiving output parameter values between 1 and 1000, and one for receiving specific output parameter values. Like API services, each scenario issues a return code which is defined by the scenario's specification. For one scenario, the number of reply messages is variable and depends on the sequence of API calls to be performed by the scenario.

------ Scenario
-- get return code
-- Note: The return code is issued by the scenario and depends on internal
--       checking results (e.g., when all triggered API calls have failed
--       as expected the return code can still be NO_ERROR).
channel SCENARIO_out_ret_code : MOD.PART.PROC.retcode_t
-- get output parameter
-- Note: The scenario output parameters can either be in the range of type
--       scen_ret_param_value_t or scen_ext_ret_param_value_t. The channel
--       used depends on that.
channel SCENARIO_out_ret_value : MOD.PART.PROC.scen_param_num_t.scen_ret_param_value_t
channel SCENARIO_out_ext_ret_value : MOD.PART.PROC.scen_param_num_t.scen_ext_ret_param_value_t

Triggering a scenario requires generating a sequence of events which depends on the parameter values and the number of parameters. In particular, it might be necessary to use different channels for setting the parameters: for example, if a scenario with two parameters is triggered and the first parameter value is 500 and the second one 1024, the first one has to be set by the event SCENARIO_set_parameter.1.500 and the second one by the event SCENARIO_set_ext_parameter.2.1024. To simplify issuing scenario commands, macros are defined for each number of scenario parameters which internally choose the right channel for setting the respective parameter values. These macros are defined in IMA_scenario_handling; an example for calling a scenario with two parameters is given below. Checking the return code and the return values of the scenarios has to be implemented explicitly in the test specifications since the number of reply messages for a scenario is variable.

-- call scenario with two parameters
START_SCENARIO_with_2_parameters (tapid, scenario_no, p1_value, p2_value) =
  (if (member(p1_value, scen_param_value_t))
   then SCENARIO_set_parameter.tapid.1.p1_value ->
        SKIP
   else SCENARIO_set_ext_parameter.tapid.1.p1_value ->
        SKIP);
  (if (member(p2_value, scen_param_value_t))
   then SCENARIO_set_parameter.tapid.2.p2_value ->
        SKIP
   else SCENARIO_set_ext_parameter.tapid.2.p2_value ->
        SKIP);
  SCENARIO_activate.tapid.scenario_no ->
  SKIP

For commanding that the process with process name PROCESS_1 in partition 1 executes scenario 17 with the parameters 512 and 1024, the CSP test specification can contain

START_SCENARIO_with_2_parameters (1.1.1, 17, 512, 1024);
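Since the scenario's replies have to be checked explicitly, a test specification would typically follow such a trigger with an explicit wait for the expected return code, in the style of the check macros shown earlier (a sketch only; the timer constant TM_RETVAL is reused from the earlier examples and the expected value ret_NO_ERROR is illustrative):

START_SCENARIO_with_2_parameters (1.1.1, 17, 512, 1024);
-- explicitly await the scenario's return code from process 1.1.1
WAITFORSEQ(TM_RETVAL,
           <SCENARIO_out_ret_code.1.1.1.ret_NO_ERROR>);
SKIP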

Interface Module for Commanding API Calls and Scenarios. The interface module IFM TA has two main tasks:

1. The IFM interprets events generated by the test specifications for setting the parameters and triggering the indicated command or scenario. After receiving a sequence of related events (i.e., the events to set the parameters and trigger the command), the IFM generates a command message as input for the SUT.

2. The IFM accepts reply messages of the SUT containing the return code and the output parameter values of a command or scenario. For each received reply message, it generates the respective sequence of events as input for the test specifications.

Different versions of this interface module are necessary depending on the CSP type definition file used for declaring the channels. For example, when using IMA_types2 for addressing the maximum number of processes, IFM TA 2 is specified in the RT-Tester configuration template. The communication with the SUT uses the AFDX hardware interface and thus IFM TA X collaborates with IFM AFDX.

6.2.2 CSP Environment for Semi-Automated Testing

Most test procedures can be executed in a fully automated way if a correctly configured system under test is provided. However, there are some test procedures that require manual interaction of the tester during test execution, e.g., to load another configuration as part of the test procedure or to read measuring instruments before continuing the test. To cope with such requirements, the test execution environment provides specific channels and macros and uses the RT-Tester Test Visualization Subsystem for interaction with the tester. The interactions can either be command-line-oriented or take place via a graphical user interface. Whenever required by the test design, the test execution is interrupted automatically at the specified points and the tester is informed to perform the necessary tasks. After the manual interactions, the tester can trigger continuation of the test execution.

The respective CSP types, channels and macros are defined in IMA_manual_types, IMA_manual_in, IMA_manual_out and IMA_manual_handling, respectively. The graphical RT-Tester Test Visualization Subsystem as well as the format of the required configuration file are described in [RT-Tester 5] (p. 96ff). Further details are outside the scope of this thesis.

6.2.3 CSP Environment for AFDX, ARINC 429, CAN, Analog and Discrete I/O

The CSP types, channels, macros and interface modules used for addressing the hardware interfaces of the test system which are connected to the hardware interfaces of the system under test are defined separately; the respective definition files are listed in Table 6.1. The CSP channel and type definitions depend mainly on the type of interface and the characteristics of the hardware driver (further details are outside the scope of this thesis). In general, the design principles described above are followed. For example, for sending an AFDX message, the message size and the sequence identifier are defined by separate CSP events, and a further CSP event then triggers generation of the message and its transmission via the interface module IFM AFDX.
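Following this principle, the declarations in IMA_AFDX_in can be pictured roughly as follows (a sketch only; the channel names and the type afdx_msg_size_t are assumptions, while msg_seq_id_t is reused from the commanding environment):

-- set the parameters of the next AFDX message to be generated
channel AFDX_set_msg_size   : afdx_msg_size_t
channel AFDX_set_msg_seq_id : msg_seq_id_t
-- generate the message from the parameters above and transmit it
channel AFDX_send_message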

The RT-Tester configuration which is part of each test procedure denotes which interface modules are required for the particular functional tests.

6.2.4 CSP Environment for the Communication Flow Scenario

For different test objectives related to partitioning, a specific scenario is required to show that, for example, a partition failure, a partition reset, or a partition shutdown does not disturb the other partitions. This so-called communication flow scenario is activated in each participating partition separately and is defined as follows (see also Fig. 6.7 and Fig. 6.8): The test specifications send specific communication flow messages to the partitions. Each message contains a sequence of message communication ports (i.e., queuing ports, sampling ports, buffers, blackboards) which are used to route the message within the IMA module from process to process, or even to another partition, and finally back to the test specification. A set of TA processes listens on the respective destination ports and sends the message on to the next source port specified in the communication flow message. In addition, each receiver adds the time stamp of message reception to verify that communication is uniform and not dependent on other activities of the IMA module.

To ensure comparability of the time stamps, it is required to send the communication flow messages with a specified frequency and without changing the receivers or the size of the message. This means the same message is sent repeatedly by the test specifications, with different sequence identifiers. The verification of uniformity is always performed outside the IMA module and thus the last port has to be connected to the test system. The following paragraphs briefly address the CSP types, channels, and macros which are defined in the test execution environment for the communication flow scenario.

CSP Types. The CSP types are defined in IMA_com_flow_types (see Appendix B.1.2) and specify the details of the communication flow message, i.e., the maximum number of possible receivers per message, the possible communication flow message sizes, the possible sequence identifiers, and which port types can be used for relaying the communication flow messages. The CSP type definitions are used for defining the CSP channels.


CSP Channels. The CSP channels for generating and sending a communication flow message have one particularity: they allow defining a communication flow message once and re-sending it repeatedly using a dedicated channel (channel AFDX_com_flow_send_message). As a consequence, it is necessary to have a channel that allows clearing the previously defined message (channel AFDX_com_flow_clear_message). In addition, it is possible to have several predefined communication flow messages which can be used for different simultaneously running scenarios (e.g., running in different partitions).

Communication flow messages received by the IFM are interpreted and checked for correctness using the CRC. This result as well as the message size and the sequence identifier are then announced using specific channels. Further checks are not performed by the CSP test specifications and thus require no specific channels.

All channels are defined in IMA_com_flow_in and IMA_com_flow_out (see Appendix B.2.2).
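The re-send capability can be sketched as a simple recursion over the dedicated channels named above (illustrative only; the two channels are treated here as parameterless events, and the message is assumed to have already been defined by the appropriate parameter-setting events):

-- send the previously defined communication flow message n more times,
-- then clear the message definition
RESEND_COM_FLOW(0) = AFDX_com_flow_clear_message -> SKIP
RESEND_COM_FLOW(n) = AFDX_com_flow_send_message -> RESEND_COM_FLOW(n - 1)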

CSP Macros. IMA_com_flow_handling (see Appendix B.3.2) provides CSP macros for generating a communication flow message, sending a previously defined one, waiting for communication flow message reception, and starting the communication flow scenario in an involved test application process which has to listen to a set of ports. The macros use the channels defined in IMA_com_flow_in and IMA_com_flow_out as well as the channels required for scenario handling which are defined in IMA_scenario_in.

Interface Module. The communication flow messages generated by the interface module are transmitted to the test application using AFDX. Therefore, IFM AFDX also reacts to communication flow events which set message parameters or trigger sending the message and, after receiving a communication flow message, also generates a sequence of communication flow events to indicate the message's parameters.

6.2.5 General CSP Macros

The test execution environment also provides general CSP macros which can be used by all test specifications and all other macro definitions. The general CSP macros are defined in IMA_macros (see Appendix B.3.3), which provides the following types of macros:

• macros for waiting on a set or sequence of events within a given time interval,
• macros for rejecting a set of events within a given time interval,
• functions for accessing a sequence or set of elements and finding adequate elements, and
• other helper functions.
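To illustrate the first kind, an untimed simplification of a "wait for a sequence of events" macro could be written as below (a sketch; the actual WAITFORSEQ in IMA_macros additionally takes a timer constant such as TM_RETVAL and raises an error when the deadline is exceeded, which is omitted here):

-- accept exactly the events of sequence s, in order, then terminate
WAITSEQ(s) =
  if null(s) then SKIP
  else ([] ev : {head(s)} @ ev -> WAITSEQ(tail(s)))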


6.3 IMA Configuration Library

The IMA Configuration Library consists of a set of configuration rules which each result in one or more configurations. A Configuration Data Generator transforms the configuration rules into the configuration table format which has been described in detail in Sect. 3.2.1 (for module-level configuration data) and Sect. 3.2.2 (for partition-level configuration data). The resulting configuration data is then input

• to the Load Generation Tool Chain to generate data loads to be loaded on the IMA module, and

• to the Configuration Data Parser which extracts the configuration data relevant to a test in a format readable by the test specifications.³

An overview of this data flow is depicted at the beginning of this chapter in Fig. 6.2.

This section focuses on the structure of the configuration library and on the respective tools – the configuration data generator and the configuration data parser – which are described in the following subsections. The load generation tool chain is described in Sect. 6.5.1 as part of the test preparation environment.

Configuration Rules Files. The set of configuration rules files allows generating many different configurations, which are necessary to exercise the widest possible range of IMA capabilities. All configurations are artificial, i.e., not used for any other purpose than testing, but shall contain all configuration aspects of final and potential future configurations. The configurations vary with respect to the number of avionics partitions and their characteristics, the number and characteristics of queuing or sampling ports per partition, health monitoring characteristics, scheduling, memory allocation, and the usage of external interfaces and their parameters. Which variations are useful depends on the configuration requirements of the test specifications. For example, for testing the test objectives addressed in the IMA Test Specification Template Library (see Sect. 6.4), more than 100 different configurations are used.

In general, each configuration rules file results in one IMA module configuration. In some cases, there are only minor differences between a set of configurations, for example, when only the characteristics of the queuing or sampling ports differ slightly. To simplify the generation of such configurations, the configuration rules file may contain special rules which define the derivations and the resulting number of different configurations. For example, the configuration rules file for configuration Config0040 (see Appendix C.3.1) results in two different configurations called Config0040_1 and Config0040_2 (the generated configuration tables are depicted in Appendix C.3.2.1 and Appendix C.3.2.2, respectively).

Generated Configurations. Each configuration rules file is transformed into one or more configurations by the configuration data generator. All resulting configurations are syntactically correct and define all necessary parameters. Thus, the load generation tool chain can transform the configurations into data loads loadable with standard data loaders. For testing the robustness and the different operational modes of the IMA module, some configurations are inconsistent and may not pass the basic checks of the operating system software which have briefly been described in Sect. 3.1.2.7. However, most configurations are legal, i.e., consistent and compliant with the module's hardware characteristics and the configuration requirements, and, after loading, the IMA module can enter its operational mode.

³ Technically, the configuration data parser can also extract the information in a format usable by the test applications. Nevertheless, this information is not used when compiling the test applications.


Each configuration is a set of different files which contain the several module- and partition-level configuration tables. The IMA configuration library is structured accordingly into directories – one for each configuration rules file – with one or more sub-directories per directory, one for each resulting configuration. This structure is depicted in Fig. 6.4, showing, for example, that, in directory Config0001, there is only one sub-directory called 1 for the first and only generated configuration. In directory Config0040, there are two sub-directories for the resulting configurations, each of which contains the complete set of configuration tables. The IMA configuration library also contains configuration templates (e.g., in directory TEMPLATE_01) to simplify the configuration rules files.

[Figure 6.4: Structure of the IMA configuration library and the generated configurations. The library holds one directory per configuration rules file (Config0001, Config0040, ..., ConfigXXXX; each with a rules file such as rules.igr1 and csv configuration tables) plus configuration templates (e.g., TEMPLATE_01 with module.csv, partition1.csv, partition2.csv). The configuration data generator produces the generated configurations (e.g., Config0040_1 and Config0040_2, each with module.csv and partition1.csv to partition4.csv), and the configuration data parser derives the test relevant configuration data extracts in a config_extracts sub-directory (IMA_Conf_Globals.csp, IMA_Conf_PT1.csp, IMA_Conf_PT2.csp, ...).]

Configuration Templates. Configuration templates contain either a complete set of configuration tables at module and partition level for one or two partitions, or a subset of the configuration tables (e.g., only the module-level ones). Each configuration rules file can copy the tables of a specific configuration template. The configuration rules then – depending on the specified rules – modify the existing tables, add or delete table rows, or add or delete complete tables. Naturally, more modifications are necessary if the copied configuration template is only a partial configuration. Using complete templates simplifies the configuration rules, since only the changes (e.g., adding some queuing ports) have to be specified to achieve complete and legal configurations. It is also possible (but unusual) to copy several configuration templates which each contain only a specific, non-overlapping subset of the configuration tables.


In Appendix C.1, the configuration template TEMPLATE01 is specified, which contains all module-level configuration tables as well as the partition-level tables for two different avionics partitions. It is a small but legal configuration with consistent memory allocation and scheduling, but it includes no API ports. As a consequence, if a configuration based on this configuration template shall be used for testing, the necessary command ports have to be added by appropriate modifications to allow commanding of the test applications.
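In its simplest form, such a rules file copies the template and generates the command ports (a sketch in the spirit of Config0001, whose actual rules are listed in Appendix C.2.1):

COPY CONFIG "TEMPLATE01"
GENERATE COMMAND PORTS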

Test Relevant Configuration Data Extracts. In theory, the configurations in the configuration library are expected to be a superset of the potential configurations, and each test procedure shall be tested with all possible configurations (i.e., with those complying with its configuration requirements). In reality, the number of different configurations to be tested with each test procedure is limited by the overall testing time. However, as a prerequisite for meeting this objective, the test procedure templates have been designed to be as configuration-independent as possible. In addition, each one provides a list with its configuration constraints and a list with matching configurations. The necessary specific configuration data are linked with the test during test preparation, after deciding with which configuration the test procedure is executed. The respective include files are configuration extracts containing the configuration data relevant for testing; they are generated by the configuration data parser. The parser examines all configuration files of a configuration (i.e., all csv files) and outputs them in a format usable by the test specifications of the test procedure. It generates one file containing the global configuration data and one file for each specified partition defining the partition-relevant configuration data. Figure 6.4 shows that these configuration extracts are contained in the generated configuration directories and are compiled in a separate directory called config_extracts. The file with the global configuration data extracts is called IMA_Conf_Globals.csp, and the files with the partition-relevant data are called IMA_Conf_PTx.csp (where x is the partition identifier). Since the configuration data extracts contain only the test relevant configuration data, their representation format suits the requirements of the CSP test specifications (e.g., as CSP constants, sets, or sequences). Other test systems would probably require a different extract format.

Configuration Data Generator and Configuration Data Parser. The following subsections deal with the configuration data generator and the configuration data parser: Section 6.3.1 describes the approach of the configuration data generator, the format of the configuration rules files, and the syntax of the different rules. Section 6.3.2 presents the configuration data parser and the format of its output files.

6.3.1 Configuration Data Generator

The configuration data generator transforms a configuration rules file into one or more configurations. Both the complete configuration data and the number of configurations to be produced are derived from the rules. The configuration data generator understands different generic as well as specialized generation rules which allow the generation and modification of module- and partition-level configuration tables. The generic rules can be used for modifying any configuration table, while the specialized rules are restricted to a pre-defined set of tables or parameter values, but make certain modifications easier.

The generic and specialized rules provided are introduced in the following. Their syntax and semantics are described in more detail in the subsequent sections, which also contain some examples. Complete configuration rules files containing further examples of generator rules are provided in Appendix C.2.1 (for Config0001) and Appendix C.3.1 (for Config0040). The resulting configuration tables are shown in Appendix C.2.2 and Appendix C.3.2, respectively.


1. generic rules

(a) COPY CONFIG: copies the tables of an existing configuration (usually a template configuration),

(b) INSERT: creates new table rows based on table templates,

(c) MODIFY: modifies existing table cells (i.e., parameter values),

(d) DELETE: deletes existing table rows.

2. specialized rules

(a) CLONE PARTITION: generates a new partition by cloning the configuration tables of an existing one,

(b) DELETE PARTITION: deletes all configuration tables of an existing partition,

(c) GENERATE COMMAND PORTS: generates for each partition a pair of ports for receiving commands and sending results,

(d) CONNECT PORTS: generates API ports and connects them to existing API ports, AFDX ports, messages, or signals, respectively.

In the following syntax descriptions, some variables have a predefined meaning and are used for integers or strings. An integer (int) is a sequence of decimal digits with an optional '-' prefix. A string (string) is a sequence of alphabetic characters, decimal digits, and special characters such as '_', '-', and ' '.

• config_name is a string which is the name of an existing configuration whose path is defined in an external makefile. (Usually, the naming rules for configurations prohibit spaces in names.)

• table_name is a string representing the name of a configuration table. All possible table names are defined in Sect. 3.2.1 and Sect. 3.2.2.

• col_name, col_name1, and col_name2 are strings representing one of the table's configuration parameters, which are likewise defined in Sect. 3.2.1 and Sect. 3.2.2 for each table.

• tmpl_name is a string representing the name of a table template row which is defined in the rules file.

• part_num, part_num1, part_num2, part_numX are integers representing partition identifiers.

• port_num, port_num1, and port_num2 are integers representing API port identifiers.

• afdx_port_num is an integer representing an AFDX port identifier.

• max_msg_size and max_msg_nb are integers representing the maximum message size of a port and, for queuing ports only, its queue length.

• name is a string.

6.3.1.1 Generic rule COPY CONFIG

COPY CONFIG "config_name"

All rules files start with an empty configuration which contains no configuration tables. This means that the configuration has to be generated from scratch using a sequence of different rules. As a shortcut, the COPY CONFIG rule makes it possible to use a configuration template, which is either a dedicated template configuration or another configuration already defined by a sequence of rules. All configuration tables in the template configuration are copied to the current configuration without modifications. Subsequent rules have to take into account the content of the copied configuration tables. The path for finding the configuration to be copied is defined in an external makefile.


Example COPY CONFIG "TEMPLATE01"

When applying this rule, all configuration tables in the template configuration TEMPLATE01 are copied to the current configuration. This template configuration contains two partitions and no API ports; its configuration tables are depicted in Appendix C.1.

6.3.1.2 Generic rule INSERT

BEGIN TEMPLATE "tmpl_name"

"col_name1" = value

"col_name2" = value

...

END TEMPLATE

INSERT ((SYSTEM)? PARTITION part_spec | MODULE)

TABLE "table_name" ROW FROM TEMPLATE "tmpl_name"

(AUTOINC columns)?

with

part_spec ::= part_num

| [part_num1 .. part_num2]

| [part_num1, part_num2, ..., part_numX]

| *

columns ::= columns "col_name"

| "col_name"

value ::= int

| "string"

The INSERT rule can be used to append a new row to the specified configuration table based on a previously defined template row. The rule specifies whether the respective configuration table is a module-level or a partition-level configuration table and, in the latter case, to which partition or system partition the table belongs. By applying the rule, the values defined in the template are used for the new row. Additionally, the column PARTITION_ID contained in all partition-level configuration tables is set automatically. If the template defines only a subset of the configuration table's columns, some columns remain undefined (i.e., empty).

For partition-level configuration tables, it is further possible to specify that the template shall be inserted into more than one partition's configuration table, i.e., into all partitions (denoted by *) or into a list of partitions which is specified, for example, by [1..4] or [1,2,3,4] (if these are valid partition identifiers).

If the optional parameter AUTOINC is given and the values for the respective columns end with a decimal number, the values inserted for the specified columns are each incremented starting at the given value until the cell value is unique in the respective column of the specified partition's table.

Examples

1. BEGIN TEMPLATE "AFDX_OUTPUT_VL_TMPL"

"VL_IDENTIFIER" = 100

"VL_NAME" = "AFDX_VL_100"

"NETWORK" = "A&B"

"PORT_ID" = 100


"PORT_CHARAC" = "queuing"

"PORT_TRANS_TYPE" = "multicast"

END TEMPLATE

INSERT MODULE TABLE "AFDX_OUTPUT_VL" ROW \

FROM TEMPLATE "AFDX_OUTPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

The template AFDX_OUTPUT_VL_TMPL defines possible default values for an AFDX output virtual link. By applying the given INSERT rule, a new row is appended to the module-level configuration table AFDX_OUTPUT_VL, and the values in the columns VL_IDENTIFIER, VL_NAME and PORT_ID are incremented until they are unique within the respective column. Assuming that the configuration table is empty before applying the INSERT rule, the values remain as defined in the template. When applying the same INSERT rule twice, the value for each of the specified columns is incremented (i.e., is 101 instead of 100; see the illustration after Example 2).

2. BEGIN TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

"ASSOCIATED_VL_NAME" = "AFDX_VL_100"

"ASSOCIATED_AFDX_PORT_ID" = 100

END TEMPLATE

INSERT PARTITION 1 TABLE "AFDX_OUTPUT_MESSAGE" ROW \

FROM TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

The template "AFDX OUTPUT MSG TMPL defines possible values for an AFDX output message.When applying the INSERT rule, a new row is appended to the partition-level configurationtable AFDX OUTPUT MESSAGE of partition 1 using the values defined in the template. Thecolumn PARTITION ID of this new row is set to the correct partition identifier.

6.3.1.3 Generic rule MODIFY

MODIFY ((SYSTEM)? PARTITION part_spec | MODULE)

TABLE "table_name" (WHERE conditions)?

SET "col_name" = value_spec (VALNUM int)? (CONFNUM int)?

with

part_spec ::= part_num

| [part_num1 .. part_num2]

| [part_num1, part_num2, ..., part_numX]

| *

conditions ::= conditions condition

| condition

condition ::= "col_name" = value

value ::= int

| "string"

value_spec ::= int

| "string"

| <"string1", "string2", ..., "stringX">

| <int1, int2, ..., intX>

| <int1 .. int2>

| [int1, int2, ..., intX]

| [int1 .. int2]


The MODIFY rule can be used to modify the specified column's values in all rows or in a selected set of rows of the specified table. The table is either a module-level configuration table (denoted by MODULE) or a partition-level table of one or more partitions (denoted by PARTITION part_spec). The optional WHERE parameter makes it possible to restrict the set of rows by conditions which have to be true for a row to be modified.

Optional parameter VALNUM. When modifying the respective column value, it is possible to use the same integer or string value for all rows, or to use different ones for different rows. For defining different string values, a finite sequence of values can be specified (e.g., <"module","partition">). For defining different integer values, a finite set (e.g., [1,2,3,4,5,6]) or a sequence of integers (e.g., <1,2,3,4>) can be specified. All sequences and sets are internally transformed into an infinite list of values by additionally evaluating the optional parameter VALNUM, which defines the maximum number N of different values to be taken from the set or sequence (the default value is 1).

For sequences, the first N values are taken from the sequence and are then repeated in the infinite list. For the example sequence given above, this results in the infinite list <1,2,1,2,...> (for N = 2).

For sets of integers, the elements of the set are first sorted in ascending order; then a sequence of the respective values is generated which starts with the smallest element, followed by the biggest and then the median element (if the set contains an even number of values, the smaller one of the two middle values is taken). The other values are chosen from the remaining set, usually by selecting the median of the lower and the upper half, respectively. This sequence is then the basis for generating the infinite list of values as described above. For the example set given above, this results in the infinite list <1,6,3,2,4,1,6,3,...> (for N = 5).

Summarizing, for generating the infinite list of values, a sequence s of values is used which is derived from the specified set or is equal to the given sequence. Starting at the first element of the derived sequence, N elements are taken in the order of the sequence and are then repeated infinitely in the infinite list. If length(s) < N, the complete sequence is repeated infinitely.
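As a step-by-step derivation for the example set [1,2,3,4,5,6]: the smallest element is 1, the biggest is 6, and the median is 3 (the smaller of the two middle values 3 and 4). From the remaining set {2,4,5}, the median of the lower half {2} is 2 and the median of the upper half {4,5} is 4. This yields the derived sequence starting <1,6,3,2,4,...>, whose first N = 5 elements are repeated to form the infinite list <1,6,3,2,4,1,6,3,...> given above.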

Optional parameter CONFNUM. The optional parameter CONFNUM can be used to generate M different configurations (the default value for CONFNUM is 1). To this end, M copies of the current configuration are generated and then modified separately. The modifications of each configuration depend on the combination of the parameters VALNUM N and CONFNUM M and on the number of values in the derived sequence s. For modifying the first configuration copy, the infinite list of values is generated as described above. For each other configuration copy m with m ≤ M, the starting point is element p = (((m − 1) % length(s)) + 1) of the sequence s; it is used as the first element of the infinite list. For the other elements n with n ≤ min(N, length(s)), element (((p + n − 2) % length(s)) + 1) of the sequence s is used. The resulting sequence is then repeated infinitely.
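The offset computation can also be summarized functionally. The following sketch in machine-readable CSP (CSPM) computes, using the 1-based indexing of the formula above, the finite cycle of values that is repeated infinitely for configuration copy m; the function names are illustrative and not part of the test suite:

-- Illustrative CSPM sketch: cycleForCopy(s, n, m) returns the cycle of values
-- used for configuration copy m, given the derived sequence s and VALNUM n.
min2(a, b) = if a < b then a else b
-- nth(s, i) returns the i-th element of sequence s (1-based)
nth(s, i) = if i == 1 then head(s) else nth(tail(s), i - 1)
cycleForCopy(s, n, m) =
    let p = ((m - 1) % #s) + 1
    within < nth(s, ((p + k - 2) % #s) + 1) | k <- <1..min2(n, #s)> >
-- e.g., cycleForCopy(<1,10,5,15>, 3, 2) = <10,5,15>, matching Example 4 below.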

Examples with Integer Values

1. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = 10

For all partitions, set the queue length of all queuing output ports to 10.

2. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = <1,4,10> VALNUM 3

For all partitions, change the queue length of all queuing output ports to one of the allowed new values, which are each taken from the following list: <1,4,10,1,4,10,...>. This means that the first port's queue length is set to 1, the second one's to 4, the third one's to 10, the fourth one's to 1, etc. The rules are applied separately for each partition, i.e., the first port in each partition uses 1 as the queue length, etc.

Note that this generator rule is just an example and will most probably result in an inconsistent configuration if the queue length of the respective input ports is not changed accordingly.

3. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = <1,10,5,15> CONFNUM 3

Create three configurations and use in each one a value from the specified sequence. In the first configuration, set the queue length of all queuing output ports for all partitions to 1 (the first element in the sequence). In the second configuration, set the value to 10 (the second element in the sequence). In the third configuration, set the queue lengths to 5 (the third element in the sequence).

If more configurations shall be created than different value lists are possible, the modifications result in identical configurations. For example, for CONFNUM 5, the modifications for the first and the fifth configuration are the same.

The rules are applied separately for each partition as described above.

4. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = <1,10,5,15> VALNUM 3 CONFNUM 3

Create three configurations and use in the first one values from the list <1,10,5,1,...>, i.e., the first output queuing port's queue length is set to 1, the second one's to 10, the third one's to 5, etc. In the second configuration, the values are taken from the tail of the given sequence (i.e., from the list <10,5,15,10,5,...>). For the third configuration, values are taken from the list <5,15,1,5,15,...>. If more configurations are generated than different lists are possible, the respective configuration parts are equal. The rules are applied separately for each partition as described above.

5. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = <0..10> VALNUM 5

The values for modifying the queue length of the output queuing ports are taken from the list <0,1,2,3,4,0,1,2,...>.

6. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = [1,4,10] VALNUM 3

For all partitions, change the queue length of all queuing output ports to one of the specified values, which are each taken from the following list: <1,10,4,1,10,4,1,10,...>. This means that the first port's queue length is set to 1, the second's to 10, the third's to 4, the fourth's to 1, etc.

Omitting the parameter VALNUM is equivalent to specifying VALNUM 1 and results in a list of possible values containing only the first value.

The rules are applied separately for each partition, i.e., the first port in each partition uses 1 as the queue length, etc.

7. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = [1,4,10] CONFNUM 3

Create three configurations and use in each one a value from the specified set. In the first configuration, set the queue length of all queuing output ports for all partitions to 1 (the smallest element in the set). In the second configuration, set the value to 10 (the biggest element in the set). In the third configuration, set the queue lengths to 4.

If more configurations shall be created than different values are possible, the modifications result in identical configurations. For example, for CONFNUM 4, the modifications for the first and the fourth configuration are the same.

The rules are applied separately for each partition as described above.

8. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = [1,4,10] VALNUM 3 CONFNUM 3

Create three configurations and use in the first one values from the list <1,10,4,1,10,...>, i.e., the first output queuing port's queue length is set to 1, the second one's to 10, the third one's to 4, etc. In the second configuration, the values are taken from the infinite list <10,4,1,10,4,...>. For the third configuration, values are taken from the infinite list <4,1,10,4,1,...>. If more configurations are generated than different lists are possible, the respective configuration parts are equal. The rules are applied separately for each partition as described above.

9. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = [0..10] VALNUM 5

The values for modifying the queue length of the output queuing ports are taken from the list <0,10,5,x,y,0,10,5,x,y,...> with x and y chosen from the remaining set ({1,2,3,4,6,7,8,9}).

10. MODIFY PARTITION * TABLE "OUTPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = [1..10] CONFNUM 3

MODIFY PARTITION * TABLE "INPUT_DATA" \

WHERE "PORT_CHARAC" = "queuing" \

SET "PORT_MAX_MESSAGE_NB" = [11..20] CONFNUM 2

This pair of generator rules denotes that three configurations are created by the first rule and modified as described above, resulting in three different configurations: the first one uses 1 as the output ports' queue length, the second one 10, and the third one 5 (the median of the set).

By the second rule, at most two different configurations shall be generated, i.e., the first configuration uses 11 as the input ports' queue length, the second one 20, and the third one again 11.

The rules are applied separately for each partition as described above.

Examples with String Values

1. MODIFY PARTITION 1 TABLE "GLOBAL_PARTITION_DATA" \

SET "APPLICATION_NAME" = "TA1"

Set the application name of partition 1 to TA1.

2. MODIFY MODULE TABLE "HM_SYSTEM" \

WHERE "ERROR_SOURCE" = "NUMERIC_ERROR" \

SET "RECOVERY_LEVEL" = "partition"

In the module-level table HM_SYSTEM, select all rows where the parameter ERROR_SOURCE has the value NUMERIC_ERROR, and then set the recovery level in these rows to partition level. (In this particular example, exactly one row is selected and changed by the generator rule.)


3. MODIFY MODULE TABLE "HM_SYSTEM" \

WHERE "ERROR_SOURCE" = "NUMERIC_ERROR" \

SET "RECOVERY_LEVEL" = <"partition", "module"> CONFNUM 2

Create two different configurations. For each configuration, set in the module-level table HM_SYSTEM the recovery level for numeric errors to partition level (for the first configuration) and to module level (for the second one).

4. MODIFY MODULE TABLE "HM_MODULE" \

SET "RECOVERY_LEVEL" = <"reset", "shutdown", "ignore"> VALNUM 3

In the module-level table HM_MODULE, select all rows and change the recovery level: in the first row to reset, in the second row to shutdown, in the third one to ignore, in the fourth one again to reset, etc. As a consequence, if fewer rows are selected than values shall be used, only the first values are used.

5. MODIFY MODULE TABLE "HM_MODULE" \

SET "RECOVERY_LEVEL" = <"reset", "shutdown", "ignore"> VALNUM 3 CONFNUM 3

Create three configurations. In the first configuration, select all rows in the module-level table HM_MODULE and change the recovery level of the first row to the first value in the list <"reset","shutdown","ignore","reset","shutdown",...> (i.e., to "reset"), in the second row to the second value, in the third row to the third value, in the fourth row to the first again, etc.

In the second configuration, start with the second value in the list and then continue as described above. In the third configuration, the first value to be used is the third value in the list, i.e., for the first row the third value is used, for the second row the first value in the list, etc.

6.3.1.4 Generic rule DELETE

DELETE ((SYSTEM)? PARTITION part_spec | MODULE)

TABLE "table_name" ROW (WHERE conditions)?

with

part_spec ::= part_num

| [part_num1 .. part_num2]

| [part_num1, part_num2, ..., part_numX]

| *

conditions ::= conditions condition

| condition

condition ::= "col_name" = value

value ::= int

| "string"

The DELETE rule can be used to delete rows which match the given conditions from the specified table. The optional parameter WHERE provides the conditions to select only a subset of the table's rows; if it is omitted, all rows of the specified configuration table are deleted. It is possible to specify that the rule shall be applied to a specific partition-level configuration table in one or more partitions.

Example DELETE PARTITION [1..3] TABLE "OUTPUT_DATA"

ROW WHERE "PORT_CHARAC" = "queuing"

When applying this rule, all queuing output ports are deleted from the configuration table OUTPUT_DATA of partitions 1, 2, and 3.


6.3.1.5 Specialized rule CLONE PARTITION

CLONE (SYSTEM)? PARTITION part_spec FROM (SYSTEM)? PARTITION part_num

with

part_spec ::= part_num

| [part_num1 .. part_num2]

| [part_num1, part_num2, ..., part_numX]

The CLONE PARTITION rule can be used to add one or more partitions to a configuration by copying all configuration tables of the specified partition. It is possible to specify several partitions to be added. When applying this rule, certain configuration parameters are automatically adjusted: at the partition level, the partition name and partition identifier of each new partition; at the module level, the number of partitions specified in configuration table GLOBAL_DATA.

Example CLONE PARTITION [3,4] FROM PARTITION 1

When applying this rule, all configuration tables of partition 1 are copied twice; in the first copy, the respective parameter values are set to partition 3 and, in the second copy, to partition 4. In the module-level configuration table GLOBAL_DATA, the parameter PARTITION_NB is set to 4 (if only partitions 1 and 2 were defined before).

6.3.1.6 Specialized rule DELETE PARTITION

DELETE (SYSTEM)? PARTITION part_spec

with

part_spec ::= part_num

| [part_num1 .. part_num2]

| [part_num1, part_num2, ..., part_numX]

| *

The DELETE PARTITION rule can be used to delete a partition and all its partition-level configuration tables. Furthermore, the number of partitions defined in the module-level configuration table GLOBAL_DATA is adjusted. A single rule can delete a single partition, a set of partitions, or all partitions. In addition, this rule removes all RAM ports in other partitions that were connected to the deleted partition.

Example DELETE PARTITION [3,4]

When applying this rule, all configuration tables of partitions 3 and 4 are deleted, and in the module-level configuration table GLOBAL_DATA, the parameter PARTITION_NB is set to 2 (assuming that four partitions were defined before).

6.3.1.7 Specialized rule GENERATE COMMAND PORTS

GENERATE COMMAND PORTS

This rule can be used to generate command ports in all avionics partitions (i.e., in all avionics partitions which have been copied, generated from scratch, or cloned by previous rules in the rules file). The command ports are a pair of specific API ports used for receiving commands (i.e., an input port) and for sending the results (i.e., an output port). The characteristics of these ports are described in Sect. 6.1.3. The API ports are each connected to an AFDX port; thus, the respective AFDX virtual links and AFDX messages also have to be generated. If a partition already contains command ports, they are replaced.


Example GENERATE COMMAND PORTS

When applying this rule, one API queuing output port (called RESULT_PORT) and one API queuing input port (called COMMAND_PORT) are generated for each partition. In addition, the respective AFDX output and input messages as well as the output and input VLs are generated. An example of the necessary changes when applied to a configuration without command ports is provided in Appendix C.2.2, based on the sequence of rules defined in Appendix C.2.1.

6.3.1.8 Specialized rule CONNECT PORTS

The CONNECT PORTS rule has a different syntax depending on the type of ports to be connected:

• API ports connected via RAM:

CONNECT PORTS (SYSTEM)? PARTITION part_num1 SAMPLING PORT port_num1 TO

(SYSTEM)? PARTITION part_num2 SAMPLING PORT port_num2

PORT_MAX_MSG_SIZE = max_msg_size

CONNECT PORTS (SYSTEM)? PARTITION part_num1 QUEUING PORT port_num1 TO

(SYSTEM)? PARTITION part_num2 QUEUING PORT port_num2

PORT_MAX_MSG_SIZE = max_msg_size PORT_MAX_MSG_NB = max_msg_nb

These rules can be used to create and connect two sampling or queuing API ports, respectively, which are located in the same or in different partitions of the module. In the configuration table OUTPUT_DATA of partition part_num1, a new row is added containing the specified maximum message size, the specified queue length (if applicable), and the specified data for the connected input port. In the configuration table INPUT_DATA of partition part_num2, a new row is added defining the port's name and also the specified maximum message size and the specified queue length (if applicable). If a specified API port already exists, it is replaced (which may lead to inconsistent configuration tables).

• API port connected to AFDX:

CONNECT PORTS (SYSTEM)? PARTITION part_num (INPUT|OUTPUT) SAMPLING PORT

port_num TO AFDX PORT afdx_port_num PORT_MAX_MSG_SIZE = max_msg_size

CONNECT PORTS (SYSTEM)? PARTITION part_num (INPUT|OUTPUT) QUEUING PORT

port_num TO AFDX PORT afdx_port_num PORT_MAX_MSG_SIZE = max_msg_size

PORT_MAX_MSG_NB = max_msg_nb

These rules can be used to create an API queuing or sampling port and connect it to an already defined AFDX port. This means that a new row is added to the partition's configuration table OUTPUT_DATA or INPUT_DATA, respectively, which defines the specified parameters. The respective AFDX virtual link and AFDX message have to exist and are usually added to the configuration by applying appropriate INSERT rules. If one of the specified API ports already exists, it is replaced (which may lead to inconsistent configuration tables).

• API port connected to a discrete or analog signal, CAN message, or ARINC 429 label:

CONNECT PORTS (SYSTEM)? PARTITION part_num (INPUT|OUTPUT) SAMPLING PORT

port_num TO (DISCRETE|ANALOG|CAN|A429) "name"

This rule can be used to create an API sampling port and connect it to an already defined discrete or analog signal, CAN message, or ARINC 429 label. This means that a new row is added to the partition's configuration table OUTPUT_DATA or INPUT_DATA, respectively, which defines the specified parameters. The respective signal, message, or label and the associated line or bus have to exist and are usually added to the configuration by applying appropriate INSERT rules. If the specified API port already exists, it is replaced (which may lead to inconsistent configuration tables).


Examples

1. CONNECT PORTS PARTITION 1 SAMPLING PORT 20 TO PARTITION 2 SAMPLING PORT 20 \

PORT_MAX_MSG_SIZE = 512

When applying this rule, two API sampling ports are created and connected: one output port in partition 1 with port identifier 20 and one input port in partition 2 with port identifier 20. Therefore, appropriate rows are added in the configuration table OUTPUT_DATA of partition 1 and the configuration table INPUT_DATA of partition 2. The port name is set to SP020 in each configuration table; the other configuration parameters are set as specified.

2. CONNECT PORTS PARTITION 1 QUEUING PORT 20 TO PARTITION 1 QUEUING PORT 21 \

PORT_MAX_MSG_SIZE = 512 PORT_MAX_MSG_NB = 10

When applying this rule, two API queuing ports are created and connected: one output port in partition 1 with port identifier 20 and one input port in partition 1 with port identifier 21. Therefore, appropriate rows are added in the configuration tables OUTPUT_DATA and INPUT_DATA of partition 1. The port names are set to QP020 and QP021, respectively; the other configuration parameters are set as specified.

3. CONNECT PORTS PARTITION 1 OUTPUT QUEUING PORT 1 TO AFDX PORT 100 \

PORT_MAX_MSG_SIZE = 1024 PORT_MAX_MSG_NB = 1

When applying this rule, an API queuing output port is created and connected to the existing AFDX output port 100. For creating the API port, a row is added to the partition-level configuration table OUTPUT_DATA of partition 1 with port name QP001; the other parameters are set as specified.

4. CONNECT PORTS PARTITION 1 INPUT SAMPLING PORT 2 TO DISCRETE "DSI_A_04E"

When applying this rule, an API sampling input port is created and connected to the existing discrete signal DSI_A_04E. For creating the API port, a row is added to the partition-level configuration table INPUT_DATA of partition 1 with port name SP002. The other parameters are set as specified.

6.3.2 Configuration Data Parser

The configuration data parser extracts the configuration data relevant for a test from the configuration tables and presents them in a format usable by the CSP test specifications. To this end, it generates two types of configuration extract files:

• the extract of the global configuration data, which contains information about the module that can be used by all test specifications of a test procedure and is not relevant for one partition only, and

• the extract of partition-relevant configuration data, which is relevant only to the respective partition's test specifications of a test procedure.

The respective file names are IMA_Conf_Globals.csp and IMA_Conf_PTx.csp (where x is the partition index). The file names are fixed, which makes it possible to include them (i.e., reference their names) in the test specification templates. During the instantiation of the test specification templates, the paths of the configuration data extracts are set as required and thus reference the configuration data to be used for testing with the instantiated test procedure.
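For illustration, a test specification could thus reference the extracts with CSPM include directives (a sketch; the concrete paths are substituted during template instantiation):

include "IMA_Conf_Globals.csp"
include "IMA_Conf_PT1.csp"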

The test specifications use the formal specification technique CSP, which puts several requirements on the format of the configuration data extracts:


• For CSP specifications, the configuration data extracts can be represented as constants, sequences, sets, or combinations of sets, sequences, and tuples.

• In CSP specifications, the possible communication events are defined as channels which have to be declared explicitly. This means that all configuration parameters to be used as event parameters (i.e., data components of a structured channel) have to comply with the channel declarations. For the CSP test specifications used in the IMA test suite, the common input and output channels are defined in Appendix B.

• CSP has no pre-defined type string and requires all possible strings to be defined as so-called data types, which are similar to C enumerations. Therefore, the strings used for the names of applications, partitions, ports, etc. are represented in the configuration data extracts by their identifiers or indices. For example, according to the naming requirements for ports described in Sect. 6.1.3, the port index is the number used in the port name, i.e., port name QP003 is represented by port index 3.

• For test execution, the CSP test specifications are transformed into transition systems, and the respective tool cannot cope with the excessive number of states which occur when too many events and process parameters are used – the so-called state explosion problem. Therefore, it is favorable to separate different configuration aspects using different sets or sequences.

• The state explosion problem can also be reduced by tailoring the data types used for the channel components. As a consequence, a fixed range of indices for applications, partitions, ports, etc. is used instead of the respective identifiers specified in the configuration tables. (These indices are also used in the commands sent to the TA as described in Sect. 6.1.)

• In CSP processes, the way information provided as a set or sequence can be handled depends on the process's context, i.e., in some contexts sequences are better suited than sets and vice versa. Therefore, some configuration values are provided both as set and as sequence representation to meet the different requirements.

The format of the extract files, which has been developed according to the above requirements and the specific needs of the test specifications, is described in the following. For generating both types of configuration data extract files, the module- and partition-level configuration tables are parsed and – by combining the potentially distributed information – transformed into a set or sequence representation. The examples given below use the configuration tables provided in Appendix C.1 (TEMPLATE01), Appendix C.2.2 (Config0001), and Appendix C.3.2 (Config0040). When referencing specific configuration tables and their parameters, the syntax CONFIG_TABLE_NAME.PARAMETER_NAME is used.

Examples of configuration data extract files for configuration Config0001 are provided in Appendix C.2.3 (global configuration data extracts) and in Appendices C.2.3.2 and C.2.3.3 (partition-relevant configuration data extracts for partitions 1 and 2).

6.3.2.1 Format of Global Configuration Data Extract File IMA_Conf_Globals.csp

IMA_Conf_CFG_AREA_BEGIN
CSP constant providing GLOBAL_DATA.CFG_AREA_BEGIN (in decimal representation).

IMA_Conf_CFG_AREA_SIZE
CSP constant providing GLOBAL_DATA.CFG_AREA_SIZE.

IMA_Conf_MAF_DURATION
CSP constant providing GLOBAL_DATA.MAF_DURATION.


IMA_Conf_SET_AVIONICS_PARTITIONS
CSP set containing the indices of the avionics partitions. Example for a configuration with four avionics partitions: { 1, 2, 3, 4 }.

IMA_Conf_SEQ_APPL_PART_ASSIGN
CSP sequence of sets grouping the partitions belonging to the same application (as denoted by GLOBAL_PARTITION_DATA.APPLICATION_ID). Each set in the sequence represents one application. (Note that an application may consist of multiple partitions, but a partition always belongs to exactly one application.) Example for a configuration with four partitions where the partitions with partition indices 1 and 2 belong to one application and those with partition indices 3 and 4 to another one: < { 1, 2 }, { 3, 4 } >.
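For illustration, the content of a global configuration data extract file could look as follows (a sketch in CSPM syntax; all numeric values are hypothetical):

-- IMA_Conf_Globals.csp (illustrative values)
IMA_Conf_CFG_AREA_BEGIN = 50331648    -- GLOBAL_DATA.CFG_AREA_BEGIN, decimal
IMA_Conf_CFG_AREA_SIZE = 1048576      -- GLOBAL_DATA.CFG_AREA_SIZE
IMA_Conf_MAF_DURATION = 80000         -- GLOBAL_DATA.MAF_DURATION
IMA_Conf_SET_AVIONICS_PARTITIONS = { 1, 2, 3, 4 }
IMA_Conf_SEQ_APPL_PART_ASSIGN = < { 1, 2 }, { 3, 4 } >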

6.3.2.2 Format of Partition-Relevant Configuration Data Extract File IMA_Conf_PTx.csp

The file name already contains the partition index x, which is used instead of the partition identifier. The range of partition indices is 1..SYSTEM_LIMIT_NUMBER_OF_PARTITIONS as defined in ARINC 653 (see [ARINC653P1 d4s2], p. 170). This range also comprises the system partitions, whose configuration data are not extracted from the configuration.

IMA_Conf_APPLICATION_ID
CSP constant providing GLOBAL_PARTITION_DATA.APPLICATION_ID.

IMA_Conf_PARTITION_ID
CSP constant providing GLOBAL_PARTITION_DATA.PARTITION_ID.

IMA_Conf_PARTITION_INDEX
CSP constant providing the partition index x used instead of the partition ID.

IMA_Conf_PROCESS_STACK_SIZE
CSP constant providing GLOBAL_PARTITION_DATA.PROCESS_STACK_SIZE.

IMA_Conf_MAIN_STACK_SIZE
CSP constant providing GLOBAL_PARTITION_DATA.MAIN_STACK_SIZE.

IMA_Conf_PARTITION_PERIOD
CSP constant providing TEMPORAL_ALLOCATION.PARTITION_PERIOD.

IMA_Conf_SEQ_SCHED_WINDOW_POS
CSP sequence of TEMPORAL_ALLOCATION.SCHED_WINDOW_POS. For example, for a partition scheduled in the first and in the third scheduling window: < 0, 2 >.

IMA_Conf_SEQ_SCHED_WINDOW_OFFSET
CSP sequence of TEMPORAL_ALLOCATION.SCHED_WINDOW_OFFSET. For example, for a partition scheduled in the first and in the third scheduling window when each scheduling window's duration is 20000: < 0, 40000 >.

IMA_Conf_SEQ_SCHED_WINDOW_DURATION
CSP sequence of TEMPORAL_ALLOCATION.SCHED_WINDOW_DURATION. For example, for a partition scheduled in the first and in the third scheduling window when each scheduling window's duration is 20000: < 20000, 20000 >.

IMA_Conf_CODE_AREA_BEGIN
CSP constant providing SPATIAL_ALLOCATION.CODE_AREA_BEGIN (in decimal representation).

IMA_Conf_CODE_AREA_SIZE
CSP constant providing SPATIAL_ALLOCATION.CODE_AREA_SIZE.

IMA_Conf_CODE_AREA_BEGIN_TUPLE
CSP sequence containing a 4-tuple representation of SPATIAL_ALLOCATION.CODE_AREA_BEGIN. The 4-tuple components are the bytes of the respective integer in decimal, i.e., (x3, x2, x1, x0) represents 2^24*x3 + 2^16*x2 + 2^8*x1 + x0. For example, the code area begin address 0x3600000 (= 3*2^24 + 96*2^16) is represented as the 4-tuple < (3, 96, 0, 0) >.


IMA_Conf_CODE_AREA_SIZE_TUPLE
CSP sequence containing a 4-tuple representation of SPATIAL_ALLOCATION.CODE_AREA_SIZE. For example, the code area size 1048576 is represented as the 4-tuple < (0, 16, 0, 0) >.

IMA_Conf_DATA_AREA_BEGIN
CSP constant providing SPATIAL_ALLOCATION.DATA_AREA_BEGIN (in decimal representation).

IMA_Conf_DATA_AREA_SIZE
CSP constant providing SPATIAL_ALLOCATION.DATA_AREA_SIZE.

IMA_Conf_DATA_AREA_BEGIN_TUPLE
CSP sequence containing a 4-tuple representation of SPATIAL_ALLOCATION.DATA_AREA_BEGIN. For example, the data area begin address 0x1900000 specified in TEMPLATE01 is represented as the 4-tuple < (1, 144, 0, 0) >.

IMA_Conf_DATA_AREA_SIZE_TUPLE
CSP sequence containing a 4-tuple representation of SPATIAL_ALLOCATION.DATA_AREA_SIZE. For example, the data area size 2097152 is represented as the 4-tuple < (0, 32, 0, 0) >.

The port configuration data extracts first provide sets and sequences for all sampling or queuing ports and then distinguish the different communication media, i.e., they provide separate sets for API ports connected to RAM or AFDX ports, etc. In the latter case, additional information with respect to the AFDX message, A429 label, CAN message, or discrete or analog signal is provided as required by the test specifications.

For ports, the different configuration parameters are extracted as separate sequences (e.g., one sequence for the port indices, one for the maximum message sizes, etc.); the head elements of the sequences are the parameters of the first port, the second elements those of the second port, etc. The table in Fig. 6.5 illustrates this for the sequences with prefix IMA_Conf_SEQ_SAMPLING_PORT.

When considering the sets and sequences for all queuing ports and AFDX queuing ports, two groups are provided: one which contains all queuing ports including the command and result ports (e.g., IMA_Conf_NUM_ALL_QUEUING_PORTS) and another one including only the other queuing ports (e.g., IMA_Conf_NUM_QUEUING_PORTS).

                                             SP001        SP009            SP018
IMA_Conf_SEQ_SAMPLING_PORT_INDICES       = < 1,           9,               18         >
IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE = < 1024,        512,             2048       >
IMA_Conf_SEQ_SAMPLING_PORT_DIRECTIONS    = < dir_SOURCE,  dir_DESTINATION, dir_SOURCE >

Figure 6.5: Example for the representation of port parameters; the configuration for this partition defines three sampling ports SP001, SP009, and SP018

IMA_Conf_NUM_SAMPLING_PORTS
CSP constant providing the number of sampling ports, i.e., the number of rows in OUTPUT_DATA and INPUT_DATA with port character 'sampling'.

IMA_Conf_SET_SAMPLING_PORT_INDICES
CSP set containing the indices of the sampling ports (the indices are derived from the names of the ports). For example, for a partition with two sampling ports named SP001 and SP002, the resulting set of indices is { 1, 2 }.

IMA_Conf_SEQ_SAMPLING_PORT_INDICES
CSP sequence containing the indices of the sampling ports (sequence representation of IMA_Conf_SET_SAMPLING_PORT_INDICES). Example for a partition with two sampling ports: < 1, 2 >.


IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE
CSP sequence containing the parameter OUTPUT/INPUT_DATA.PORT_MAX_MSG_SIZE for each port in IMA_Conf_SEQ_SAMPLING_PORT_INDICES. For example, for a partition with two sampling ports where SP001 is configured with a maximum message size of 1024 and SP002 with a maximum message size of 512, this results in the sequence < 1024, 512 >.

IMA_Conf_SEQ_SAMPLING_PORT_DIRECTIONS
CSP sequence providing the direction of each port, i.e., whether the port is defined in configuration table OUTPUT_DATA (dir_SOURCE) or INPUT_DATA (dir_DESTINATION). For example, a partition with output sampling port SP001 and input sampling port SP002 is represented by the sequence < dir_SOURCE, dir_DESTINATION >.
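Putting the sampling port extracts together, the corresponding fragment of an IMA_Conf_PTx.csp file for the partition of Fig. 6.5 would look roughly as follows (a sketch in CSPM syntax):

-- Sampling port fragment for the partition of Fig. 6.5 (illustrative)
IMA_Conf_NUM_SAMPLING_PORTS = 3
IMA_Conf_SET_SAMPLING_PORT_INDICES = { 1, 9, 18 }
IMA_Conf_SEQ_SAMPLING_PORT_INDICES = < 1, 9, 18 >
IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE = < 1024, 512, 2048 >
IMA_Conf_SEQ_SAMPLING_PORT_DIRECTIONS = < dir_SOURCE, dir_DESTINATION, dir_SOURCE >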

IMA_Conf_NUM_ALL_QUEUING_PORTS
CSP constant providing the number of rows in OUTPUT_DATA and INPUT_DATA with port character 'queuing', also counting the command and result ports.

IMA_Conf_SET_ALL_QUEUING_PORTS
CSP set containing the indices of all queuing ports. The indices are derived from the names of the ports (e.g., port QP003 has index 3). For the command and result ports, which are named COMMAND_PORT and RESULT_PORT, the port indices 1 and 2, respectively, are used. As a consequence, the queuing port names QP001 and QP002 are not used. For example, a partition with only a command and a result port has the resulting set { 1, 2 }.

IMA_Conf_SEQ_ALL_QUEUING_PORT_INDICES
CSP sequence containing the indices of all queuing ports (sequence representation of the set IMA_Conf_SET_ALL_QUEUING_PORTS). For example, a partition with only a command and a result port has the resulting sequence < 1, 2 >.

IMA_Conf_SEQ_ALL_QUEUING_PORTS_MAX_MSG_SIZE
CSP sequence containing the parameter OUTPUT/INPUT_DATA.PORT_MAX_MSG_SIZE for each port in IMA_Conf_SEQ_ALL_QUEUING_PORT_INDICES.

IMA_Conf_SEQ_ALL_QUEUING_PORTS_MAX_NB_MSG
CSP sequence containing the parameter OUTPUT/INPUT_DATA.PORT_MAX_NB_MSG for each port in IMA_Conf_SEQ_ALL_QUEUING_PORT_INDICES.

IMA_Conf_SEQ_ALL_QUEUING_PORT_DIRECTIONS
CSP sequence providing the port direction – dir_DESTINATION for input ports and dir_SOURCE for output ports – for each port in IMA_Conf_SEQ_ALL_QUEUING_PORT_INDICES.

IMA_Conf_NUM_QUEUING_PORTS
CSP constant providing the number of rows in OUTPUT_DATA and INPUT_DATA with port character 'queuing' without counting the command or result port.

IMA_Conf_SET_QUEUING_PORTS
CSP set containing the indices of the queuing ports (without the command or result port). The indices are derived from the names of the ports.

IMA_Conf_SEQ_QUEUING_PORT_INDICES
CSP sequence containing the indices of the queuing ports (sequence representation of the set IMA_Conf_SET_QUEUING_PORTS).

IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE
CSP sequence containing the parameter OUTPUT/INPUT_DATA.PORT_MAX_MSG_SIZE for each port in IMA_Conf_SEQ_QUEUING_PORT_INDICES.

IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG
CSP sequence containing the parameter OUTPUT/INPUT_DATA.PORT_MAX_NB_MSG for each port in IMA_Conf_SEQ_QUEUING_PORT_INDICES.


IMA_Conf_SEQ_QUEUING_PORT_DIRECTIONS
CSP sequence providing the direction for each port in IMA_Conf_SEQ_QUEUING_PORT_INDICES.

IMA_Conf_SEQ_RAM_SP_OUT
CSP sequence of RAM output sampling port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_RAM_SP_OUT_CONFIG
CSP sequence of RAM sampling output port communication partner tuples for each port in IMA_Conf_SEQ_RAM_SP_OUT. Each tuple contains the destination partition index and the destination port index. For example, for a partition with partition index 4 and with two configured RAM sampling ports SP001 and SP002 which are connected (i.e., SP001 is the source of SP002), the resulting sequence is < (4, 2) >.

IMA_Conf_SEQ_RAM_QP_OUT
CSP sequence of RAM output queuing port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_RAM_QP_OUT_CONFIG
CSP sequence of RAM queuing output port communication partner tuples for each port in IMA_Conf_SEQ_RAM_QP_OUT. Each tuple contains the destination partition index and the destination port index. For example, for a partition with partition index 1 and with two configured RAM queuing ports QP003 and QP004 which are connected (i.e., QP003 is the source of QP004), the resulting sequence is < (1, 4) >.

IMA_Conf_SEQ_RAM_SP_IN
CSP sequence of RAM input sampling port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_RAM_QP_IN
CSP sequence of RAM input queuing port indices. The indices are derived from the names of the ports as described for IMA_Conf_SET_ALL_QUEUING_PORTS.

IMA_Conf_SEQ_AFDX_SP_OUT
CSP sequence of AFDX output sampling port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_AFDX_SP_OUT_CONFIG
CSP sequence of AFDX output message and VL configuration tuples for all sampling output ports in IMA_Conf_SEQ_AFDX_SP_OUT. Each tuple contains the following information for the respective port: OUTPUT_DATA.ASSOCIATED_PORT, AFDX_OUTPUT_VL.NETWORK, and AFDX_OUTPUT_VL.PORT_TRANS_TYPE. For example, this can result in the triple (1004, net_A_B, trans_multicast).

IMA_Conf_SEQ_AFDX_SP_OUT_VL
CSP sequence of output VL identifiers for all ports in IMA_Conf_SEQ_AFDX_SP_OUT.

IMA_Conf_SEQ_AFDX_QP_OUT
CSP sequence of AFDX output queuing port indices. The indices are derived from the names of the ports as described for IMA_Conf_SET_ALL_QUEUING_PORTS. For example, if only the command and result ports are specified in the configuration of the respective partition, this results in the sequence < 2 > because 2 is the index of the result port (i.e., the outgoing port).

IMA_Conf_SEQ_AFDX_QP_OUT_CONFIG
CSP sequence of AFDX output message and VL configuration tuples for all queuing output ports in IMA_Conf_SEQ_AFDX_QP_OUT. Each tuple contains the following information for the respective port: OUTPUT_DATA.ASSOCIATED_PORT, AFDX_OUTPUT_VL.NETWORK, and AFDX_OUTPUT_VL.PORT_TRANS_TYPE. For example, in Config0001 the command and result ports of partition 1 are specified to use AFDX, and the resulting sequence is < (10018, net_A_B, trans_multicast) >.


IMA_Conf_SEQ_AFDX_QP_OUT_VL
CSP sequence of output VL identifiers for all ports in IMA_Conf_SEQ_AFDX_QP_OUT. For example, if only the command and result ports are specified in the configuration of partition 1, this results in the sequence < 10018 >.

IMA_Conf_SEQ_AFDX_SP_IN
CSP sequence of AFDX input sampling port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_AFDX_SP_IN_CONFIG
CSP sequence of AFDX input message and VL configuration tuples for all sampling input ports in IMA_Conf_SEQ_AFDX_SP_IN. Each tuple contains the following information for the respective port: INPUT_DATA.ASSOCIATED_PORT and AFDX_INPUT_VL.NETWORK.

IMA_Conf_SEQ_AFDX_SP_IN_VL
CSP sequence of input VL identifiers for all ports in IMA_Conf_SEQ_AFDX_SP_IN.

IMA_Conf_SEQ_AFDX_QP_IN
CSP sequence of AFDX input queuing port indices. The indices are derived from the names of the ports. For example, if only the command and result ports are specified in the configuration of the respective partition, this results in the sequence < 1 > because 1 is the index of the command port.

IMA_Conf_SEQ_AFDX_QP_IN_CONFIG
CSP sequence of AFDX input message and VL configuration tuples for all queuing input ports in IMA_Conf_SEQ_AFDX_QP_IN. Each tuple contains the following information for the respective port: INPUT_DATA.ASSOCIATED_PORT and AFDX_INPUT_VL.NETWORK. For example, if only the command and result ports are specified in the configuration of partition 1, this results in the sequence < (10019, net_A_B) >.

IMA_Conf_SEQ_AFDX_QP_IN_VL
CSP sequence of input VL identifiers for all ports in IMA_Conf_SEQ_AFDX_QP_IN. For example, if only the command and result ports are specified in the configuration of partition 1, this results in the sequence < 10019 >.

IMA_Conf_SET_ALL_AFDX_OUT_IDS
CSP set of all configured output AFDX port identifiers (i.e., for all sampling and queuing output ports). For example, if only the command and result ports are specified in the configuration of partition 1, this results in the set { 10018 }.

IMA_Conf_SET_AFDX_OUT_IDS
CSP set of all configured output AFDX port identifiers except the result port. For example, if only the command and result ports are specified in the configuration of partition 1, this results in the empty set.

IMA_Conf_SET_ALL_AFDX_IN_IDS
CSP set of all configured input AFDX port identifiers (i.e., for all sampling and queuing input ports). For example, if only the command and result ports are specified in the configuration of partition 1, this results in the set { 10019 }.

IMA_Conf_SET_AFDX_IN_IDS
CSP set of all configured input AFDX port identifiers except the command port. For example, if only the command and result ports are specified in the configuration of partition 1, this results in the empty set.

IMA_Conf_SEQ_A429_OUT
CSP sequence of ARINC 429 output sampling port indices. The indices are derived from the names of the ports.


IMA Conf SEQ A429 OUT CONFIGCSP sequence of ARINC 429 output label configuration tuples for each port listed inIMA_Conf_SEQ_A429_OUT. Each tuple contains the following information for the respec-tive port: A429_OUTPUT_LABEL.ASSOCIATED_A429_BUS, A429_OUTPUT_LABEL.A429_LABEL_NUMBER, A429_OUTPUT_LABEL.SIGNAL_LSB, and A429_OUTPUT_LABEL.SIGNAL_MSB.

IMA Conf SET A429 OUT BUSSESCSP set of ARINC 429 output busses for each port in IMA_Conf_SEQ_A429_OUT.

IMA Conf SEQ A429 INCSP sequence of ARINC 429 input sampling port indices. The indices are derived from the namesof the ports.

IMA Conf SEQ A429 IN CONFIGCSP sequence of ARINC 429 input label configuration tuples for each port listed inIMA_Conf_SEQ_A429_OUT. Each tuple contains the following information for the respec-tive port: A429_OUTPUT_LABEL.ASSOCIATED_A429_BUS, A429_OUTPUT_LABEL.A429_LABEL_NUMBER, A429_OUTPUT_LABEL.SIGNAL_LSB, and A429_OUTPUT_LABEL.SIGNAL_MSB.

IMA_Conf_SET_A429_IN_BUSSES
CSP set of ARINC 429 input busses for each port in IMA_Conf_SEQ_A429_IN.

IMA_Conf_SEQ_CAN_OUT
CSP sequence of CAN output sampling port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_CAN_OUT_CONFIG
CSP sequence of CAN output message configuration tuples for each port listed in IMA_Conf_SEQ_CAN_OUT. Each tuple contains the following information for the respective port: CAN_OUTPUT_MESSAGE.ASSOCIATED_CAN_BUS, CAN_OUTPUT_MESSAGE.CAN_MSG_ID, CAN_OUTPUT_MESSAGE.CAN_MSG_PAYLOAD, CAN_OUTPUT_MESSAGE.SIGNAL_LSB, and CAN_OUTPUT_MESSAGE.SIGNAL_MSB.

IMA_Conf_SET_CAN_OUT_BUSSES
CSP set of CAN output busses for each port in IMA_Conf_SEQ_CAN_OUT.

IMA_Conf_SEQ_CAN_IN
CSP sequence of CAN input sampling port indices. The indices are derived from the names of the ports.

IMA_Conf_SEQ_CAN_IN_CONFIG
CSP sequence of CAN input message configuration tuples for each port listed in IMA_Conf_SEQ_CAN_IN. Each tuple contains the following information for the respective port: CAN_INPUT_MESSAGE.ASSOCIATED_CAN_BUS, CAN_INPUT_MESSAGE.CAN_MSG_ID, CAN_INPUT_MESSAGE.CAN_MSG_PAYLOAD, CAN_INPUT_MESSAGE.SIGNAL_LSB, and CAN_INPUT_MESSAGE.SIGNAL_MSB.

IMA_Conf_SET_CAN_IN_BUSSES
CSP set of CAN input busses for each port in IMA_Conf_SEQ_CAN_IN.

IMA_Conf_SEQ_DISC_OUT
CSP sequence of output sampling port indices connected to discrete signals. The indices are derived from the names of the ports.

IMA_Conf_SEQ_DISC_OUT_CONFIG
CSP sequence of discrete output signal configuration tuples for each port in IMA_Conf_SEQ_DISC_OUT. Each tuple contains the following information: DISCRETE_OUTPUT_SIGNAL.ASSOCIATED_LINE, DISCRETE_OUTPUT_SIGNAL.SIGNAL_NAME, DISCRETE_OUTPUT_SIGNAL.DEFAULT_VALUE.


IMA_Conf_SEQ_DISC_IN
CSP sequence of input sampling port indices connected to discrete signals. The indices are derived from the names of the ports.

IMA_Conf_SEQ_DISC_IN_CONFIG
CSP sequence of discrete input signal configuration tuples for each port in IMA_Conf_SEQ_DISC_IN. Each tuple contains the following information: DISCRETE_INPUT_SIGNAL.ASSOCIATED_LINE, DISCRETE_INPUT_SIGNAL.SIGNAL_NAME, DISCRETE_INPUT_SIGNAL.LOGIC.

IMA_Conf_SEQ_ANALOG_OUT
CSP sequence of output sampling port indices connected to analog signals. The indices are derived from the names of the ports.

IMA_Conf_SEQ_ANALOG_OUT_CONFIG
CSP sequence of analog output signal configuration tuples for each port in IMA_Conf_SEQ_ANALOG_OUT. Each tuple contains the following information: ANALOG_OUTPUT_SIGNAL.ASSOCIATED_LINE, ANALOG_OUTPUT_SIGNAL.SIGNAL_NAME.

IMA_Conf_SEQ_ANALOG_IN
CSP sequence of input sampling port indices connected to analog signals. The indices are derived from the names of the ports.

IMA_Conf_SEQ_ANALOG_IN_CONFIG
CSP sequence of analog input signal configuration tuples for each port in IMA_Conf_SEQ_ANALOG_IN. Each tuple contains the following information: ANALOG_INPUT_SIGNAL.ASSOCIATED_LINE, ANALOG_INPUT_SIGNAL.SIGNAL_NAME.

The partition-relevant configuration data extract file also contains information for so-called standard test application processes which are used in test specifications if the test design states no specific requirements for a process to perform the commands. The information is provided for aperiodic and periodic processes as well as for the error handler process. The information is part of this extract file because some of the process attributes depend on partition-level configuration data (e.g., the partition's period). However, other attributes depend on the process type (e.g., the possible values for TIME_CAPACITY) or on the test application code (e.g., the minimum required stack size for each process). The values specified in the extract file are used by the CSP macros CREATE_STANDARD_APERIODIC_TA, CREATE_STANDARD_PERIODIC_TA, and CREATE_STANDARD_ERROR_HANDLER, which are defined in IMA_API_macros.csp (see Appendix B.3.1).

TA_PERIODIC_STACK_SIZE
CSP constant providing the stack size parameter when creating a standard periodic process. For the test application, the minimum possible stack size is 32768.

TA_PERIODIC_BASE_PRIORITY
CSP constant providing the base priority when creating a standard periodic process. The selected value is 3.

TA_PERIODIC_PERIOD
CSP constant providing the period when creating a standard periodic process. This value is set to the partition's period in order to schedule the process once in each major time frame.

TA_PERIODIC_TIME_CAPACITY
CSP constant providing the time capacity when creating a standard periodic process. This value is set to the partition's period.

TA_PERIODIC_DEADLINE
CSP constant providing the deadline when creating a standard periodic process. Possible values are deadline_HARD and deadline_SOFT. The selected constant is deadline_HARD.


TA_APERIODIC_STACK_SIZE
CSP constant providing the stack size parameter when creating a standard aperiodic process. For the test application, the minimum possible stack size is 32768.

TA_APERIODIC_BASE_PRIORITY
CSP constant providing the base priority when creating a standard aperiodic process. The selected value is 3.

TA_APERIODIC_PERIOD
CSP constant providing the period when creating a standard aperiodic process. To express an infinite time value, the constant is set to -1.

TA_APERIODIC_TIME_CAPACITY
CSP constant providing the time capacity when creating a standard aperiodic process. This value is set to the partition's period.

TA_APERIODIC_DEADLINE
CSP constant providing the deadline when creating a standard aperiodic process. The selected deadline is deadline_HARD.

ERR_STACK_SIZE
CSP constant providing the stack size parameter when creating a standard error handler. For the test application, the minimum possible stack size for the error handler process is 32768.
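Collected in one place, these standard process attributes could appear in a generated partition extract roughly as follows. This is only a sketch: the partition period of 100000000 (an assumed 100 ms, expressed in nanoseconds) and the exact layout are assumptions, not generator output.

-- sketch of the standard process attributes (assumed layout and period):
TA_PERIODIC_STACK_SIZE     = 32768
TA_PERIODIC_BASE_PRIORITY  = 3
TA_PERIODIC_PERIOD         = 100000000      -- set to the partition's period
TA_PERIODIC_TIME_CAPACITY  = 100000000      -- set to the partition's period
TA_PERIODIC_DEADLINE       = deadline_HARD
TA_APERIODIC_STACK_SIZE    = 32768
TA_APERIODIC_BASE_PRIORITY = 3
TA_APERIODIC_PERIOD        = -1             -- infinite time value
TA_APERIODIC_TIME_CAPACITY = 100000000      -- set to the partition's period
TA_APERIODIC_DEADLINE      = deadline_HARD
ERR_STACK_SIZE             = 32768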


6.4 IMA Test Specification Template Library

In the previous sections, the details of the generic test application, the test execution environment, and the configuration library have been described. Generic test applications can show various behaviors by means of commands from external test specifications. As explained in the introduction to this chapter, the tests are ideally performed with all possible configurations to ensure the correct behavior of the operating system for all current and prospective fields of application (within the bounds of the IMA platform's characteristics). For testing, this means that it is desirable to execute each test procedure with different adequate configurations. For the test suite described in this chapter, an approach has been developed that makes it possible to abstract from concrete configuration data when implementing the test design: so-called test specification templates define the test's behavior but use symbolic names instead of concrete configuration values. When instantiating a test specification template, these references are replaced with the values of a specific test configuration, and the resulting test specifications can be used for test execution.

The test specification template library assembles the test specification templates. It is structured as follows (also depicted in Fig. 6.6):

• Test specification templates are CSP test specifications which use symbolic names when referring to configuration data. When instantiated, they can be executed by the RT-Tester. The reference names are defined in the syntax for configuration data extracts (see Sect. 6.3.2).

• Each test specification template is part of a test procedure template which additionally contains test procedure specific configuration information: IMA platform configuration requirements, RT-Tester configuration information, and test procedure template specific documentation information.

• Each test procedure template implements a particular part of a test objective. Usually, several test procedures are necessary to cover all normal behavior and robustness checking aspects of a particular test objective.

• Each test objective contributes to testing a specific feature, functionality, or property of the system under test. For testing bare IMA platforms, this comprises the operating system services, partitioning, configurability, data loading, the operational modes of the module and the partitions, the communication with external communication partners and with other processes in the same partition, the health-monitoring characteristics, and the power supply and consumption. These categories of features are usually determined in the test design documents and, for bare IMA module testing as considered in this thesis, the set of feature categories has been proposed in Sect. 5.2.1.

Figure 6.6 illustrates the logical (and directory) structure of the IMA test specification template library using the sample test procedure template test_procedure_template_03. It shows that the IMA test specification template library groups the test objectives into nine feature categories and that testing one of them may require several test procedure templates. The following subsection elaborates on the structure and the information assembled in each test procedure. Further details are explained thereafter by introducing two example test procedure templates: Section 6.4.2 addresses the test procedure template TO_PART_003/test_procedure_template_03 which is already highlighted in Fig. 6.6. Section 6.4.3 describes one of the test procedure templates for intra-partition communication testing – namely TO_PARCOM_002/test_procedure_template_01.

6.4.1 Test Procedure Templates

Each test procedure template combines a set of test cases as determined in the test design document which specifies the design of the test cases and test procedure templates.


[Figure: tree structure of the IMA test specification template library, from the nine feature-category groups (operating system (API) tests, partitioning tests, configuration tests, operational modes tests, inter-partition communication tests, intra-partition communication tests, data loading tests, health monitoring tests, and power supply and consumption tests) via the test objectives for partitioning (TO_PART_001 ... TO_PART_xxx) and the test procedure templates for TO_PART_003 (test_procedure_template_01 ... test_procedure_template_yy) down to the files of test_procedure_template_03: possible_configs (listing Config0040_1 and Config0040_2), config.rttdoc.t, config.rtt.t, and the test specification templates specs/main_pt1.csp.t ... main_pt4.csp.t.]

Figure 6.6: Structure of the Test Specification Template Library

Test procedure templates are unique in their combination of test cases, but can be instantiated for test execution with different appropriate test configurations. For grouping a set of test cases into one test procedure template, it is mandatory that (a) they contribute to the same test objective and (b) their configuration requirements match (i.e., are either similar or complement each other). In addition, it is favorable that the behavior of the test cases is complementary such that the results of the first test case (partially) set the preconditions for the next one(s). Following these guidelines, it is possible that the sequence of test cases (i.e., the test procedure) can be executed without changing the module configuration (unless explicitly required by the test cases themselves) and that the number of IMA module resets between two test cases can be limited. The former contributes to automated test execution and the latter helps to reduce the overall test execution time.

Each test procedure template contains the following information:

• a list of appropriate module configurations (possible_configs),
• RT-Tester configuration information (config.rtt.t),
• general test procedure template specific documentation (config.rttdoc.t), and
• a set of test specification templates (main_pt1.csp.t, ...).

The information is provided in several files and structured into subdirectories as illustrated in Fig. 6.6. The following gives further details.


6.4.1.1 Test Specification Templates

When implementing a test procedure template using CSP test specifications to be executed by the RT-Tester test system, it is possible to distribute test control, test checking, and environment simulation to different CSP test specification templates. After test instantiation, the RT-Tester can then execute them in separate abstract machines according to the RT-Tester configuration (see Sect. 5.4.2.1 and Sect. 6.4.1.2).

Different approaches are possible for distributing the different tasks into separate test specifications which mainly depend on (a) the requirements of the test cases to be implemented and (b) the limitations imposed by the test system and the specification formalism. For the chosen test system RT-Tester and CSP as its (formal) specification technique, the respective limitations are discussed in Sect. 6.2 when describing the test execution environment. The approach chosen for implementing test procedure templates for bare module testing is best described by the following guiding principles:

• Commands which concern the test application processes of different partitions are specified in separate test specification templates. The respective file names contain a reference to the partition index; for example, main_ptn.csp.t contains the test specification template for partition n.

• If simultaneous commands for different partitions have to be implemented, the test specifications use test control events to synchronize (see the sketch after this list). The respective CSP channels are added to those defined in the test execution environment.

• If it is required to check and compare the results of different partitions, module-level test checking specifications can be introduced.

• If enforced by test system limitations or to distinguish different roles, the test specification of a partition can be split into separate test specifications, for example, into a test control and a test checking specification.

• For test control and checking within the same test specification, the tasks can be separated using the means provided by CSP (e.g., using the interleaving or parallel operator). However, for most test cases, it is sufficient to alternate test control and test checking activities, as is easily possible using the CSP macros defined in the test execution environment.

• Test specifications which provide environment simulation are usually defined in separate test specifications, thereby also allowing their reuse.

• Test specification templates abstract from concrete module configuration data and instead use symbolic names to refer to the configuration values. The symbolic names are defined in Sect. 6.3.2 which describes how the configuration data parser extracts the test relevant configuration data from the configuration tables to provide it in a format usable by the CSP test specifications (e.g., as constants, sequences, sets, etc.).
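As an illustration of the second guiding principle, the following minimal CSP sketch shows how two partition test specifications could synchronize on an added test control event before issuing simultaneous commands. The channel name sync_start and the process names are assumptions for illustration, not taken from the actual templates.

-- hypothetical test control channel added to those defined in the
-- test execution environment
channel sync_start

-- in main_pt1.csp.t: engage in the common event, then command partition 1
MAIN_PT1 = sync_start -> SKIP  -- SKIP stands for the partition 1 commands

-- in main_pt2.csp.t: synchronize on the same event, then command partition 2
MAIN_PT2 = sync_start -> SKIP  -- SKIP stands for the partition 2 commands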

Structure of a CSP Test Specification Template. A test specification template consists of two parts: a declaration part and a process definition part. In the declaration part, the interface of the test specification is defined, i.e., the CSP types, input and output channels, macros, and timer variables. Types, channels, and macros for general usage are provided by the test execution environment and can be included here as required. Test procedure specific types and channels can be added for internal test control (e.g., for synchronization) or structuring and reuse (e.g., for the test procedure specific macros). In the process definition part, the top-level process of the test specification as well as test procedure specific macro processes are specified.
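A minimal sketch of this two-part structure is given below. The include mechanism and the file name IMA_API_types.csp are assumptions for illustration, whereas IMA_API_macros.csp and the configuration data extracts are the files referred to elsewhere in this chapter.

-- declaration part: interface of the test specification
include "IMA_API_types.csp"     -- general types and channels (assumed name)
include "IMA_API_macros.csp"    -- general macros
include "IMA_Conf_Globals.csp"  -- configuration data extracts
include "IMA_Conf_PT1.csp"

channel local_sync              -- test procedure specific channel (assumed)

-- process definition part: macro processes and the top-level process
DO_CHECKS = local_sync -> SKIP  -- placeholder for the actual test behavior
MAIN      = DO_CHECKS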


Examples of test specification templates are provided in Appendix D.1.1.1 (for TO_PART_003/test_procedure_template_03) and Appendix D.2.1.1 (for TO_PARCOM_002/test_procedure_template_01), which are further investigated in Sections 6.4.2 and 6.4.3, respectively.

6.4.1.2 IMA Module Configuration Requirements

A test procedure template uses abstract references to the configuration data to allow the instantiation for different IMA module configurations. However, the test cases combined in the test procedure may require specific configuration properties, for example, with respect to the number of configured partitions, the number and/or types of configured ports, partition scheduling, partition memory for code and data areas, etc. Depending on the test cases, these requirements can be very specific (e.g., "partition 1 has four AFDX source queuing ports with the following parameter values, ...") or quite general (e.g., "at least four ports are defined for each partition"). To avoid developing a 'configuration requirement definition syntax' which could be used to determine automatically which test configurations are usable, each test procedure template provides a textual description of the configuration requirements and additionally lists some appropriate test configurations.

Examples of configuration requirements and lists of possible test configurations are included in Appendix D (Appendices D.1.1.2 and D.2.1.2).
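For instance, the possible_configs file of TO_PART_003/test_procedure_template_03 could simply list the two configurations suggested for it (see Sect. 6.5.2); the one-name-per-line format shown here is an assumption:

Config0040_1
Config0040_2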

6.4.1.3 Test System Configuration Requirements

For executing a test procedure using the RT-Tester, it is necessary to provide an RT-Tester test configuration file which contains the information required for test control and test execution: it defines the abstract machines (which are execution containers for the test specifications), their timer values, the required interface modules, the assignment of abstract machines and interface modules to cluster nodes, and the location of all files (test specifications, configurations, log files, etc.). For test procedure templates, only the test procedure specific information has to be provided: (1) general configuration data including path information for the instantiated test procedure and optionally a test duration, (2) information about the abstract machines including their timer values, and (3) the required interface modules and their parameters. During instantiation, the configuration template of the test procedure template is used to generate the RT-Tester configuration file which is required for executing the test procedure (i.e., the respective instance of the test procedure template).

Examples of RT-Tester configuration templates are included in Appendices D.1.1.3 and D.2.1.3. The configuration templates partially use a syntax similar to that of the RT-Tester configuration file. Nevertheless, they cannot be used by the RT-Tester without prior instantiation for a specific test configuration.

6.4.1.4 Test Procedure Documentation Template

For generating test procedure documentation as well as test execution result documents, the test procedure template also contains a test procedure documentation template. Based on the design of the test procedure and its test cases, the document provides the configuration-independent information which applies to all possible instances. When instantiating the test procedure template, the documentation file for each test procedure instance may need to be extended with configuration-dependent information. After executing the test procedure, the instance's document is linked to the respective test result files (i.e., the execution logs and the test verdict).

As the test procedure documentation template file only contains human-readable descriptions intended for inclusion in the generated procedure description and test result documents, no example for this file type is provided.


6.4.2 Example for Partitioning Testing

The aim of partitioning testing is to verify the partition segregation with respect to memory, scheduling, operational modes, and communication. In particular, this includes the following test objectives:

TO_PART_001 Check that a partition's memory cannot be accessed by any other partition, neither for reading nor for writing.

TO_PART_002 Check that the access to specific parts of the partition's own memory is further restricted in user mode (e.g., to code areas or partition configuration memory).

TO_PART_003 Check that a partition's failure resulting in a partition reset or shutdown does not affect other partitions and their operating mode.

TO_PART_004 Check that scheduling of the partitions is deterministic and not affected by resets or shutdowns of single partitions.

TO_PART_005 Check that updating a partition's configuration table or its code does not affect the configuration, code area, and other memory areas of any other partition. This check is also related to the feature categories "data loading testing" and "configuration testing".

TO_PART_006 Check that a partition can only access its own communication ports and intra-partition communication means (e.g., buffers) and that communication of one partition does not affect the communication possibilities of the other partitions. This check is also related to the feature categories "inter-partition communication testing" and "intra-partition communication testing".

This subsection further addresses test procedure template TO_PART_003/test_procedure_template_03 – one of the test procedure templates for testing test objective TO_PART_003 – which verifies that resetting a partition does not affect the other partitions and their scheduling, communication behavior, or operational mode. This example test procedure has been chosen to be introduced in more detail for two reasons: firstly, its test specification templates are good examples (a) to describe the use of the channels and macros provided by the test execution environment for normal API handling as well as for communication flow and (b) to show the abstraction from specific configuration data. Secondly, it uses a predefined scenario which is executed by the test application.

General Testing Approach of test_procedure_template_03. For testing the segregation of a module's partitions, each partition performs a predefined behavior which makes it possible to verify that communication behavior, partition scheduling, and intra-partition scheduling are steady and uniform and not affected if another partition is reset. A so-called communication flow scenario is specified to achieve a predefined and uniform behavior in each partition. For showing the uniformity, a message is sent at a fixed frequency to the partition, which relays the message internally as fast as possible before sending it back to the test specification. A time stamp is added to the message by each receiver and thus also makes it possible to measure and compare the internal behavior. The partition-internal routing is defined by the message itself by specifying a sequence of output ports and buffers accessible by the respective partition. For the single-partition communication flow scenario used in this test procedure template, the communication flow for one partition is illustrated in Fig. 6.7: the test control specification sends a communication flow message via AFDX to process 1 which executes the test application internal scenario for communication flow and listens on queuing port QP1. It then relays the message to the next receiver which can be reached via buffer BUF1. TA process 2 also executes the same TA-internal scenario and listens on buffer BUF1. When receiving a message from the buffer, it immediately forwards the message to the next output port (buffer BUF2). TA process 3 listens on this buffer and, eventually, sends the message back to the test specification using the specified output port QP2. The test specification triggers the same communication flow message (but with increased sequence identifier) at a fixed frequency. If the partition's behavior is independent of the other partitions, each communication flow starting and ending at the test specification should thus take the same time to complete.

[Figure: the test engine (running the test control specification / test checker specification / simulator on top of the communication control layer and an AFDX interface module) communicates with test application 1 on the IMA module via the command port, the result port, and the queuing ports QP1 and QP2; within the partition, the error handler and TA processes 1-3 relay the message via the buffers BUF1 and BUF2, each executing the TA-internal scenario for communication flow. The depicted communication flow message contains: message size = 512, message seq id = 1, number of receivers = 3, next receiver = 3, receiver[0] = BUF1, receiver[1] = BUF2, receiver[2] = QP2, time stamps time[0] = t1, time[1] = t2, time[2] = t3, the message payload, and a CRC.]

Figure 6.7: Communication flow scenario for one partition

The communication flow scenario is executed independently but simultaneously by all four configured partitions as depicted by Fig. 6.8. Each partition gets its own communication flow messages which are then relayed partition-internally as described above before sending them back to the test specification.

Communication Flow Message. The communication flow message is structured such that it is possible to easily determine the next receiver but also to check the correct message transfer and the uniform message transmission (when evaluating a sequence of returned messages). Furthermore, the structure supports a variable message size which makes it possible to perform the communication flow scenario with different (each time constant) message sizes. The structure of the communication flow message is illustrated in Fig. 6.7. The figure shows that the message contains an array of output ports (receiver[]) which is defined by the test control specification. When receiving a message, the receiver can determine the next receiver (receiver[next_receiver]). It can also check the correct message transfer by analyzing the message's CRC (field CRC) and by verifying that the message sequence identifier (message_seq_id) has been increased by one (with respect to the last received message). To allow offline evaluation of the message transmission times, each receiver adds a time stamp when relaying the message (array time[]). The content of the message, i.e., the message sequence identifier, the number of receivers and their sequence as well as the message payload, is determined by the test control specification.
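For illustration, the offline evaluation of the uniformity could compute the partition-internal relay time from the time stamp array of each returned message. The following is a minimal CSP sketch, assuming the time stamps of one message are available as a non-empty sequence of numbers; the function names are hypothetical.

-- last element of a non-empty sequence (standard recursive definition)
last(s) = if #s == 1 then head(s) else last(tail(s))

-- partition-internal relay time of one returned message: difference
-- between the last and the first recorded time stamp
relay_time(times) = last(times) - head(times)

-- e.g., relay_time(< 10, 12, 15 >) = 5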


[Figure: the same setup as in Fig. 6.7, but with four test applications running in parallel on the IMA module – each consisting of an error handler (EH) and TA processes TAP1-TAP3 – and each exchanging its own communication flow messages with the test engine via AFDX.]

Figure 6.8: Independent communication flow scenarios for four partitions

Test Application Scenario. The TA-internal scenario for communication flow is executed by each of the four TA processes. When starting the scenario in a TA process, the input parameters define the set of ports or buffers that the respective TA process shall listen on. Whenever the TA process then receives a message on one of these ports, it checks the message's CRC and sequence identifier, adds the reception time, increments the reference to the next output port, and calculates the new CRC. It then sends the message via the output port indicated in the message to the next receiver. A special feature of this scenario is that it performs the aforementioned behavior when receiving a communication flow message but still allows other commands to be received, e.g., a command for resetting the partition.
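Although the scenario itself is part of the C test application, its externally visible relay behavior can be sketched abstractly in CSP. The data type, channels, and process below are illustrative assumptions; message contents, CRC handling, and time stamping are abstracted away.

-- port names as in Fig. 6.7
datatype PortId = QP1 | QP2 | BUF1 | BUF2

channel recv, send : PortId    -- receive/forward a communication flow message
channel command                -- any other command, e.g., a partition reset

-- listen on a set of ports; forward each received message to the output
-- port indicated in the message (abstracted here as the parameter out),
-- while still accepting other commands
TA_RELAY(ports, out) =
    ([] p : ports @ recv.p -> send.out -> TA_RELAY(ports, out))
    [] command -> TA_RELAY(ports, out)

In this abstraction, TA process 2 of Fig. 6.7 would behave like TA_RELAY({BUF1}, BUF2).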

Test Specification Templates. The test procedure template considers four test application partitions and thus four test specification templates (main_pt1.csp.t – main_pt4.csp.t) are provided – each controlling and checking the behavior of one partition. The test specification templates use a common set of test procedure-specific CSP macro processes which are defined in aux_procs.csp.t. The complete files are included in Appendix D.1.1.1, but selected excerpts are shown in the following to demonstrate the use of CSP macros defined in the test execution environment and to illustrate the abstraction from specific configuration data.

Example for Using CSP Macros. CSP macro processes are defined, on the one hand, to abstract from the sequence of CSP events to be generated for triggering a command and, on the other hand, to simplify commanding API calls with standard parameters. An extensive set of CSP macros for various purposes is defined in the test execution environment and used by all test specification templates. For example, in TO_PART_003/test_procedure_template_03, three test application processes (and an error handler to be aware of any error) are required for each partition. It is sufficient to use standard aperiodic processes whose parameters are compliant with the module and partition configuration requirements. The respective process parameters to create standard aperiodic or periodic processes are defined in the partition-specific configuration data extract (IMA_Conf_PTx.csp). CSP macros for creating standard test application processes are contained in IMA_API_macros.csp.

The following example is an excerpt of the locally defined CSP macro process PART_INIT defined in aux_procs.csp.t which is called by each test specification template main_ptx.csp.t to initialize the partition x for the test procedure's purpose. This sequence of CSP macro calls commands that the main process (addressed by toPRmain) creates an error handler and three standard aperiodic TA processes.

-- initialize the partition: create processes, error handler, buffers, and
-- queuing ports; after switching to normal mode, start communication flow
-- scenario in each started process
PART_INIT(source_port_idx, dest_port_idx) =
(
  -- create standard error handler
  CREATE_STANDARD_ERROR_HANDLER(toPRmain) ;
  -- create three standard aperiodic processes
  CREATE_STANDARD_APERIODIC_TA(toPRmain, PR1) ;
  CREATE_STANDARD_APERIODIC_TA(toPRmain, PR2) ;
  CREATE_STANDARD_APERIODIC_TA(toPRmain, PR3) ;
  ...

Example for Abstracting from Configuration Data. When continuing the partition initialization, it is also necessary to create the queuing ports for receiving the communication flow messages from the test specification (called QP1 in Fig. 6.7) and for sending it back (QP2). For the queuing ports, no standard parameter values can be provided as for the process parameters. For the communication flow scenario, the ports' parameters only have to comply with certain minimum requirements but can otherwise be chosen freely from the set of configured ports. The minimum requirements are determined by the test design and define the communication flow message size (COM_FLOW_MSG_SIZE) and the minimum queue length (COM_FLOW_MAX_MSG_NB), but the ports selected for the communication flow scenario can also support bigger messages or longer queues. To abstract from concrete configuration data and to automate the port selection, the test specification templates use CSP macro functions: (1) for selecting appropriate destination and source ports and (2) for retrieving from the configuration the port indices and any other parameters required for port creation.

For example, the following excerpt from main_ptx.csp.t and aux_procs.csp.t illustrates extracting the parameters for an appropriate source port (QP2) from the configuration extract. For finding the port index of a matching port, the CSP function get_port_index is used and, for getting the parameters for this port, the CSP function get_matching_elem is applied. The queuing port is then created by calling the CSP macro check_CREATE_QUEUING_PORT with these parameters. The CSP macro function get_port_index is defined at the end of aux_procs.csp.t (see Appendix D.1.1.1) whereas get_matching_elem is specified in IMA_API_macros.csp (see Appendix B.3.1). The port index and parameters of the destination port (QP1) are retrieved similarly (but not shown in the example).

let
  -- get the index of a source queuing port which supports the requested
  -- message size (COM_FLOW_MSG_SIZE) and queue length (COM_FLOW_MAX_MSG_NB)
  -- from sequence IMA_Conf_SEQ_AFDX_QP_OUT
  source_port_idx = get_port_index(COM_FLOW_MSG_SIZE,
                                   COM_FLOW_MAX_MSG_NB,
                                   port_QUEUING_PORT,
                                   dir_SOURCE)
  -- get the maximum message size for the given source queuing port index
  msg_size_src = get_matching_elem(source_port_idx,
                                   IMA_Conf_SEQ_QUEUING_PORT_INDICES,
                                   IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE)
  -- get the maximum message number for the given source queuing port index
  msg_nb_src = get_matching_elem(source_port_idx,
                                 IMA_Conf_SEQ_QUEUING_PORT_INDICES,
                                 IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG)
within
  -- create the respective source queuing port and check that the
  -- creation was successful
  check_CREATE_QUEUING_PORT(toPRmain,
                            source_port_idx,
                            msg_size_src,
                            msg_nb_src,
                            dir_SOURCE,
                            qd_FIFO,
                            ret_NO_ERROR) ;

6.4.3 Example for Intra-Partition Communication Testing

The second test procedure template to be discussed in the following contributes to intra-partition communication testing. Intra-partition communication testing focuses on the means for partition-internal communication like buffers, blackboards, semaphores, and events. It includes the following test objectives:

TO_PARCOM_001 Check that the maximum number of buffers can be created by the main process but not more; other processes cannot create buffers.

TO_PARCOM_002 Check that message exchange through buffers is possible by all processes of a partition and that no messages are lost, corrupted, or delivered in wrong sequence.

TO_PARCOM_003 Check that the maximum number of blackboards can be created by the main process but not more; other processes cannot create blackboards.

TO_PARCOM_004 Check that message exchange through blackboards is possible by all processes of a partition and that the sequence of messages remains unchanged.

TO_PARCOM_005 Check that the maximum number of events can be created by the main process but not more; other processes cannot create events.

TO_PARCOM_006 Check that process synchronization using events is possible for all processes of a partition and without losing any event.

TO_PARCOM_007 Check that the maximum number of semaphores can be created by the main process but not more; other processes cannot create semaphores.

TO_PARCOM_008 Check that signaling and locking using semaphores is possible by all processes of a partition.

In this subsection, test procedure template TO_PARCOM_002/test_procedure_template_01 is considered in more detail. It checks that it is possible to write and then read the maximum number of messages without corrupting the messages or their sequence. It further tests that it is not possible to write more messages than allowed (overflow checking) or read more messages than have been written (underflow checking). The tests are performed with different maximum message sizes and different maximum queue lengths.

This test procedure template has been selected for more detailed investigation in this subsection because its test specification templates exploit the specification possibilities of CSP and the RT-Tester – at least, more than many other test specification templates which only alternate test control and test evaluation. However, these sequential and relatively simple test specifications are mostly a consequence of the design specifications for test cases and test procedures being given as (structured) text.

General Testing Approach of test_procedure_template_01. For testing the functionality of buffers with respect to message corruption, sequencing, overflow, and underflow, a buffer is created and then filled and emptied several times while performing several checks: firstly, to verify that the message is not corrupted, each message contains a CRC which is checked. Secondly, to check the correct sequencing of the messages, each message contains a sequence number which is incremented with each write. Thirdly, to ensure that overflow and underflow of the buffer cannot occur, it is also tested that it is not possible to write one more message when filling the buffer or read one more message when emptying it. The tests are performed for different combinations of buffer parameter values and, since these parameters can only be set during buffer creation, it is necessary to repeat the tests for each different parameter combination.

To exploit the possibilities of CSP and the RT-Tester, the parameter combinations are not specified explicitly but by specifying a set of possible test values for each parameter. These sets contain only parameter values permitted by the ARINC 653 specification but have been manually restricted to the interesting test values. This focuses the tests by reducing the number of possible parameter combinations and also helps to cope with the limitations of the test system.
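A sketch of such restricted value sets is shown below, using the set names from the test specification excerpt that follows; the concrete values are assumptions for illustration, not those of the actual test design.

-- hypothetical restricted parameter value sets for buffer creation
buf1_max_msg_size_set = { 1, 64, 8192 }   -- maximum message sizes to test
buf1_max_nb_msg       = { 1, 2, 32 }      -- maximum queue lengths to test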

The tests are performed by one partition of the module under test which runs one test application process to write the messages and another one to read them. The test is controlled and checked by a single test specification template (main_pt1.csp).

Test Specification Template. The test specification template main_pt1.csp specified for TO_PARCOM_002/test_procedure_template_01 uses the CSP macros defined in the test execution environment, for example, check_CREATE_BUFFER for creating a buffer, which triggers the command to create a buffer and then automatically compares the return code with the expected return code. In the following, two excerpts of the test specification template are described in more detail: one showing how the buffer parameters can be selected with CSP means and another one introducing the CSP macro process to write into a buffer until it is full. The complete test specification is provided in Appendix D.2.1.

Example for Selecting Buffer Parameters with CSP. For selecting one value from a set of possible values, CSP provides the replicated internal choice operator. By nesting several of these choice operators, a combination of values can be selected. The following excerpt from the CSP test specification shows how the combination of buffer parameters is selected and subsequently used to create a buffer with these parameters. After switching to normal operating mode, the newly created buffer is filled and emptied until the timer elapses. Writing to and reading from the buffer is encapsulated in the CSP process WRITE_READ_BUF.

-- select a possible combination of buffer parameters from the given sets
|~| buf_size : buf1_max_msg_size_set @
|~| buf_nb_msg : buf1_max_nb_msg @
  -- create buffer with these parameters
  (check_CREATE_BUFFER(toPRmain, buf1, buf_size, buf_nb_msg,
                       qd_FIFO, ret_NO_ERROR);
   -- switch to NORMAL mode
   SET_PARTITION_MODE(toPRmain, op_NORMAL);
   WAIT(TM_WAIT);
   -- start timer for restarting the test after a random time
   -- (after restart, a new parameter combination is selected)
   setTimer.TM_RESTART ->
   -- write to and read from currently empty buffer
   WRITE_READ_BUF(buf1, buf_size, buf_nb_msg, false)
  )

Example for Using a CSP Macro to Fill a Buffer. Writing into a buffer until it is full depends on the maximum number of messages determined when creating the buffer. A CSP macro to fill a buffer thus has to cope with different buffer parameters by repeatedly writing a message into the buffer until the current buffer's queue is full. For looping, CSP supports recursion of parameterized processes. The following excerpt shows the recursive CSP process WRITE_ITEMS which repeatedly writes a new message with updated message parameters into the buffer. To check that it is not possible to write more messages than allowed by the defined queue length, it is then tried to write one more message. For this check of the API call SEND_BUFFER, the timeout parameter (zero or non-zero) defines which error return code is expected in this situation. The CSP process READ_ITEMS for emptying the buffer, i.e., for repeatedly reading a message until the buffer is empty, is similar in its structure and therefore not shown here.

-- write to buffer buf until it is full and then try to write one more message
-- (recursive function)
WRITE_ITEMS(buf, buf_size, buf_nb_msg, curr_msg, wr_seq) =
  ((curr_msg < buf_nb_msg) & ( -- buffer is not yet full
    -- write message into buffer
    check_SEND_BUFFER(toPR1, buf, buf_size,
                      wr_seq, 10000,
                      ret_NO_ERROR);
    -- continue recursively
    WRITE_ITEMS(buf, buf_size, buf_nb_msg,
                curr_msg + 1,
                ((wr_seq) % max_seq_num) + 1)))
  []
  ((curr_msg >= buf_nb_msg) & ( -- buffer is full
    -- try to write another message without time_out
    check_SEND_BUFFER(toPR1, buf, buf_size,
                      wr_seq, 0,
                      ret_NOT_AVAILABLE);
    -- stop recursion
    SKIP))
  []
  ((curr_msg >= buf_nb_msg) & ( -- buffer is full
    -- try to write another message with short time_out
    check_SEND_BUFFER(toPR1, buf, buf_size,
                      wr_seq, 10,
                      ret_TIMED_OUT);
    -- stop recursion
    SKIP))


6.5 Test Preparation

Before a test procedure can be executed, the module under test and the test system have to be prepared. For the test suite introduced in this chapter, this comprises:

1. Test Procedure Selection, i.e., choosing a test procedure template defined in the test specification template library and determining which of the possible configurations shall be used.

2. Data Loading, i.e., generating a data load containing the test applications and the configuration data of the selected configuration and loading it onto the IMA module.

3. Test Instantiation, i.e., instantiating the chosen test procedure template using the configuration data extracts for the selected configuration in order to generate an executable test procedure.

Test procedure selection is performed manually by the tester or in an automated way by a test management system; the selection process is outside the scope of this thesis. All other tasks have to be performed before executing the selected test procedure to ensure that (a) the IMA module is configured as expected by the test specifications and (b) the test application software is running in each configured partition. However, load generation and data loading are not necessary if the IMA module is already configured correctly, e.g., when the previous test procedures already used the same configuration. This is possible because a generic test application is used for all tests. Therefore, these tasks can be considered separately in the following sections: Section 6.5.1 addresses the load generation environment, and Sect. 6.5.2 describes the test instantiation environment.

6.5.1 Load Generation Environment

The load generation and data loading environment is described in two parts: at first, the tasks and the abstract tool chain are addressed in the following subsection. Thereafter, the concrete tool chain and its applicability for automated testing are discussed.

6.5.1.1 Tool Chain for Data Loading

The data loading environment consists of a set of tools which – when applied in the correct sequence – generate a so-called data load which can be uploaded to the respective memory areas in the IMA module using a dedicated data loader. The inputs to the tool chain are (a) the configuration tables as described in Sect. 3.2.1 and Sect. 3.2.2, (b) the source code of all applications that are to run in the IMA module, and (c) the module memory layout which provides hardware information. The tool chain is depicted in Fig. 6.9. The respective tools are described in the following paragraphs, starting at the end of the tool chain.

Data loader. The configuration tables and the test application code are loaded into the respective IMA module's memory areas using a specific data loader which allows updating application software or configuration data while the IMA module is already installed on board the aircraft. Of course, data loading is only allowed and possible when the aircraft is on the ground.

To allow for independent development of different but interchangeable data loaders, the input format of such data loaders is standardized as ARINC 665, and the data loading protocol is standardized as ARINC 615A. Usually, the data loader provides three operations: uploading of data to the IMA module (e.g., uploading application software), downloading of data from the target (e.g., downloading maintenance data), and a possibility to obtain configuration information. For test preparation, only uploading of data is required. The other operations could be used during offline test evaluation.


[Figure: the module memory layout, the application source code (*.c), and the module and partition configuration data (*.csv) enter the tool chain; the configuration table transformer produces a C representation of the configuration tables, the compiler produces the object code, and a script extracts the admin file; the linker and load generator combines these into an ARINC 665 compliant load which the data loader uploads to the configuration tables and application memory areas (Application 1, Application 2) of the IMA module.]

Figure 6.9: Tool chain for load generation and data loading

ARINC 665 Compliant Data Load. An ARINC 665 compliant data load is composed of header files and one or more data files which contain executable code, configuration tables, and other data (e.g., databases required for certain applications). In addition, the data loads contain CRCs to check the correct transmission from the computer running the data loader to the IMA module. The main characteristic of the ARINC 665 data load format is the possibility to contain only the configuration data – or only the executable code of a subset of applications. Thus, software updates of single applications are possible without changing the configuration.

Load Generation. The data load is generated using the object code of the application software and/or the configuration tables. Specific administration files can provide additional information from the configuration data, the source code, or the module memory layout which can be used by the linker and load generator as required. A specific script extracts this administration information from the input files.

Generating the Object Code. The object code is the output of the compilation process of the application source code and/or a C representation of the configuration tables. The latter is generated by a configuration table transformer which provides a C representation of the configuration data. Note that this transformer should not be confused with the configuration data generator used for generating the configuration tables based on the configuration rules in the configuration library (see Sect. 6.3.1).

Tool Chain. Summarizing, the tools identified for generating a data load and uploading it are (listed according to tool invocation sequence): a configuration table transformer, a compiler, a tool/script for generating the administration file, a linker and load generator, and a data loader. They are depicted in Fig. 6.9 in beige-grey. In contrast, the input data as well as the intermediate data are represented in white.

Embedding the Load Generation Environment into the Test Suite Environment. When applying the tool chain described above for the IMA test suite, the input data are (a) the generated module and partition configuration data in CSV format as generated by the rule-based configuration data generator and (b) the source code of the test application. Since all test procedures use the generic test application, it is possible and advisable to prepare all data loads in advance in order to avoid time-consuming data load generation before each test execution. To generate all data loads, the configuration data generator generates all configurations which are used by at least one test procedure. Each generated configuration together with a copy of the generic TA's source code for each configured partition is then transformed into a data load. Figure 6.10 shows how the load generation environment is embedded into the test suite environment.

[Figure: the configuration data generator derives the module and partition configuration data for Config0001 ... ConfigXXXX from the rules in the IMA configuration library; each generated configuration (*.csv), together with the test application source code (*.c) and the module memory layout, passes through the tool chain of Fig. 6.9 (configuration table transformer, compiler, script, linker and load generator) to an ARINC 665 compliant load which the data loader uploads to the IMA module connected to the test engine.]

Figure 6.10: Tool chain embedded into the test suite environment

6.5.1.2 Concrete Tool Chain

For the implementation of the test suite, the required tools for the tool chain were predetermined by the project management (see [Air04], [TD04b], [TD04a], among others). The concrete tools are briefly introduced in the following and the respective tool chain is depicted in Fig. 6.11.

Configuration Table Transformer. For transforming the configuration data into a C representation, the COTAGE tool is used which is developed by Thales Avionics (see [Tha04a]). In addition to transforming the data, COTAGE adds target-related additional configuration entries. It also provides a graphical user interface for displaying and defining the module- and partition-level configuration tables; this feature is not used in our environment.

Compiler and Linker. A specific compiler is used which is certified according to avionics standards, as well as a separate linker which generates S-record files as input for the data loader.

Load Generator. In contrast to the tool chain described in Sect. 6.5.1.1, the linker and the load generator are separate tools. For generating the ARINC 665 compliant load, LODGE is used which is developed by Diehl Avionik Systeme.

Scripts for Generating Administrative Data. The administrative data for the linker (i.e., the linker definition file) are extracted from the application source code and the module memory layout. The administrative data for the load generator LODGE are obtained from the configuration data and the module memory layout.


Data Loader. For loading the generated loads, the tool BETSI is used which is provided by Diehl Avionik Systeme. It implements the ARINC 615A protocol for communication with the IMA module and provides a graphical user interface for selecting the data load.

[Figure: the concrete tool chain: the configuration data generator produces the module and partition configuration data (*.csv) for Config0001 ... ConfigXXXX from the IMA configuration library (automated generation of data loads); COTAGE (Thales) transforms them into configuration tables (*.c) which, together with the test application source code (*.c), are compiled and linked – using a script-generated linker definition file derived from the module memory layout – into S-record files; LODGE (Diehl) packs these, guided by a script-generated LODGE admin file, into an ARINC 665 compliant load; BETSI (Diehl) uploads the load to the IMA module connected to the test engine (manual interaction for data loading).]

Figure 6.11: Load generation tool chain embedded into the test suite environment

The tool chain used provides the development environment to be used by application developers for testing their applications on an IMA module or in a simulation environment.

However, the tool chain is not designed for automated load generation and automated data loading which is required for executing test procedures with different configurations (or even different applications) without manual interaction. Two main problems arise if this tool chain shall be used for automated testing: firstly, the tools are designed for manual interaction and thus are GUI oriented. Nevertheless, most tools provide a command-line interface. Secondly, the tools use different operating systems, i.e., some require Microsoft Windows and others require Solaris. For all Microsoft Windows-based tools with a command-line interface, a possible solution is to embed the tools into a special environment running on the test system which triggers specific batch files on the Windows computer.

As already depicted in Fig. 6.11, all tools in the concrete tool chain except BETSI provide a command-line interface and are thus embedded into the automated load generation process. In contrast, for uploading the generated loads, manual interaction by the tester is required using the graphical user interface of BETSI. The consequences of this limitation are serious because it prevents automated test execution of a batch of test procedures if different configurations are required and thus hampers one of the key objectives of this test suite. To reduce the impact of this limitation and to avoid manual interaction before each test procedure, the test procedures can be grouped according to their used configuration; then all test procedures using the same configuration can be executed one after the other. Thus, manual interaction is required only before each batch of test procedures.


References. The concrete tool chain as used in the VICTORIA project is introduced in [TD04b], [TD04a], and [Air04], among others.

6.5.2 Test Instantiation Environment

The IMA test specification template library contains test procedure templates grouped according to their test objective (see Sect. 6.4). Each test procedure template contains a set of test specification templates (*.csp.t), IMA module configuration requirements and a list of compliant test configurations (possible_configs), an RT-Tester configuration template (config.rtt.t), and a test procedure documentation template (config.rttdoc.t). All test specification templates are independent of a specific configuration, i.e., they are implemented to be executed with different matching configurations. For example, if the only requirement is to have two partitions, the test can be executed with all configurations defining two partitions (or even with configurations defining more than two partitions). Depending on the module configuration requirements, many configurations could be compliant and, therefore, a manual preselection is done for each test procedure in the file possible_configs. For example, for test procedure template TO_PART_003/test_procedure_template_03, two configurations are suggested: Config0040_1 and Config0040_2 (see Appendix D.1.1.2).

Test procedure templates cannot be executed as such but have to be instantiated for one of the possible configurations using the test instantiation tool. Thereby, the tester selecting the test procedures to be executed can select a subset or all of the preselected possible configurations. During the test instantiation for a specific configuration, a new test procedure directory is created whose name is composed of the test procedure name and the configuration name. For example, the test procedure instantiated from template test_procedure_template_03 for configuration Config0040_1 is called test_procedure_03_Config0040_1. In addition, all template files of the respective test procedure template (i.e., all files with ending *.t) are copied to the new directory while removing the template ending and are then instantiated with the test-relevant configuration data. This is described in more detail in the following and also depicted in Fig. 6.12.
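A minimal sketch of this copy-and-substitute step, assuming the placeholder conventions described below (--CONFIG--, --VAR--) and the naming scheme above; the actual test instantiation tool additionally generates the concrete RT-Tester configuration and documentation files:

import pathlib

def instantiate(template_dir: pathlib.Path, config: str, var: str,
                target_root: pathlib.Path) -> pathlib.Path:
    """Copy all *.t files of a test procedure template into a new directory
    named after procedure and configuration, strip the .t ending, and fill
    in the --CONFIG-- / --VAR-- placeholders."""
    name = template_dir.name.replace("template_", "") + f"_{config}_{var}"
    target = target_root / name
    target.mkdir(parents=True, exist_ok=True)
    for tmpl in template_dir.rglob("*.t"):
        text = tmpl.read_text().replace("--CONFIG--", config)
        text = text.replace("--VAR--", var)
        out = target / tmpl.relative_to(template_dir).with_suffix("")
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(text)
    return target

# instantiate(pathlib.Path("test_procedure_template_03"), "Config0040", "1",
#             pathlib.Path("instantiated"))
# -> creates directory "test_procedure_03_Config0040_1"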

• The test specification templates can be used without modifications because they include the configuration data extracts using the generic names IMA_Conf_Globals.csp and IMA_Conf_PTx.csp (where x is the respective partition index). The paths denoting the location of the include files are defined in the configuration file.

• The configuration template for test execution registers the CSP test specifications and defines the test procedure-specific paths in an abstract form. For example, the name of the new directory is specified in the configuration template as TESTPROCEDURE test_procedure_03_--CONFIG--_--VAR-- where --CONFIG-- is the name of the configuration (e.g., Config0040) and --VAR-- is the respective variant as denoted in the configuration rules files (e.g., 1). However, the configuration template lacks further path information which does not depend on the concrete test procedure, but is required (a) for finding the CSP types, channels, and macros included in the test specifications and (b) for locating the instantiated test procedure files like the test specifications and the test execution logs. In addition, the required interface modules (and their types) are only listed.

This abstract configuration information is not sufficient for test execution and, therefore, the test instantiation tool takes the abstract configuration information and generates an RT-Tester configuration file (config.rtt). This concrete configuration file then specifies all data which are required for test execution with the RT-Tester test system. This means it defines the paths for finding the general include files and the test procedure-specific files and


denotes for each interface module which cluster node provides the required hardware interfaces. The format of the resulting RT-Tester configuration file is described in [RT-Tester 5], p. 39–68.

• For test documentation, the general, configuration-independent details of the test specification behavior and the special data types and channels used by the test specifications are described in the test procedure documentation template config.rttdoc.t, which may also contain configuration-dependent documentation parts for the preselected configurations. During test instantiation, all configuration-independent parts as well as the configuration-dependent parts for the chosen configuration are compiled into a test procedure documentation file which can be used for documenting the test procedure itself.

Documentation of the test execution is not provided because – to yield a test verdict for a specific run – it is required to evaluate the test logs in an automated or manual way after the test execution.

The instantiation of a test procedure template is depicted in Figures 6.12 and 6.13: Figure 6.12 shows the structure and the resulting files when instantiating test procedure template test_procedure_template_03 for one of the possible configurations (here Config0040_1). Figure 6.13 pictures the instantiation of the same test procedure template for all possible configurations.

For a given test suite, it is possible to instantiate all test procedure templates with their preselected possible configurations in an automated way, which results in a test procedure library that is structured like the IMA test specification template library. Such a library is especially interesting for regression testing and avoids having to repeat the time-consuming test instantiation process.


Figure 6.12: Instantiation of test procedure template test_procedure_template_03 for configuration Config0040_1 resulting in test procedure test_procedure_03_Config0040_1.



Figure 6.13: Instantiation of test procedure template test_procedure_template_03 for all possible configurations resulting in test procedures test_procedure_03_Config0040_1 and test_procedure_03_Config0040_2.


6.6 Test Execution and Evaluation

So far in this chapter, the core components for performing functional testing of bare IMA modules have been described. This section aims at summarizing their relation with respect to test execution and test evaluation.

Above, the first two phases of IMA module testing have been discussed extensively: design and implementation (i.e., designing test cases and test procedures, designing and implementing the test bench, and developing the test suite) as well as test preparation (i.e., preparing the test suite for test execution). The design specifications for test cases and test procedures as used in this thesis were given as (structured) text. They were made available by the airframer and, thus, are not addressed in much detail in this thesis. Based on the requirements of the test cases and test procedures, the test bench for IMA module testing has been designed and implemented. Sect. 5.4 has described the test bench and also introduced the test engine (a hard real-time test engine cluster) and the test system (the RT-Tester test tool) used for the case study described in this chapter. The test suite has been structured according to the general test approach (see Sect. 5.2.1) and the structure of the test design specifications. It comprises the following core components which have been described in the previous sections: the generic test application, the test configuration library, the test specification template library, and the test execution environment, the latter of which provides the CSP data types, channels, and macros for test specification development as well as the interface modules for the test system. Preparing the test suite for test execution (i.e., generating test configurations based on the provided configuration rules, instantiating the test procedure templates, generating data loads) can then be performed automatically as described in the preceding section. In addition, all test procedures have to be compiled such that their test specifications can be executed in real time by the test system RT-Tester.

The next steps of IMA module testing are test execution and evaluation, which are both addressed in the following.

Test Execution. After having developed the test bench and completed and prepared the test suite, test execution comprises executing the test procedures using the test bench. For extensive testing, this means that all test procedures have to be executed – in particular, since the number of test procedures has already been reduced by (manually) defining the list of test configurations that can be used to instantiate a test procedure template. However, the sequence of their execution can be determined as follows:

1. All test procedures contributing to the same test objective are grouped and then executed as one batch. Test execution starts with an arbitrarily chosen group. The assessment and grouping can be performed automatically.

2. All test procedures using the same test configuration are grouped independently of their test objective. Test execution starts with an arbitrarily chosen group (i.e., with an arbitrarily chosen test configuration). The assessment and grouping can be performed automatically.

3. All test objectives are (manually) assessed and the associated test procedures are classified and grouped according to the importance / criticality of their test objective. Test execution then starts with the group containing the most important / critical test procedures. The assessment has to be performed manually and the (time) cost depends on the number of test objectives. The grouping can then be performed automatically.

4. All test procedure templates are (manually) assessed and classified according to their importance and criticality. After test procedure instantiation, they are grouped according to these classifiers and then executed (starting with the group containing the most important / critical test procedures). The assessment has to be performed manually and the (time) cost depends on the number of test procedure templates (which is usually higher than the number of test objectives). The grouping can then be performed automatically.

5. All test procedures are (manually) assessed, classified, and grouped according to their importance and criticality and then executed (starting with the group containing the most important / critical test procedures). The assessment has to be performed manually and the (time) cost depends on the number of test procedures (which is usually higher than the number of test procedure templates). The grouping can then be performed automatically.

6. Any combination of the above means; a sketch combining options 2 and 5 is given below.
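As an illustration, the following sketch (with hypothetical procedure names and criticality classes) orders the test procedures by a manually assigned criticality class first and, within each class, groups them by configuration:

# Hypothetical assessment results: (procedure, configuration, criticality),
# where a lower criticality class means "more critical, execute earlier".
procedures = [
    ("test_procedure_03_Config0040_1", "Config0040_1", 1),
    ("test_procedure_05_Config0112_2", "Config0112_2", 2),
    ("test_procedure_07_Config0040_1", "Config0040_1", 1),
]

def execution_order(procs):
    """Most critical classes first; within a class, procedures sharing a
    configuration run back to back to minimize data-loading interactions."""
    return sorted(procs, key=lambda p: (p[2], p[1]))

for name, cfg, criticality in execution_order(procedures):
    print(criticality, cfg, name)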

Executing a specific test procedure requires that (a) the SUT is configured as expected by the test procedure and loaded with the required test applications, and (b) the test system is prepared to start the compiled test procedure. The actual test execution is performed automatically (except for a few test procedures which require manual interaction by a tester).

Preferably, test execution of all test procedures is performed fully automatically by pressing one button and waiting for completion. However, this "one button approach" cannot be used for bare IMA module testing because the tools that are available for data loading do not allow remotely controlled uploading of new configurations or test applications. As a consequence, every preparation of the SUT has to be performed manually. In order to reduce the number of manual interactions, the test procedures can be grouped for test execution according to their configuration requirements (as described above in option 2) and then performed as one batch after manually uploading the data load. Thus, the number of manual interactions can be reduced to the number of different configurations because manual interaction is only required as a preparation for each batch of test procedures – and in cases where single test procedures require configuration changes as part of their test execution.

Test Evaluation. After having executed all test procedures, the test results are determined based on the evaluations that were already performed on-the-fly by the test specifications. The aim of automated testing and, in particular, of this test suite is that (a) (most) test evaluation can be performed automatically and does not require time-consuming manual evaluation of all SUT responses and (b) the overall test result is "passed" if executing all test procedures has not discovered any unexpected behavior. In case of any errors, their cause is corrected and regression testing is performed. Depending on the type of error observed and its cause, it may be possible to limit regression testing to the related test procedures, i.e., to those where the respective error was observed. For example, this is possible for errors that are caused by an erroneous specification document where the IMA module already showed the correct behavior. In such cases, the respective test procedures are corrected and re-executed. However, such decisions have to be made on a case-by-case basis in consultation with the responsible parties and the certification authorities.

6.7 Classification of the Observed Errors

After having discussed the components of the test suite and its test execution and evaluation, a couple of questions remain regarding the effectiveness of the approach and its implementation. These include (but are not limited to): How many errors have been found? What is the ratio of the number of errors to the number of test cases (or the number of test procedures)? How can these errors be classified? How many errors of each type have been found? How many critical errors have been found? Where is the list of observed errors?

Unfortunately, the answers to most of these questions cannot be published in this thesis because the respective information is confidential. This includes, in particular, a listing of the observed errors and their criticality, the number of errors, and a listing of all test procedures (among others) – in short, most of what is interesting at first sight. Nevertheless, it can be stated that – as expected – several errors of different criticality levels were revealed when performing the tests. To stress the importance of testing with respect to other verification methods, it can also be stated that many of the discrepancies between specified and observed behavior could not have been detected by (a) code review of the software or (b) formal verification of (parts of) the operating system or configuration tables.

When asking for the number of observed errors, it also has to be taken into account that there is not a single test execution run for all test procedures which then returns the number of observed discrepancies. On the contrary, testing is an iterative process. For the test suite described in this chapter, its design and development had been started before even early prototypes of IMA modules and the tool chain were available (in order to be ready for testing the first versions). While implementing the test suite, many trial runs were performed on target-based simulators and, later on, on early prototypes of IMA modules and the tool chain to examine the details of the test suite implementation. This includes the implementation of the test specifications and the test procedures, the communication protocol between test applications and test specifications, data loading of generated test configurations, data loading of (test) applications, the tool chain for load generation, etc. Although these "tests" aimed at showing the operation of the test suite and the proper interaction of its components, they also revealed discrepancies that were corrected before the first version was officially released for testing.

The observed errors can be classified as follows:

• errors in specification documents⁵ (e.g., inconsistencies, wrong minimum and maximum values),

• imprecise information in the specification documents (e.g., return codes not defined for each situation),

• errors in the tool chain (e.g., interaction problems of successive tools), and

• errors in the system under test (e.g., the operating system software, the interface drivers, the hardware).

Thereby, the first two types of problems were mostly identified during implementation of the test specifications, when the test expert (responsible for implementing the test procedure) manually compared or analyzed the different documents. In most cases, the error was only observed in the documents while the SUT already behaved as expected (but not as specified). The third type of error was mostly revealed by the trial runs described above and corrected (or avoided) before testing started. The fourth group of errors is the most critical one and can be further decomposed into (a) errors that are likely to restrict normal operation of the applications and (b) errors that usually only occur when testing the upper or lower limit but are less likely to occur in real applications (e.g., the creation of the maximum number of allowed semaphores).

References. Based on the approach for automated bare IMA module testing – as described in this chapter – a test suite for testing configured IMA modules has been developed (see also Sect. 5.2.2). [MP04] discusses this approach and also briefly addresses the types of observed errors.

⁵ The term specification document includes IMA module specifications or user guides, the ARINC 653 API document, and specifications or user guides of tools in the tool chain.


Chapter 7

Approach for Testing a Network of IMA Platforms

This chapter details another test step of the system test approach for IMA-based architectures (see Chap. 5). While the previous chapter has considered an approach for testing a single IMA module with many different configurations, this chapter focuses on network integration testing and thus on the interconnection of a set of IMA modules with their final configurations. By concentrating on the configured communication flow between partitions (either on the same module or on different ones), the test approach described in this chapter can consider a more automated way of test case generation – albeit at the risk of finally having very many test cases, or even too many to be executed in an adequate amount of time. However, the advantages of generating test cases, and possibly also test evaluation specifications, in an automated fashion which are then executed by an appropriate test automation bench are numerous. For example, by using a test case generation algorithm it is much easier to ensure that all necessary test cases for a specific set of test objectives have been considered than by manually designing them. Even the reduction of the generated test cases according to clearly defined rules can be integrated in such an algorithm and allows countering the possibly high number of test cases in a structured way. Another advantage of automated test case generation lies in handling changes in the network configuration, which are often required at quite late stages in the overall development and V&V process: While manual adaptation of the test case design would mean re-designing most of the test cases and thus would most likely delay the test execution and consequently the overall process, re-execution of the test case generation algorithm requires only little extra time; this saving should outweigh the effort which is necessary for developing and implementing the algorithm and appropriate reduction rules.

Generally, the problem of having very many test cases, or test cases which require a very long test execution time, can also occur when manually defining and implementing test cases, i.e., it is not limited to automated test case generation. Often it is accompanied by manual or semi-automated test execution which further increases the execution time and also the test costs. For bare IMA module testing, the previous chapter has described in detail the various means to increase the degree of test automation and to simplify the creation and loading of test configurations (within the tool chain limitations). By spending this effort, the number of implemented test cases that are executable in an acceptable amount of time was increased noticeably. However, it was still necessary to manually reduce the number of test cases (with respect to the initial test design) to ensure that all interesting test cases can be implemented and executed in the given test preparation and test execution time.

In contrast to bare IMA module testing, the basic conditions for test automation and for reducing the number of test cases are different for testing a network of IMA modules: Firstly, only one network configuration using final IMA module configurations is considered instead of various combinations of test configurations, which means that manual interactions for configuring the IMA modules are only required in the test preparation phase (i.e., once for each network under test) and do not interrupt the test execution. Secondly, the set of test objectives (see Sect. 5.3.1) is less diverse and focuses on inter-partition communication aspects, which allows one test case generation algorithm to generate all test cases. Thirdly, combining the configurations of different modules requires extensive and time-consuming manual analysis in order to distinguish possible and impossible interactions to be used in test cases. In such cases, the expected savings from an appropriate test case generation algorithm can outweigh the effort required for its development. Summarizing, the conditions for applying an automated test case generation algorithm are very promising. However, the task of selecting the interesting test cases out of the enormous number of test cases remains as important but also as difficult as before, particularly since it can hardly be done manually for generated test cases.

This chapter aims at (1) describing generally the approach for testing the communication flow in a network of configured IMA modules, (2) introducing in detail an algorithm for test data generation, and (3) suggesting means for focusing the test range in an automated way in order to reduce the number of test cases but also to sort them according to well-defined rules. The remainder of this chapter first provides a rough overview of the test approach and addresses the characteristics and the representation of the test cases (also called communication schedules), illustrated by manually generating an example communication schedule (Sect. 7.1). Section 7.2 then introduces the algorithm and also considers means for influencing and handling the generated communication schedules in order to focus testing. Unlike in the previous chapter, it is outside the scope of this chapter to provide a complete suite for communication flow testing. Nevertheless, some implementation aspects such as characteristics of the test applications and test specifications as well as approaches for test evaluation are considered in Sect. 7.3. For assessing the results of the automated test case generation, the characteristics of the algorithm are then investigated in Sect. 7.4 using the results of a prototype implementation. The chapter concludes in Sect. 7.5 by discussing the future directions of the algorithm and the test approach.

References. The first ideas of the approach for communication flow testing in a network of configured IMA modules – as it is discussed in the following – have been presented in [Tsi04]. The underlying strategy for testing a network of IMA platforms (see Chap. 5) follows the general approach described in [Pel03].

7.1 Communication Schedules

When testing a network of IMA platforms for evaluating the inter-partition communication flow, configured IMA modules are considered, i.e., IMA modules with their final configurations. As already discussed in Sect. 5.2, this restriction is needed because, otherwise, the number of possible network configurations (i.e., sets of IMA module configurations) would be too high and would result in even more test cases. Assuming that the total available test execution time is constant, executing more test cases might not be possible and thus it would be necessary to reduce the number of test cases for each tested network configuration, i.e., also for the current final network configuration. Being prepared for future configuration changes – in this case – does not outweigh the drawback of not having enough time for testing the communication flow of the current final configuration. Clearly, this is a major difference to the approach described in the previous chapter where the aim was to test the IMA module's behavior with many different configurations – final and possible future ones.

For investigating the communication flow in a network of IMA modules, three types of communication links can be distinguished in the network's configuration, all of which have to be addressed by the approach:


1. module-internal inter-partition communication links,

2. communication links between partitions hosted on different IMA modules, and

3. communication links to and from non-IMA controllers and peripheral equipment.

Each of the above communication links may equally affect the communication flow within a module or, via its external interfaces, in the network – of course, partly also depending on the types of communication techniques used for interconnection, since a different interface driver is employed for each one. According to the test objectives given in Sect. 5.3.1, the aim is to test all possible communication flows, i.e., all communication flows that are allowed by the network configuration as well as by the characteristics of the communication techniques and the performance characteristics of the modules and the network. As a consequence, this requires that the system under test, i.e., the network elements, can be adequately influenced to communicate at certain points in time which are defined by the test case. On the one hand, this means that the communication behavior of the applications running in the IMA modules' partitions has to be controlled, which is only possible when using test applications. On the other hand, it has to be considered that the network's configurations typically also include links to or from non-IMA controllers and peripheral equipment which are not part of the network under test, since such components cannot necessarily be controlled as required by general means. They have to be replaced by adequate test specifications running in the test system which allow control of communication as required by the test cases.

From a test case point of view, the possible communication behavior of applications running on IMA modules does not differ from that of non-IMA controllers. Moreover, the two main constraints are (a) which behavior can take place simultaneously and (b) which behavior can only occur sequentially. For simultaneous interactions, the applications have to be executed on different CPUs, either by having partitions hosted on different IMA modules or by considering applications on different single-CPU controllers or on a shared multi-CPU controller. Sequential communication can be required by applications sharing a common computing resource or by message transmission logic, which requires a message to be sent first and transmission time to pass before the message can be received. As a consequence, the test cases can simply consider communicating network components which can each access a set of communication ports. The term "communicating network component" refers both to IMA partitions and to processes of non-IMA controllers but, for easier understanding, the following descriptions still use the IMA terms. For the same reason, most examples only consider networks of IMA modules where all ports are used for inter- or intra-module communication.

Communication Schedules. The algorithm introduced in the following does not generate test cases as such but so-called communication schedules, each of which describes a possible communication flow in the network under test, i.e., the communication behavior of all partitions. For each partition it is stated when to show which communication behavior, for example, when to write into one of the partition's source ports or when to read from one of its destination ports. For testing based on such a communication schedule, the test application running in the partition has to interpret the communication schedule by filtering out only the partition's tasks. The possible communication behavior of each partition and the possible communication flow between the partitions are defined by the network configuration, i.e., the configurations of each IMA module, and further restricted by the module characteristics. In particular, the IMA module configurations define

• which partitions are hosted by the IMA modules,

• which queuing and sampling ports are configured for each partition,

• which communication ports are connected (either module-internal or between modules), and


• when which partition is scheduled and for how long.

In addition, the technical specifications of the modules and the communication techniques define the performance characteristics which further influence the possible communication flows. Of particular interest are

• the worst-case software latencies, which define the execution time of each API call,

• the worst-case hardware latencies, which define how long the I/O driver needs for sending or receiving the data, and

• the worst-case transmission times, which define the latencies of the network.

For example, for successfully sending a message from one port to another, first the source port's partition has to be scheduled. The partition then executes the respective API call for sending the message and is busy for the duration of the API call (but at most for the time given as worst-case software latency). The message is then transmitted by the I/O driver and sent via the communication link to the destination port, where the respective I/O driver receives the message. The performance information given in the module specifications guarantees that these times do not exceed the respective worst-case latencies (i.e., worst-case hardware latencies at sender and receiver, worst-case transmission time). The last step is to wait until the destination port's partition is scheduled again and can execute the respective API call for receiving the message.

An example of a successful message transmission is depicted in Fig. 7.1: A message is sent from source port A1p1 belonging to partition A1 to destination port B1p2 belonging to partition B1, assuming that each action takes the full defined worst-case time. In this example, the worst-case software latency for writing into the source port is WCSL(A1p1) = 2 and for reading from the destination port WCSL(B1p2) = 3. The worst-case hardware latency in the source port's I/O driver is WCHL(A1p1) = 1 and in the destination port's I/O driver WCHL(B1p2) = 2. The worst-case network transmission time is WCT(A1p1, B1p2) = 2. Note that the figure – as well as all further descriptions – abstracts from the concrete API call names for writing or reading a message and instead uses the abstract API call names WRITE_PORT and READ_PORT, which do not distinguish the port type.


Figure 7.1: Successful message transmission from source port A1p1 to destination port B1p2
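The time axis of Fig. 7.1 can be reproduced by summing the latencies; a small worked sketch of this accumulation, assuming the latencies simply add up as described above:

# Timing values of the Fig. 7.1 example (taken from Table 7.1).
WCSL_WRITE = 2  # WCSL(A1p1): software latency of WRITE_PORT(A1p1)
WCHL_SRC   = 1  # WCHL(A1p1): hardware latency of the source I/O driver
WCT_LINK   = 2  # WCT(A1p1, B1p2): network transmission time
WCHL_DST   = 2  # WCHL(B1p2): hardware latency of the destination I/O driver
WCSL_READ  = 3  # WCSL(B1p2): software latency of READ_PORT(B1p2)

def available_at(t_write):
    """Worst-case time at which the message reaches the destination port."""
    return t_write + WCSL_WRITE + WCHL_SRC + WCT_LINK + WCHL_DST

# Writing starts at t = 0, so the message is available at t = 7; partition
# B1 is rescheduled at t = 7 and its READ_PORT call completes at t = 10,
# matching the time axis of Fig. 7.1.
assert available_at(0) == 7
assert available_at(0) + WCSL_READ == 10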

Before further considering the characteristics of communication schedules (in Sect. 7.1.2) and their representation formats (in Sect. 7.1.3), and before presenting a possible communication schedule generation algorithm in Sect. 7.2, the following section briefly describes how one possible communication schedule can be manually generated based on the network's configuration data and the performance information.


Small Network Example. The network example used in the following sections is a small network consisting of two IMA modules with a few communication links between them, i.e., only intra- and inter-module communication is considered. Figure 7.2 shows this network containing two IMA modules called A and B, the configured partitions (A1, A2, and A3 on module A; partitions B1 and B2 on module B), and their communication ports, which are either queuing or sampling ports (denoted by QP and SP, respectively).


Figure 7.2: Sample network consisting of two IMA platforms A and B with queuing and sampling ports for intra- and inter-module communication

The scheduling information contained in the IMA module configurations is depicted in Fig. 7.3: For module A, the repeated scheduling block (also called major frame) defines the scheduling sequence partition A1, partition A2, and then partition A3. For module B, the repeated scheduling block is partition B1 and then partition B2. Partitions A1, A2, and A3 are each scheduled for two time units while partition B1 is scheduled for four time units and partition B2 for three time units.


Figure 7.3: Scheduling of the partitions for the network depicted in Fig. 7.2
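Since the major frames repeat cyclically, the partition scheduled at any time tick – and the time remaining in its scheduling window – can be derived directly from this configuration; a small lookup sketch for the example scheduling:

# Major frames of the sample network (Fig. 7.3): (partition, duration) pairs.
MAJOR_FRAME = {
    "A": [("A1", 2), ("A2", 2), ("A3", 2)],
    "B": [("B1", 4), ("B2", 3)],
}

def scheduled_partition(module, t):
    """Return the partition scheduled on `module` at time tick t, together
    with the remaining time in its scheduling window."""
    frame = MAJOR_FRAME[module]
    offset = t % sum(duration for _, duration in frame)
    for partition, duration in frame:
        if offset < duration:
            return partition, duration - offset
        offset -= duration

assert scheduled_partition("A", 4) == ("A3", 2)   # cf. Fig. 7.3
assert scheduled_partition("B", 5) == ("B2", 2)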

The performance information for the modules and the network is listed in Table 7.1. It contains the specified worst-case latencies and transmission times for the network depicted in Fig. 7.2: the worst-case software latency (WCSL) for each port, the worst-case hardware latency (WCHL) for each port's I/O driver, and the worst-case transmission time (WCT) for each communication link between a source and a destination port. Note that in case of intra-module communication the worst-case hardware latencies and the network transmission time for the respective ports are zero since neither the I/O driver nor the network is involved for this kind of communication. This intra-module performance information is therefore depicted in grey.

Worst-case SW Latencies   Worst-case HW Latencies   Worst-case Transmission Times
WCSL(A1p1) = 2            WCHL(A1p1) = 1            WCT(A1p1, B1p2) = 2
WCSL(A1p2) = 1            WCHL(A1p2) = 0            WCT(A2p1, A1p2) = 0
WCSL(A2p1) = 2            WCHL(A2p1) = 0            WCT(A2p2, A3p2) = 0
WCSL(A2p2) = 2            WCHL(A2p2) = 0
WCSL(A3p1) = 1            WCHL(A3p1) = 2
WCSL(A3p2) = 1            WCHL(A3p2) = 0

WCSL(B1p1) = 4            WCHL(B1p1) = 0            WCT(B1p1, B2p2) = 0
WCSL(B1p2) = 3            WCHL(B1p2) = 1            WCT(B1p3, A3p1) = 1
WCSL(B1p3) = 2            WCHL(B1p3) = 2            WCT(B2p1, B1p4) = 0
WCSL(B1p4) = 2            WCHL(B1p4) = 0
WCSL(B2p1) = 2            WCHL(B2p1) = 0
WCSL(B2p2) = 1            WCHL(B2p2) = 0

Table 7.1: Performance information about the modules and the network given in Fig. 7.2

7.1.1 Manual Generation of an Example

For the following manual generation of a communication schedule, the small network example described above in Fig. 7.2, Fig. 7.3, and Table 7.1 is used. In this example, two IMA modules are running in parallel and perform the configured scheduling. It is possible to determine at each time tick (counted from the beginning of partition scheduling in the module) which partitions are currently scheduled and thus which ports can be accessed (namely those belonging to the scheduled partition). Furthermore, it can be determined how much execution time remains for the current partition until the end of its scheduling window. With this knowledge and additionally the performance information, it can then be investigated which communication behavior is possible in the running partitions, i.e., which communication-related API calls will terminate before rescheduling. For generating a communication schedule, one possible API call is then chosen for each partition and the procedure continues at the next discrete time tick. If the API call may take longer than one time tick (i.e., WCSL > 1), the respective partition is still busy at the next time unit (and possibly also at later ones) and no new API call is chosen. The following manual generation describes step-by-step which API calls are possible and also which are not and why.

Note that the above algorithm (which is further illustrated in the following) only generates one possible communication schedule. For generating all possible communication schedules, it is necessary to consider at each discrete time unit all possible combinations of API calls. This is discussed in more detail in Sect. 7.2.

When investigating which API calls are possible, the resulting set can be restricted further based on given rules, e.g., to consider only successful API calls like reading from non-empty ports and writing into non-full queuing ports. Such restriction rules help to focus the generated communication schedules (i.e., the respective test cases) and, at the same time, reduce their number. Further details are discussed in Sect. 7.2.2. In the following example, the possible API calls are manually restricted to those which are successful, i.e., whose expected return code is NO_ERROR, since this is more intuitive to understand and it is easier to imagine an evaluation function. However, non-successful possible API calls (i.e., those which are expected to have a return code other than NO_ERROR because, for example, the message to be read is not yet available) are also important to test in a communication flow context and thus such restrictions are not contained in the generation algorithm by default. Note that, since only possible API calls are considered, a return code other than NO_ERROR cannot be caused by scheduling problems or by using wrong input parameters such as non-existing or non-owned port identifiers.

In the following stepwise generation of a possible communication schedule, it is assumed that the IMA modules A and B start partition scheduling synchronously, i.e., their first configured partitions begin synchronously at time unit t = 0 (as already depicted in Fig. 7.3). The options and selections at each time tick are illustrated in Fig. 7.5.

Time stamp t = 0. For finding one possible communication schedule, it has to be selected for each module which API call shall be performed. Therefore, for each module, a four-step algorithm investigates the alternatives, selects one of them, and finally "executes" these API calls:

Step 1. Get a list of communication API calls which could generally be performed by the currently scheduled partition by adding for each assigned communication port the respective API call to the list. Then add the "idle" API call TIMED_WAIT(1) to the list, which ensures that at least one possible API call is available.

This is depicted in Fig. 7.4(a) showing that, for module A, two communication ports are assigned, which are both added with their respective (abstract) API call (i.e., depending on the port's direction the abstract API call is either WRITE_PORT or READ_PORT). The resulting list with three API calls is pictured as a box below the time line of module A. For module B, four communication ports are assigned and thus the list contains five possible API calls including the idle API call.

Step 2. Reduce the list of API calls to (a) those which can be scheduled in the remaining scheduling window and (b) those which are not restricted by the defined restriction rules. Note that the idle API call can always be scheduled since its duration is always exactly the difference between two time ticks. However, restriction rules can eliminate it from the list if other API calls are possible.

Figure 7.4(b) shows that for both modules the READ_PORT API calls are restricted according to the restriction rule "perform only successful API calls" and these API calls are therefore depicted in grey. According to the worst-case SW latencies, all API calls can be completed in the remaining scheduling window and thus no further restrictions apply.

Step 3. For each module, select one API call of the generated list. This selection across all modules is one possible combination of API calls.

Figure 7.4(c) shows that WRITE_PORT(A1p1) is selected for module A and WRITE_PORT(B1p1) for module B, which is indicated by marking them with bold letters in the list.

Step 4. In the last step, the selected API calls are scheduled for execution.

This is shown in Fig. 7.4(d) by printing the selected API call below the respective module's time line and additionally showing its expected duration – which is given by the worst-case software latency – by an arrow directly below the time line.

A summary of this four-step selection process is also depicted in Fig. 7.5(a). Note that at the beginning of the scheduling (and thus of the generation algorithm) it is not yet necessary to take previous API calls into account, which can otherwise influence the possible API calls or the result of applying the restriction rules.

Time stamp t = 1. As a result of the above decisions, both IMA modules are still busy at time tick t = 1 and thus the four-step algorithm is not applied. This is depicted in Fig. 7.5(b), which shows that the arrows directly below the time lines denoting the duration of the previously selected API calls include the current period under investigation.



Figure 7.4: Detailed four-step selection of a possible combination of API calls at t = 0 in order to generate one possible communication schedule

Time stamp t = 2. As pictured in Fig. 7.5(c), module A has just rescheduled at this time tick, which means that all API calls of the previously scheduled partition must have been completed. When applying the four-step algorithm for the currently scheduled partition A2, one of the three possible API calls (including the idle API call) is selected, which will continue until the end of the scheduling window. For module B, the currently scheduled partition B1 is still executing the API call selected at time tick t = 0.

Time stamp t = 3. As shown before, the decisions for this time tick t = 3 are restricted because both IMA modules are still busy performing the previously selected API calls. This is depicted in Fig. 7.5(d).

Time stamp t = 4. At this time tick, both IMA modules reschedule new partitions. Module A now executes partition A3 and the list of possible API calls has been severely restricted by applying the restriction rules since the respective queuing ports are still empty (i.e., neither WRITE_PORT(B1p3) nor WRITE_PORT(A2p2) has been selected yet). Thus, only the idle API call can be selected. For module B, partition B2 can select from a list of three possible API calls which has not been reduced by the restriction rules. The lists of possible API calls and the selections are depicted in Fig. 7.5(e).

Time stamp t = 5. For module A, the list of possible API calls for A3 is still restricted to TIMED_WAIT(1). Partition B2 of module B is still busy performing the API call which has been selected at time tick t = 4. This is depicted in Fig. 7.5(f).

Time stamp t = 6. Module A reschedules and again executes partition A1. In contrast to t = 0, the list of possible API calls does not contain WRITE_PORT(A1p1) but READ_PORT(A1p2). WRITE_PORT(A1p1) is not included because the message sent at time tick t = 0 is still being transmitted (see Fig. 7.1) and the port's queue is full. READ_PORT(A1p2) is contained because at time tick t = 2 partition A2 has sent a message via port A2p1 which is already available to partition A1. Thus, the previously selected sequence of API calls influences the result of the restriction rules.

For module B, WRITE_PORT(B2p1) has been removed from the list of possible API calls since the worst-case software latency exceeds the remaining scheduling time for partition B2, which is denoted in Fig. 7.5(g) by crossing through the removed API call. Thus, unschedulable API calls can be distinguished from those which are restricted by applying the restriction rules.



Figure 7.5: Generating an example communication schedule of 7 time units length by selecting at each time tick for each currently scheduled partition one of its possible API calls
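Pulling the four steps together, the following sketch generates one possible communication schedule for the small network example. It is a strong simplification, not the implementation discussed here: restriction rules are reduced to a trivial allowed hook, and the state needed to evaluate rules such as "perform only successful API calls" (port fill levels, messages in transit) is omitted.

import random

# Port API calls per partition and worst-case software latencies (Table 7.1).
PORTS = {"A1": ["WRITE_PORT(A1p1)", "READ_PORT(A1p2)"],
         "A2": ["WRITE_PORT(A2p1)", "WRITE_PORT(A2p2)"],
         "A3": ["READ_PORT(A3p1)", "READ_PORT(A3p2)"],
         "B1": ["WRITE_PORT(B1p1)", "READ_PORT(B1p2)",
                "WRITE_PORT(B1p3)", "READ_PORT(B1p4)"],
         "B2": ["WRITE_PORT(B2p1)", "READ_PORT(B2p2)"]}
WCSL = {"A1p1": 2, "A1p2": 1, "A2p1": 2, "A2p2": 2, "A3p1": 1, "A3p2": 1,
        "B1p1": 4, "B1p2": 3, "B1p3": 2, "B1p4": 2, "B2p1": 2, "B2p2": 1}
FRAME = {"A": [("A1", 2), ("A2", 2), ("A3", 2)],
         "B": [("B1", 4), ("B2", 3)]}

def window(module, t):
    """Currently scheduled partition and remaining time in its window."""
    offset = t % sum(d for _, d in FRAME[module])
    for partition, duration in FRAME[module]:
        if offset < duration:
            return partition, duration - offset
        offset -= duration

def duration(call):
    """WCSL of a port API call; the idle call TIMED_WAIT(1) takes 1."""
    return WCSL[call[call.index("(") + 1:-1]] if "PORT" in call else 1

def generate(length, allowed=lambda call, t: True):
    schedule, busy = [], {"A": 0, "B": 0}
    for t in range(length):
        step = {}
        for module in ("A", "B"):
            if busy[module]:                       # previous call still runs
                step[module] = f"BUSY({busy[module]})"
                busy[module] -= 1
                continue
            partition, remaining = window(module, t)
            candidates = [c for c in PORTS[partition] + ["TIMED_WAIT(1)"]
                          if duration(c) <= remaining and allowed(c, t)]
            call = random.choice(candidates)       # step 3: pick one option
            busy[module] = duration(call) - 1      # step 4: schedule it
            step[module] = call
        schedule.append((t, step))
    return schedule

print(generate(7))

Generating all possible communication schedules instead of one amounts to replacing the single choice in step 3 by an enumeration of all candidate combinations at every time tick, which is the source of the combinatorial growth addressed in Sect. 7.2.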


7.1.2 Characteristics of Communication Schedules

Communication schedules as introduced so far are timed sequences of interface triggers which express the visible behavior of all modules in the network. Thus, each communication schedule defines a communication flow: when a communication link is used, which communication is performed simultaneously or sequentially, what kind of signal or message is transferred, etc. For testing, the communication schedules are then interpreted and denote when specific interface triggers have to be performed and when communication shall take place.

Generally, each communication schedule addresses the behavior of all network components, in particular of all configured partitions that have configured ports for communication flows. This means that the overall communication schedule is the interleaving of the separate communication schedules of each component, which all have the same reference starting point. Each communication schedule has to comply with the configurations of the network and with the performance information provided for each network component and each networking technology. Particularly, regarding the IMA modules in the network, the communication schedule has to comply with the port configurations, the configured partition scheduling on each module, and with each module type's performance information. These restrictions are reflected in the communication schedule characteristics as follows:

• Each communication schedule for one particular partition is a timed sequence of interface triggers, i.e., triggers to read from or write into the ports configured for the respective partition. Thereby, it is defined at which time relative to the start of the module's scheduling (i.e., time tick t = 0) a specific interface trigger has to be executed when the communication schedule is interpreted for testing. However, IMA modules are single-CPU computers and each interface trigger needs time to be performed. Consequently, interface triggers by the same partition cannot be triggered simultaneously and thus the time stamps used within the communication schedule of a single partition are strictly monotonically increasing (see the sketch after this list). Furthermore, due to the scheduling of partitions in fixed, deterministic cycles as specified in the configuration tables, a communication schedule for a specific partition does not contain interface triggers while the partition is not scheduled. For example, if a partition P is scheduled at the beginning of a major time frame and afterwards other partitions are scheduled, P's communication schedule contains interface triggers at the beginning of the respective communication schedule but must not contain interface triggers while the other partitions are scheduled. When P is rescheduled (e.g., at the beginning of the new major time frame), the communication schedule again contains interface triggers.

Interface triggers are API calls for sending messages (WRITE_SAMPLING_MESSAGE and SEND_QUEUING_MESSAGE, respectively) and for receiving messages (READ_SAMPLING_MESSAGE and RECEIVE_QUEUING_MESSAGE, respectively) whose execution length depends on the characteristics of the port and the message. Thus, each execution time is different but bounded by the worst-case software latency defined for each port as part of the performance information. With respect to the communication schedules of a single partition, this means that, between two subsequent interface triggers, at least this worst-case software latency has to be waited to ensure that it is possible to perform the next API call at its determined time stamp.

• Each communication schedule for one module is the interleaving of its partitions' communication schedules. This means that the used time stamps are still strictly monotonically increasing because only one partition can be assigned to the single CPU when considering a consistent configuration.

• When considering the communication schedule for a network of modules, several IMA modules are executing in parallel, allowing simultaneous interface triggers of partitions assigned to different modules because each module is able to access its interfaces independently of and simultaneously to other modules. Thus, the communication schedule is the interleaving of the respective modules' communication schedules, allowing time stamps that are monotonically (but no longer strictly) increasing, since simultaneous interface triggers can occur. (A sketch that checks these constraints mechanically follows this list.)
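The constraints listed above translate directly into a mechanical well-formedness check on a single partition's communication schedule. The following is a minimal sketch, assuming Python, a schedule given as a list of (time stamp, port) interface triggers, and illustrative scheduling windows and worst-case latencies; none of the names or values stem from the example network:

def is_well_formed(schedule, windows, wc_latency):
    # schedule: list of (t, port) interface triggers of one partition
    last_t = None
    busy_until = None
    for t, port in schedule:
        # time stamps must be strictly monotonically increasing (single CPU)
        if last_t is not None and t <= last_t:
            return False
        # no interface triggers while the partition is not scheduled
        if not any(start <= t < end for start, end in windows):
            return False
        # leave at least the worst-case latency between subsequent triggers
        if busy_until is not None and t < busy_until:
            return False
        last_t, busy_until = t, t + wc_latency[port]
    return True

# Hypothetical usage: the partition is scheduled twice per major time frame.
windows = [(0, 2), (6, 8)]
print(is_well_formed([(0, "A1p1"), (6, "A1p2")], windows, {"A1p1": 1, "A1p2": 1}))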

The characteristics of communication schedules can be further specialized by applying restriction rules while generating communication schedules. For example, if only successful API calls shall be considered as in Sect. 7.1.1, the interval between two occurrences of the same interface trigger is further limited by the worst-case latencies and transmission times and by the interface characteristics (e.g., the queue length for queuing ports).

7.1.3 Representation of Communication Schedules

7.1.3.1 Representation of a Single Communication Schedule

Individual communication schedules can be represented in different ways: graphically, as annotations to scheduling figures as in Fig. 7.5, or as a sequence of tuples where each tuple contains the time stamp and a set of interface triggers which contains exactly one interface trigger for each module in the network. Each interface trigger itself denotes the module name and the API call to be performed by the scheduled partition (which is not explicitly referenced in the interface trigger but can be deduced by investigating the scheduling configuration and the time stamp of the tuple). Graphical representations as used in the previous sections have the major disadvantage that they provide a clear overview only for relatively short communication schedules and cannot easily be handled and interpreted in an automated way as required for testing. In contrast, the sequence of tuples provides a clear and unambiguous textual representation for communication schedules of any length which can also be processed by tools and scripts.

For example, the communication schedule generated in Sect. 7.1.1 and depicted in Fig. 7.5 can be textually represented as follows:

< (0, {A:WRITE_PORT(A1p1), B:WRITE_PORT(B1p1)}),

(1, {A:BUSY(1), B:BUSY(3)}),

(2, {A:WRITE_PORT(A2p1), B:BUSY(2)}),

(3, {A:BUSY(1), B:BUSY(1)}),

(4, {A:TIMED_WAIT(1), B:WRITE_PORT(B2p1)}),

(5, {A:TIMED_WAIT(1), B:BUSY(1)}),

(6, {A:READ_PORT(A1p2), B:READ_PORT(B2p2)}) >

Thereby, each busy time denoted by BUSY(x) provides the number of time units x that the partition is still busy when assuming that the duration of the API call is equal to the worst-case software latency. This information is redundant and only provided to simplify the interpretation of the communication schedules by restriction or sorting rules applied later (i.e., after generation).
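As a data structure, such a schedule is simply a sequence of (time stamp, trigger set) pairs. A minimal sketch of this representation, assuming Python; the trigger strings mirror the abstract names used above, and the loop only illustrates how a test driver might interpret the tuples:

# The example schedule from above as a list of (time stamp, {module: trigger}) tuples.
schedule = [
    (0, {"A": "WRITE_PORT(A1p1)", "B": "WRITE_PORT(B1p1)"}),
    (1, {"A": "BUSY(1)",          "B": "BUSY(3)"}),
    (2, {"A": "WRITE_PORT(A2p1)", "B": "BUSY(2)"}),
    (3, {"A": "BUSY(1)",          "B": "BUSY(1)"}),
    (4, {"A": "TIMED_WAIT(1)",    "B": "WRITE_PORT(B2p1)"}),
    (5, {"A": "TIMED_WAIT(1)",    "B": "BUSY(1)"}),
    (6, {"A": "READ_PORT(A1p2)",  "B": "READ_PORT(B2p2)"}),
]

# Each tuple tells the test driver which API call to issue on which module at
# which tick; BUSY entries are skipped since they only mark that the previous
# call is still running.
for t, triggers in schedule:
    for module, call in triggers.items():
        if not call.startswith("BUSY"):
            print(f"t={t}: {module} performs {call}")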

7.1.3.2 Representation of a Collection of Communication Schedules

Representation formats for several disjoint communication schedules are required since, for testing purposes, all possible communication schedules have to be considered. Additionally, these representation formats shall be easily interpretable for further rule-based reduction or sorting of the communication schedules. Three representation formats are used in the following: communication schedule trees, communication schedule sets, and communication schedule sequences. They are briefly introduced and analyzed in the following paragraphs.


Communication Schedule Tree. When representing a collection of communication schedules as a tree, the root of the communication schedule tree contains a branch for each possible combination of simultaneous interface triggers at time stamp t = 0. Each of them then further subdivides into all possible branches at time stamp t = 1 which are possible after the respective combination at t = 0. The branches for time stamp t = 2 and following are added to the communication schedule tree in the same manner. Thus, if all branches at t = x have been extended with all succeeding possible API call combinations, the number of leaves represents the number of generated communication schedules of length x + 1 time units.

A tree-like representation format is also very efficient for tool-internal representation since the tree summarizes equal beginnings of communication schedules and requires memory for them only once. However, for presenting generated communication schedules, the graphical visualization is limited to quite short communication schedules because the number of communication schedules increases exponentially with the length. Additionally, a graphical visualization can usually not be used for a sorted printout of the communication schedules (i.e., of the branches in the tree) if arbitrary sorting functions shall be supported. The internal tree representation, however, may store this information by appropriate labels on leaves, nodes, or edges (depending on the sorting function), but has to provide further means to access the branches in their new sorting order.

Figure 7.6(a) shows a communication schedule tree which contains all possible communication schedules of length 3 time units which have been generated without restricting the API calls to successful ones. The details contained in each tree node are shown in more detail in Fig. 7.6(b), which depicts one branch of Fig. 7.6(a). In this detailed view, it can be seen that each node contains the time stamp and an interface trigger for each partition scheduled at that time stamp. In both figures, the beginning of the manually generated communication schedule (see Sect. 7.1.1) is marked with bold letters and a light-gray background. The function of the ROOT node is to gather up all possible combinations of API calls at time stamp t = 0.
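Internally, a tree node therefore only needs a time stamp, the per-module interface triggers, and its children; sharing of equal schedule beginnings then comes for free. A minimal sketch, assuming Python (this is not the tool's actual internal format):

class ScheduleNode:
    # One node of a communication schedule tree; each path from the ROOT
    # node to a leaf is one communication schedule.
    def __init__(self, t, triggers=None):
        self.t = t                       # time stamp of this combination
        self.triggers = triggers or {}   # e.g. {"A": "WRITE_PORT(A1p1)", ...}
        self.children = []

def count_schedules(node):
    # The number of leaves equals the number of generated schedules.
    if not node.children:
        return 1
    return sum(count_schedules(child) for child in node.children)

root = ScheduleNode(0)
root.children = [
    ScheduleNode(0, {"A": "WRITE_PORT(A1p1)", "B": "WRITE_PORT(B1p1)"}),
    ScheduleNode(0, {"A": "TIMED_WAIT(1)",    "B": "TIMED_WAIT(1)"}),
]
print(count_schedules(root))  # 2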

Communication Schedule Set. Another possibility to represent an unsorted collection of communication schedules is a set which contains all generated communication schedules as described in Sect. 7.1.3.1. The advantage of this textual representation is that humans as well as tools (e.g., FDR) can easily access single communication schedules by taking one element of the set. However, this form of representation can hardly be used as an internal representation since it requires separate memory even for equal beginnings of two communication schedules. This can also be observed for printouts, which are very space and memory consuming if longer communication schedules are considered. For example, the printout file containing all possible communication schedules of length 6 time units requires almost 2 MB; for length 8, this amounts to more than 100 MB. Note that the set representation provides per se an unsorted view on the communication schedules although the printout may pretend that they are sorted. The elements of the set (i.e., the timed traces) cannot contain any additional information (e.g., for sorting).

The following set of communication schedules shows some of the possible communication schedules of length 3 time units (generated without applying any restriction rules). The dots denote that some communication schedules are left out. The complete set, i.e., all communication schedules which are also contained in the communication schedule tree in Fig. 7.6(a), contains 210 elements and is provided in Appendix E.1. The order of the communication schedules is the same as that of the communication schedule tree (from top to bottom). This means that the beginning of the manually generated communication schedule is shown as the first element of the set. For a readable printout, the abstract API call names have been abbreviated as W (WRITE_PORT), R (READ_PORT), T (TIMED_WAIT), and B (BUSY).

{ < (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >,


< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

...,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) > }

Communication Schedule Sequence. To allow a sorted representation of the communication schedules, a communication schedule sequence can be used. For sorting, a weight is associated with each communication schedule (i.e., each branch of the tree) which, for example, is calculated by applying a specific function to each communication schedule. The communication schedule sequence then contains, by definition, the communication schedules sorted according to a "greater or equal weight as the next one" relation, i.e., the first sequence element has (one of) the highest weights. Communication schedules with equal weight are contained in the sequence in arbitrary order.

Since this representation form is very similar to communication schedule sets, all advantages and disadvantages (except for the sorted printout problem) also apply to communication schedule sequences.

The following example shows, as in the previous ones, some of the possible communication schedules of length 3 time units (generated without applying any restriction rules) together with their calculated weight. Thereby, communication schedules with more distinct interface triggers (i.e., executions of WRITE_PORT or READ_PORT) have a higher weight than those which mostly idle (i.e., perform TIMED_WAIT). As can be seen, the manually generated communication schedule has a weight of 45, while the highest one is 55 and the lowest one 0. The dots denote that some communication schedules have been omitted. The complete sequence of communication schedules contains 210 elements and is provided in Appendix E.2. (A sketch of such a weight function follows the listing.)

< < (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >, # 50

...,

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >, # 45

...,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 15

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 15

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 10

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) > > # 0
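The weights printed above are consistent with a simple per-trigger scoring: 10 points for each WRITE_PORT or READ_PORT, 5 points for each BUSY tick, and 0 points for TIMED_WAIT. This concrete weighting is inferred from the listed values rather than stated explicitly here, so the following Python sketch of a weight function is an assumption (it uses the abbreviated trigger names from the listing):

def weight(schedule):
    # Active triggers (W, R) count most, busy ticks (B) less, idling (T) not
    # at all; the point values are an inference from the printed weights.
    points = {"W": 10, "R": 10, "B": 5, "T": 0}
    return sum(points[call[0]]
               for _, triggers in schedule
               for call in triggers.values())

s = [(0, {"A": "W(A1p1)", "B": "W(B1p1)"}),
     (1, {"A": "B(1)",    "B": "B(3)"}),
     (2, {"A": "W(A2p1)", "B": "B(2)"})]
print(weight(s))  # 45, the weight of the manually generated schedule above

# Sorting a collection into a communication schedule sequence:
# sorted(schedules, key=weight, reverse=True)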


[Figure 7.6 (tree diagrams) is not reproduced here; only the caption is retained.]

Figure 7.6: (a) Communication schedule tree representing all possible communication schedules of length 3 time units (generated without applying any restriction rules); (b) the uppermost branches of the tree, which also contain the beginning of the communication schedule generated in Sect. 7.1.1 (marked in (a) and (b) with bold letters and a light-gray background)


7.2 Generation Algorithm for Communication Schedules

The previous section has elaborated on the characteristics of communication schedules and has presented three representation formats which provide an unsorted or a sorted view of a collection of communication schedules, respectively. Moreover, Section 7.1.1 has introduced the manual generation of one possible communication schedule step by step. "Possible" communication schedule in this respect means that it must be possible to perform the API calls denoted by the communication schedule in normal as well as in worst-case situations. This means that

• only partitions which are scheduled at that point in time are expected to perform an API call,

• each partition shall access only the communication ports assigned to it, and

• enough time is allotted to each API call so that it can be executed even in worst-case scenarios.

At first sight, this definition of possible communication schedules forecloses several communication flow tests which might also be interesting, e.g., accessing ports assigned to other partitions, or starting to execute an API call although it might, in worst-case situations, not complete in the remaining time. However, such tests are outside the scope of this test step, which shall focus on the communication flow between two ports. Furthermore, they have already been addressed as robustness tests during bare module testing and thus do not have to be considered here again. Moreover, this definition of possible communication schedules does not predetermine the expected return code, but denotes that neither scheduling problems nor wrong input parameters (such as non-existing or non-owned port identifiers) should be the reason for return codes other than NO_ERROR. Finally, it will become obvious in this and the following sections that even limiting the scope of testing to the communication flow test objectives (i.e., not considering such tests) is not sufficient to cope with the resulting number of possible communication schedules.

This section is structured as follows: In Sect. 7.2.1, an algorithm is described which can generate all possible communication schedules and therefore extends the four-step algorithm for generating one possible communication schedule which had been introduced in Sect. 7.1.1. Section 7.2.2 then considers means for influencing the results of the algorithm, either by reducing the number of generated communication schedules (in Sect. 7.2.2.1) or by sorting them (in Sect. 7.2.2.2). How the resulting communication schedules can be used for testing, how the algorithm scales with respect to different network configurations, and which further aspects might be considered in the future is subsequently addressed in Sect. 7.3, Sect. 7.4, and Sect. 7.5, respectively.

7.2.1 Description of the Generation Algorithm

The algorithm to be described in the following is based on the four-step algorithm introduced previously for generating one possible communication schedule.1 Two characteristics of this algorithm have to be changed to generate all possible communication schedules: Firstly, the four-step algorithm arbitrarily selects one combination of possible API calls to be executed simultaneously; the generation algorithm instead has to consider all combinations in each step. Secondly, when generating one communication schedule, it is sufficient that the four-step algorithm extends it at each time tick with the chosen API call combination, but, when generating all possible communication schedules, the current schedule has to be extended with all possible combinations c1, ..., cn. Therefore, the generation algorithm has to replicate the current schedule s n times and extend each replica with one of the possible combinations; for the next time tick, each of these new schedules s_c1, ..., s_cn then has to be considered individually, i.e., for each one, all combinations of API calls have to be determined and the schedule has to be replicated and extended as required.

1 In the following, the term "four-step algorithm" refers to the algorithm described in Sect. 7.1.1, which generates exactly one possible communication schedule. The term "generation algorithm" refers to the algorithm for generating all possible communication schedules.

In the following, the details of the generation algorithm are introduced, based on the textual description of the four-step algorithm in Sect. 7.1.1. Due to this relation, a recursive approach has been pursued which first generates one communication schedule (until a specified length d is reached) and then generates further communication schedules by considering the other API call combinations at each time tick (starting at time tick d-1). This means the generation algorithm follows a depth-first approach. Obviously, breadth-first or mixed breadth-depth approaches would also be possible, but conceptually do not provide any advantages given that:

• The generation algorithm shall always generate all communication schedules up to a specified length, and only specific restriction rules shall reduce the number of generated communication schedules.

• The generation algorithm is to be executed offline and thus is conceptually not restricted by limited execution time or other performance considerations.

Moreover, the appeal of a depth-first algorithm is the relative ease of its implementation, which makes it a good starting point for studying the feasibility of the approach and for implementing a prototype.

It is worthwhile to remark here that the algorithm generates all communication schedules of a specified length and thus also generates all shorter communication schedules, since these are contained in one or more longer ones. However, since the communication schedules shall be used for testing, the shorter ones will never be explicitly chosen for testing but are executed implicitly as part of the longer ones. If required, it is easily possible to obtain all possible communication schedules of length 1 to length d by taking from a communication schedule tree the respective partial branches starting at the ROOT node.

Generation Algorithm Details. The generation algorithm basically consists of one function which appends all possible combinations of API calls to the input trace (i.e., communication schedule) and thus generates different new traces which all differ in the API calls to be performed at the last contained time stamp. The number of new traces thus depends on the number of API call combinations. For all these new traces, the function recurses in order to continue the given trace with all possible combinations. The generation algorithm is given in a pseudo-code notation which distinguishes two parts: the initialization and the recursive function.

During the initialization, the inputs for the initial call of the recursive function are determined. The recursive function has two parameters: a trace which shall be duplicated and extended with new API call combinations until the specified length is reached, and a time stamp which denotes the current length of the trace. Initially, the trace is empty and the time stamp is 0.

Within the recursive function, two parts can be distinguished. In the first, the lists of possible API calls are generated (one for each IMA module in the network) as described in steps 1 and 2 of the four-step algorithm. In the second part, all combinations of possible API calls are generated such that in each combination there is one API call per IMA module. This step is new since the four-step algorithm required only one possible combination, which was arbitrarily selected. Then, the input trace is duplicated and extended for each combination. Finally, each generated trace is continued recursively until the specified length is reached.


Initialization
    Determine the maximum trace length for the generation
    Generate an empty start trace and set the time stamp to 0

Recursive Function
    1. For all IMA modules:
       (a) If previous API call has finished or beginning of trace
           i.   Generate a list of communication API calls such that each port of the
                currently scheduled partition can be triggered
           ii.  Add TIMED_WAIT API call to list
           iii. Delete API calls which would take longer than the remaining partition
                time to send/receive a message
       (b) Else generate a list with pseudo element BUSY
    2. For each combination of API calls with one API call per IMA module
       (a) Generate a new trace which appends the combination to the start trace
       (b) If specified trace length is not yet reached, recur with incremented time
           stamp and new trace as start trace
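Rendered in executable form, the recursion looks roughly as follows. This is a minimal sketch in Python under strong simplifying assumptions: the per-module option lists (CALLS) are hard-coded instead of being derived from the port configuration, partition scheduling, and worst-case latencies, and a simple busy-time bookkeeping stands in for steps 1(a)i to iii:

from itertools import product

# Illustrative per-module options: (trigger, busy ticks after the call).
CALLS = {
    "A": [("WRITE_PORT(A1p1)", 1), ("READ_PORT(A1p2)", 1), ("TIMED_WAIT(1)", 0)],
    "B": [("WRITE_PORT(B1p1)", 3), ("READ_PORT(B1p2)", 2), ("TIMED_WAIT(1)", 0)],
}

def generate(trace, busy, t, max_len, out):
    if t == max_len:                  # specified trace length reached
        out.append(trace)
        return
    options = {}
    for m, remaining in busy.items():
        if remaining > 0:             # step 1(b): previous call not finished
            options[m] = [("BUSY(%d)" % remaining, remaining - 1)]
        else:                         # step 1(a): all triggers plus TIMED_WAIT
            options[m] = CALLS[m]
    modules = sorted(options)
    # Step 2: duplicate and extend the trace for every combination with one
    # API call per IMA module, then recurse with an incremented time stamp.
    for combo in product(*(options[m] for m in modules)):
        entry = (t, {m: call for m, (call, _) in zip(modules, combo)})
        new_busy = {m: b for m, (_, b) in zip(modules, combo)}
        generate(trace + [entry], new_busy, t + 1, max_len, out)

schedules = []
generate([], {m: 0 for m in CALLS}, 0, max_len=3, out=schedules)
print(len(schedules))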

7.2.2 Influencing and Handling the Generation Algorithm’s Results

In order to investigate the characteristics of the generation algorithm, it can be applied to an example network with different maximum trace lengths, and the results can be analyzed with respect to testing (i.e., executing the algorithm's results) and with respect to the generation algorithm's execution time. This section focuses on analyzing the results of the generation algorithm and provides means for influencing and handling them. The investigations discussed in this section are independent of the characteristics of the network under test and of the generation algorithm's execution time; such investigations are addressed later in Sect. 7.4.

Providing means for influencing and handling the generation algorithm's results is important since the time needed for executing the tests increases significantly when even slightly longer communication schedule lengths shall be used for testing. This is depicted in Table 7.2, which shows the number of communication schedules for different maximum trace lengths when applying the generation algorithm to the example network pictured in Fig. 7.2, Fig. 7.3, and Table 7.1.

maximum length of    resulting number of    expected test duration
communication        communication          in time      time unit = msec,   time unit = msec,
schedules            schedules              units        reset time 0 sec    reset time 30 sec

3                    210                    630          <1 sec              2 h
4                    210                    840          <1 sec              2 h
5                    1 890                  9 450        10 sec              16 h
6                    13 230                 79 380       2 min               5 days
7                    68 040                 476 280      8 min               24 days
8                    567 000                4 536 000    1 h                 197 days
9                    2 721 600              24 494 400   7 h                 945 days

Table 7.2: Number of communication schedules for different maximum lengths (calculated for the example network) and analysis of the results with respect to testing

Columns 1 and 2 show that the number of resulting communication schedules increases exponentially: from 210 communication schedules of length 3 time units to almost 2.7 million traces of length 9 time units. This increase is due to the fact that each communication schedule of length d can usually be extended by several API call combinations, and thus the number of communication schedules of length d + 1 is much higher. As a consequence, the expected test duration just for executing all schedules increases, since it is calculated as the product of length and number of communication schedules, as shown in column 3 (in abstract time units). Column 4 converts column 3 into a human-readable format under the assumption that 1 time unit is 1 msec (which is not considered realistic with respect to the example network but is used just for illustration). The result is alarming: although a very simple example network and still quite short test cases are considered, almost 7 hours of uninterrupted testing would be required for executing all test cases. Moreover, it is more realistic that the time to be waited between two test cases, e.g., for resetting the modules or loading the new test case, is not 0 msec. For example, when considering a reset time of 30 sec, it would take 945 days to execute all test cases which interpret communication schedules of length 9 time units. This is by far not feasible, especially not for tests in later development stages.
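The last row of Table 7.2 can be reproduced with a back-of-the-envelope calculation; the following sketch (Python) uses the illustrative assumptions from the text, i.e., 1 time unit = 1 msec and a reset time of 30 sec per test case:

count, length = 2_721_600, 9            # schedules of length 9 time units
exec_time = count * length              # 24 494 400 time units (column 3)
hours_no_reset = exec_time / 1000 / 3600                    # ~6.8 h (column 4)
days_with_reset = (exec_time / 1000 + count * 30) / 86400   # ~945 days (column 5)
print(exec_time, round(hours_no_reset, 1), round(days_with_reset))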

The aim of this section is to present two means for influencing and handling the algorithm's results: one for reducing the number of communication schedules (i.e., the number of test cases) by focusing the test objectives, and another for sorting the communication schedules such that interrupting the test execution is possible because, at the time of interruption, at least the most interesting test cases have been performed. Section 7.2.2.1 and Section 7.2.2.2, respectively, elaborate on these two approaches.

7.2.2.1 Reducing the Number of Communication Schedules

For reducing the number of communication schedules, restriction rules can be defined which, when applied to a given communication schedule, decide if the communication schedule shall be used for testing. Simple restriction functions can, for example, reduce the set of communication schedules to those which ensure that successful API calls are possible (i.e., those where the API calls are always expected to return NO_ERROR) or to those which always expect the API calls to fail (i.e., to return another code). Other, even more complex restriction rules are also possible, but are not presented here since the aim is to investigate if and how it is generally possible to reduce the number of test cases.

Restriction Function Example. In Sect. 7.1.1, a simple restriction function has already been proposed and applied by focusing on successful API calls. The following set of rules is applied in order to decide if a partition shall consider performing a particular API call with respect to the present trace (a sketch of such a function follows the list):

• Do not consider API calls which would read from an empty queuing port.
Possible reasons for this situation: Either the port has not yet been written into, or all written messages have already been read.

• Do not consider API calls which would write into a full queuing port.
Possible reason for this situation: Not all written messages have yet been read.

• Do not consider API calls which would read from an empty sampling port.
Possible reason for this situation: No message has yet been written into the port (or it is not yet available).

• Do not consider API calls which try to read messages which might not yet be available according to the worst-case latencies and transmission times.
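A minimal sketch of these rules as a predicate, assuming Python and a simplified port model (the field names and the worst_case_arrival bookkeeping are illustrative placeholders, not the thesis' data structures):

def allowed(call, port, state, t, worst_case_arrival):
    # Decide whether `call` on `port` may be offered at time tick t.
    msgs = state[port]["messages"]           # readable messages in the port
    if call == "READ_PORT":
        if msgs == 0:                        # empty queuing or sampling port
            return False
        if worst_case_arrival[port] > t:     # message might not yet be available
            return False
    if call == "WRITE_PORT" and state[port]["kind"] == "queuing":
        if msgs >= state[port]["capacity"]:  # full queuing port
            return False
    return True

state = {"B1p4": {"messages": 0, "kind": "queuing", "capacity": 2}}
print(allowed("READ_PORT", "B1p4", state, t=0, worst_case_arrival={"B1p4": 2}))  # False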


When applying these restriction rules, the number of communication schedules is reduced significantly. Figure 7.7 compares the communication schedule trees when generating all communication schedules of length 3 time units without and with applying a restriction function (Fig. 7.7(a) and Fig. 7.7(b), respectively). To further illustrate how many communication schedules can be omitted when focusing on successful API calls only, all communication schedules which have been restricted are depicted in grey in Fig. 7.7(a).

Additionally, Table 7.3 compares the generated number of communication schedules for different communication schedule lengths when applying the generation algorithm without any specific restriction function (column 2) and when restricting to successful API calls (column 3).

maximum length of          resulting number of communication schedules
communication schedules    without restriction rules    only successful API calls

3                          210                          30
4                          210                          30
5                          1 890                        88
6                          13 230                       180
7                          68 040                       340
8                          567 000                      1 420
9                          2 721 600                    5 900

Table 7.3: Number of communication schedules for different maximum lengths calculated for the example network without and with applying restriction rules

Generation Algorithm considering Restriction Functions. Generally, it is possible to apply the restriction rules to a set of already generated communication schedules. However, it is much more effective if such restriction rules are considered directly in the generation algorithm. Thus, on-the-fly avoidance of communication schedule generation is possible, which also reduces the generation algorithm's execution time. The generation algorithm is therefore extended as follows:

Initialization
    Determine the maximum trace length for the generation
    Generate an empty start trace and set the time stamp to 0

Recursive Function
    1. For all IMA modules:
       (a) If previous API call has finished or beginning of trace
           i.   Generate a list of communication API calls such that each port of the
                currently scheduled partition can be triggered
           ii.  Add TIMED_WAIT API call to list
           iii. Delete API calls which would take longer than the remaining partition
                time to send/receive a message
           iv.  Reduce list according to restriction function
       (b) Else generate a list with pseudo element BUSY
    2. For each combination of API calls with one API call per IMA module
       (a) Generate a new trace which appends the combination to the start trace
       (b) If specified trace length is not yet reached, recur with incremented time
           stamp and new trace as start trace
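In the Python sketch given after the pseudo-code in Sect. 7.2.1, the new step 1(a)iv amounts to one extra filter over each idle module's option list (again a sketch; passes_restriction stands for any concrete restriction function, such as the one sketched above):

def reduce_options(option_list, passes_restriction):
    # Step 1(a)iv: drop restricted calls, but always keep TIMED_WAIT so that
    # at least one API call remains possible for each partition.
    return [(call, busy) for (call, busy) in option_list
            if call.startswith("TIMED_WAIT") or passes_restriction(call)]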


Summarizing, restriction rules can be an effective mechanism to significantly reduce the number of generated communication schedules for a given maximum length and, furthermore, allow the test scope to be focused on specific investigation areas which are considered more important than others. Restriction rules in general can be very complex since they are allowed to reason based on the previous communication flow. It is even allowed to combine non-conflicting restriction functions. However, in order to ensure that each shorter communication schedule can be maintained, it is necessary that the restriction function does not restrict all possible API calls but that at least one API call remains possible for each partition (e.g., TIMED_WAIT).


[Figure 7.7 (communication schedule trees of length 3 time units: (a) generated without restriction rules, with the restricted schedules depicted in grey; (b) generated with the restriction function applied) is not reproduced here.]

time_stamp: 2

A: TIMED_WAIT(1)B: WRITE_PORT(B1p3)

time_stamp: 1

A: TIMED_WAIT(1)B: TIMED_WAIT(1)

time_stamp: 1

A: WRITE_PORT(A2p1)B: BUSY(1)

time_stamp: 2

A: WRITE_PORT(A2p2)B: BUSY(1)

time_stamp: 2

A: TIMED_WAIT(1)B: BUSY(1)

time_stamp: 2

A: WRITE_PORT(A2p1)B: WRITE_PORT(B1p3)

time_stamp: 2

A: WRITE_PORT(A2p1)B: TIMED_WAIT(1)

time_stamp: 2

A: WRITE_PORT(A2p2)B: WRITE_PORT(B1p3)

time_stamp: 2

A: WRITE_PORT(A2p2)B: TIMED_WAIT(1)

time_stamp: 2

A: TIMED_WAIT(1)B: WRITE_PORT(B1p3)

time_stamp: 2

A: TIMED_WAIT(1)B: TIMED_WAIT(1)

time_stamp: 2

(b)

Figure 7.7: Communication schedule trees showing all generated communication schedules of length 3 time units when (a) generated without restriction function and when (b) reduced to successful API calls. All communication schedules which are not contained in (b) are depicted in grey in figure (a).

7.2. GENERATION ALGORITHM FOR COMMUNICATION SCHEDULES

7.2.2.2 Sorting the Communication Schedules

In addition to reducing the number of generated communication schedules, it is also possible to define heuristic functions which make it possible to sort the communication schedules. This allows more interesting communication schedules to be given a higher weight (i.e., importance) than others.

Like restriction functions, different heuristic functions can be defined and even applied in combination. The following list suggests some simple heuristic functions but, generally, heuristic functions can be arbitrarily complex.

• Sort the communication schedules such that load on all interfaces is favored. Consequently, communication schedules which often contain TIMED_WAIT (i.e., which are idle with respect to communication flow) are considered less interesting than those which require frequent context switches. Additionally, shorter interface triggers can be preferred to those which are expected to have a longer duration.

• Sort the communication schedules such that simultaneous load (i.e., simultaneous interface triggers) is favored.

• Sort the communication schedules such that communication schedules which trigger all interfaces equally often are favored.

• Sort the communication schedules such that communication schedules with a normal load distribution on all resources are favored.

• Sort the communication schedules such that specific interface types are favored. This type of heuristic function can be used to prefer queuing to sampling ports (or vice versa), all interfaces which use a specific type of communication technique (e.g., only AFDX ports), or any combination of characteristics (e.g., only AFDX queuing ports).

Heuristic functions are always applied to a set of communication schedules or to a communication schedule tree and result in a sequence of communication schedules.

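To illustrate, the following minimal Python sketch shows how the first heuristic of the list above could be realized; the schedule representation (a list of per-tick mappings from partition name to API call) and all function names are assumptions made for this illustration, not the thesis' implementation.

# Minimal sketch of a heuristic function favoring interface load,
# assuming a communication schedule is a list of steps, each mapping a
# partition name to the API call it performs at that tick.
# All names here are illustrative, not taken from the thesis.

Schedule = list[dict[str, str]]  # e.g. [{"A": "WRITE_PORT(A2p1)", "B": "TIMED_WAIT(1)"}, ...]

def load_weight(schedule: Schedule) -> int:
    """Higher weight = more interesting: count steps that trigger an
    interface (anything other than TIMED_WAIT counts as load)."""
    return sum(
        1
        for step in schedule
        for call in step.values()
        if not call.startswith("TIMED_WAIT")
    )

def sort_by_heuristic(schedules: list[Schedule]) -> list[Schedule]:
    # Applying the heuristic to a set of schedules yields a sequence,
    # here ordered with the most heavily loaded schedules first.
    return sorted(schedules, key=load_weight, reverse=True)

Sorting with the weight as key directly yields the required sequence of communication schedules; combining several heuristics amounts to sorting by a tuple of weights.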
Heuristic Function Example. When applying the first of the above heuristic functions to the set of generated communication schedules of length 3 time units, the communication schedules are sorted according to the weight calculated by the heuristic function. The result is partially shown in Sect. 7.1.3.2 when presenting communication schedule sequences and completely contained in Appendix E.1.

Summarizing, heuristic functions provide a means for sorting the generated communication schedules such that the more interesting test cases come to the fore. However, heuristic functions cannot be used to reduce the number of generated communication schedules, but can, in combination with restriction functions, provide adequate means to handle the (potentially “extensive”) results of the generation algorithm.

CHAPTER 7. APPROACH FOR TESTING A NETWORK OF IMA PLATFORMS

7.3 Considerations for Implementing the Approach

The previous sections have discussed in detail the automated generation of communication schedules and means for influencing the generation algorithm's results. Communication schedules are timed traces of (simultaneous) interface triggers which can be executed by the respective network's IMA modules – or, more precisely, by the partitions scheduled by the IMA modules – because only possible API calls are considered at each time tick.² The aim of this section is to briefly discuss how the generated communication schedules can be used for testing and what has to be considered for test evaluation. This particularly includes the demands on test applications and test specifications.

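As a point of reference for the sketches in this section, such a timed trace could be represented as follows; this minimal Python data model is an illustrative assumption, not the representation used in the thesis.

# Illustrative data model for a communication schedule: a timed trace of
# (simultaneous) interface triggers. The field names are assumptions.
from dataclasses import dataclass

@dataclass
class ScheduleStep:
    time_stamp: int          # time tick at which the calls are triggered
    calls: dict[str, str]    # partition name -> API call, e.g. "WRITE_PORT(A2p1)"

# One schedule of length 3 time units for partitions A and B:
example_schedule = [
    ScheduleStep(0, {"A": "WRITE_PORT(A1p1)", "B": "TIMED_WAIT(1)"}),
    ScheduleStep(1, {"A": "TIMED_WAIT(1)",    "B": "READ_PORT(B1p2)"}),
    ScheduleStep(2, {"A": "WRITE_PORT(A2p1)", "B": "WRITE_PORT(B1p3)"}),
]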
7.3.1 Approaches for Test Execution using Communication Schedules

For testing, the communication schedules are interpreted by test applications (running in the modules' partitions) or by test specifications (executed by the test system). The test applications allow controlling all communication ports configured for the partition they are running in. The test specifications are required to control communication ports which are normally part of non-IMA controllers and peripheral equipment. Such controllers are never part of the network under test since their behavior cannot be controlled by general means; but the respective communication flow to or from these controllers is still part of the configurations considered for generating the communication schedules.

Test Applications and Test Specifications. Generally, the functionality of test applications and test specifications is similar:

• They exclusively control one communication port or a set of communication ports.

• Before test start, they perform the initialization activities (e.g., create the ports) to allow communication flow.

• To initiate communication flow, they filter and interpret the communication schedule and thus know when to trigger one of their controlled interfaces.

• For offline test evaluation, they log all interface triggers and the results (e.g., time stamp before and after each interface trigger, input and output parameters, return codes, result of CRC verification, sender or receiver of message).

• After test execution, they transfer their test result or the test log to a specific test checker specification for offline evaluation.

Test applications and test specifications use different means to implement this functionality. Test applications (like other applications) use the ARINC 653 API provided by the IMA modules' operating system, which allows them to access the communication ports by means of API calls. Using these API functions, the test applications can write to or read from ports independently of the communication technique used for message transmission. Test specifications, in contrast, have to use the means provided by the test system, which can even mean calling the driver routines directly.

For testing, the communication schedules to be used have to be available at each test application and each test specification. As shown in the previous section (particularly in Sect. 7.2.2), a test suite for communication flow testing of a specific IMA module network can comprise a huge number of generated test cases (each represented by a communication schedule). However, the data area available to test applications (and also to test specifications) for storing any kind of application-specific data is usually very limited; thus, it is often not possible for each test application and each test specification to contain all its offline prepared communication schedules in order to execute one after the other. In addition, the data area of test applications is not accessible from outside the module or by other avionics partitions running on the same IMA module. As a consequence, the communication schedules have to be distributed as part of the overall test execution, e.g., before executing each test case. Different approaches for distributing the communication schedules are possible; they are discussed in the following paragraph. Since a similar problem arises for storing and accessing the test execution logs, log collection alternatives are discussed thereafter. For both problems, it has to be considered that test applications can only access their own configured data areas and can neither read nor write any other partition's memory. System partitions might have access to the data areas of avionics partitions, but their access rights are outside the scope of ARINC 653 and are thus not considered in this thesis. At run time, the only means to "access" the test application's data from outside is the ARINC 653 API (e.g., by inter-partition communication).

² The term possible API call (as defined in this chapter) does not predetermine the expected return code, but denotes that neither scheduling problems nor wrong input parameters (such as other partitions' port identifiers) should be the reason for return codes other than NO_ERROR.

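Before turning to distribution, the following minimal Python sketch illustrates the common behavior listed above – filtering the schedule by partition, triggering the interfaces tick by tick, and compiling the communication log. It assumes the ScheduleStep record sketched above, and the port functions are stubs standing in for the ARINC 653 services, not the real API.

# Sketch of a test application's main loop. write_port/read_port/
# timed_wait are illustrative stubs for the ARINC 653 communication
# services (e.g., SEND_QUEUING_MESSAGE, READ_SAMPLING_MESSAGE).
import time

def write_port(call: str) -> str:   # stub: would invoke the write/send service
    return "NO_ERROR"

def read_port(call: str) -> str:    # stub: would invoke the read/receive service
    return "NO_ERROR"

def timed_wait(call: str) -> str:   # stub: idle step (TIMED_WAIT)
    return "NO_ERROR"

def run_schedule(partition: str, schedule) -> list:
    """Execute this partition's part of a communication schedule and
    compile the communication log for offline evaluation."""
    log = []
    for step in schedule:
        call = step.calls[partition]        # filter the schedule by partition
        start = time.monotonic()
        if call.startswith("WRITE_PORT"):
            code = write_port(call)
        elif call.startswith("READ_PORT"):
            code = read_port(call)
        else:
            code = timed_wait(call)
        end = time.monotonic()
        # Log time stamps, call, and return code for offline evaluation.
        log.append({"tick": step.time_stamp, "call": call,
                    "start": start, "end": end, "return_code": code})
    return log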
Distributing Communication Schedules Prepared Offline. The communication schedules have to be distributed to the test applications as well as to the test specifications. For test specifications, the distribution is relatively simple since the test specifications are executed by the test system, which can easily control the test specifications' input and output. For test applications, however, several alternatives exist for distributing the communication schedule; these can be grouped into (A) those which distribute the communication schedules as part of the test applications and (B) those which distribute the communication schedules using standard inter-partition communication means. Some alternatives of each group are discussed in the following.

Distribution Alternative A1. The first distribution alternative distributes the communication schedule as part of the test application by specifically compiling – for each test case – one test application for each partition which contains the respective partition's communication schedule part. Thus, each test case comprises several test application types – namely one for each partition. The main advantage of this approach is that it also allows further specializing each partition's test application type in order to consider the partition's configuration requirements, e.g., with respect to code or data memory. The main disadvantage is related to the – potentially huge – number of generated communication schedules, each of which requires preparing (and storing) several test applications to be used only in this specific test case. Moreover, this approach requires that, before each test case can be executed, the respective test applications are loaded into the partitions.

Distribution Alternative A2. The second alternative is based on the previous approach but reduces the number of different test applications to be prepared for a test suite by implementing means that identify which different communication flow behaviors are required and then providing the respective test applications only once (instead of once for each test case which requires such communication flow behavior). The main advantage is that the number of test applications to be provided for a complete test suite is reduced significantly. However, this approach still requires that the respective test applications are uploaded to the partitions before the test case execution can start.

Distribution Alternative A3. The aim of this alternative is to reduce, for each test case, the diversity of test application types by providing only one test application type per module. To this end, each test application type contains the communication schedule for all partitions of one module, and each test application instance knows how to filter the schedule depending on its partition identifier. Thus, the number of test application types can be reduced to the number of modules in the network. The advantage of this approach is the reduced number of test application types. However, for test execution, it is still required to upload the new test application code for each test case.

Page 232: System Testing in the Avionics Domain - elib.suub.uni ...elib.suub.uni-bremen.de/diss/docs/00010881.pdf · systeme auf dem Architekturrahmenwerk Integrated Modular Electronics, das

212 CHAPTER 7. APPROACH FOR TESTING A NETWORK OF IMA PLATFORMS

Distribution Alternative B1. This distribution approach overcomes the drawbacks described above by designing a generic test application which complies with the requirements of all partitions (e.g., with respect to memory). Before performing a test, the generic test application receives the next communication schedules from a test control specification running on the test system. For the communication between test application instance and test control specification, reserved communication links are used which are intended for test control only and shall not be used by the real applications for communication. The advantage of this approach is that only one type of test application is used for all tests, which means that data loading between two tests is not necessary. However, this approach requires that the configurations of all partitions contain such a distribution port, which is also desirable for other test purposes (e.g., testing single IMA modules with their final configuration) but cannot be taken for granted.

The approach is depicted in Fig. 7.8, which shows that each test application has its dedicated queuing port which is used by the test control specification. The respective communication link is depicted in red. The figure also depicts communication links for sending the communication logs compiled during test execution to a test evaluation specification for offline test evaluation.

[Figure 7.8 content: the network under test (IMA Module A with test applications A1–A3, IMA Module B with test applications B1–B2, each module with configuration tables and queuing/sampling ports) connected via AFDX, ARINC 429, CAN, analog I/O, and discrete I/O to the test system, whose communication control layer hosts the test control specification (distributing the offline prepared communication schedules) and the test evaluation specification (receiving the communication logs for offline evaluation).]

Figure 7.8: Approach for distributing offline prepared communication schedules to the test applications and for receiving the communication logs for offline evaluation (the reserved communication links are depicted in red and magenta, respectively)

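A minimal sketch of the B1 handshake could look as follows, assuming each partition's configuration contains a reserved distribution port; the port naming scheme, the JSON encoding, and the transmit primitive are illustrative assumptions.

# Sketch of distribution alternative B1: the test control specification
# on the test system pushes each partition's schedule part to the
# generic test application over a reserved queuing port.
import json

def distribute_schedule(test_case: dict, send_queuing_message) -> None:
    """test_case maps partition name -> that partition's schedule part;
    send_queuing_message(port_name, payload) is a stand-in for the test
    system's transmit primitive on the reserved link."""
    for partition, schedule_part in test_case.items():
        payload = json.dumps(schedule_part).encode()
        # One reserved distribution port per partition, e.g. "DIST_A1".
        send_queuing_message(f"DIST_{partition}", payload)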
Distribution Alternative B2. The second distribution approach of this group extends the previous one by allowing the test execution to start once transmission of the communication schedule has begun but before it has completed, and by continuing the distribution during test execution. This is advantageous if longer communication schedules are considered, but it also requires that the test applications are not permanently occupied with communication flow activities (i.e., interface triggers) and have some idle time (i.e., TIMED_WAIT) which can then be used for fetching the communication schedule. In any case, this means that the IMA modules bear a higher communication load than intended by the communication schedule, which should be considered during test evaluation to avoid distorting the overall result.

Distribution Alternative B3. To overcome the drawback of requiring specific distribution ports which might not be available, this approach uses existing communication links between the test system and the partitions to distribute the communication schedules. It is based on the assumption that the network configuration of each IMA network contains at least one communication link from a non-IMA controller – potentially even one to each module or each partition. Since, for testing purposes, all non-IMA controllers are replaced by test specifications, this means that there already exist one or several communication links from the test system which might be used for distributing the communication schedules before the test execution starts. Thus, some test applications can be reached directly and others via other partitions. However, for providing this functionality, each test application would have to know how to obtain the communication schedule and whether it is required to forward the communication schedule to other partitions via the existing inter-partition communication links. This requires that each test application be specialized to listen at its respective communication port at startup. The advantage of this approach is that the final configurations can be used without modifications by distributing and forwarding the communication schedules as required. However, this means that pre-programmed specialized behavior is required in each test application for forwarding the communication schedule so that every test application can be reached, possibly via several hops. Furthermore, distributing the communication schedule via several intermediate partitions means that relaying occurs sequentially, which is likely to be more time-consuming. The applicability of this approach also depends on the types of communication links which are available for communication schedule distribution, since not all are equally suitable for distributing communication schedules (e.g., analog signals might be unusable). Additionally, the reachability of each partition from the test system has to be ensured by analyzing the network's configuration.

The approach is depicted in Fig. 7.9, which shows (in red) the communication links used for distributing the communication schedules. Since test applications A1 and B1 have no communication link to the test system, they get the communication schedule via test application A2 (for A1) and via A2 and A1 (for B1). Therefore, test applications A2 and A1 have to forward the communication schedule.

[Figure 7.9 content: IMA Module A (test applications A1, A2) and IMA Module B (test applications B1, B2) as the network under test, each module with its configuration tables; the test system runs a test specification and a test control / test evaluation specification and distributes the offline prepared communication schedules directly to A2 and B2, to A1 via A2, and to B1 via A2 and A1.]

Figure 7.9: Approach for distributing offline prepared communication schedules to the test applications using existing communication links (distribution flow is depicted in red)

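The reachability analysis required for this alternative amounts to a path search over the configured communication links; a minimal sketch, assuming the links are given as a directed graph (an illustrative encoding), could look as follows.

# Sketch of the reachability check for distribution alternative B3:
# given the configured communication links as a directed graph, verify
# that every partition can be reached from the test system (possibly
# via relaying partitions).
from collections import deque

def reachable_partitions(links: dict[str, list[str]], source: str = "TEST_SYSTEM") -> set[str]:
    """links maps a node (test system or partition) to the nodes it can
    send to; returns all nodes reachable from the source."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {source}

# Example from Fig. 7.9: A1 and B1 are reached only via relaying.
links = {"TEST_SYSTEM": ["A2", "B2"], "A2": ["A1"], "A1": ["B1"]}
assert reachable_partitions(links) == {"A1", "A2", "B1", "B2"}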
In conclusion, there is no single favorite approach, since each has its drawbacks and a decision for one and against the others depends on many factors. For example, the first three (A1, A2, and A3) require that some mechanism exists to generate all different test application types and to upload them to the modules in an automated way. When considering the tool chain available for testing bare IMA modules and its limitations (see Sect. 6.5.1.2), this might not be possible. However, when considering distribution alternative B1, the requirements on the configurations can limit its applicability. When considering distribution alternative B2, the requirements are even more stringent since only specific communication schedules allow such an approach. For implementing a test suite, these pros and cons have to be weighed carefully before choosing one of the distribution alternatives.

Collecting Communication Logs. During test execution, each test application and each test specification logs its communication data, i.e., the time stamps before and after each API call, input and output parameters, the return code of the API call, the result of the CRC verification, and the receiver or sender of the message. The resulting communication log is stored in the data area of each test application or test specification, respectively. For evaluating the communication log, different approaches are possible: firstly, the communication log can be used for partition-internal evaluation – either on-the-fly or offline – resulting in a test verdict for the single partition; secondly, it can be evaluated by a central test evaluation specification which produces a global test verdict. Further details are discussed in Sect. 7.3.2, but both approaches have in common that the test applications have to store the communication log or test verdict in a data area which is not accessible from outside the module or by other avionics partitions. It is currently only possible to access the data when the test application sends it to a test evaluation specification using the standard inter-partition communication means. Two different approaches are possible for sending the communication logs or test verdicts, which are briefly addressed in the following.

Log Collection Alternative 1. The first approach, already pictured in Fig. 7.8, requires that specific communication links are provided which are reserved for sending the communication log or the test verdict. The advantage of such communication links is that their configuration can fulfill the requirements for sending long communication logs or short test verdicts, which are directly received by the test evaluation specification. As discussed with respect to the communication links used for distributing the communication schedules, relying on the existence of such communication links in each final configuration is not possible.

Log Collection Alternative 2. The second approach can be chosen if these specific communication links are not available. Like distribution alternative B3, it is based on the assumption that the network's configuration contains communication links to non-IMA controllers which are – during communication flow testing – changed into communication links to the test system. Furthermore, it is assumed that partitions which have no such direct communication link can send data to the test system by routing all data via one or several relaying partitions such that it finally reaches the test system – like in the Internet, where packets can be passed from one computer via several routers to their final destination. However, the advantage of transmitting the test results without relying on specific communication links comes at the expense of the effort required for defining the possible routes and for specializing the test applications to support the routing functionality. Moreover, this approach is highly configuration-dependent since not all types of communication links can be used equally well; for example, it is not convenient to transmit long communication logs via analog signals.

The approach is depicted in Fig. 7.10, which shows (in magenta) the routes for sending the test results of the partitions to the test evaluation specification. Only test application A2 has no direct communication link, and thus its data is relayed by test application A1.

[Figure 7.10 content: IMA Module A (test applications A1, A2) and IMA Module B (test applications B1, B2) as the network under test, each module with its configuration tables; the test system runs a test specification and a test control / test evaluation specification and receives the communication logs for offline evaluation directly from A1, B1, and B2, and from A2 via A1.]

Figure 7.10: Collecting test results via existing communication links (test result flow is depicted in magenta)

Further Requirements for Test Applications. The test applications – independent of which approaches are selected for distributing the communication schedules and for collecting the test data – are running in the configured partitions and thus have to respect the requirements determined by the ARINC 653 specification, module-specific user specifications, or the IMA module's configuration. In particular, the following two aspects have to be regarded:

• To ensure correct partitioning and correct scheduling of the IMA modules, it is required that each test application complies with the configuration of its respective partition, in particular with respect to available memory for code and data, scheduling, and communication ports. Considering these requirements, specialized types of test applications might be easier to handle than generic test applications since it is then possible to tailor each test application appropriately. However, this specialization should – if possible – not affect the generic behavior required for interpreting communication schedules.

• While executing one test case, the test applications interpret a communication schedule which specifies when to use which of the test application's (i.e., partition's) communication ports. ARINC 653 communication ports have to be opened by using the specific API services (CREATE_SAMPLING_PORT and CREATE_QUEUING_PORT, respectively) before they can be used for sending or receiving messages. Since all communication schedules assume that all ports are already created, the test applications have to perform the required initialization activities. After completing these, the partitions synchronize on a common point in time to start the actual test execution. Note that this synchronization may be arbitrarily complex if the partitions have to agree on this on their own using a specific (hardware) mechanism or protocol. But it may also be fairly easy if test management links to the test system are available, so that centralized coordination by the test control specification is possible. How such mechanisms are implemented is highly specific to the respective test setup and is therefore not considered further in this thesis.

Replacing Non-IMA Communication Components by Test Specifications. The module configurations of the network under test contain intra-network (i.e., inter- and intra-module) communication links as well as communication links to or from non-IMA components. These components are any kind of either non-IMA controllers and smart peripherals or simple peripheral equipment like sensors and actuators. If these components were physically part of the network under test, it would be necessary to consider their requirements, limitations, and characteristics not only with respect to communication but also with respect to providing adequate test applications. Since generic means for this are not available and considering each type separately is too costly and time-consuming, all non-IMA components are considered as single-CPU execution components that are replaced by test specifications. The test specifications thus become the communication link's source or destination.

CHAPTER 7. APPROACH FOR TESTING A NETWORK OF IMA PLATFORMS

The test specifications are executed by the test system which runs in parallel to the network of IMA modules and thus allows simultaneous interface triggers. Moreover, the test specifications can be distributed across several CPUs which – if supported by the test system and the test engine – may be exclusively reserved for an arbitrary subset of test specifications.

For replacing the non-IMA components, the general strategy is to assign one test specification to one replaced execution component. Execution components are all single-CPU components which inherently prohibit simultaneous communication. Depending on the specification information available for the replaced component, the scheduling and performance information is mapped – for the generation algorithm – to the configuration structures used for IMA modules. If detailed scheduling information (e.g., frequency of message transmission) is not available, it is assumed that the replaced component is scheduled all the time. As a consequence, more communication flows may be tested than are possible under normal operating conditions of the replaced component. When analyzing the performance information of the component to be replaced – in particular the duration for transmitting and receiving messages – these values have to be compared to those required within the test specification. If the performance of the test specifications is worse, appropriate actions have to be taken, ranging from assigning the test specification to an exclusively reserved CPU or improving the test engine's performance to splitting the test specification into several parts which can be executed simultaneously.

Depending on the different platform characteristics, replacing complex non-IMA controllers (e.g., smart peripherals) is handled differently than replacing simple peripheral equipment (e.g., actuators or sensors). When replacing the former, each controller is represented by a test specification which – like the replaced component – can handle several communication links for receiving and sending messages. When replacing simple peripheral equipment, it can be assumed that these are either sensors or actuators and can thus be replaced by test specifications acting as sender or receiver components, respectively. Figure 7.11 depicts one single-CPU non-IMA controller as well as two actuators and one sensor replaced by the respective types of test specification. It also shows that the test control and test evaluation is performed by another test specification. The figure abstracts from concrete assignments of the test specifications to reserved CPUs of the test engine.

[Figure 7.11 content: the network under test (IMA Modules A and B with test applications A1, A2, B1, B2 and their configuration tables) connected to the test system, where test specification 1 performs test control / test evaluation, test specification 2 acts as receiver and sender component (“single-CPU non-IMA controller”), test specifications 3 and 4 act as receiver components (“actuators”), and test specification 5 acts as sender component (“sensor”).]

Figure 7.11: Generic test specifications replacing a single-CPU non-IMA controller (test specification 2), two actuators (test specifications 3 and 4), and a sensor (test specification 5)

7.3. CONSIDERATIONS FOR IMPLEMENTING THE APPROACH

7.3.2 Approaches for Test Evaluation

When regarding how the results of the generation algorithm may be employed for testing the communication flow in a network of IMA modules, approaches for evaluating test execution results have to be considered as well. It is the aim of this section to discuss which evaluation activities are necessary and how these can be shared between test applications, test specifications, and dedicated test evaluation specifications. Different approaches are possible, and the choice depends to some extent on the log collection approach used by the test applications and test specifications.

When executing a specific communication schedule, numerous parameters are measured, all of which should be considered when evaluating the schedule's overall result. Generally, on-the-fly or offline checks can be performed by the test applications / test specifications or by dedicated test evaluation specifications, provided that the required information is available. In practice, however, this may be limited for several reasons: Firstly, test applications can be very busy performing the sequence of API calls specified by the communication schedule and thus may not have spare time for comprehensive on-the-fly checking or immediate transmission of the test logs for on-the-fly evaluation by a test evaluation specification. Secondly, test applications may not have enough memory to store all required information for offline evaluation and thus have to perform certain checks on-the-fly. Thirdly, for some investigations, it may be required to incorporate the test execution logs of several partitions; for checking by the test applications themselves, this in turn would require an adequate test log exchange which is subject to the aforementioned restrictions. Fourthly, each test case requires a common test result which combines the results of all test applications and all test specifications – a task likely to be performed by a central test evaluation specification gathering this information. However, for different network configurations and different sets of communication schedules, each factor may be more or less restrictive, and thus the assignment of specific verification tasks to test applications or test evaluation specifications has to be decided based on this information. Consequently, the extent of distributed on-the-fly evaluation, distributed offline evaluation, centralized on-the-fly evaluation, and centralized offline evaluation has to be determined for each test suite. Nevertheless, some general indications can be given on what information is required for each check, what the configuration limitations are, and what the pros and cons of possible assignments are.

The list of checks is compiled by analyzing which parts of the test objectives (see Sect. 5.3.1) are already met by performing all generated communication schedules and what has to be evaluated to ensure that the results of the execution are as expected. For example, the generation algorithm's mode of operation ensures that simultaneous network access is considered wherever possible (and not restricted on purpose by the applied restriction function). Consequently, if each of the respective simultaneous interface triggers in all generated communication schedules can be performed as expected, simultaneous network access has been proven. In this example, it thus remains to check that each API call can be performed when expected and returns the expected output parameters and return codes.

Thus, the list of checks that should be considered for evaluating the communication flow includes (but is not limited to):

Checking the API Call Durations. It has to be verified that no performed API call has exceeded its allowed maximum duration as specified by the worst-case software latency. This is ensured if it was always possible to perform the next scheduled API call at the time specified by the communication schedule. For this reason, the expected idle time is explicitly specified in the communication schedule as a TIMED_WAIT API call, which ensures that every overrun attempt can be detected. In addition, it can also be checked whether the measured durations comply on average with the expected mean API call duration. The verification can be done (a) on-the-fly by the executing test application, which almost automatically detects such an irregularity when the next API call cannot be started on time, or (b) offline by the test application or test evaluation specification when analyzing the information compiled in the communication log (see the sketch below). For the latter, it is required that at least the start time of each API call is logged, but preferably also its end time, since this may allow for a fine-grained evaluation. On-the-fly checking by the test evaluation specification is only possible if at least the start time of each API call is transmitted to the test evaluation specification and checked against the communication schedule. Preferably, this check is performed on-the-fly by the test application, which gathers this information almost automatically and can stop the test execution in case of discrepancies. However, if any API call takes longer than specified in the performance information, the generation algorithm's basic assumption is violated, which also calls the other generated communication schedules into question.
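An offline variant of check (b) could be sketched as follows, assuming log entries that carry the scheduled tick as well as the measured start and end time of each API call (all field names are illustrative assumptions).

# Sketch of the offline API-call-duration check: compare each logged
# call against the tick it was scheduled for and against its worst-case
# software latency. Times are in seconds relative to test start.

def check_api_call_durations(log, tick_length_s, worst_case_latency_s):
    """log entries: dicts with 'tick', 'call', 'start', 'end'.
    Returns a list of violation messages (empty list = check passed)."""
    violations = []
    for entry in log:
        scheduled_start = entry["tick"] * tick_length_s
        if entry["start"] > scheduled_start:
            violations.append(
                f"{entry['call']} started late at tick {entry['tick']}")
        if entry["end"] - entry["start"] > worst_case_latency_s:
            violations.append(
                f"{entry['call']} exceeded the worst-case latency")
    return violations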

Checking the Duration of Message Transmission. When checking the duration of a message transmission, it is verified that the time from the end of the sending API call until the message is available at the receiving partition does not exceed the respective worst-case duration (i.e., the sum of the respective worst-case hardware latencies and the worst-case transmission time). However, timely availability at the receiving partition can only be checked by an appropriate read API call, which would require that the respective partition is scheduled at that point of time – partition scheduling, however, is fixed and cannot be enforced. An example of a communication schedule with an appropriate read API call is depicted in Fig. 7.1. For the evaluation, three cases have to be distinguished: (1) If it is possible to perform the respective read API call at the calculated point of time (or even slightly before) and the message can be read successfully, the duration of the message transmission has not been exceeded. (2) If the read API call is scheduled for execution at the calculated point of time (or slightly later) and the message reception fails, the duration of the message transmission has been exceeded. (3) If – for partition scheduling reasons – the read API call can only be executed slightly before or slightly after the calculated point of time and then fails or succeeds, respectively, no definite result can be provided; a heuristic function can help to estimate the probability that the message will be or has been available in time. In any case, this checking approach is very complex and requires information from several partitions, which motivates performing the checks offline by central test evaluation specifications based on the communication logs of the partitions. In addition, centralized offline evaluation simplifies assessment by a heuristic function.

Checking the Expected API Call Return Codes. Return code evaluation is a difficult task since the correct return code for an API call depends on the previous communication behavior of all test applications and test specifications. It is even more difficult to predetermine a correct return code since there is no fixed duration for each API call and message transmission but an allowed range with the specified worst-case value as the upper bound. This can lead to situations where – according to the worst-case latencies and transmission times – a message has not yet been completely transmitted and thus an attempt to receive it should fail, but actually succeeds because the transmission was faster than required for worst-case situations. Moreover, if the same API call for receiving the message is repeated when the message transmission must have been completed even in worst-case situations, the return code of the previous API call has to be considered – in particular for queuing port communication with destructive reading. This means the problem of determining the correct return code carries on and may even require considering all previous return codes. It even snowballs if the return code of one partition affects the expected return code of another one. For example, when considering a queued communication link with queue length 1 between two partitions, where the sending partition repeatedly sends a message at intervals shorter than the worst-case transmission duration, the return code to be expected depends on (a) the actual transmission duration and (b) the frequency of reading the message in the destination partition. Assuming that the frequency of reading attempts is much higher than the sending frequency, the expected return code for each message sending attempt can still not be predetermined, but may vary depending on other simultaneous network access which might slow down the message transmission. However, if each sent message shall be read only after waiting the worst-case transmission duration, each read API call must succeed. Likewise, if a new message is only transmitted after the previous one could be read successfully, the respective API call must succeed. Other situations, however, may be predetermined, and only one return code is correct. This includes, for example, any read attempts from empty ports where the respective sender partition has not yet started transmitting a message (according to the communication schedule), or writing into full queuing ports where the respective receiver partition has not yet started a read attempt (according to the communication schedule). Often, these situations are most likely to occur at the beginning of a communication schedule. All examples above have shown that determining the expected return code may require a complex algorithm and information about previously triggered API calls and their respective return codes (see the simplified sketch below). While the former can be extracted from the communication schedule, the latter requires exchanging the return codes. Thus, on-the-fly evaluation by the test applications or by a central test evaluation specification depends on an adequate on-the-fly return code exchange protocol. However, such a protocol may not be usable if the network is already heavily loaded for communication flow testing. Consequently, return codes are preferably evaluated offline after exchanging the communication logs. Most likely, the central test evaluation specification performs the checks, assuming that it collects the communication logs for other purposes anyway.
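To give the flavor of such an algorithm, the following deliberately simplified sketch predetermines expected return codes for sends on a queuing port with destructive reading by replaying only the send/read counts of the schedule; the transmission-timing effects discussed above are ignored, so a real checker could rely on such a replay only where timing cannot change the outcome. The return code names follow ARINC 653 usage, but the data layout is an illustrative assumption.

# Simplified sketch: predetermine expected return codes for queuing-port
# sends by replaying the schedule's send/read counts. Transmission-time
# effects (the hard part discussed above) are deliberately ignored.

def expected_send_codes(events, queue_length):
    """events: schedule-ordered list of ('send'|'read', port) tuples.
    Returns the expected return code for each send event."""
    occupancy = {}      # port -> messages currently queued
    codes = []
    for kind, port in events:
        n = occupancy.get(port, 0)
        if kind == "send":
            if n >= queue_length:
                codes.append("NOT_AVAILABLE")   # queue full: send must fail
            else:
                occupancy[port] = n + 1
                codes.append("NO_ERROR")
        elif n > 0:                             # destructive read
            occupancy[port] = n - 1
    return codes

# Queue length 1: the second back-to-back send must fail.
codes = expected_send_codes([("send", "B1p3"), ("send", "B1p3"),
                             ("read", "B1p3"), ("send", "B1p3")], 1)
assert codes == ["NO_ERROR", "NOT_AVAILABLE", "NO_ERROR"]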

Checking the Correct Message Transmission. In addition to the return code of each API call, the correct message transmission can be verified by analyzing the output parameters of those API calls which are supposed to successfully receive a message. Two approaches are possible: either the received message is compared with the sent message, or the CRC of the message is checked, which implicitly verifies the message's content and length without comparing it with the sent message. Since, for the first approach, the sent and received messages have to be available to the evaluating instance, the second approach is generally preferred, although it cannot be applied to very small messages (i.e., messages which are too small to contain the CRC field); such messages can be saved anyhow. To avoid saving the other, longer messages in the communication log, the CRC checking is best performed on-the-fly by the test applications, which then log only the CRC check result (see the sketch below).

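A minimal sketch of the CRC approach, assuming the sender appends a CRC-32 over the payload to each message (the framing is an illustrative assumption):

# Sketch of on-the-fly CRC verification: the sender appends a CRC-32 of
# the payload, the receiver recomputes it and logs only the boolean
# result instead of the whole message.
import struct
import zlib

def frame_message(payload: bytes) -> bytes:
    return payload + struct.pack("<I", zlib.crc32(payload))

def crc_ok(message: bytes) -> bool:
    payload, received_crc = message[:-4], struct.unpack("<I", message[-4:])[0]
    # A matching CRC implicitly verifies the message's content and length.
    return zlib.crc32(payload) == received_crc

assert crc_ok(frame_message(b"sensor sample 42"))
corrupted = frame_message(b"sensor sample 42").replace(b"42", b"43")
assert not crc_ok(corrupted)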
Checking the Correct Sequence of Messages. Sequence information is not contained in the input / output parameters and thus has to be added as part of the message content. By checking the sequence of message transmission, it shall be ensured that messages are neither duplicated nor lost. For queued messages, this means checking for strictly increasing sequence identifiers (per communication link), where the identifier is incremented by 1 with each new message. For sampling messages, lost and duplicated messages are less easy to identify since the receiving partition is not required to read every message and is also not hindered from reading a message several times. However, duplicated sampling messages are harmless as long as they do not arrive after the next message, but lost sampling messages should be detected since sampling communication does not necessarily imply frequent updates where one lost message might be accepted. Therefore, for checking that no sampling message is lost, it is required to analyze which is the last sequence identifier that must already have been transmitted (i.e., also in worst-case scenarios) based on the communication schedule or the communication log of the sending partition. Assuming that such an analysis is too complex to be carried out on-the-fly by the receiving test applications, checking the correct sequence of sampling messages will probably be performed offline based on the communication logs. Since the checking entity needs to have access to the sender's as well as to the receiver's communication log, this is probably the central test evaluation specification. Nevertheless, the correct sequence of queuing port communication may be checked on-the-fly by the test application or offline by the test evaluation specifications (a sketch of this check follows below).
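For queuing port communication, the check itself is simple; a minimal sketch with illustrative names:

# Sketch of the sequence check for queuing-port communication: sequence
# identifiers embedded in the message content must increase strictly by
# 1 per communication link, otherwise messages were lost or duplicated.

def check_queued_sequence(received_ids: list[int]) -> list[str]:
    problems = []
    for previous, current in zip(received_ids, received_ids[1:]):
        if current == previous:
            problems.append(f"duplicated message {current}")
        elif current != previous + 1:
            problems.append(f"lost message(s) between {previous} and {current}")
    return problems

assert check_queued_sequence([1, 2, 3, 4]) == []
assert check_queued_sequence([1, 2, 2, 4]) == [
    "duplicated message 2", "lost message(s) between 2 and 4"]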

Summarizing, it is suggested that (a) all checks which require complex analysis of the previous interactions or information from several partitions are performed by central test evaluation specifications, provided it is possible to log and transmit the required information, and that (b) all other checks – in particular those which would otherwise lead to extremely large communication logs – are performed by the test applications. Nevertheless, the final allocation of responsibilities depends on the restrictions imposed by the network configuration.

7.4 Assessment of the Algorithm

The previous sections have shown that the number of communication schedules generated by the generation algorithm increases (as expected) when longer communication schedule durations are considered and that this causes a much longer – and often even too long – test execution time when all generated communication schedules are executed. Section 7.2.2 has therefore discussed how focusing on specific test objectives (e.g., only successful API calls) can reduce the number of generated communication schedules by applying appropriate restriction functions. It is the aim of this section to assess the relation between the characteristics of the network under test and the number of generated communication schedules. This shall help to identify the factors influencing the complexity of the generation algorithm. Moreover, this assessment shall help to investigate how the execution time of the algorithm (as one measure of its complexity) scales if longer – and thus more – communication schedules are considered.

The assessment is based on the results of a prototype implementation described in the following section (Sect. 7.4.1), which is used in conjunction with several example networks described in Sect. 7.4.2. The results of the assessment are discussed in Sect. 7.4.3.

7.4.1 Prototype Implementation of the Generation Algorithm

To perform the assessment, a prototype of the generation algorithm has been implemented and two restriction functions have been defined. The prototype implementation of the generation algorithm is meant as a proof of concept and is thus not especially optimized for performance – neither for execution time nor for memory utilization. The expectation was that related problems could be overcome by running the implementation on a powerful machine (memory, CPU, etc.). However, even on a 2.2 GHz 4-CPU PC with 3960 MB RAM, the limitations of the prototype implementation appeared very early, which becomes apparent by the following: The prototype implementation keeps all generated communication schedules in an internal tree representation which additionally contains references to the configuration details of the scheduled partitions and their ports. As a consequence, the generation process easily runs out of memory because, for certain example networks, the number of generated communication schedules may already exceed 9 million for relatively short schedules of 6 time units length. This problem cannot be solved by upgrading the computing hardware because, for standard Unix/Linux operating systems running on 32-bit processors, the maximum memory per process is limited to approximately 2 GB. But different kinds of optimizations may be pursued, including supporting intermediate storage of the internal data structures on a hard drive, further optimizing the internal data structures regarding memory efficiency, and splitting the generation tasks such that different processes can generate their subsets of communication schedules in parallel. Such (and maybe other) enhancements should, in principle, allow an implementation of the generation algorithm to calculate schedules of virtually arbitrary length and for any kind of network.

7.4. ASSESSMENT OF THE ALGORITHM

Restriction Functions in the Prototype Implementation. The prototype implementation comprises two restriction function examples which can be used to reduce the number of generated communication schedules:

Restriction Function restriction1. The first restriction function prevents those API calls which try to read from empty (sampling or queuing) ports or try to write into full queuing ports. This means that an API call is not considered for execution if one of the following conditions holds (note that each condition considers the previous steps of the communication schedule but disregards the message transmission time; a sketch of these conditions follows the list):

• for an API call which would read from a sampling port: there has not yet been a write API call for the respective source port (i.e., no message yet available for reading),

• for an API call which would read from a queuing port: there have not been more write API calls for the respective source port than read API calls from the port (i.e., the queue is empty),

• for an API call which would write into a queuing port: there have already been n more write API calls for this port than read API calls for the respective destination port, where n is the queue length (i.e., the queue is full).

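A minimal sketch of these conditions, assuming the port state is tracked by counting the write and read API calls in the schedule prefix (all names are illustrative assumptions):

# Sketch of restriction function restriction1: discard API calls that
# would read from an empty port or write into a full queuing port,
# based only on the write/read counts of the schedule prefix (message
# transmission time is disregarded, as described above).

def allowed_by_restriction1(call_kind, port_kind, writes, reads, queue_length=None):
    """call_kind: 'read' or 'write'; port_kind: 'sampling' or 'queuing';
    writes/reads: counts of earlier calls on the port's link.
    queue_length must be given for writes to queuing ports."""
    if call_kind == "read" and port_kind == "sampling":
        return writes > 0                      # a message must have been written
    if call_kind == "read" and port_kind == "queuing":
        return writes > reads                  # queue not empty
    if call_kind == "write" and port_kind == "queuing":
        return writes - reads < queue_length   # queue not full
    return True                                # writes to sampling ports are always allowed

assert not allowed_by_restriction1("read", "queuing", writes=2, reads=2)
assert allowed_by_restriction1("write", "queuing", writes=2, reads=2, queue_length=1)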
Restriction Function restriction2. The second restriction function discards the same API calls but additionally checks that no attempt is made to read messages which might, in worst-case situations, not yet have been transmitted. For this, the worst-case software and hardware latencies as well as the worst-case transmission time are considered. This means that an API call is not considered for execution if one of the following conditions is true:

• for an API call which would read from a sampling port: there has not yet been a write API call for the respective source port, or not enough time has passed since the first write API call to guarantee that the message has been transmitted even in worst-case situations (i.e., possibly no message yet available for reading),

• for an API call which would read from a queuing port: there have not been more write API calls for the respective source port than read API calls from the port, or not enough time has passed since the related write API call was started to ensure that the message could have been transmitted even in worst-case situations (i.e., the queue might be empty),

• for an API call which would write into a queuing port: there have already been n more write API calls for this port than read API calls for the respective destination port, where n is the queue length (i.e., the queue is full).

“Restriction Function” restriction0. If no restriction function is applied when executing the prototype implementation, all possible API calls are considered (i.e., all API calls which can be performed in the remaining scheduling time of the partition and which access ports assigned to the partition).

7.4.2 Example Networks

The following assessment of the generation algorithm uses the sets of communication schedules generated when executing the prototype implementation for different example networks. To achieve sound and representative results, the network examples have to be chosen carefully and have to cover the main factors of a network configuration which may influence the generation algorithm's result. Different (relatively simple) network examples have been generated, each varying one of the influencing factors with respect to another network example. Using artificial example networks is advantageous because it allows each one to focus on only one influencing factor – which is unlikely when using different real-world network configurations. Furthermore, artificial example networks can be tailored such that they can cope with the limitations of the prototype implementation. Finally, real-world network examples cannot be considered in this thesis due to the confidentiality policies of the project partners.

Network Variation Factors. For the generation algorithm described in Sect. 7.2, the main influencing factors include

• the number of communicating components in the network configuration (which includes all IMA and non-IMA controllers),

• the number of partitions per component (in particular per IMA module),

• the number of ports per partition,

• the types of the considered ports,

• the scheduling configuration of each module, and

• the performance characteristics of each port and communication link.
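As a rough illustration only (the thesis does not define a concrete input format, so every field name below is hypothetical), these factors can be pictured as one configuration record per example network:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical record bundling the influencing factors of one example
# network; not the actual input format of the prototype implementation.
@dataclass
class NetworkConfig:
    components: List[str]                       # IMA and non-IMA controllers
    partitions: Dict[str, List[str]]            # component -> its partitions
    ports: Dict[str, List[str]]                 # partition -> its ports
    port_types: Dict[str, str]                  # port -> "sampling" or "queuing"
    schedule: Dict[str, List[Tuple[str, int]]]  # module -> (partition, time units)
    performance: Dict[str, int]                 # port/link -> worst-case latency
```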

For the following assessment, some possible variations (when focusing on one influencing factor) are not considered for the following reasons:

• No Variation of Scheduling Configurations. As discussed above, the prototype implementation is not optimized (except for one small optimization to be discussed shortly in Sect. 7.4.3.4), which becomes apparent when generating longer communication schedules. For example, for the small network introduced in Sect. 7.1, the prototype implementation of the generation algorithm can generate all communication schedules up to length 17 time units when applying restriction function restriction2. This is rather short when considering the length of each communication schedule (i.e., test case), but relatively long compared to other example networks where the prototype implementation is only capable of generating communication schedules with a maximum length between 4 and 9 (depending on the network example and the applied restriction function). For this reason, it is not meaningful to consider example networks which vary the scheduling configuration or performance characteristics because their effect could not be seen.

• No Variation of Performance Characteristics. Influencing factors which obviously affect the number of generated communication schedules are not considered in the assessment. Therefore, the worst-case software latencies are not varied: a longer software latency implies that a partition is busy longer with the particular API call, which obviously results in fewer combinations and thus fewer communication schedules. Conversely, shortening the software latencies results in more combinations and thus more communication schedules (and is not possible in some of the cases anyhow).

• Limited Variation of Inter-dependent Influencing Factors. Some influencing factors cannot be considered independently. For example, varying the number of partitions requires that the scheduling configuration and either the total number of ports or the number of ports per partition are changed as well. Each factor can be changed in various ways, and the multitude of possibilities results in an explosion of the potential number of combinations that cannot be reduced to a small subset of “interesting” ones, i.e., combinations that can give information about the influence of each single factor. An assessment to such an extent is therefore outside the scope of this thesis. However, certain influencing factors cannot be varied without also changing other factors. For example, varying the number of modules necessarily varies the number of partitions if the scheduling of those modules which are contained in both compared networks shall not be changed.


Grouping of Example Networks. In the following assessment, 12 different example networks are considered, each consisting of two or three IMA modules with communication links between their partitions. For simplicity, all example networks contain only connected communication links. The example networks can be grouped as follows:

Networks varying the number of modules. For investigating the influence of the number of modules in the network, two example networks (network1 and network2) are compared. The first one is the standard example network (see Sect. 7.1) which consists of two IMA modules with two and three partitions, respectively. The second network (depicted in Fig. 7.14(c)) consists of three IMA platforms; the new module has the same number of partitions and the same scheduling configuration as the first one. To ensure that only one factor (namely the number of considered IMA modules) is varied, the ports and communication links which were considered for partition A2 are moved to partition C2.

Networks varying the number of ports per partition. For varying the number of ports, several network examples are generated which can be subdivided according to what types of ports are considered.

• All types of communication ports. Two network examples belong to this group: the standard example network (network1) and another example network (called network3) which adds two ports to each partition, i.e., the number of ports is increased from 12 to 22. The performance characteristics of the new ports are similar to those of the other ones. The example network is shown in Fig. 7.15(c).

• Only queuing ports. This subgroup consists of two networks (network4 and network5) which both consist of two IMA platforms with three partitions each. The scheduling configuration of both modules is equal (each partition is scheduled for two time units). Each partition has source as well as destination queuing ports which are connected such that a source port in one partition is connected to a destination port in the respective partition of the other module, e.g., from partition A1 to B1. Each partition in a specific example network has the same number of ports: five ports per partition in network4 and ten ports in network5. The networks are depicted in Fig. 7.16(a) and Fig. 7.16(c), respectively.

• Only sampling ports. Seven example networks belong to this subgroup, which can be further divided into two groups. The first one consists of two example networks (network6 and network7) which are like network4 and network5, respectively, but use only sampling ports. They are shown in Fig. 7.17(c) and Fig. 7.18(c), respectively. The second one subsumes five example networks, each of which has two IMA platforms with one partition in each module which is scheduled permanently. This means that with respect to the number of modules and partitions, these networks have been reduced to (almost) the minimum. Each partition has one or several pairs of sampling ports (i.e., one source and one destination port) which are connected to the respective ports of the other module's partition. The smallest example network of this group (network8) has two ports (i.e., one pair) per partition and the others each have two more ports per partition, i.e., network9 has 4, network10 has 6, network11 has 8, and network12 has 10 ports per partition. They are depicted in Fig. 7.19(a) – 7.19(e).

7.4.3 Assessment Results

For the assessment, the prototype implementation of the generation algorithm is executed for each of the example networks described above and provides three outputs for each run: the number of generated communication schedules, the generated set of communication schedules (which is not considered further in the following), and the prototype's execution time. For each network, the generation algorithm is applied to different combinations of communication schedule length and restriction function. This means that for each of the three restriction functions, the communication schedule length starts with 3 and is stepwise incremented until the prototype – due to the aforementioned limitations – fails to generate all communication schedules of the given maximum communication schedule length. The results of the executions on a 2.2 GHz 4-CPU PC with 3 960 MB RAM are summarized in Appendix F (Table F.1 – F.5).

In the following sections, the results of the prototype's execution are investigated and compared such that the influence of the varied factor can be assessed. At first, the execution results of example network network1 are presented (Sect. 7.4.3.1); then the influence of varying the number of modules and the number of ports is analyzed in Sect. 7.4.3.2 and Sect. 7.4.3.3, respectively. Finally, it is investigated how the average computation time per resulting communication schedule changes for different schedule lengths and when applying different restriction functions (Sect. 7.4.3.4). Section 7.4.3.5 summarizes the assessment results.

7.4.3.1 Investigating the Results of an Example Network

For investigating and comparing the execution results, the results for each example network are assessed independently. This is done by comparing the results (a) when generating longer communication schedules and (b) when applying different restriction functions. These assessment results are the basis for further comparisons and therefore a graph is provided for each example network. Its aim is to show, on the one hand, how the number of generated communication schedules increases when longer communication schedules are generated and, on the other hand, how the number of generated communication schedules can be reduced when applying different restriction functions.

To develop a general understanding of these graphs, this section analyzes the graph for example network network1 which is shown in Fig. 7.12. The graph's x axis shows the lengths of the generated communication schedules and the y axis the number of generated communication schedules for the respective length. Since the number of generated communication schedules increases exponentially, a logarithmic scale is used for the y axis. The graph contains six curves: The first three curves depict the prototype's execution results for the three different restriction functions (using the values given in Appendix F). The curve for restriction0 is given in red, the curve for restriction1 in green, and the curve for restriction2 in blue. The graph shows that each tightening of the restriction function reduces the number of generated communication schedules and further results in a flatter slope. The other three curves – which are included only in this graph – are fitting curves, each related to one of the previous curves. They are calculated by applying the nonlinear least-squares Marquardt-Levenberg algorithm (as implemented in gnuplot) to the respective curve. Using these fitting curves, it is shown that the result curves can be approximated by a function f(x) = e^(p·x) with p = 1.55426 for restriction0, p = 1.00794 for restriction1, and p = 0.938944 for restriction2.
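For reproducing such a fit outside gnuplot, the sketch below uses SciPy's curve_fit, which likewise defaults to the Levenberg-Marquardt algorithm for unconstrained problems; the data points are illustrative placeholders, not the measured values from Appendix F.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit f(x) = exp(p * x) to the number of generated communication schedules
# per schedule length. The counts below are illustrative placeholders.
lengths = np.array([3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
counts = np.array([30, 100, 320, 1000, 3200, 10000, 32000, 100000], dtype=float)

def model(x, p):
    return np.exp(p * x)

# Without bounds, curve_fit uses the Levenberg-Marquardt algorithm,
# matching gnuplot's fit command.
(p_hat,), _ = curve_fit(model, lengths, counts, p0=[1.0])
print(f"fitted exponent p = {p_hat:.5f}")
```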

The results of the other example networks are not investigated in such detail, but are included for completeness: for example network network2 in Fig. 7.14(d), for network3 in Fig. 7.15(d), for network4 in Fig. 7.16(b), for network5 in Fig. 7.16(d), for network6 in Fig. 7.17(d), for network7 in Fig. 7.18(d), and for network8 to network12 in Fig. 7.19(f) (only for restriction function restriction2). For comparability, the range and the scale of the x and y axes remain unchanged across all these graphs.

Figure 7.12: Graph for network1 showing the number of generated communication schedules (for different communication schedule lengths and for different restriction functions) and additionally the calculated fitting curves

In addition to the previous analysis, Fig. 7.13 shows how much the number of generated communication schedules is reduced when applying restriction function restriction1 or restriction2. For example, applying restriction2 reduces the number of generated communication schedules of length 3 by a factor of 7 compared to the generation without restriction function (i.e., restriction0); the number of generated communication schedules of length 10 is even reduced by a factor of 595 (restriction2) or 259 (restriction1). The reduction factor comparing the restriction functions restriction1 and restriction2 lies between 1.1 (for schedules of length 3) and 2.7 (for schedules of length 16). Note that Fig. 7.13 uses a logarithmic scale for the y axis. This graph is not provided for the other example networks.

7.4.3.2 Investigating the Variation of the Number of Modules

For investigating the influence of the number of modules on the results, two example networks are compared which vary only the number of modules (and thus the total number of partitions) but not the total number of communication ports or the scheduling of the remaining modules. The example networks used for the comparison are network1 and network2. Figure 7.14 shows the example networks (Fig. 7.14(a), Fig. 7.14(c)) as well as their single network results (Fig. 7.14(b), Fig. 7.14(d)) and additionally contains their comparison in Fig. 7.14(e). It shows that the number of generated communication schedules does not change when varying the number of modules without varying the total number of communication ports.

7.4.3.3 Investigating the Variation of the Number of Communication Ports

For investigating the influence of the number of communication ports, example networks are compared which vary only the number of communication ports per partition (and thus the total number of communication ports) but not the number of partitions or the number of modules in the network. In addition to varying the number of ports, it is further analyzed how the different port types influence the number of generated communication schedules.


Figure 7.13: Comparison of reduction ratios achieved by different restriction functions

Variation When the Network Contains All Communication Port Types. The simplest variation of the number of communication ports is to add (or remove) the same number of ports per partition and connect them such that each port is connected to another new port. In example network network3, two ports are added to each partition (with respect to the configuration of network1) such that the type and direction of the new ports counterbalance the existing ports, e.g., by adding two sampling input ports to the two outgoing queuing ports of partition A2. This is depicted in Fig. 7.15(c).

Figure 7.15 provides the graphs for comparing the results of example network network3 with those of network1: Figure 7.15(b) shows the prototype's results for example network network1 which have been discussed above. Figure 7.15(d) shows the curves for example network network3, i.e., the prototype's execution results for the three restriction functions. When approximating them by the function f(x) = e^(p·x), a fitting p for restriction0 is p = 2.28, for restriction1 p = 1.4559, and for restriction2 p = 1.38349. This means that the slope of the curves when applying restriction1 or restriction2 is only a little smaller than when applying restriction0 for network1. This is also illustrated in Fig. 7.15(e) which pictures the result curves of network1 and network3.

Variation When the Network Contains Only Queuing Port Types. For comparing two example networks with different numbers of queuing ports (in total and/or per partition), we consider a basic example network (network4) which contains only queuing ports and then enhance it by duplicating the number of ports per partition (network5). Figure 7.16 provides the graphs with the prototype's execution results: for example network network4 in Fig. 7.16(b) and for network5 in Fig. 7.16(d). The results are then compared in Fig. 7.16(e), which additionally shows that the slope for network4 when applying no restriction function (i.e., restriction0) and for network5 when applying restriction1 are (approximately) the same – at least for the short curve that can be analyzed. The results are very similar when considering networks which only contain sampling ports and thus this is not analyzed in more detail here. However, similar result graphs are given for example networks network6 (comparable to network4) and network7 (comparable to network5) in Fig. 7.17(d) and Fig. 7.18(d), respectively.


Comparison between variation of queuing and sampling ports. For comparing the influence of the port type, two pairs of example networks are compared: network4 and network6 with 5 ports per partition, and network5 and network7 with 10 ports per partition. Figures 7.17 and 7.18 illustrate the results and the comparisons. Both comparisons (Figures 7.17(e) and 7.18(e)) show that (1) when applying no restriction function (i.e., restriction0), the number of generated schedules with queuing or with sampling ports is the same, and (2) when applying restriction function restriction1 or restriction2, they are – as expected – more effective with queuing ports.

More comprehensive variation of the number of sampling ports. To investigate the influence of the number of ports more comprehensively, five very simple networks are compared which differ only in the number of ports per partition (which, in these examples, equals the number of ports per module): network8 to network12. The network examples as well as the comparison of results (when applying restriction function restriction2) are given in Fig. 7.19. Additionally, Figure 7.20 depicts how the number of communication schedules evolves if the number of ports in the network is increased but the schedule length remains the same. For this, the graph shows the number of ports in the network on the x axis and the number of generated communication schedules on the y axis (again with logarithmic scale), and provides one curve for each schedule length (from 3 to 6 time units). All curves have a positive slope which slowly decreases with the number of considered ports – at least for the range of results that can be considered here.

7.4.3.4 Assessment of the Generation Time for Each Communication Schedule

In the previous analyses, it has been shown that the restriction functions restriction1 and restriction2 can significantly reduce the number of generated communication schedules, e.g., by up to a factor of 595 when applying restriction2 while generating the communication schedules of length 10 for example network network1 (see Fig. 7.13). However, it also has to be investigated whether these reductions are achieved at the expense of the prototype's execution duration. Therefore, the average computation time per generated communication schedule is calculated for each restriction function and all generated communication schedule lengths. Figure 7.21 depicts the resulting graph which shows the length of the generated communication schedules on the x axis and the computation time (in μs) on the y axis. Note that – unlike in the other graphs – a linear scale is used for the y axis. It can be observed that applying a restriction function when generating short communication schedules is much more time-intensive than when generating long communication schedules, where the computation times per generated schedule are almost the same. This effect is observed for the following reasons:

• For short communication schedules (e.g., 3 time units length), the restriction functions reduce the number of generated communication schedules by a factor of 7, but the same prototype execution duration of 0.002 sec is measured. Consequently, when applying a restriction function, the average computation time per generated schedule is 6–7 times the one without restriction function. However, the measured time includes starting the prototype's executable and loading the example network's configuration – tasks whose duration is independent of the applied restriction function. In addition, the generation duration differences can probably not be measured at sufficient precision with the available means since an accuracy of less than 1 msec cannot be achieved here.

• For longer communication schedules (e.g., for schedules of length 10), the restriction functions reduce the number of generated communication schedules by a factor of 259 for restriction1 and 595 for restriction2, while simultaneously the measured prototype execution duration is reduced by a factor of 214 (for restriction1) and 474 (for restriction2). Consequently, the average computation time per schedule is almost the same with and without applying a restriction function. The time required for the calculations of the restriction functions is insignificant.

However, this is due to an optimization implemented in the prototype: When reducing the list of possible API calls per partition, the implemented restriction functions restriction1 and restriction2 reason about the previous communication flow of the currently generated schedule. To avoid repeating this possibly time-consuming analysis for each API call (and again for each API call when considering the next extension step), the prototype implementation provides the required information. This information includes how often each port has already been accessed in the schedule – i.e., how often each API call has already been triggered.3 The drawback of this optimization – which aims both at simplifying the restriction functions' calculations and at reducing the generation duration – is the memory required for providing this information, which probably contributes to the unsatisfactory limitations of the prototype implementation.
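The essence of this memory-for-time trade-off can be sketched as follows (hypothetical names; the prototype's actual data structures are not published in this form): each partial schedule carries its own per-port access counters, so extending a schedule updates the counters in constant time instead of re-scanning the prefix, at the cost of copying the counters for every branch.

```python
from copy import copy

class ScheduleState:
    """A partial communication schedule plus the bookkeeping the
    restriction functions need, kept up to date incrementally."""
    def __init__(self):
        self.steps = []             # API calls of the schedule so far
        self.calls_per_port = {}    # port name -> number of accesses

    def extended(self, step):
        """Return a new state with one additional step. The counter update
        is O(1); the counter copy per branch is the memory overhead that
        likely contributes to the prototype's limitations."""
        child = ScheduleState()
        child.steps = self.steps + [step]
        child.calls_per_port = copy(self.calls_per_port)
        child.calls_per_port[step.port] = child.calls_per_port.get(step.port, 0) + 1
        return child
```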

7.4.3.5 Assessment Summary

The previous subsections have analyzed the influence of both the network's configuration and the applied restriction function on the results of the prototype's execution – in particular the number of generated communication schedules. Different schedule lengths as well as different example networks have been considered and the following general observations can be made:

• For all considered example networks, the number of generated communication schedules increases exponentially with the length of the schedules, but this increase is smaller (although still exponential) if restriction functions are applied.

• The effects of restriction functions restriction1 and restriction2 are quite similar, also for longer communication schedules.

• The application of restriction functions does not have a significant negative impact on the prototype's computation time for generating all schedules. Moreover, the average computation time per generated communication schedule is only marginally higher, thanks to a specifically optimized implementation in the prototype.

• The examples have shown that changes of the network's complexity affect the number of generated communication schedules depending on the changed parameter: In particular, increasing the number of modules without changing the number of ports has no effect, but increasing the number of communication ports results in significantly more schedules. This means that whenever more different communication triggers are possible, the variability of parallel triggers increases – and so does the number of generated communication schedules.

The example networks have been created artificially to support isolated changing of individual influencing factors, but also to obtain network configurations small enough that the prototype implementation of the generation algorithm can cope with them despite its limitations. Real network configurations are usually much bigger and more complex than the example networks considered here. While the generation algorithm is equally well applicable to real network configurations, the prototype implementation is not, due to its limitations. However, applying the generation algorithm to real networks will most likely result in even more communication schedules, so that the restriction functions considered so far may not be sufficient to reduce them adequately and further restriction functions may be required.

3Obviously, this information is tailored for the implemented restriction functions, but other information could also be provided in a similar manner if needed by specific restriction functions.


The assessment of the prototype's results for the considered example networks has not yielded any formula which could help to estimate the number of generated communication schedules based on the network configuration. Although this would have been desirable, it is a task difficult to achieve since each influencing factor depends on too many configuration parameters, which may even influence each other. Furthermore, there is no pressing need for merely estimating the resulting number of schedules since, for later test execution, all communication schedules are needed anyway. Also, all executions of the prototype for the used example networks have shown that generation runs can be performed within acceptable time limits (less than one minute). Finally, the generation algorithm is intended to be run offline and thus does not surprise during test execution with unexpectedly many test cases.

(a) Example network network1
(b) Generated number of communication schedules for network1 when considering different schedule durations and different restriction functions
(c) Example network network2
(d) Generated number of communication schedules for network2 when considering different schedule durations and different restriction functions
(e) Comparison of the results for network1 and network2

Figure 7.14: Two example networks consisting of two or three IMA platforms (a, c), assessment of the prototype's results for these examples (b, d), and comparison of the results (e)

(a) Example network network1
(b) Generated number of communication schedules for network1 when considering different schedule durations and different restriction functions
(c) Example network network3
(d) Generated number of communication schedules for network3 when considering different schedule durations and different restriction functions
(e) Comparison of the results for network1 and network3

Figure 7.15: Two example networks consisting of two IMA platforms but with different numbers of communication ports (a, c), assessment of the prototype's results for these examples (b, d), and comparison of the results (e)

(a) Example network network4
(b) Generated number of communication schedules for network4 when considering different schedule durations and different restriction functions
(c) Example network network5
(d) Generated number of communication schedules for network5 when considering different schedule durations and different restriction functions
(e) Comparison of the results for network4 and network5

Figure 7.16: Two example networks consisting of two IMA platforms with different numbers of queuing ports (a, c), assessment of the prototype's results for these examples (b, d), and comparison of the results (e)

(a) Example network network4 with queuing ports
(b) Generated number of communication schedules for network4 when considering different schedule durations and different restriction functions
(c) Example network network6 with sampling ports
(d) Generated number of communication schedules for network6 when considering different schedule durations and different restriction functions
(e) Comparison of the results for network4 and network6

Figure 7.17: Two example networks consisting of two IMA platforms with different types of inter-module communication ports (a, c), assessment of the prototype's results for these examples (b, d), and comparison of the results (e)

(a) Example network network5 with queuing ports
(b) Generated number of communication schedules for network5 when considering different schedule durations and different restriction functions
(c) Example network network7 with sampling ports
(d) Generated number of communication schedules for network7 when considering different schedule durations and different restriction functions
(e) Comparison of the results for network5 and network7

Figure 7.18: Two example networks consisting of two IMA platforms with different types of inter-module communication ports (a, c), assessment of the prototype's results for these examples (b, d), and comparison of the results (e)

(a) Example network network8 with 4 ports
(b) Example network network9 with 8 ports
(c) Example network network10 with 12 ports
(d) Example network network11 with 16 ports
(e) Example network network12 with 20 ports
(f) Comparison of the number of communication schedules for network8 – network12 when considering different schedule durations and applying restriction function restriction2

Figure 7.19: Five example networks consisting of two IMA platforms with one partition each and with different numbers of inter-module communication ports (a – e), comparison of the prototype's results for these examples when applying restriction function restriction2 (f)

Figure 7.20: Evolution of the number of generated communication schedules when increasing the number of ports (each curve for the same communication schedule length), calculations based on the results for network examples network8 – network12

Figure 7.21: Average computation time per resulting communication schedule for different schedule lengths when applying different restriction functions, calculations based on the results for network example network1


7.5 Future Directions

This chapter has described an approach for communication flow testing which aims at being able to test all possible message-based communication flow between partitions. To this end, the test cases are based on so-called communication schedules generated by a generation algorithm which uses the configuration of the network under test and additionally the performance information about the modules and the network. For executing the test cases, the approach is to have test applications (running in configured partitions) and test specifications (executed by the test system) which are controlled by test control specifications (also executed by the test system). For evaluating the test results, test applications and/or test evaluation specifications can be used. For test control, test execution, and test evaluation, an actual implementation obviously depends on many factors (as has been elaborated before). Consequently, further considerations on this subject are only possible in the context of a particular tool environment and for a specific network configuration. Therefore, the discussion in this section focuses on future directions concerning the generation algorithm itself. However, this does not include implementation considerations for moving the generation algorithm's prototype implementation to a full implementation which is optimized with respect to time and memory requirements and thus allows the generation of much longer communication schedules.

Future work on the generation algorithm can be categorized into algorithm-related considerations, further influencing factors, further restriction functions, and further scheduling-related variations.

Algorithm Related Considerations. The generation algorithm described in Sect. 7.2.1 generates the communication schedules by following a depth-first approach which first generates one communication schedule (up to the given length) and then completes the next one. For optimization, the algorithm determines at each intermediate step which other continuations are possible in order to avoid re-calculating the combinations of API calls and applying the restriction function. In future work, this generation algorithm approach can be compared with breadth-first or mixed depth-breadth approaches. Although such algorithms provide conceptually no advantage when generating all communication schedules of a given length, they can be superior when it is not possible to determine in advance up to which length all communication schedules shall be generated. In particular, this would allow (a) stopping the generation process at any time when the preparation time is over, and (b) coping with the limitations of the algorithm's implementation which may, for example, force a stop at some point in time when no further memory is available (RAM or hard disk). However, stopping at an arbitrary point in time means that the set of already generated communication schedules contains all schedules of length n and might additionally contain schedules of length n + x (with x = 1 for breadth-first algorithms and x ≥ 1 for mixed algorithms) which then have to be cut off if all communication schedules shall have the same length.
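As an illustration of the two strategies, the following minimal sketch (Python, hypothetical names; for simplicity the schedule length is counted in steps rather than time units) contrasts the depth-first scheme of the prototype with a breadth-first variant. The candidates function is assumed to enumerate the API calls possible in the next step, and restriction is one of the restriction functions.

```python
from collections import deque

def generate_dfs(schedule, length, candidates, restriction):
    """Depth-first: completes one schedule before starting the next one,
    as in the prototype implementation."""
    if len(schedule) == length:
        yield schedule
        return
    for call in candidates(schedule):
        if restriction(schedule, call):
            yield from generate_dfs(schedule + [call], length,
                                    candidates, restriction)

def generate_bfs(length, candidates, restriction):
    """Breadth-first: when stopped at an arbitrary point in time, it holds
    all schedules up to some level n (plus possibly some of level n + 1,
    which would then have to be cut off)."""
    frontier = deque([[]])
    while frontier:
        schedule = frontier.popleft()
        if len(schedule) == length:
            yield schedule
            continue
        for call in candidates(schedule):
            if restriction(schedule, call):
                frontier.append(schedule + [call])
```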

Further Influencing Factors. When considering the configuration and performance information, the previous sections have abstracted from certain factors which may differ for each communication link and thus are worth varying when generating the communication schedules. This includes:

• To consider that different message sizes within the allowed range for each port help to ensure that the tested communication flow is possible for arbitrary combinations of legal messages – at least with respect to their size.

• To consider message-size-dependent performance information, which reflects that reading or writing small messages is usually faster than reading or writing larger ones.

• To consider that, in most cases, an average API call duration can be expected to be smaller than denoted by the worst-case software latency.


• To consider different communication techniques, which is also reflected in more detailed performance information.

However, some of the above factors can only be considered when the respective information is available to the generation algorithm. Addressing some or all of these further influencing factors obviously helps to cover more variations of allowed and actually occurring communication flow and thus intensifies the tests. But this also means that the number of generated communication schedules increases significantly, and further restriction functions have to address these new factors.

Further Restriction Functions. When considering different message sizes, the set of message sizes per communication link depends on the range between the minimum possible and the maximum allowed values and on the chosen granularity. For example, when considering that for writing into a port the possible messages are between 64 and 8192 bytes in size, and it is additionally determined that all allowed message sizes have to be a power of 2, this results in eight different message sizes. For generating all possible communication flow schedules, all eight values have to be considered as parameters of the respective API call and, consequently, the number of API call combinations in which this API call is considered is multiplied by eight. A restriction function can reduce the set of different message sizes in several ways; three examples, based on the typical considerations for test data generation, are described in the following:

• By considering only the smallest and the biggest value (i.e., the limits of the range) and possibly also some value in between.

• By considering only one value of each equivalence class, which may be determined by considering the corresponding performance information details – all message sizes which have the same performance properties are grouped and only one value of each group is considered.

• By considering the limits as well as one value of each equivalence class.

The combinatorial explosion caused by considering different message sizes whenever an API call is possible can also be addressed by a restriction function working in a different way than the aforementioned examples, which only downsize the set of considered message sizes. Such a restriction function can simply limit the number of combinations and select, each time the respective API call is considered, a (possibly) different subset of message sizes. The selection of these subsets can be random or driven by statistical means. In the latter case, the selection function can, for example, ensure a uniform distribution of the used message sizes for this API call in general and in particular in combination with other API calls.

It is also possible that a restriction function combines the latter two approaches by (a) limiting the number of overall combinations (i.e., by reducing the set of possible values to a probably small subset of considered values), (b) reserving each time a few of these combinations for the most important values (e.g., the limits of the range or one value of each equivalence class), and (c) selecting the other considered values from the remaining set of possible values. For example, the number of combinations can be limited to four different values, of which two are from the set of most important values and another two are selected according to some distribution function. The advantage of the latter two types of restriction functions is that they allow considering a broader variety of values (than just the few most important values) while still noticeably reducing the number of generated communication schedules; a sketch of such a combined selection is given below.
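The following sketch (Python; all names hypothetical) illustrates the combined scheme: part of a fixed budget is reserved for the most important values, and the rest is drawn from the remaining sizes, so that different occurrences of the same API call use varying subsets.

```python
import random

def select_message_sizes(possible_sizes, important_sizes, budget=4, rng=random):
    """Reserve half of the budget for the most important values (range
    limits, one value per equivalence class) and fill the rest with a
    varying selection from the remaining sizes."""
    reserved = list(important_sizes)[:budget // 2]
    remaining = [s for s in possible_sizes if s not in reserved]
    drawn = rng.sample(remaining, min(budget - len(reserved), len(remaining)))
    return sorted(reserved + drawn)

# Example: the eight power-of-two sizes between 64 and 8192 bytes from the
# text, with the range limits treated as the most important values.
sizes = [64, 128, 256, 512, 1024, 2048, 4096, 8192]
print(select_message_sizes(sizes, important_sizes=[64, 8192]))
```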

When considering different communication techniques, a restriction function can reduce the test communication flows to use only one type of communication technique, for example, only AFDX.


When considering normal and worst-case API call durations, a restriction function can restrict the generated communication schedules such that only (a) average API call durations and average loads or (b) worst-case API call durations and maximum load are considered, since this increases the probability of successful API calls. Such restriction functions can also be supported by statistical information.

Further Scheduling Related Variations. The current algorithm assumes that the modules are always powered on such that the partition scheduling starts synchronously in all modules. This is a good starting point for considering communication flow testing. Nevertheless, the generated communication schedules thus only represent a small fraction of the observable communication flow since the modules usually start up without synchronization and thus begin partition scheduling at different points of time. This means, for example, that one module schedules its first partition while another already schedules its second or third. When enhancing the algorithm, it may reflect the unsynchronized start-up behavior by considering all combinations of partition scheduling start times. However, this multiplies the set of communication schedules, and finding adequate restriction functions to reduce the resulting set is difficult.
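To illustrate why the schedule set multiplies, the following hypothetical sketch enumerates the start-offset combinations; each combination would require its own generation run.

```python
from itertools import product

def startup_offset_combinations(major_frames):
    """Yield one start offset per module for every combination, e.g. for
    modules with major frames of 4 and 6 time units this yields 4 * 6 = 24
    combinations. 'major_frames' maps module name -> frame length."""
    names = sorted(major_frames)
    for offsets in product(*(range(major_frames[m]) for m in names)):
        yield dict(zip(names, offsets))

# Example: two modules; every combination multiplies the generation effort.
for combo in startup_offset_combinations({"moduleA": 4, "moduleB": 6}):
    pass  # run the generation algorithm with these partition start offsets
```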

Summarizing, all future directions of the generation algorithm which consider further influencing factors or other variations increase the number of generated communication schedules, and new restriction functions can limit this increase only to some extent. Necessarily, this means that more time would be required for test preparation (i.e., executing the generation algorithm) and particularly for test execution – calling, in turn, for the maximum automation of the eventual test execution.


Part IV

Lessons Learned


Chapter 8

Evaluation of the Test Suite for Automated Bare IMA Platform Testing

In Chap. 6, the components of the test suite for automated bare IMA module testing have been described. While those descriptions focused on motivating and explaining the implementation decisions, this chapter aims at critically analyzing the approach and the resulting test suite implementation, comparing it with other approaches, and discussing possible improvements. The structure of this chapter follows these objectives: At first, the pros and cons of the approach and its implementation are discussed in Sect. 8.1. Then, it is compared with other approaches in Sect. 8.2. In both sections, possible improvements are discussed where applicable.

8.1 Assessment

In this section, the approach for testing bare IMA modules (as described in Sect. 5.2.1) as well as the resulting test suite implementation (described in Chap. 6) are analyzed and assessed. The assessment addresses the following aspects:

• the general approach and its implementation,

• the use of generic test applications and external test specifications,

• the configuration requirements of the generic test application,

• the rule-based generation of configurations,

• the selected test specification formalism,

• the used test bench,

• the test design process.

General Approach and its Implementation. The implementation of the approach has resulted in an (almost) fully automated test suite. It comprises a generic test suite, a configuration library, a test procedure template library, and all required tools to prepare and execute the test procedures in an automated way (with manual data loading as the only exception). Thus, a higher degree of test coverage can be achieved within the same execution duration than with manual or semi-automated testing. Moreover, the automated test suite enables the test execution to be repeated by profiting from the prepared test procedures and pre-defined test configurations. Also, the possibility to perform all tests with only minor manual interaction helps to eliminate many of the error-prone and time-consuming tasks. The test execution may need to be repeated for several reasons, for example, for different types of IMA modules (which differ only in the number of hardware interfaces per interface type), for different versions of the same type (i.e., with changed hardware or with improved operating system software), or for additional features (e.g., when implementing the extended services as described in [ARINC653P2]). In the latter case, the existing test suite is used for regression testing to ensure that the standard services have not been changed by mistake; for the new services, additional test procedures and potentially further test configurations have to be implemented by making use of the available test execution environment (including the CSP environment, the test bench, the test instantiation environment, and the load generation environment).

The test concept uses templates of test procedures which can be instantiated with several matching test configurations. This makes it possible to check the IMA module's behavior when configured differently – an important task, since the configuration represents an extensive and substantial part of the operating system's complexity and, therefore, has to be tested as carefully as the functionality of the operating system. However, instantiating every test procedure template with every possible test configuration usually results in very many test procedures that require, even for automated testing, too much time for test execution (and for each round of regression testing). But the test suite also provides means to manage this problem by allowing each test procedure template to list the test configurations which are most interesting for testing (illustrated in the sketch below).

The approach and its implementation have also proven to be successful in finding errors and inconsistencies – in the implementation as well as in the accompanying documents. When applied within the VICTORIA research project and in succeeding aircraft programs, errors and inconsistencies have been found which were not discovered by the supplier's preceding platform verification activities.

Summarizing, the approach and the implemented test suite can be considered a big step towards a lasting improvement of quality assurance. The test suite's inherent limitations become visible in the following discussions of implementation details.
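The following minimal C sketch illustrates the pairing of test procedure templates with their preferred test configurations mentioned above; all names and the data layout are hypothetical and merely indicate how instantiation can be limited to the listed subset instead of the full configuration library:

    #include <stdio.h>

    /* Hypothetical pairing of a test procedure template with the subset of
     * test configurations considered most interesting for it. */
    typedef struct {
        const char *template_name;
        const char *configs[4];   /* NULL-terminated list of preferred configs */
    } TEMPLATE_ENTRY;

    int main(void)
    {
        TEMPLATE_ENTRY library[] = {
            { "tp_buffer_normal",     { "cfg_min_mem", "cfg_max_ports", NULL } },
            { "tp_buffer_robustness", { "cfg_boundary", NULL } },
        };

        /* Instantiation iterates only over each template's listed subset. */
        for (size_t t = 0; t < sizeof library / sizeof library[0]; t++)
            for (const char **c = library[t].configs; *c; c++)
                printf("instantiate %s with %s\n", library[t].template_name, *c);
        return 0;
    }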

Use of Generic Test Applications and External Test Specifications. The test suite uses a two-part testing strategy: Generic test applications run in the avionics partitions of the IMA module under test and behave as commanded. These commands are triggers for API service calls with specific parameter values and are sent by the external test specifications executed by the test system. Thus, the generic test applications provide powerful means that can (conceptually) be used for any sequence of function calls. In practice, however, the required command transmission incurs delays, so that the mechanism cannot be used for sequences of function calls that have to be performed in rapid succession. Therefore, such sequences of API calls have to be implemented as scenarios that are contained in the generic test application. This means the generic test applications contain tailored source code (in addition to the standard command processing) that is often only used by few test cases.

If the test suite is to be extended (e.g., to allow testing of the additional services described in [ARINC653P2]), new test procedures will need to be designed and implemented that may require adding further scenarios. This, however, has various implications for the test suite: Firstly, every such extension increases the memory requirements for the test application and may require increasing its minimum configuration requirements – this may have a serious impact on the existing test procedures. Secondly, if it is necessary to reduce these memory requirements¹, the test application code has to be tailored such that it comprises only the standard behavior and the command processing but not the scenarios. Even if handling of two different types of generic test applications may be feasible, there may be other test cases that demand reduced memory requirements but still need a specific scenario. Thirdly, any changes of the test application's code require its recertification.

¹ Reducing the memory requirements is required (a) to load a test application in a partition with a small code area or (b) to execute the test application process in a partition that provides less than the minimum required process stack size for each process to be created.
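As an illustration of the command mechanism, the following C sketch shows the core dispatch loop such a generic test application could implement. The command/result layout, the port access stubs, and the abbreviated APEX binding are hypothetical simplifications of the ARINC 653 Part 1 C interface; a real test application would use the actual APEX services and queuing ports:

    #include <stdio.h>

    /* Hypothetical command/result layout; the real test suite defines its
     * own message formats for the command and result ports. */
    typedef long RETURN_CODE_TYPE;
    typedef struct { int service_id; long params[4]; } TEST_COMMAND;
    typedef struct { int service_id; RETURN_CODE_TYPE rc; } TEST_RESULT;

    enum { CMD_STOP = 0, CMD_CREATE_BUFFER = 1 };

    /* Stub standing in for the APEX CREATE_BUFFER service. */
    static void CREATE_BUFFER_STUB(long max_msg_size, long max_nb_msg,
                                   RETURN_CODE_TYPE *rc)
    {
        *rc = 0; /* NO_ERROR */
        printf("CREATE_BUFFER(size=%ld, messages=%ld)\n",
               max_msg_size, max_nb_msg);
    }

    /* Stubs standing in for the partition's command and result ports. */
    static int receive_command(TEST_COMMAND *cmd)
    {
        static int sent = 0;               /* deliver one demo command */
        if (sent++) { cmd->service_id = CMD_STOP; return 1; }
        cmd->service_id = CMD_CREATE_BUFFER;
        cmd->params[0] = 100;              /* max message size          */
        cmd->params[1] = 5;                /* max number of messages    */
        return 1;
    }

    static void send_result(const TEST_RESULT *res)
    {
        printf("result: service=%d rc=%ld\n", res->service_id, (long)res->rc);
    }

    int main(void)
    {
        TEST_COMMAND cmd;
        TEST_RESULT  res;

        /* Standard behavior: wait for a command from the external test
         * specification, perform the requested API call with the commanded
         * parameter values, and report the observed return code. */
        for (;;) {
            if (!receive_command(&cmd)) continue;
            if (cmd.service_id == CMD_STOP) break;
            res.service_id = cmd.service_id;
            switch (cmd.service_id) {
            case CMD_CREATE_BUFFER:
                CREATE_BUFFER_STUB(cmd.params[0], cmd.params[1], &res.rc);
                break;
            default:
                res.rc = -1;               /* unknown command          */
                break;
            }
            send_result(&res);
        }
        return 0;
    }

The command round-trip through the two ports is exactly what introduces the delays mentioned above, which is why rapid call sequences must instead be compiled into the test application as scenarios.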

From this discussion, the question arises whether the use of tailored test applications, specifically generated for each test procedure and accompanied by simple test specifications for global test evaluation, would have been an alternative to cope with such differing requirements. Firstly, the minimum configuration requirements could be adjusted to the actual needs of each test application, but this would also require checking them separately for each combination of test application and test configuration. Secondly, manual generation of all these tailored test applications is too time-consuming, while automated generation would require a formal specification of the test procedures or of the system under test. Currently, such formal specifications do not exist and are often not feasible to compile as a complete formal specification, particularly if the system's hardware specifications as well as its real-time aspects shall also be considered. Thirdly, with the current limitations of the tool chain, this would require manual interaction for data loading for each test procedure, which is considered too time-consuming compared to the implemented approach. Fourthly, if the test procedure's behavior is implemented by separate test applications (e.g., one for each partition or even one for each process) in order to tailor its size and to avoid the otherwise required command and result ports, test management activities become much more difficult. This includes aspects such as the definition of a common point for test starting (e.g., when have all partitions completed their initialization if they have no means to inform the external test specifications), the specification of a dedicated memory area for logging, means for transmitting the test execution logs from the test applications to the external test specifications, and means for globally evaluating all logs by test evaluation specifications. Summarizing, this alternative does not appear more feasible given the current environmental limitations.

Configuration Requirements of the Generic Test Application. For each partition executing test application processes, minimum configuration requirements are defined which include required interfaces for commanding, specific minimum memory requirements, and naming requirements for ports. To ensure that the test application processes (in particular the main process) can perform the described minimum standard behavior, each test configuration has to comply with these requirements. Thus, only a subset of the generally possible test configurations can be considered for bare IMA module testing. Some of the excluded configurations are unlikely to become final configurations (e.g., configurations having partitions without any ports or with extremely small memory settings) – others, however, might become final configurations (e.g., configurations which have only sampling ports, or only ports to other partitions at the same IMA module but not to external ones). The latter type of potential final configurations is thus not addressed by bare IMA module testing but only by configured IMA module testing.

Reducing the minimum configuration requirements such that the allowed test configurations include more possible final configurations can be achieved in two ways: Firstly, several different types of generic test applications can be used which are each tailored to a specific purpose (e.g., to perform a specific scenario, to require a minimum amount of memory, etc.) and combined according to the needs of the test procedure. The pros and cons of this approach have been discussed above. Secondly, a subset of the configuration requirements can be defined which final configurations also have to abide by. At first glance, this seems to only move the problem, but it also increases the testability of the final configurations. For example, if each (final) configuration is required to provide adequate command and result ports for each partition, it would not be necessary to develop any tailored test control mechanisms (e.g., command routing via other partitions, commanding via sampling ports, etc.) when testing the respective configurations. However, it has to be analyzed whether the configuration requirements for final configurations should only be a subset of those for the test configurations.


Rule-Based Generation of Configurations. For generating test configurations, the test suite comprises a configuration generator which interprets rules files and, for each one, generates one or several only slightly different configurations. For this purpose, a set of powerful generic and specific rules has been defined which supports and simplifies the generation of huge test configurations based on template configurations and template objects. Thus, the process of specifying a rules file can focus on the specific requirements of the described configuration. The applicability of the provided set of rules for generating many different and partially quite elaborate test configurations has been proven in practice when developing the IMA configuration library.

The provided rules are used like editor commands which, consequently, means that the same set of rules applied in a different sequence can result in a totally different set of configurations (illustrated in the sketch below). Since some of the rules can cause extensive changes in several parts of the configuration, a high degree of user experience is required to define the correct batch of rules. For inexperienced users, the imperative style is often considered non-intuitive – in particular, in combination with the complex configuration table structures that are used by the IMA modules.
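The order sensitivity can be demonstrated with a minimal C sketch; the two "rules" and the configuration model are hypothetical stand-ins for the generator's much richer rule set:

    #include <stdio.h>

    /* Toy configuration model: a port count per partition. */
    typedef struct { int n; int ports[8]; } CONFIG;

    /* Editor-command-like rules that mutate the configuration in place. */
    static void rule_clone_first_partition(CONFIG *c)
    {
        c->ports[c->n] = c->ports[0];  /* clone inherits the current ports */
        c->n += 1;
    }
    static void rule_add_port_to_first(CONFIG *c) { c->ports[0] += 1; }

    int main(void)
    {
        CONFIG a = { .n = 1, .ports = {1} };
        CONFIG b = { .n = 1, .ports = {1} };

        rule_clone_first_partition(&a);   /* clone while 1 port exists     */
        rule_add_port_to_first(&a);       /* a: partition 0 has 2, clone 1 */

        rule_add_port_to_first(&b);       /* add the port first ...        */
        rule_clone_first_partition(&b);   /* ... so the clone sees 2 ports */
                                          /* b: both partitions have 2     */
        printf("a: %d %d\nb: %d %d\n",
               a.ports[0], a.ports[1], b.ports[0], b.ports[1]);
        return 0;
    }

The same two rules yield different configurations depending on their order, which is exactly why defining a correct batch of rules demands experienced users.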

Selected Test Specification Formalism. The chosen test system RT-Tester 5 constructs and evaluates tests based on decomposed Timed CSP test specifications which are, for this purpose, transformed into labeled transition systems (using the CSP model checker FDR). Thus, a formal specification technique is used which has been explicitly designed to describe and verify distributed and reactive real-time systems. However, when writing test specifications, its main drawbacks become apparent: Firstly, CSP generally cannot handle data types like strings. Consequently, all occurrences of strings had to be replaced by appropriate integers (e.g., object names had to be referred to via object indices), which might be considered less intuitive. Secondly, all possible communication events (i.e., test data) have to be explicitly defined as channels. Generally, it is good practice to define them explicitly in advance. However, allowing all possible test data for normal and robustness testing results in large sets of events. Thirdly, CSP has no global or static variables and supports only local process parameters whose values can only be made known to other processes by sending them via a structured channel. This means that most processes usually require (a) several process parameters and (b) several channels for inter-process communication via events.

These drawbacks are more pronounced when modeling large systems because, then, many events (i.e., many structured channels with large data types) and many process parameters with large sets of possible values are required. When such a test specification is transformed into its corresponding labeled transition system, this easily leads to a state explosion that cannot be handled by the model checker FDR. Generally, the problem can be reduced by considering the transformation effects of particular channels or process definitions and by tuning the CSP specification appropriately. For the test suite implementation, the imposed restrictions required extensive tailoring of the channel definitions and the data types, as described in detail in Sect. 6.2. Apart from the time-consuming tailoring process, this also resulted in an execution environment that requires thorough expertise for implementing and validating test specification templates and involves a certain risk that implementing new test procedures may require extensions and adaptations.

As described above, handling the state explosion problem imposed by the transformation algorithm used by the FDR tool is of utmost importance to avoid tailoring and data abstraction. The problem of state explosions can be reduced by using a different representation of the CSP specification that can also be used for testing. Such an approach, developed by Dahlweid and Schulze ([DS04]), does not require calculating the complete state space (as necessary for the transformation into labeled transition systems). Instead, it generates extended transition graphs which subsume several states of the specification under a single location without first evaluating all variables, expressions, assignments and conditions of the respective CSP specification. However, this promising approach was not available in the test system RT-Tester at the time of developing the test suite.
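To see why structured channels inflate the event set, consider a back-of-the-envelope calculation; the C sketch below uses purely hypothetical field ranges to show that the number of enumerated events grows multiplicatively with the data type sizes of a channel's fields:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical field ranges of one structured CSP channel, e.g.
         * cmd.partition.process.service.param: */
        long partitions = 8, processes = 16, services = 50, param_values = 100;

        /* One event is enumerated per combination of field values. */
        long events = partitions * processes * services * param_values;
        printf("events for this channel alone: %ld\n", events); /* 640000 */
        return 0;
    }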


Another solution to overcome the state explosion problem is to use another test system and/or specification formalism. However, the specification formalism has to be equally suitable for specifying tests for distributed, reactive real-time systems, and the test system has to support automated testing, i.e., the same requirements as discussed in Sect. 5.4.2. Without carrying out a survey to find out which test systems and test specification formalisms are most suitable for this purpose, RT-Tester 6 (as introduced in Sect. 5.4.2.2) with its test specification language RTTL is definitely one of the possibilities because the handling of large data structures is much easier there. However, this test system was not available at the time of developing the test suite described in this thesis.

Summarizing, using decomposed Timed CSP test specifications for testing “real-world” systems, which are usually large and complex, may require additional time-consuming (and thus costly) tailoring of the test execution environment in order to handle the state explosion problem imposed by the transformation algorithm used by the FDR tool. Different solutions are now available to reduce or avoid the problem, from which new test suite developments or extensions of the current test suite can profit. At the time of developing the test suite, however, such methods were not available and, thus, the tailoring of the test execution environment was required as described in this thesis.

Used Test Bench. The test bench used for executing the test suite comprises, on the one hand, the software environment for test instantiation, load generation, data loading, and test execution and, on the other hand, the hardware required by these tools for their execution and for their connection with each other and to the system under test. For example, the test engine cluster with its hardware interfaces to all external interfaces of the IMA module provides the execution environment for the test system RT-Tester 5 that is used for automated test execution. In general, the test bench for automated bare IMA module testing has to fulfill one main requirement: In order to support the described approach and to minimize test execution time and cost, the test bench should make available a fully automated tool chain that allows a “one-button approach”. In practice, this means that it should be possible to prepare and execute any sequence of selected test procedures without further manual interaction. The combination of test system and test engine supports such an approach since the test engine provides means to access all hardware interfaces to and from the module under test in an automated way – a characteristic that is, for example, required to set and reset the digital input signal that triggers a reset of the IMA module under test. This support for automated resets between different test procedures, or even between different steps of the same test procedure, enables the test system to establish the same pre-condition for each test procedure so that its behavior is independent of the previously performed test steps or test procedures.

In contrast, the concrete tool chain for test preparation only partially supports a completely automated approach for configuration generation, test instantiation, load generation, and then data loading. While all tools except the latter enable the test management system to perform the respective steps in an automated way, data loading (i.e., loading of test configurations and test applications) requires manual interaction. In order to avoid manual interaction for each single test procedure, the test suite provides means to sort a set of selected test procedures not only according to their test objective but also according to their required test configuration. Thus, all test procedures using the same test configuration are performed one after the other in an automated way (sketched below).
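The following C sketch illustrates this grouping strategy; test procedure names and configuration identifiers are hypothetical, and the point is merely that sorting by required configuration reduces the number of manual loading steps to one per configuration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical scheduling of selected test procedures. */
    typedef struct { const char *name; const char *config; } TEST_PROC;

    static int by_config(const void *a, const void *b)
    {
        return strcmp(((const TEST_PROC *)a)->config,
                      ((const TEST_PROC *)b)->config);
    }

    int main(void)
    {
        TEST_PROC selected[] = {
            { "tp_sampling_ports", "cfg_B" },
            { "tp_buffers",        "cfg_A" },
            { "tp_queuing_ports",  "cfg_B" },
            { "tp_processes",      "cfg_A" },
        };
        size_t n = sizeof selected / sizeof selected[0];

        qsort(selected, n, sizeof selected[0], by_config);

        /* Each configuration is loaded manually only once; all procedures
         * using it then run in an automated sequence. */
        const char *loaded = "";
        for (size_t i = 0; i < n; i++) {
            if (strcmp(loaded, selected[i].config) != 0) {
                printf("-- manual data loading: %s\n", selected[i].config);
                loaded = selected[i].config;
            }
            printf("run %s\n", selected[i].name);
        }
        return 0;
    }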

Summarizing, the described deficiency of the test bench with respect to automation reveals that testability considerations may not have been addressed during tool development as required by the approach for automated bare IMA module testing. Nevertheless, it was possible to significantly increase the degree of automation, resulting in higher test coverage than possible with manual testing. For future enhancements of the test suite, and to further decrease the test execution time and costs for regression testing, the test preparation environment should be improved such that it contains means for fully automated data loading.

Test Design Process. Before any tests can be implemented and performed, the design of the test cases and test procedures has to be defined based on the requirement specifications of the system under test. Ideally, the test design documents specify the tests using formal specification techniques that can either be used directly for testing or only require automated transformation steps before being usable by an appropriate test system. However, for the implemented approach, the test design documents were based on textual requirement specification documents and, thus, were compiled manually using structured text to describe the test procedures. Starting from such textual test designs as an input for the test suite development consequently required manual and time-consuming implementation activities. Although each manual step involves a certain risk that new faults are introduced (e.g., that the test implementation changes the test design such that an error cannot be detected), it also gives the chance to use the test engineers' domain expertise, which may help to identify inconsistent, incomplete, or ambiguous test design documents or SUT requirement specifications. The experience gained from implementing and executing the test suite supports the latter proposition since the test engineers found several errors and imprecisions in specification documents, i.e., in the sources used for the test design. For future system specification and test design activities, current research projects (e.g., KATO) analyze and develop new approaches to improve the development, verification and validation processes (see [OH05]).

Assessment Summary. The above assessment reveals that the approach for automated hardware-in-the-loop testing of bare IMA modules and the implemented test suite are a big step towards a lasting improvement of quality assurance and have significantly increased the degree of automation, resulting in higher test coverage than possible with manual or semi-automated testing. At the time of designing and implementing the test suite, state-of-the-art test tools, testing hardware and tool sets were used. However, some of them caused limitations for automated testing in the selected approach, and the test suite has implemented various means to cope with these limitations.


8.2 Comparison with Other Approaches

While the previous section has assessed the approach for bare IMA module testing and the implemented test suite, this section compares it with other approaches.

Testing and verification in the avionics domain has been studied for many years, but there is only a limited number of publications addressing approaches for IMA testing and IMA module testing. In particular, approaches for testing the ARINC 653 operating system of the IMA modules are rarely described in much detail. This is not surprising since most of the RTOS with ARINC 653-1 support are commercially developed and the respective companies provide such details only to their customers in order to support a certification according to RTCA/DO-178B and other applicable standards. Even widening the scope to testing of Integrated Modular Electronics (IME) as it is applied in the automotive domain yields no further relevant approaches since most publications (e.g., [TTT], [POT+05], [POAK05]) address architectural considerations and fault diagnosis rather than testing approaches.

Out of the approaches available in the literature (e.g., [Bol99], [HMN00], [CM01]), the conformity test specification described in part 3 of ARINC 653 ([ARINC653P3d3]) seems to be the most relevant one for comparison with the approach presented in Chap. 6 of this thesis. The other approaches are also relevant for testing of IMA modules or IMA systems, but focus on different test levels (e.g., HW testing, application testing, analysis of final configurations) that complement bare IMA module testing; a comparison of these approaches is beyond the scope of this section. Therefore, the following subsection compares the presented test suite for automated bare IMA module testing with the ARINC 653 conformity test specification.

8.2.1 Comparison with ARINC 653 Conformity Test Specification

The ARINC 653 conformity test specification has been compiled by the APEX working group ([APEX-WG]), which aimed at updating the ARINC 653 basic services (ARINC 653 part 1), adding optional extended services to support database and file management applications (ARINC 653 part 2), and defining the conformity test specifications (ARINC 653 part 3). The working group is chaired by Airbus and Boeing and gets contributions from several companies that are active in the avionics industry or develop real-time operating systems. The results of the working group are a revised version of ARINC 653 part 1 [ARINC653P1-2] and the new standard documents [ARINC653P2] and [ARINC653P3].

The working group's objective in standardizing a conformity test specification is to provide means to support the portability of applications by increasing the confidence in the compatibility of different ARINC 653-“compliant” OS implementations. Therefore, a set of test specifications has been defined that can be used “to demonstrate and to prove that the interface behavior [of a specific OS implementation] is in compliance with the ARINC 653 specification” ([ARINC653P3d3], p. 7). For the compliance demonstration, the ARINC 653 part 3 document contains one or several test specifications for each basic service² which can demonstrate the correct functionality under normal conditions as well as under specific error conditions (e.g., calling the service with out-of-range parameters or in a wrong operational mode). For future updates of the document, it is also planned to define test specifications for the optional services specified in ARINC 653 part 2 in order to also support compliance testing for one or several of these extensions.

In the following, the differences and similarities with the approach defined in this thesis are discussed in more detail. To simplify the distinction of the two approaches in the subsequent text, the following terms are used: (a) bare module testing to refer to the approach for automated bare IMA platform testing as defined in Chap. 6, and (b) OS conformity testing when referencing the conformance testing approach defined in [ARINC653P3d3].

² Section 3.1.2 briefly describes the basic services defined by ARINC 653 part 1.


Comparison of the General Approaches. Both approaches use a hardware/software integration testing approach, i.e., they test the operating system on the target hardware. However, while the OS conformity testing approach focuses exclusively on demonstrating the conformance of the OS implementation by using the API, the bare module testing approach addresses testing of the OS and its drivers on the API level (using test applications) as well as on the HW interface level (using external test specifications). Thus, the bare module testing approach stimulates and checks all interfaces of the IMA module: the internal as well as the external ones. The difference between the two approaches is caused by the fact that the ARINC 653 part 1 specification does not address possible HW interfaces because they are standardized separately (cf. [ARINC664], [CAN], [ARINC429]). However, it means that the bare module testing approach is more comprehensive in testing the type of IMA modules considered in this thesis, the Core Processing I/O Modules (CPIOMs) with various types of interfaces to other components in the aircraft.

In addition, the scope of the defined test procedures differs: For bare module testing, an IMA module is functionally tested with many different test configurations because each test procedure template is usually executed with several different configurations. Thus, the approach not only makes it possible to show compliance with the API and all other specification documents of the module, but also increases the confidence that the module's configurability conforms to the current as well as to future configuration demands. Furthermore, the bare module testing approach includes limited performance tests (e.g., whether all API calls can be executed in the specified worst-case execution time) to ensure compatibility with current and future needs. In contrast, the OS conformity testing approach also “defines test procedures to demonstrate compliance of the API behavior” but “does not include performance tests (e.g., benchmark-tests), nor demonstration of capability of the operating system to manage combinations of configuration parameters” ([ARINC653P3d3], p. 8). This means that instead of using many different configurations for testing, the approach selects “a representative and suitable set of configuration data” ([ARINC653P3d3], p. 8).
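A minimal sketch of such a limited performance check is given below; the measured stub, the WCET bound, and the pass/fail reporting are hypothetical, and a real test bench would measure the actual APEX call on the target hardware rather than on a POSIX host:

    #include <stdio.h>
    #include <time.h>

    /* Stand-in for an APEX service call under measurement. */
    static void SERVICE_UNDER_TEST_STUB(void) { }

    int main(void)
    {
        const long wcet_ns = 10000;        /* specified WCET, hypothetical */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        SERVICE_UNDER_TEST_STUB();         /* the call being timed         */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long elapsed_ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                        + (t1.tv_nsec - t0.tv_nsec);
        printf("elapsed %ld ns: %s\n", elapsed_ns,
               elapsed_ns <= wcet_ns ? "PASS" : "FAIL");
        return 0;
    }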

Executability of the Approaches. The biggest difference between the two approaches is their executability. For the bare module testing approach, a test suite puts the defined approach into practice by transforming the textually specified test designs into executable test procedures and generic test applications, by defining in detail all configurations to be used for testing, and by implementing methods to log the SUT's behavior and check its correctness in an automated way. Moreover, the test suite uses the tool chain environment that partially mirrors normal operation (e.g., data load generation, data loading), which means that – as a side-effect – these tools are also checked for their interoperability and their usability.³

For the OS conformity testing approach, the standard only describes the test procedures to be passed for compliance, but does not consider execution-related details like IMA platform-specific parameters, configuration details, loading of applications, or capturing of test results. In particular, the information provided for module configuration is intentionally kept very general and incomplete (“A representative and suitable set of configuration data shall be used.” [ARINC653P3d3], p. 8) and, thus, it cannot be ensured that all tests can be executed successfully with a configuration that only complies with the few concrete values provided in the document. Moreover, this means that the configuration tables used by different test suite implementations may vary enormously and might even change the test result because IMA module configurations (as used by the IMA modules described in this thesis) are very complex and contain many interdependent configuration parameters.

The aforementioned details are not addressed by the standard document because the respective information is considered implementation dependent. Nevertheless, this prevents the defined set of test procedures from being implemented in different test suites in a way that allows comparison of the results. Since this limitation is well known to the authors of the standard document, they suggest that a vendor-independent organization⁴ shall provide a conformity test suite accessible to all OS suppliers. This approach to obtaining an executable test suite will necessarily require defining configuration aspects in more detail, as well as probably data loading and result capturing.

³ Note that these tests are driven by practical needs rather than by pre-defined test cases.

⁴ In the standard document, the authors suggest that the Open Group could manage the test suite, as it also does for POSIX, among others.

Comprehensiveness of the Test Specifications. As described in the previous paragraphs, the objective of the OS conformity testing approach is to demonstrate compliance with the ARINC 653 specification without addressing performance testing, timing constraints, I/O testing, or testing with different configurations. In particular, the standard document states that “passing the CTS [Conformance Test Suite] does not itself constitute proof of fitness of the operating system for any purpose, only adherence to the API specification ARINC 653 part 1” ([ARINC653P3d3], p. 9). In contrast, the bare module testing approach also addresses these areas in order to gain confidence that, firstly, the IMA modules comply with the ARINC 653 specification and all other specification documents and, secondly, that they can be used for their intended purposes. The latter is achieved as follows: (1) by providing test procedures that address timing, performance as well as I/O usage (depending on the area, with varying intensity), (2) by using a huge variety of test configurations which (ideally) also include realistic final configurations, and (3) by succeeding the bare module testing with so-called configured IMA module tests which repeat all relevant test procedures but use IMA modules with final configurations for testing.

Moreover, the approaches differ in the intensity with which each basic service is tested. Although both approaches test in normal and robustness test procedures that the API services show the specified functionality and return the specified return codes, the OS conformity testing approach tests each service in separate test procedures and uses other services only to achieve the required pre-conditions. For example, the API services CREATE_BUFFER, SEND_BUFFER, and RECEIVE_BUFFER are tested by separate sets of test procedures. Additionally, the OS conformity testing approach tests each service only with the minimum set of parameter variations that is required for checking all functionality variations, but not necessarily all parameter boundaries. For example, the test procedure addressing CREATE_BUFFER in normal case testing uses only one set of parameters (the buffer can store up to 5 messages with maximum message size 100 bytes), while the test procedures for testing SEND_BUFFER use only two other sets of buffer parameters (buffer 1 can store up to 128 messages with maximum message size 128 bytes; buffer 2 can also store 128 messages but limits the maximum message size to 32 bytes). The buffer parameters seem to have been chosen arbitrarily since the respective values do not vary much and do not include the defined maximum values (i.e., SYSTEM_LIMIT_NUMBER_OF_MESSAGES and SYSTEM_LIMIT_MESSAGE_SIZE). In contrast, the bare module testing approach often combines the testing of several basic services in one test procedure. This is, for example, demonstrated in the test procedure example for intra-partition communication testing that has been presented in Sect. 6.4.3.
In this test procedure, the API services CREATE_BUFFER, SEND_BUFFER, and RECEIVE_BUFFER are used to create buffers with different parameters (including the parameter boundaries) which are then filled and emptied several times. Since the functionality of all three API services is tested in this test procedure, it is possible to verify the correct implementation of buffers (not only of the API services accessing buffers), which includes checking that messages are not corrupted or received in the wrong order and that attempts to overflow or underflow the buffer result in the correct error codes. Summarizing, the bare module testing approach seems to be more comprehensive since the API services (and thus also the implemented concepts) are usually tested in more situations (e.g., different operational modes, various parameter combinations) than in the OS conformity testing approach. For showing basic conformance with the ARINC 653 specification, the specified test procedures are sufficient; for increasing the confidence that the IMA module and its respective OS implementation can be used for the intended purposes, more tests are required and are, therefore, included in the test suite for bare module testing.
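The fill-and-empty pattern can be sketched as follows in C; the buffer model and error codes are simplified stand-ins for the APEX buffer services of ARINC 653 Part 1, and the real test procedure drives the actual API on the module under test:

    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-ins for APEX buffer services and return codes. */
    enum { NO_ERROR = 0, NOT_AVAILABLE = 1 };
    #define MAX_NB_MESSAGE   5
    #define MAX_MESSAGE_SIZE 100

    static char buf[MAX_NB_MESSAGE][MAX_MESSAGE_SIZE];
    static int count = 0, head = 0, tail = 0;

    static int send_buffer(const char *msg)          /* like SEND_BUFFER    */
    {
        if (count == MAX_NB_MESSAGE) return NOT_AVAILABLE; /* buffer full  */
        strncpy(buf[tail], msg, MAX_MESSAGE_SIZE - 1);
        tail = (tail + 1) % MAX_NB_MESSAGE;
        count++;
        return NO_ERROR;
    }

    static int receive_buffer(char *msg)             /* like RECEIVE_BUFFER */
    {
        if (count == 0) return NOT_AVAILABLE;              /* buffer empty */
        strncpy(msg, buf[head], MAX_MESSAGE_SIZE);
        head = (head + 1) % MAX_NB_MESSAGE;
        count--;
        return NO_ERROR;
    }

    int main(void)
    {
        char msg[MAX_MESSAGE_SIZE];
        int failures = 0;

        /* Fill the buffer up to its configured boundary ... */
        for (int i = 0; i < MAX_NB_MESSAGE; i++) {
            snprintf(msg, sizeof msg, "message %d", i);
            if (send_buffer(msg) != NO_ERROR) failures++;
        }
        /* ... the overflow attempt must yield the specified error code ... */
        if (send_buffer("overflow") != NOT_AVAILABLE) failures++;

        /* ... emptying must return the messages uncorrupted and in order ... */
        for (int i = 0; i < MAX_NB_MESSAGE; i++) {
            char expected[MAX_MESSAGE_SIZE];
            snprintf(expected, sizeof expected, "message %d", i);
            if (receive_buffer(msg) != NO_ERROR || strcmp(msg, expected))
                failures++;
        }
        /* ... and the underflow attempt must also be rejected. */
        if (receive_buffer(msg) != NOT_AVAILABLE) failures++;

        printf("%s\n", failures ? "FAIL" : "PASS");
        return failures != 0;
    }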


Extensibility of the Approach's Test Suite. In order to also cope with future extensions of the ARINC 653 specification, it is important to investigate and compare which changes are necessary to address further services. If further test procedures (e.g., for new services) are to be added to the test suite for bare module testing, additional external test specifications are implemented and, if required, the generic test application is extended. As discussed in the previous section, the latter may require expanding the minimum configuration requirements which, in the worst case, can cause changes to all existing test procedures and may result in the use of several tailored test applications. However, the impact of the required changes has to be evaluated on a case-by-case basis. In contrast, if the set of test procedures defined by the OS conformity testing approach is to be extended, further textual test procedures have to be defined in the standard document. An implementation of the described approach will probably use different tailored test applications and, consequently, one or several new test applications will have to be implemented for the new test procedures without affecting the existing set of test applications. This means that extending the bare module testing may – in some cases – require more far-reaching changes than extending the OS conformity testing. In other cases, however, the implementation costs for extending the OS conformity testing may exceed the costs for extending the bare module testing test suite.

Conclusion of the Comparison. Summarizing the above discussions, the bare module testing approach seems to be more advantageous for conformity testing as well as for usability testing than the OS conformity testing approach. This is due to several factors: Firstly, the set of test procedures is more comprehensive and includes test areas which are not covered by the OS conformity testing approach. Secondly, the approach is implemented in an (almost fully) automated test suite which allows automated test execution and test evaluation in an environment that mirrors normal usage of the tools for load generation and data loading. Moreover, it includes a thought-out approach for test result capturing and logging which imposes only minimum configuration requirements but needs no additional test instrumentation services. In contrast, the OS conformity testing approach lacks several details which are required for a test suite implementation. Thirdly, the bare module testing approach uses several different configurations which are clearly defined in the test suite and are used to instantiate several test procedure templates. Fourthly, the bare module testing approach uses a generic test application while an implementation of the OS conformity testing approach would probably use various tailored test applications. However, in case the generic test application of the bare module testing approach has to be extended (e.g., to address new services to be tested or to add additional tailored application behavior), the effects have to be thoroughly evaluated since this may cause costly tailoring.


Chapter 9

Evaluation of the Approach for Testing a Network of IMA Platforms

In Chap. 7, an approach for testing the communication flow in a network of configured IMA modules has been described which uses generated test cases to control the inter-system communication and to evaluate the observed behavior. While that chapter focused primarily on the test case generation algorithm and on considerations for implementing the approach, this chapter aims at critically analyzing the approach, discussing possible improvements, and comparing it with other approaches. The structure of this chapter follows these objectives: Section 9.1 discusses the pros and cons of the approach and suggests possible improvements; then, Section 9.2 addresses the comparison with other work.

9.1 Assessment

This section analyzes and assesses the approach for testing a network of configured IMA platforms as it has been described in Sect. 5.3.1 and detailed in Chap. 7. The assessment addresses the following aspects:

• the general approach,
• the communication schedule generation algorithm,
• handling of huge amounts of generated test cases,
• considerations for test execution and test evaluation,
• the lessons learnt from the algorithm's prototype implementation, and
• applicability considerations.

General Approach. The approach addresses means for testing the inter-system communication in a network of IMA platforms, which means testing various combinations of intra-module and inter-module communications as they can occur between the network elements. The aim of the approach is to verify that all inter-system communication flow is possible (within the range of the network configuration and the platform specifications) and to show that the network provides its characteristics under any kind of legal operating conditions. Thus, the approach complements the other system test steps (particularly the preceding steps for platform testing and system integration) and ensures that, for the tested networks, any kind of allowed application communication behavior is supported.


To achieve this aim, the approach suggests, on the one hand, to restrict the number of tested IMA network configurations and to focus exclusively on a network of IMA modules with their final configurations and, on the other hand, to use specific test applications for test execution instead of the respective real applications in order to expand the tested communication behavior. This combination of measures is advantageous because it allows the communication flow of the specific network configuration (i.e., the specific combination of IMA module configurations) to be tested as comprehensively as possible without being restricted by the pre-defined communication behavior of the real applications and, at the same time, significantly reduces the number of test cases by not considering the potentially high number of possible platform configuration combinations. Thus, the goal of achieving full test coverage is only limited by the time provided for this test step. But restricting the testing to only one combination of platform configurations – the presumably final configuration – is also disadvantageous because, in case of configuration changes, all present test execution results become outdated and re-generation and re-execution of the test cases is required. However, the rationale behind focusing exclusively on the (presumably) final configuration is twofold: (1) The automated test case generation approach facilitates the re-generation. (2) The inter-system communication testing is performed after various platform, application and system tests using exactly these platform configurations. Since any configuration changes would also require repeating all these tests, configuration changes are expected to be avoided whenever possible in order to avert the related testing costs. In contrast, late application changes or improvements – usually altering the communication behavior – can often not be avoided and, therefore, it is advantageous to pursue a pro-active testing approach by verifying all possible communication behavior and ensuring that communication behavior changes are unlikely to encounter flaws in the platform communication behavior, the platform performance, or the network communication characteristics.

The main feature of the approach is its strong focus on automation: (a) on an automated test data generation algorithm, (b) on means for automated handling of the generated test cases in order to filter and sort them, and (c) on considerations for automated test preparation, test execution and test evaluation. This means that the approach, on the one hand, supports the automated generation of all possible communication schedules, each of which represents one test case, but, on the other hand, also comprises means to handle the huge amounts of generated test cases by providing automated (but configurable) restriction and sorting functions. More detailed assessments of these aspects are given in the following paragraphs.

Although the main parts of the approach can be automated and allow quite flexible reaction to network changes, the effort required for implementing the generation algorithm and appropriate restriction and sorting rules, as well as for the test execution setup, should not be underestimated and has to be assessed with respect to the expected findings on a case-by-case basis. The following paragraphs about the lessons learnt from the algorithm's prototype implementation and about general applicability considerations discuss these topics.

Communication Schedule Generation Algorithm. The approach for communication flow testing in a network of configured IMA platforms is based on an algorithm which describes how to generate the test cases needed for testing all possible communication flow situations in the network configuration under test. The test cases generated by the generation algorithm are so-called communication schedules. Each schedule describes the (communication) function calls of one possible communication flow situation, i.e., when which partition shall be able to perform a function call for receiving or sending a message. Due to the nature of the communication schedules, the generation algorithm first defines very short initial communication schedules (one for each initial set of possible simultaneous platform behavior) and then extends these initial communication schedules step by step by determining for each one all possible subsequent combinations of function calls. Thus, it generates all combinations of simultaneous and sequential communication with respect to interleaving and temporal variance. Since the aim of the approach is to generate (and execute) all test cases, using the algorithm-based generation approach ensures that there are no unintentionally forgotten test cases (as they are likely to occur when test cases are designed manually). Moreover, a clearly defined algorithm enables automated test case generation which, furthermore, means that the test cases can be generated and stored in a way that simplifies further automated handling and supports automated test case execution. This automation support is of particular importance since, for the most comprehensive testing, many different factors have to be considered whose combinations naturally result in very many test cases.
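A minimal C sketch of such a depth-first extension with an integrated restriction hook is given below; the schedule encoding, the fixed length bound, and the example restriction (no immediate repetition of a combination) are hypothetical simplifications of the algorithm in Sect. 7.2:

    #include <stdio.h>

    #define MAX_LEN  3     /* maximum schedule length (test-case length)   */
    #define N_COMBOS 4     /* possible combinations of simultaneous calls  */

    typedef struct { int steps[MAX_LEN]; int len; } SCHEDULE;

    /* Restriction hook: prune extensions deemed irrelevant for the test
     * objective. Here (arbitrarily) immediate repetitions are dropped. */
    static int restricted(const SCHEDULE *s, int next)
    {
        return s->len > 0 && s->steps[s->len - 1] == next;
    }

    static void emit(const SCHEDULE *s)
    {
        for (int i = 0; i < s->len; i++) printf("%d ", s->steps[i]);
        printf("\n");
    }

    /* Depth-first: extend the schedule by each non-restricted combination,
     * recurse, and emit complete schedules of the target length. */
    static void generate(SCHEDULE *s)
    {
        if (s->len == MAX_LEN) { emit(s); return; }
        for (int c = 0; c < N_COMBOS; c++) {
            if (restricted(s, c)) continue;   /* pruned before generation */
            s->steps[s->len++] = c;
            generate(s);
            s->len--;                         /* backtrack */
        }
    }

    int main(void)
    {
        SCHEDULE s = { .len = 0 };
        generate(&s);
        return 0;
    }

Because the restriction predicate is evaluated before recursing, whole subtrees of uninteresting schedules are never generated in the first place.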

Nevertheless, neither the clearly defined generation algorithm nor the possibility to automate the different test phases can overcome the problem that testing all possible communication flow scenarios in a specific network of IMA platforms might not be practically possible, due to the time required for generating and, more critically, for executing the huge amount of test cases. This problem can be addressed by a combination of measures: by optimizing the communication schedule generation algorithm and its implementation, by providing means to influence and handle the algorithm's results (particularly by focusing on a reduced set of test objects), and by optimizing the test execution. Regarding the generation algorithm's share of these suggestions, this has been addressed in Chap. 7 by (a) directly incorporating the restriction functions such that non-selected test cases are excluded from the generation, (b) analyzing the pros and cons of depth-first, breadth-first and mixed depth-breadth algorithms, and (c) discussing performance optimization issues (with respect to execution time and memory utilization) to be considered when implementing the generation algorithm and selecting the hardware for its execution. Since these solutions include conceptual as well as practical optimization suggestions, it can be assumed that achieving a suitable implementation of the generation algorithm is not the restricting factor for implementing the approach (i.e., it can probably easily generate more test cases than can be executed in the available test execution time).

The problem of having very many (or even too many) test cases becomes even more apparent when considering that the generation algorithm described in Sect. 7.2 takes into account only the most important influencing factors – mainly the network properties (i.e., platform configurations with scheduling information, communication link characteristics and their related performance properties) – and does not yet consider other influencing factors as mentioned in Sect. 7.5: further communication flow variations (e.g., variations of message sizes and message content types within the allowed ranges, usually together with consideration of more detailed performance information) and execution-related variations (e.g., variations of platform scheduling start times), among others. Since each of these would naturally multiply the number of generated test cases, this emphasizes once more how important it is that the approach supports various means for efficient handling of this problem (from restriction functions to sorting functions to optimized test case execution). However, it also shows that realizing the aim of testing all possible communication flow situations is very unlikely and that testing probably has to be restricted to a (small) subset of the possible communication flow situations instead. This again highlights the importance of implementing appropriate restriction and sorting functions.

Summarizing, the suggested generation algorithm provides the means to generate, systematically and as effectively as possible, (a scalable subset of) all possible communication schedules that fulfill the selected test objectives, while also limiting the testing effort as required. To achieve this, the generation algorithm (which itself ensures that all combinations of simultaneous and sequential communication flow behavior are generated) integrates restriction and sorting functions to be defined and fine-tuned by the test case designers. The algorithm is intended to be implemented as a tool to support test automation, and minor performance issues can be tolerated because of its offline application.

Handling of Huge Amounts of Generated Test Cases. The generation of all possible communication schedules for a realistic network configuration usually results in huge amounts of test cases whose generation and execution time go beyond the limits of reasonable testing time. To deal with this problem, and to complement the suggestions for optimizing the generation algorithm's implementation and the test execution approach (as discussed in the previous and following paragraphs), a feasible approach is to concentrate on a subset of the test objectives in order (a) to reduce the number of generated test cases and (b) to sort the remaining ones such that the more important test cases (i.e., those which are more likely to uncover problems) are tested first. To this end, the network testing approach suggests that user-configurable and deterministic restriction functions as well as user-configurable and deterministic sorting functions are integrated into the generation algorithm: The restriction function can avoid the generation of test cases once the beginning of a set of test cases has been identified as less important or not relevant for the test objectives. The sorting function is applied to the set of generated test cases, sorts them according to their importance and then allows selecting only the subset of most important test cases. Thus, the usage of restriction and sorting functions provides a systematic way to restrict the number of test cases to a much smaller subset. However, this is also the main disadvantage of this approach because it emphasizes the problem of finding appropriate restriction and sorting functions: While manually generated test cases might restrict (by their nature) in a less systematic way and thus might still include test cases which reveal faulty situations, automated algorithms can benefit from a test case designer's experience only through the knowledge incorporated in the restriction and sorting functions. Moreover, common rules for finding the most effective functions cannot be given because their definition depends – among other things – on the network under test and the selected test objectives. This means such problems have to be handled on a case-by-case basis and require detailed knowledge and experience of the test case designers defining the restriction and sorting functions. Nevertheless, the problem can be dealt with partially by also including random restriction functions.

The restriction and sorting functions addressed so far in Chap. 7 are very basic (e.g., the restriction functions particularly focus on API calls with return codes that indicate no error) and not tailored to application-specific test objectives (particularly because there is no specific application or real network configuration to be considered). However, applied to the example networks, they have proven to reduce the number of generated communication schedules quite effectively and, although they cannot prevent the number of communication schedules from increasing exponentially with the communication schedule's length, this shows that more specialized restriction functions can be based on them.

Following the above considerations, handling the huge amounts of communication schedules seems to necessitate their reduction using appropriate restriction functions. The basic restriction function examples discussed so far were focusing on limitations based on previous communication behavior and expected return codes of the API function calls – always assuming that the state of the system (i.e., of the network as well as of the IMA platforms) changes at each time tick or with each communication interaction, and thus that repeating a specific sequence of communication flow behavior is worth testing (e.g., to detect buffer overflows).
In order to obtain more effective restriction functions, it should be investigated whether a definition of equivalent system states can be found that allows detecting recursion and could contribute to reducing the number of communication schedules by eliminating recursive sequences. Mechanisms described, for example, in [HdMR04] can support finding an appropriate heuristic function.

Summarizing, the approach suggests two different means to handle the huge amounts of generated communication schedules, which have proven to effectively reduce the number of generated communication schedules and provide heuristic sorting (a sorting sketch is given below). Nevertheless, the examples have also demonstrated that, for real network configurations (which are usually much more complex than the considered example networks), it is necessary to define specialized functions – a task which remains to be solved using the test case designer's experience.
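The following C sketch shows how a deterministic, user-configurable sorting function could rank generated schedules so that only the most important subset is executed; the importance heuristic (more distinct combinations first) is purely hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_LEN 3

    typedef struct { int steps[MAX_LEN]; } SCHEDULE;

    /* Hypothetical importance heuristic: schedules touching more distinct
     * combinations are assumed more likely to uncover problems. */
    static int importance(const SCHEDULE *s)
    {
        int seen[16] = {0}, distinct = 0;
        for (int i = 0; i < MAX_LEN; i++)
            if (!seen[s->steps[i]]++) distinct++;
        return distinct;
    }

    static int by_importance_desc(const void *a, const void *b)
    {
        return importance((const SCHEDULE *)b) - importance((const SCHEDULE *)a);
    }

    int main(void)
    {
        SCHEDULE all[] = { {{0, 0, 0}}, {{0, 1, 2}}, {{1, 1, 2}} };
        size_t n = sizeof all / sizeof all[0];

        qsort(all, n, sizeof all[0], by_importance_desc);

        /* Test execution would then take only the first k schedules. */
        for (size_t i = 0; i < n; i++)
            printf("%d%d%d (score %d)\n", all[i].steps[0], all[i].steps[1],
                   all[i].steps[2], importance(&all[i]));
        return 0;
    }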

Considerations for Test Execution and Test Evaluation. The generation of the communication schedules is an essential pre-requisite for the communication flow testing, which has to be complemented by approaches for test execution as well as test monitoring and evaluation. The general approach assumes these steps to be as automated as possible (with no or very few manual steps) because only this very high degree of automation ensures that, during test execution, subsequent test data can be provided in hard real-time and the respective test results can be evaluated on-the-fly or transmitted immediately for centralized offline evaluation. In addition, it enables a cost- and time-conscious handling of the huge number of test cases during testing. However, this automated handling depends to a large extent on the characteristics of the network configuration under test – particularly on the availability of specific ports for test case distribution and test log collection. To ensure that the overall approach for communication flow testing can be implemented for various network configurations, Sect. 7.3 discusses several approaches for distributing the test cases, interpreting them, and collecting the test results for test evaluation. Thus, the limiting factors of different network configurations are known and can be avoided if a specific approach is to be selected (e.g., by providing appropriate ports).

As discussed above, the main problem of the communication flow testing approach is the huge number of possible communication flow test cases with respect to the available testing time. The solutions discussed so far address only the test case generation. However, besides reducing the number of test cases, another solution is to further optimize the test execution and test evaluation. Assuming that these phases are already automated, the execution time of a single test case cannot be further accelerated. However, it is possible to duplicate the test setup and thus allow parallel execution of different test cases. This means that, by providing many similar networks under test and respective test systems, the testing time can be reduced significantly. The limiting factor of this solution is the cost of the duplicated hardware (and the laboratory space required for its setup). Since the hardware can potentially be shared with preceding or succeeding test phases, this solution can clearly contribute to mitigating the problem.

It has to be remarked that – since the main focus of Chap. 7 is on the generation algorithm and not on a complete communication flow test suite – the respective sections naturally and intentionally do not contain descriptions of practice-proven approaches and algorithms. This includes, among others, an algorithm for test evaluation based on the communication schedule and the test execution log, clock synchronization of the IMA modules (required when comparing the test execution logs from different IMA modules), and synchronized partition scheduling starting (to be compliant with the partition scheduling start points assumed in the communication schedules). Mechanisms to deal with the clock synchronization are, for example, addressed in [Rus02b], p. 6 (discussing clock synchronization in the Time-Triggered Architecture) and may be borrowed to solve these issues. Future case studies implementing the approach for one or several network configurations can reveal more data for detailed assessments.

Lessons Learnt from the Algorithm’s Prototype Implementation. The generation algorithm described in Sect. 7.2 is based on a recursive function which appends all possible combinations of simultaneous communication behavior to the input communication schedule and then recurs for each new communication schedule in a depth-first approach. The prototype implementation uses a tree-like internal representation for the intermediate and final communication schedules. Applying the prototype implementation of the generation algorithm to simple and small example networks has revealed that – due to the huge number of test cases – it is necessary to use an optimized internal storage of the communication schedules which goes beyond the usage of memory-efficient data structures. This means that, in most cases, the internal communication schedule tree (containing the intermediate as well as the final communication schedules) needs more memory than can be stored in the RAM of the execution platform, which limits the number of test cases that can be generated. Future implementations should therefore provide means to store the internal communication schedule tree on the executing platform’s hard disk (e.g., in a file or a database).

Although the execution time of the prototype implementation of the generation algorithm has not been the limiting factor, the generation of the communication schedules can be further optimized by parallelizing the execution of the recursive function. This requires the algorithm’s implementation to be designed for parallel execution (e.g., parallel execution for each initial combination of communication flow triggers) as well as the availability of an appropriate execution platform (i.e., a multi-CPU platform or a network of single- or multi-CPU platforms). Using a network of several platforms would also partially solve the aforementioned memory problem because each network element would use its own memory to store the internal communication schedule tree. A specific load balancing algorithm can further improve this approach. Nevertheless, the cost of the specific execution platform also has to be considered.

Applicability Considerations. The above discussions have shown the following: On the one hand, there are solutions for implementing the approach for testing all communication flow situations, including test generation, test execution, and test evaluation. On the other hand, many of these solutions require very tailored approaches to contain the test execution costs, which itself can cause costly developments or require expensive purchases. This means that each project applying this approach needs to define the measures for cost and efficiency with respect to (a) generating the set of test cases to achieve the coverage target and (b) executing the produced test set. In particular, this should address the means for reducing the number of test cases by selecting the most appropriate ones and the possibilities to optimize the test case execution. Moreover, it should be assessed how well the approach – within the specific application context – complements the preceding and succeeding test steps.

9.2 Comparison

While the previous section has assessed the approach for communication flow testing in a network of IMA platforms, this section briefly discusses the issues in finding related approaches for comparison.

Approaches for Network Integration Testing. As described in Sect. 5.1, using an IMA-based system architecture affects the testing approach at all levels – from platform testing to application testing to system integration to multi-system integration. In particular, this is caused by the use of shared IMA platforms which usually host the applications of several systems. At the system integration level, this means that no system supplier can provide a fully integrated network of controllers and peripheral equipment because the applications of other systems (usually coming from other suppliers) can only be integrated by the integrator. Thus, the integrator has to compile a network of fully integrated IMA modules and all their peripherals during system / multi-system integration. To support fault localization during these integration steps, preceding network integration testing activities like the described approach for communication flow testing can demonstrate the compliance and compatibility with the network configuration independently of the system applications to be hosted by the IMA modules because they are applied in a network of fully configured (but otherwise non-integrated) IMA modules. However, approaches for network integration testing are rarely addressed in the literature. This is not surprising for several reasons: (1) This test step has been newly introduced for testing of systems based on integrated modular avionics – a technology which has only recently been taken into use. (2) Network integration testing is only performed at the integrator’s site, which means that there is no need to discuss the details of the testing approach with several system suppliers or even to standardize it – and, hence, to provide corresponding documents. (3) Moreover, a testing step which is performed only to prepare multi-system integration and to simplify fault localization in case of failure situations is not relevant for system qualification and thus might be performed in a less systematic and less detailed way (than aimed for by the discussed communication flow testing approach) or might even be skipped. As a consequence, the approach introduced in this thesis cannot be directly compared with other approaches for communication flow testing in a network of IMA platforms.


Other Approaches. Since a comparison of the communication flow testing approach with other approaches for network integration testing in an IMA architecture is currently not possible, the scope of comparable approaches could be widened. Depending on the selected other approach, this necessarily requires constraining the comparison to certain (probably rather small) parts of the overall communication flow testing approach rather than performing a comprehensive comparison. Moreover, this assessment should give priority to investigating how the communication flow approach (and especially the test case generation part) could benefit from the other approaches and the experience gained with them. For finding suitable other approaches, it has to be considered that the described communication schedule generation algorithm uses a model (i.e., an appropriate representation of the network configuration as well as of the module and network performance information) for generating all possible test cases (i.e., all possible communication schedules). For applying this approach, it is assumed that the set of all possible test cases has to be restricted to a subset which most likely focuses on one specific test objective and reduces the tested communication flow to application-like interactions. Bearing these characteristics in mind, future development can gain from the field of model-based test case generation according to given coverage criteria, the area of workload generation for performance testing, schedulability analysis, and network integration testing in telecommunications, among other areas. Further investigation (and selection) of specific other approaches is outside the scope of this thesis because it depends entirely on the needs revealed during future case studies. Future work should address this analysis in more detail and should particularly consider the findings (and practical needs) from applying the described communication flow testing approach as one part of the system / multi-system integration in an IMA architecture.


Chapter 10

Conclusion

In this thesis, system testing in avionics has been addressed from two angles: At first, this thesis has elaborated – from a systems engineering point of view – on the development and V&V of avionics systems, particularly of IMA-based systems. Thereafter, the described general testing activities during system integration have been detailed in two case studies which considered two newly identified testing areas for IMA-based architectures. The first case study has described a test suite for automated testing of bare IMA platforms. The second case study has presented an approach for testing the communication flow in a network of IMA platforms.

In the systems engineering part, this thesis has described how avionics systems are currently developed and how the technology and the development processes have evolved – particularly with respect to system testing in avionics. This has been achieved by (a) introducing the general processes and approaches to be considered when developing and testing avionics systems, (b) summarizing the characteristics of avionics systems, and (c) discussing the effects of evolving towards integrated modular avionics. This has resulted in two complementary “models”: a process model and a system model. The process model has assembled the development and testing activities from aircraft level to equipment level and has elaborated on the differences depending on the chosen architecture model and the used platform types. Moreover, it has compiled how the activities and responsibilities have evolved due to the usage of integrated modular avionics and which new testing activities have been identified for IMA-based systems. In addition, the developed process model has addressed general testing-related processes, methods and tools (e.g., for test case selection, test data generation, test execution, test evaluation) and assessed them with respect to their influence on automated testing. The system model has assembled the knowledge about avionics systems with respect to architecture models, redundancy concepts, used platform types, and common aircraft networking technologies and has described how these have evolved. The focus has been on systems based on integrated modular avionics: Besides analyzing IMA-based system architectures, the specific characteristics of IMA platforms like the IMA operating system as defined in the ARINC 653 specification and the configurability of IMA platforms have been detailed. Together these two models have delivered an encompassing reflection of system testing in avionics, which was also needed as the conceptual foundation for the other parts of this thesis.

In the process model, it has been revealed that the system test approach for integrated modular avionics differs from the general system test approach due to the specific characteristics of IMA modules and the IMA approach – particularly because IMA modules are highly configurable, shared resources that provide a standardized API and standardized HW interfaces, which allows them to be provided by different suppliers. This has led to the identification of two new testing areas: testing of bare IMA platforms and testing the communication flow in a network of configured IMA platforms. One possible approach for each of these two areas has been addressed by case studies in the second part of this thesis. These two approaches have aimed to (a) describe means that verify the respective objectives as generically and comprehensively as possible (within the given testing time) and (b) support automated testing. The former has been achieved in both approaches by using test applications instead of any avionics system-specific applications. The test suite for bare IMA platform testing has additionally considered many different test configurations (instead of limiting the testing to the set of final configurations) and has also used test specification templates that abstract from concrete configuration data and can be instantiated for several (appropriate) test configurations. Automated testing has been addressed with different means: In the case study addressing bare IMA module testing, it has been supported by automating the rule-based generation of test configurations, by providing an automated tool chain for instantiating the test procedures of the test suite and preparing the IMA platform under test for the test execution, and by using a test bench which allows (almost) fully automated testing. In the case study addressing communication flow testing in a network of IMA platforms, test automation has particularly been supported by providing a sophisticated generation algorithm that allows generating and prioritizing the set of test cases that can verify the inter-system communication flow of a specific network configuration in the given testing time. Together the two case studies have provided detailed implementation-related insights into possible test approaches for these two new testing areas and have thus complemented the systems engineering part.

This thesis, particularly the systems engineering part, has been compiled using various publications about testing, avionics systems, systems engineering, and software engineering for real-time systems, among others, which has made it possible to deliver a comprehensive overview of avionics systems and their development and testing. Naturally, the described process and system models are not all-embracing and complete because this thesis has used only publicly available material (which excluded the huge amount of existing confidential documents). This has naturally constrained this thesis to a high-level perspective. It has focused on describing the general system development and testing approaches (instead of describing the tailored processes and methods agreed upon for the certification of a specific aircraft project) and also on the generic aspects of avionics architectures and system specifications (instead of providing the architecture and system specifications for a specific aircraft). The delivered process and system models have provided a comprehensive overview with consistent terminology. The broad discussion of the pros and cons of IMA has given useful guidelines for the reorganization of processes and methods required for IMA-based systems. Moreover, these results can be used as a reference, for example, when defining the processes and system specifications for a specific aircraft project.

For the case studies, the “lessons learned” part of this thesis has evaluated the described approaches, assessed the achievements, and discussed the pros and cons of the respective approach’s implementation and of possible improvement options. The main points can be summarized as follows: The assessments have – as expected – confirmed that test automation is essential for assuring the quality of the class of systems under test considered in this thesis, for example, by achieving a higher degree of test coverage (compared to manual testing within the same execution time) and by enabling repeated execution of the same set of test cases whenever the system under test has changed. The two case studies have also shown that, on the one hand, the test process clearly benefits from a fully automated test suite but that, on the other hand, the level of automation and the areas of the test suite that need to be automated may vary depending on the approach and the test objectives. Moreover, in some cases, it has turned out to be advantageous that certain test steps are executed manually because this allows the test engineer’s domain expertise to be used.

For the first case study, this assessment has become evident from the following observations: (a) It has been essential to provide an automated rule-based test configuration generation tool (particularly to avoid manual creation of all the test configuration tables) and a test bench for (almost fully) automated test preparation and test execution. (b) It has been possible to accept the specific limitations of the tool chain (with respect to data loading) and the restrictions of the test system’s specification formalism (particularly with respect to the handling of large data structures) because the test suite has implemented various means to cope with them. (c) It has been acceptable that the test design of the test procedures has been provided as structured text which, consequently, has required manual implementation of the test procedures but has also allowed several errors and inconsistencies in the underlying specification documents to be found.

For the second case study, these findings have been substantiated by the following: (a) For the approach, it has been fundamental to use an automatable test case generation algorithm because realistic networks of IMA platforms are very complex and it is not feasible to manually generate all combinations of simultaneous and sequential communication behavior (with respect to interleaving and temporal variance) or even to repeat the test case generation when the network configuration has changed. (b) It has been considered essential to rely on means for fully automated test preparation, test execution and test evaluation because the expected huge number of generated test cases could not be executed otherwise. (c) It has been regarded as beneficial to use the test engineer’s domain knowledge for (manually) defining the needed restriction and sorting functions because the investigated restriction and sorting functions have proven to be effective but not sufficient for realistic network configurations.

The two case studies included in this thesis have extensively covered the two new testing areas and discussed in depth the approaches as well as alternatives and implementation options. Future aircraft projects can benefit from these achievements and adapt the approaches or the implementations to the respective new environment (e.g., a new test system, an improved fully automated tool chain).

Future Work

System testing in avionics has been addressed in this thesis, on the one hand, by elaborating on the processes and means to be followed while developing and testing the systems and, on the other hand, by detailing two test approaches for testing single IMA platforms and a network of IMA platforms. The following discussion focuses on future directions concerning the findings and observations gained in this thesis.

Interaction with other domains. In this thesis, the development and testing of avionics systems (particularly of IMA-based systems) has been addressed in detail. As described, avionics systems are usually safety-critical real-time systems – a type of system that can also be found in other domains like, for example, aerospace, automotive and railway. Since these domains – especially the automotive domain – encounter similar demands and requirements as the avionics domain (e.g., with respect to cost, size, weight, power consumption, and increasing functionality), there has been a similar evolution towards common standardized platforms and new networking technologies. One aspect of future work should focus in more detail on the similarities and differences and assess how the avionics domain could benefit from synergy effects in the following areas: (a) the processes and methods for developing and testing avionics systems and their components, (b) the system specifications and the standards for avionics components and networking technologies, (c) approaches for testing single IMA platforms, and (d) approaches for testing a network of IMA platforms. Such an assessment may additionally incorporate technologies, products, standards, and development and testing processes developed for other markets (e.g., the area of telecommunications). Similarly, it could be analyzed how the findings achieved in the avionics domain can be used in these other domains or tailored according to their needs.

Another aspect of future work could be further contributions via the APEX working group to the ARINC 653 specifications – particularly to the conformity test specification defined in Part 3 of ARINC 653. Activities in this area had been started by providing comments on the conformity test suite (see [Ott05]). Future activities could involve the following: The findings obtained when implementing the test procedure templates for the bare IMA module test suite could be used as a basis for examining the ARINC 653 specifications for consistency and completeness. Furthermore, the observations gained when comparing the approach for bare IMA module testing with the conformity test suite (see Sect. 8.2.1) could be analyzed with regard to the improvement of future versions of the conformity test suite.

Future work on processes and means. The systems engineering part of this thesis has elaborated on the processes and means for the development and V&V of avionics systems, particularly of IMA-based systems. These descriptions have focused on the evolution as well as the current state. Currently, the airframers envisage applying more test-objective-oriented integration strategies, which may lead to a set of new integration processes. Future work may analyze (a) whether the testing approaches detailed in this thesis can still be applied and (b) how these approaches and the other processes and approaches for each integration testing step will then have to be changed.

Independently of these potential integration strategy changes, it has been observed in the two case studies that the current test bench does not support fully automated testing because automated data loading is not provided. Future work could examine how the processes which address the creation of new tools can be improved such that automated testing is appropriately considered during the development. In future development processes, it should be assured that, among others, such tools are generally suitable for usage in an automated environment, interoperability with other tools in the tool chain has been considered, and a common tool chain execution environment has been defined.

Future work on testing of bare IMA platforms. Considering the test suite for bare IMA platform testing, future work may also address the following areas: Firstly, if integrators continue to use the implemented test suite for regression testing, it should be analyzed whether the current test preparation environment (which requires manual data loading) can be improved such that it contains means for fully automated data loading. For example, this could mean taking an improved version of the current data loading tool into use, finding and integrating a new and more suitable data loading tool, or developing a new data loading tool that supports automated testing.

Secondly, if the integrators plan to extend the current test suite (e.g., such that the optional extended services described in Part 2 of the ARINC 653 specification can also be tested), the new test procedures will require changes to the test suite implementation, particularly to the generic test application and the CSP environment for commanding. Based on the assessment provided in Sect. 8.1, it should then be evaluated thoroughly whether the test suite should be re-implemented using another test system (for example, the RT-Tester 6) to avoid costly tailoring.

Future work on testing a network of IMA platforms. Considering the approach for testing a network of IMA platforms, future work may address (a) the generation algorithm, (b) its implementation, (c) a possible test bench, (d) the definition of effective restriction and sorting functions, and (e) the overall approach. Future activities can address these areas independently of a specific network configuration and a particular test bench (the first two areas and the last one) or in the context of specific case studies which apply the described approach to realistic (and thus probably more complex) network configurations (the other two areas).

Future work on the generation algorithm has been discussed in Sect. 7.5 and has considered algorithm-related changes, further influencing factors, further generic (i.e., non-application-specific) restriction functions, and further scheduling-related variations.

In the scope of this thesis, a prototype implementation of the algorithm has been developed which, for future case studies, may need to be taken further towards a full implementation. This task should be based on the assessment of the existing prototype implementation (see Sect. 7.4.1) in order to ensure that the full implementation is optimized with respect to time and memory requirements and thus allows the generation of much longer communication schedules.

Future work on the test bench should be performed in the context of a case study which defines the specific network configuration characteristics (e.g., the availability of test control ports) and the tool environment. For implementing the approach, the most important task is then to implement the means required for test execution and test evaluation (e.g., means for distributing the generated test cases to the test applications and means for collecting the communication logs during or after test execution).

Future work concerning the restriction and sorting functions may consider the improvement of the existing (application-independent) functions by gaining from other research areas, may examine the usage of application-specific functions, or may combine these improvement directions. For the former, future development may gain from the field of model-based test case generation according to given coverage criteria, the area of workload generation for performance testing, schedulability analysis, and network integration testing in telecommunications, among others. For the application-specific functions, detailed knowledge about the domain is required. Since this means that the definition of appropriate functions relies on the test engineer’s expertise, it should be analyzed how the overall integration testing approach can be improved such that application-specific knowledge can be gained in a more automated way.

Future work concerning the overall approach may also analyze how recent work in the area of static analysis and automated generation of test cases for hybrid systems (cf. [BFPT06], [PLK07], and [RT-Tester 6.2]) can be used for testing the communication flow of a network of IMA modules. In particular, this concerns how it can be applied to the problem of generating the set of most relevant test cases such that these test cases can be executed in the given testing time. For applying these new methods, it is required to find an appropriate transformation of the provided information (i.e., the given network configuration as well as the module and network performance information) into a model that can be handled by the new test automation framework (i.e., into one of the supported input formalisms or directly into the intermediate model representation used by the test automation system). Moreover, an appropriate test strategy has to be selected which guides the symbolic test case generator and thus determines which test cases are generated. Since different test strategies are supported, their applicability and effectiveness may be analyzed in detail: (1) It may be assessed whether generic test strategies which are driven by structural coverage considerations are powerful enough to effectively reduce the number of generated test cases and simultaneously focus on generating the relevant ones. (2) It may be investigated whether an appropriate fault model can be specified (probably using the test engineer’s domain expertise) such that the symbolic test case generator constructs test cases which can verify the absence of the respective failure types.

Finally, taking a broader perspective, future work may (and probably will have to) investigate how the basic ideas behind the evolution towards integrated modular avionics could be developed further – particularly, in order to reduce to an even greater degree the necessary space allocation, the resulting weight (for components and wiring), and the resulting power consumption. Obviously, these goals could partially be achieved by improving the existing components but should also be addressed from a different perspective – assuming that it is possible to gain a longer-lasting effect this way. Although all new or improved types of computing platforms, peripheral components, and networking technologies will have to comply with the same safety requirements, it should be investigated how new approaches could decrease the number of components while, for example, still assuring a similar level of redundancy for each system. For example, it could be examined whether it is possible to use common standby components on which applications can get activated once they have been identified as faulty on their active component.1 Naturally, such changes may incorporate new requirements for the components (e.g., on-the-fly (re-)configuration according to the needs of the newly hosted application). Moreover, they are expected to have a significant effect on the processes and methods for integration testing and certification because the new processes and means would probably have to deal with an increased complexity of the system under test.

1 An approach like this is based on the assumption that only a limited number of components or applications are failing at the same time.


Part V

Appendices


Appendix A

CSP

This chapter provides informal definitions of special processes, channels and events, CSP operators, and data types. The expressions use the (machine-readable) ASCII syntax of CSPM as used for the model-checker tool FDR and the test system RT-Tester (see [FDR] for the complete syntax of CSPM). For testing with the test system RT-Tester, decomposed Timed CSP specifications are used, which require specific timer channels (see [RT-Tester 5] for more details and [Mey01] for the theoretical background).

CSP also allows the use of sequences and sets and provides respective functions to operate on them (e.g., concatenation of sequences), which are extensively used for the declaration of data types and channels in Sect. 6.4.

A.1 Processes

STOP: Special process that never engages in any of the events (i.e., represents a deadlock).

SKIP: Special process denoting successful termination. It enables, in particular in combination with the sequence operator, a modular approach for writing specifications (e.g., by defining macro processes which can be repeatedly used for recurring behavior).
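
To illustrate how SKIP and the sequence operator enable such macro processes, consider the following small sketch (a constructed example, not taken from the test suite; the channel and process names are chosen freely):

-- a macro process for one request/response exchange;
-- it terminates successfully and can therefore be sequenced
channel req, resp
EXCHANGE = req -> resp -> SKIP

-- the macro is reused twice; afterwards the process deadlocks deliberately
TEST_STEP = EXCHANGE ; EXCHANGE ; STOP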

A.2 Channels and Events

Atomic channel: An unstructured channel can transmit only one event or signal. Example: channel e

Structured channel: A structured channel can transmit different events which all use the same prefix (the channel name) and a structure of data components. Each data component is of a previously defined data type (see Appendix A.4). Example: channel c : { 1..3 }. The resulting set of events to be transmitted via the channel c is denoted by {|c|} and contains c.1, c.2, c.3.

Output event: Event produced by the current process Q. Example: Q = c!2 -> SKIP. The notation is only used to improve the readability of the specification.


Input event (constrained): Event produced by another process and accepted by the current process P. Used to accept only a specific event of the channel. Example: P = c?1 -> SKIP. The process only accepts c.1. The notation is only used to improve the readability of the specification.

Input event (unconstrained): Event produced by another process and accepted by the current process P. The process accepts all events of the channel and saves the respective data item in a local variable of the process (the variable name must be different from the respective data type elements). It is a shortcut for prefixed processes composed by external choice. Example: P = c?x -> Q(x) accepts all events in {|c|}. The notation is only used to improve the readability of the specification.

Set timer channel: A specific structured channel whose events are used in RT-Tester test specifications to start a specific timer. The possible timer identifiers are explicitly defined by the data type, and the length of each timer (either fixed or random within a given range) is specified in a separate configuration file. For identification in an RT-Tester test specification, the respective channel declaration is preceded by the label pragma AM_SET_TIMER. Example: channel setTimer : TIMER

Elapsed timer channel: A specific structured channel whose events are generated by the test system RT-Tester to denote that the specified timer has expired. The possible timer identifiers are explicitly defined by the data type and should at least comprise those which can be used for timer starting. For identification in an RT-Tester test specification, the respective channel declaration is preceded by the label pragma AM_ELAPSED_TIMER. Example: channel elapsedTimer : TIMER

Reset timer channel: A specific structured channel whose events are used in RT-Tester test specifications to stop a timer before it expires. The possible timer identifiers are explicitly defined by the data type and should at least comprise those which can be used for timer starting. For identification in an RT-Tester test specification, the respective channel declaration is preceded by the label pragma AM_RESET_TIMER. Example: channel resetTimer : TIMER

Input, output, internal, error or warning channel: In an RT-Tester test specification, it is necessary to define for which purpose a channel is used within the abstract machine: Input and output channels are used for communication with the SUT or other abstract machines, internal channels are used only within an abstract machine, and error or warning channels are used to indicate failure situations. Therefore, the respective channels are grouped and preceded by the label pragma AM_INPUT, pragma AM_OUTPUT, pragma AM_INTERNAL, pragma AM_ERROR, or pragma AM_WARNING, respectively.
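
Taken together, the channel declaration section of an RT-Tester test specification might be sketched as follows. This is a hypothetical fragment: the channel names, the data type TIMER and the process CHECK are invented for illustration; the pragma labels are those described above, and the exact file layout follows [RT-Tester 5].

pragma AM_OUTPUT
channel toSUT : { 0..3 }        -- stimuli sent to the SUT

pragma AM_INPUT
channel fromSUT : { 0..3 }      -- observations received from the SUT

datatype TIMER = t_RESPONSE     -- timer identifiers

pragma AM_SET_TIMER
channel setTimer : TIMER

pragma AM_ELAPSED_TIMER
channel elapsedTimer : TIMER

pragma AM_ERROR
channel noResponse              -- raised if the SUT does not react in time

-- send a stimulus, start the response timer, and raise an error
-- if the timer expires before any reaction of the SUT is observed
CHECK = toSUT!1 -> setTimer.t_RESPONSE ->
        (fromSUT?x -> SKIP
         [] elapsedTimer.t_RESPONSE -> noResponse -> STOP)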

A.3 Operators

Operators are used to combine different CSP processes into a system or network of processes. The operators can be used for modeling

• sequences of events,


• branching,
• loops,
• concurrency,
• parallelism, and
• abstraction from certain events.

Further operators exist but are typically not used for writing test specifications. In particular, they are not needed in Sect. 6.4. A more complete list can be found, for example, in [FDR], [Mey01] and [DS04].

A.3.1 Modeling Sequences of Events

P = e -> Q: Event prefixing. Process P first engages in event e and then behaves exactly as described by Q (see also process reference).

P = Q ; R: Sequential composition. Process P first behaves like process Q and, after the successful termination of Q, like R.

A.3.2 Modeling Branching

P = Q [] R: External choice. Process P behaves like the process which can accept the initial event. If the initial event can be accepted by both, it behaves like process Q or R (chosen non-deterministically).

P = [] x:{ 1..3 } @ c.x -> Q(x): Replicated external choice. Process P behaves like one of the processes represented by the expression x:{ 1..3 } @ c.x -> Q(x), i.e., the replicated external choice is a shortcut for the unfolded external choice expression P = (c.1 -> Q(1)) [] (c.2 -> Q(2)) [] (c.3 -> Q(3)).

P = Q |~| R: Internal choice (non-deterministic choice). Process P selects non-deterministically to behave like process Q or like process R without giving the environment the possibility to influence the decision.

P = |~| x:{ 1..3 } @ c.x -> Q(x): Replicated internal choice. Process P selects non-deterministically one of the processes represented by the expression x:{ 1..3 } @ c.x -> Q(x), i.e., the replicated internal choice is a shortcut for the unfolded internal choice expression P = (c.1 -> Q(1)) |~| (c.2 -> Q(2)) |~| (c.3 -> Q(3)).

P(a,b) = if (a < b) then Q else R: if-then-else operator. Process P behaves like Q if the boolean condition evaluates to true and, otherwise, like R. The condition expression can use constants, process parameters and process local variables (i.e., communicated values). This operator is introduced by CSPM but is also commonly used for writing plain CSP specifications.


P = b & Q: Boolean guard. Process P behaves like Q if the boolean condition evaluates to true. It is a shortcut for the expression if (b) then Q else STOP. It is commonly used with the external choice operator. Example: P = in?b1.b2 -> (((b1 or b2) & Q) [] ((b1 and b2) & R)). This operator is introduced by CSPM but is also commonly used for writing plain CSP specifications.

A.3.3 Modeling Loops

P = a -> P: Recursion. Process P accepts or generates event a and then behaves like process P. It is not possible to unfold a recursive expression completely.

P = Q: Process reference. Process P behaves like the process named Q. Process references are used to structure a process, to reuse a particular process specification, and finally allow modeling loops by recursion. This means that, whenever Q is not specified recursively, the name of the process can be substituted by its defining process term. Note that each valid process term ends with a process reference – either one of the predefined processes (i.e., STOP and SKIP as described in Appendix A.1) or another defined process.

A.3.4 Modeling Concurrency

P = Q ||| R: Interleaving operator. Processes Q and R are executed interleaved (i.e., they do not have to synchronize on common events).

P = ||| x:{ 1..3 } @ Q(x): Replicated interleaving operator. Process P behaves like the interleaved execution of the processes defined by the expression x:{ 1..3 } @ Q(x), i.e., the replicated interleaving is a shortcut for the unfolded interleaving expression P = Q(1) ||| Q(2) ||| Q(3).

A.3.5 Modeling Parallelism

P = Q [| a |] R: Synchronized parallel operator (sharing). Process Q and process R are executed jointly, and shared events (i.e., the events in set a) are executed synchronously (hand-shaken communication).

A.3.6 Modeling Abstraction

P = Q \ s: Hiding operator. Process P behaves like process Q, but the events in set s have been internalized and are thus not visible.


A.3.7 Additional Operators

P(x) = c?x -> Q(x+1): Process parameters. Process P behaves dependent on the process local variable x, i.e., P(x) is a generic specification and a notation for describing families of processes. The process template can be instantiated with concrete values for the process parameter. In the given example, the possible parameter values depend on the definition of channel c. The following process instances are possible for channel c : { 1..3 } (note that the current value of x can be viewed as part of the process name): P1 = c?1 -> Q2, P2 = c?2 -> Q3, and P3 = c?3 -> Q4.
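
To summarize Sect. A.3, the following constructed example (illustrative only; all channel and process names are invented) combines event prefixing, input with boolean guards, external choice, the if-then-else operator, synchronized parallelism, and hiding:

channel in, out : { 0..2 }
channel mid : { 0..4 }

-- SENDER forwards a transformed value on mid, guarded by the input value
SENDER = in?x -> (((x < 2) & mid!(x + x) -> SENDER)
                  [] ((x >= 2) & mid!4 -> SENDER))

-- RECEIVER consumes the value from mid and reports a classification on out
RECEIVER = mid?y -> (if (y < 4) then out!0 -> RECEIVER
                     else out!2 -> RECEIVER)

-- both processes synchronize on all events of mid, which are then hidden
SYSTEM = (SENDER [| {| mid |} |] RECEIVER) \ {| mid |}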

A.4 Data Types

Pre-defined types: Only two data types are predefined: the type of boolean values Bool and the type of integer values Int. There are pre-defined functions for integer arithmetic and boolean expressions which are listed in [FDR] (Appendix A). Examples: P = in?x.y -> out.(x+y) -> SKIP, P = (b1 and b2) & Q

Named types: Used to associate a name with a type expression, e.g., Values = { 1..128 }. Named types can also be constructed from other named types, e.g., Range = Values.Values.

Constants: The simplest form of a named type, e.g., Max_Value = 128.

Data types: Define, in the simplest form, an enumeration type that is used to declare a number of atomic constants, e.g., datatype Colors = red | green | blue. A datatype can also define variants of types, e.g., datatype All_Colors = Colors.Intensity | black | white.
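
As a combined illustration of named types, data types and structured channels (again a constructed sketch; all names are invented), consider:

-- an enumeration type, a named type, and a channel structured over both
datatype Colors = red | green | blue
Intensity = { 0..3 }
channel paint : Colors.Intensity

-- a process that accepts any paint event and stops on a maximal red value
PAINTER = paint?c.i -> (if ((c == red) and (i == 3)) then STOP else PAINTER)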

A.5 Sequences and Sets

A.5.1 Sequences

The following list introduces the sequence-related functions used in Sect. 6.4. A complete list can be found in [FDR], p. 49.

<>              empty sequence
<1,2,3>         sequence with three elements
<m..n>          closed range sequence (from integer m to n inclusive)
s^t             sequence concatenation (i.e., sequence t appended to sequence s)
length(s)       length of sequence s
null(s)         test if sequence s is empty
head(s)         give the first element of the non-empty sequence s
tail(s)         give all but the first element of the non-empty sequence s
elem(x,s)       test if element x occurs in sequence s


A.5.2 Sets

The following list introduces the set-related functions used in Sect. 6.4. A complete list can be found in [FDR], p. 50.

{}              empty set
{1,5,10}        set with three elements
{m..n}          closed range set (from integer m to n)
union(s,t)      set union (i.e., all elements of set s and of set t)
inter(s,t)      set intersection (i.e., all elements which are contained in both set s and set t)
diff(s,t)       set difference (i.e., all elements of set s which are not contained in set t)
member(x,s)     test if element x is contained in set s
card(s)         cardinality of set s
empty(s)        test if set s is empty
set(seq)        convert sequence seq into a set
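
The following constructed fragment (not taken from Sect. 6.4; all names are invented) shows a typical combined use of these sequence and set functions in a specification:

channel c : { 0..9 }
channel dup : { 0..9 }

-- COLLECT logs every received value in the sequence log and keeps the
-- set seen of distinct values; duplicates are reported on channel dup
COLLECT(log, seen) =
  c?x -> (if member(x, seen)
          then dup!x -> COLLECT(log ^ <x>, seen)
          else COLLECT(log ^ <x>, union(seen, {x})))

-- initially, nothing has been received
MONITOR = COLLECT(<>, {})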


Appendix B

IMA Test Execution Environment – Examples

B.1 CSP Type Definitions

B.1.1 CSP Types for Commanding of API Calls

IMA_types1.csp:

--------------------------------------------------------------------------------

-- Constants for the communication protocol

--------------------------------------------------------------------------------

-- process index of first test application which provides router

-- functionality to all other test applications (in NORMAL MODE)

-- and may be used as standard test application also

FIRST_TA_IDX = 1

-- Definitions of buffer attributes for communication with a

-- standard TA

-- max message size for the communication buffer

TA_BUF_MSG_SIZE = 64

-- max number of messages queued in the communication buffer

-- constraint: TA_BUF_MSG_NUM <= max(buffer_msg_range_t)

TA_BUF_MSG_NUM = 10

--------------------------------------------------------------------------------

-- Constant values defined in the API

--------------------------------------------------------------------------------

SYSTEM_LIMIT_NUMBER_OF_PARTITIONS = 32 -- module scope

SYSTEM_LIMIT_NUMBER_OF_MESSAGES = 512 -- module scope

SYSTEM_LIMIT_MESSAGE_SIZE = 8192 -- module scope

SYSTEM_LIMIT_NUMBER_OF_PROCESSES = 128 -- partition scope

SYSTEM_LIMIT_NUMBER_OF_SAMPLING_PORTS = 512 -- partition scope

SYSTEM_LIMIT_NUMBER_OF_QUEUING_PORTS = 512 -- partition scope

SYSTEM_LIMIT_NUMBER_OF_BUFFERS = 256 -- partition scope

SYSTEM_LIMIT_NUMBER_OF_BLACKBOARDS = 256 -- partition scope

SYSTEM_LIMIT_NUMBER_OF_SEMAPHORES = 256 -- partition scope

SYSTEM_LIMIT_NUMBER_OF_EVENTS = 256 -- partition scope

INFINITE_SYSTEM_TIME_VALUE = -1

MIN_PRIORITY_VALUE = 1

MAX_PRIORITY_VALUE = 63

MAX_LOCK_LEVEL = 16

APERIODIC_PROCESS = INFINITE_SYSTEM_TIME_VALUE

MAX_SEMAPHORE_VALUE = 32767

MAX_ERROR_MESSAGE_SIZE = 64


--------------------------------------------------------------------------------

-- General datatype declarations

--------------------------------------------------------------------------------

datatype actpass_t = active | passive

--------------------------------------------------------------------------------

-- API datatype declarations

--------------------------------------------------------------------------------

------ datatypes for ENUM representation

-- return values of API calls

-- enums starting with ret_TA refer to errors which are detected by the test

-- application

datatype retcode_t = ret_NO_ERROR | ret_NO_ACTION | ret_NOT_AVAILABLE |

ret_INVALID_PARAM | ret_INVALID_CONFIG | ret_INVALID_MODE |

ret_TIMED_OUT |

ret_TA_ID_ERROR | ret_TA_STATUS_ERROR |

ret_TA_MESSAGE_ERROR | ret_TA_UNKNOWN_ERROR

-- possible operating modes

datatype operating_mode_t = op_IDLE | op_COLD_START | op_WARM_START | op_NORMAL

-- direction types for communication port creation

datatype port_direction_t = dir_SOURCE | dir_DESTINATION

-- queuing discipline for buffers

datatype queuing_discipline_t = qd_FIFO | qd_PRIORITY

datatype deadline_t = deadline_HARD | deadline_SOFT

datatype process_status_t = proc_DORMANT | proc_READY | proc_RUNNING | proc_WAITING

datatype start_condition_t = sc_NORMAL_START | sc_PARTITION_RESTART |

sc_HM_MODULE_RESTART | sc_HM_PARTITION_RESTART

datatype validity_type_t = vt_INVALID | vt_VALID

datatype empty_indicator_type_t = ei_EMPTY | ei_OCCUPIED

datatype event_state_type_t = es_DOWN | es_UP

datatype error_code_value_type_t = ec_DEADLINE_MISSED | ec_APPLICATION_ERROR |

ec_NUMERIC_ERROR | ec_ILLEGAL_REQUEST |

ec_STACK_OVERFLOW | ec_MEMORY_VIOLATION |

ec_HARDWARE_FAULT | ec_POWER_FAIL

--------------------------------------------------------------------------------

-- Sets and Sequences related for different purposes

--------------------------------------------------------------------------------

-- set of possible sequence identifiers for consecutive messages with the same

-- message identifier (i.e., the message payload is defined by the

-- message identifier, the message sequence identifier, and the message length)

-- (0) is used as an undefined sequence identifier for messages which are too

-- short to contain a sequence identifier

msg_seq_id_t = {0..6}

-- timeout selection for SEND_QUEUING_MESSAGE, RECEIVE_QUEUING_MESSAGE,

-- SEND_BUFFER, RECEIVE_BUFFER, READ_BLACKBOARD

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

msg_timeout_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

--------------------------------------------------------------------------------

-- Sets and Sequences related with Partition Management

--------------------------------------------------------------------------------

-- avionics partitions usable with this types declaration file

-- 0 is used as reference to external data (avionics_part_t is used in


-- receive channels for sender identification)

avionics_part_t = {0..2}

-- partition identifier (as used in the configuration table)

-- returned by GET_PARTITION_STATUS

-- (i.e., no other values should be used in the configuration table)

part_ident_t = {0..32}

-- partition period

part_period_t = {0,1,10,50,100,250,500,750,1000,1500,3000,5000}

-- partition duration

part_duration_t = {0,1,10,50,100,200,250,500,750,1000,5000}

--------------------------------------------------------------------------------

-- Sets and Sequences related with Process Management

--------------------------------------------------------------------------------

-- set of possible process indices (internally associated with process names)

process_number_idx_t = {1..128}

-- set of possible process indices for robustness testing

-- index 0 : mapped to process with empty name

-- index 1..128: mapped to respective process in mapping table

-- index 129 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_PROCESSES;

-- also used as index for the error handler process

-- index 130 : main process

process_number_idx_rob_t = {0..130}

-- set of possible stack sizes

-- (-1) is used for coding the maximum stack size (ULONG_MAX)

-- valid stack sizes are multiples of 4KB, so only a small number of

-- invalid sizes is represented here (0,1,2048,30000)

stack_size_t = { -1,0,1,2048,4096,8192,12288,30000,32768,65536}

-- legal and illegal process priorities

-- { MIN_PRIORITY_VALUE-1, MIN_PRIORITY_VALUE..MAX_PRIORITY_VALUE, MAX_PRIORITY_VALUE+1}

-- = { 0, 1..63, 64}

priority_t = {0..64}

-- set of process periods

-- -1 represents the aperiodic process (with period INFINITE_SYSTEM_TIME_VALUE)

period_t = { -1,0,1,10,50,100,250,500,750,1000,1500,3000,5000,10000,65535,100000}

-- set of process’ time capacities

-- (-1) represents INFINITE_SYSTEM_TIME_VALUE (i.e., no deadline)

time_capacity_t = { -1,0,1,10,50,100,250,500,750,1000,1500,3000,5000,10000,65535,100000}

-- lock level

-- this type is only used for parameters in IMA_output_channels and

-- should therefore not be changed in IMA_types.

-- {0..MAX_LOCK_LEVEL}

lock_level_t = {0..16}

-- timeout (e.g. for SUSPEND_SELF)

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

timeout_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

-- delay (e.g. for DELAYED_START, TIMED_WAIT)

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

delay_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

-- replenish time (e.g. for REPLENISH)

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

budget_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

--------------------------------------------------------------------------------

-- Sets and Sequences related with Inter-Partition Communication

--------------------------------------------------------------------------------

--


-- Sampling Ports

--

-- set of sampling port indices

-- index 0 : mapped to sampling port with empty name

-- index 1..512: mapped to respective sampling port in mapping table

-- index 513 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_SAMPLING_PORTS

sampling_port_idx_t = {0..513}

-- possible sampling port sizes for sampling port creation

-- - as set for channel definitions and

-- - as ordered sequence for determination of the largest possible size for a

-- defined sampling port

sampling_port_msg_size_t = {0,1,4,8,12,16,20,64,128,512,1024,1471,1472,2048,8192,8193}

sampling_port_msg_size_seq = <0,1,4,8,12,16,20,64,128,512,1024,1471,1472,2048,8192,8193>

-- possible refresh periods

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

refresh_period_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

refresh_period_seq = < -1, 0,10,50,100,500,1000,10000,100000,500000>

--

-- Queuing Ports

--

-- set of queuing port indices

-- index 0 : mapped to queuing port with empty name

-- index 1..512: mapped to respective queuing port in mapping table

-- index 513 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_QUEUING_PORTS

queuing_port_idx_t = {0..513}

-- possible queuing port sizes for queuing port creation

-- - as set for channel definitions and

-- - as ordered sequence for determination of the largest possible size for a

-- defined queuing port

queuing_port_msg_size_t = {0,1,4,8,12,16,20,64,128,512,1024,2048,4096,8192,8193}

queuing_port_msg_size_seq = <0,1,4,8,12,16,20,64,128,512,1024,2048,4096,8192,8193>

-- max number of messages for queuing port

-- { 0 .. SYSTEM_LIMIT_NUMBER_OF_MESSAGES}

queuing_port_msg_range_t = {0..512}

--------------------------------------------------------------------------------

-- Sets and Sequences related with Intra-Partition Communication

--------------------------------------------------------------------------------

--

-- Buffer

--

-- set of buffer indices to be used

-- index 0 : mapped to buffer with empty name

-- index 1..256: mapped to respective buffer in mapping table

-- index 257 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_BUFFERS

buffer_idx_t = {0..257}

-- possible buffer message sizes for buffer creation

-- - as set for channel definitions and

-- - as ordered sequence for determination of the largest possible size for a

-- defined buffer

buffer_msg_size_t = {0,1,4,64,128,512,1024,2048,4096,8192,8193}

buffer_msg_size_seq = <0,1,4,64,128,512,1024,2048,4096,8192,8193>

-- max number of messages for buffer

-- { 0 .. SYSTEM_LIMIT_NUMBER_OF_MESSAGES}

buffer_msg_range_t = {0..512}

--


-- Blackboards

--

-- set of blackboard indices to be used

-- index 0 : mapped to blackboard with empty name

-- index 1..256: mapped to respective blackboard in mapping table

-- index 257 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_BLACKBOARDS

blackboard_idx_t = {0..257}

-- possible blackboard message sizes for blackboard creation

-- - as set for channel definitions and

-- - as ordered sequence for determination of the largest possible size for a

-- defined blackboard

blackboard_msg_size_t = {0,1,4,64,128,512,1024,2048,4096,8192,8193}

blackboard_msg_size_seq = <0,1,4,64,128,512,1024,2048,4096,8192,8193>

--

-- Semaphores

--

-- set of semaphore indices to be used

-- index 0 : mapped to semaphore with empty name

-- index 1..256: mapped to respective semaphore in mapping table

-- index 257 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_SEMAPHORES

semaphore_idx_t = {0..257}

-- possible semaphore values, restricted to 0..256 here (original: 0-32767)

semaphore_value_t = {0..256}

-- possible semaphore timeout values

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

semaphore_timeout_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

--

-- Events

--

-- set of event indices to be used

-- index 0 : mapped to event with empty name

-- index 1..256: mapped to respective event in mapping table

-- index 257 : for robustness testing when creating more than

-- SYSTEM_LIMIT_NUMBER_OF_EVENTS

event_idx_t = {0..257}

-- possible event timeout values

-- (-1) is used for coding INFINITE_SYSTEM_TIME_VALUE

event_timeout_t = {0,10,50,100,500,1000,10000,100000,500000,-1}

--------------------------------------------------------------------------------

-- Sets and Sequences related with Health Monitoring

--------------------------------------------------------------------------------

-- error code

-- as a set for channel definitions using the elements defined in

-- datatype error_code_value_type_t

error_code_value_t = {ec_DEADLINE_MISSED, ec_APPLICATION_ERROR, ec_NUMERIC_ERROR,

ec_ILLEGAL_REQUEST, ec_STACK_OVERFLOW, ec_MEMORY_VIOLATION,

ec_HARDWARE_FAULT, ec_POWER_FAIL}

-- error messages to be used with RAISE_APPLICATION_ERROR and GET_ERROR_STATUS

-- - 0 : empty message

-- - 1..MAX_ERROR_MESSAGE_SIZE: allowed error message size (1..64)

-- - MAX_ERROR_MESSAGE_SIZE+1 : for robustness testing (65)

error_msg_size_t = {0, 1, 16, 32, 48, 52, 64, 65}

--------------------------------------------------------------------------------

-- Sets and Sequences for Scenarios


--------------------------------------------------------------------------------

-- set of scenario numbers

scen_number_t = {0..100}

-- maximum number of parameters per scenario

scen_parameter_num_t = {1..10}

-- possible values for the scenario parameters

scen_parameter_value_t = {0..1000}

-- possible parameter values for special scenarios

-- (values are set using a special channel)

scen_ext_parameter_value_t = {1023, 1024, 1028, 1029, 1471, 1472, 1500, 2048,

4096, 8192, 10000, 16384, 32767, 100000, 349525,

500000, 524287, -1}

-- possible values for the scenario return parameters

scen_ret_parameter_value_t = {0..1000}

-- possible return parameter values for special scenarios

scen_ext_ret_parameter_value_t = {1023, 1024, 1471, 1472, 1500, 2048, 4096, 8192,

10000, 16384, 32767, 100000, 349525, 500000,

524287, -1}

B.1.2 CSP Types for Communication Flow Scenario

IMA_com_flow_types.csp:

--------------------------------------------------------------------------------

-- maximum number of receivers

COM_FLOW_MAX_RECEIVER = 20

-- index of possible receivers (1..COM_FLOW_MAX_RECEIVER)

-- Note: In the communication flow message, receiver index i is stored at array element i-1.

com_flow_receiver_t = {1..20}

-- predefined values for communication flow message sizes.

-- (340 bytes is the smallest communication flow message

-- if COM_FLOW_MAX_RECEIVER=20)

com_flow_msg_size_t = { 340,512,1024,2048,4096,8192 }

-- set of sequence IDs for communication flow messages

com_flow_sequence_id_t = { 1..10 }

-- maximum number of different messages which each describe one communication flow

COM_FLOW_MAX_MESSAGE = 4

-- set of message IDs (1..COM_FLOW_MAX_MESSAGE)

com_flow_msg_number_t = {1..4}

-- possible ports to send messages to

datatype port_types_t = port_QUEUING_PORT | port_SAMPLING_PORT | port_BUFFER | port_BLACKBOARD

--------------------------------------------------------------------------------


B.2 CSP Channel Definitions

B.2.1 CSP Channels for Commanding of API Calls and Scenarios

IMA_input_channels.csp:

--------------------------------------------------------------------------------

--

-- Channels for commanding API calls

--

-- API calls can have parameters which are defined in ARINC 653.

-- There are two types of channels:

-- * special channels to set the parameters of the subsequent API call and

-- * channels to trigger the API call with the previously set parameters

--   (the API call will be performed by the denoted module.partition.process)

--

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Partition Management

--------------------------------------------------------------------------------

------ API call: GET_PARTITION_STATUS

-- perform API call

channel API_call_GET_PARTITION_STATUS : MOD.PART.PROC

------ API call: SET_PARTITION_MODE

-- set API call parameter

channel SET_PARTITION_MODE_set_operating_mode : MOD.PART.PROC.operating_mode_t

-- perform API call with previously given parameter

channel API_call_SET_PARTITION_MODE : MOD.PART.PROC
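-- Illustrative sketch of the set-then-trigger pattern for an API call with
-- one parameter; the process name CMD_SET_PARTITION_MODE is hypothetical
-- (B.3.1 defines the equivalent macro SET_PARTITION_MODE).
CMD_SET_PARTITION_MODE(tapid, mode) =
    SET_PARTITION_MODE_set_operating_mode.tapid.mode ->
    API_call_SET_PARTITION_MODE.tapid ->
    SKIP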

--------------------------------------------------------------------------------

-- Process Management

--------------------------------------------------------------------------------

------ API call: GET_PROCESS_ID

-- set API call parameter

channel GET_PROCESS_ID_set_process_name : MOD.PART.PROC.process_number_idx_rob_t

-- perform API call with previously given parameter

channel API_call_GET_PROCESS_ID : MOD.PART.PROC

------ API call: GET_PROCESS_STATUS

-- set API call parameter

channel GET_PROCESS_STATUS_set_process_name : MOD.PART.PROC.process_number_idx_rob_t

-- perform API call with previously given parameter

channel API_call_GET_PROCESS_STATUS : MOD.PART.PROC

------ API call: CREATE_PROCESS

-- set API call parameters

-- Note: entry point for the process is the test application

channel CREATE_PROCESS_attribute_set_process_name : MOD.PART.PROC.process_number_idx_rob_t

channel CREATE_PROCESS_attribute_set_stack_size : MOD.PART.PROC.stack_size_t

channel CREATE_PROCESS_attribute_set_base_priority : MOD.PART.PROC.priority_t

channel CREATE_PROCESS_attribute_set_period : MOD.PART.PROC.period_t

channel CREATE_PROCESS_attribute_set_time_capacity : MOD.PART.PROC.time_capacity_t

channel CREATE_PROCESS_attribute_set_deadline : MOD.PART.PROC.deadline_t

-- perform API call with previously given parameters

channel API_call_CREATE_PROCESS : MOD.PART.PROC

------ API call: SET_PRIORITY

-- set API call parameters

channel SET_PRIORITY_set_process_id : MOD.PART.PROC.process_number_idx_rob_t

channel SET_PRIORITY_set_priority : MOD.PART.PROC.priority_t

-- perform API call with previously given parameters

channel API_call_SET_PRIORITY : MOD.PART.PROC


------ API call: SUSPEND_SELF

-- set API call parameter

channel SUSPEND_SELF_set_time_out : MOD.PART.PROC.timeout_t

-- perform API call with previously given parameter

channel API_call_SUSPEND_SELF : MOD.PART.PROC

------ API call: SUSPEND

-- set API call parameter

channel SUSPEND_set_process_id : MOD.PART.PROC.process_number_idx_rob_t

-- perform API call with previously given parameter

channel API_call_SUSPEND : MOD.PART.PROC

------ API call: RESUME

-- set API call parameter

channel RESUME_set_process_id : MOD.PART.PROC.process_number_idx_rob_t

-- perform API call with previously given parameter

channel API_call_RESUME : MOD.PART.PROC

------ API call: STOP_SELF

-- perform API call

channel API_call_STOP_SELF : MOD.PART.PROC

------ API call: STOP

-- set API call parameter

channel STOP_set_process_id : MOD.PART.PROC.process_number_idx_rob_t

-- perform API call with previously given parameter

channel API_call_STOP : MOD.PART.PROC

------ API call: START

-- set API call parameter

channel START_set_process_id : MOD.PART.PROC.process_number_idx_rob_t

-- perform API call with previously given parameter

channel API_call_START : MOD.PART.PROC

------ API call: DELAYED_START

-- set API call parameters

channel DELAYED_START_set_process_id : MOD.PART.PROC.process_number_idx_rob_t

channel DELAYED_START_set_delay_time : MOD.PART.PROC.delay_t

-- perform API call with previously given parameters

channel API_call_DELAYED_START : MOD.PART.PROC

------ API call: LOCK_PREEMPTION

-- perform API call

channel API_call_LOCK_PREEMPTION : MOD.PART.PROC

------ API call: UNLOCK_PREEMPTION

-- perform API call

channel API_call_UNLOCK_PREEMPTION : MOD.PART.PROC

------ API call: GET_MY_ID

-- perform API call

channel API_call_GET_MY_ID : MOD.PART.PROC

--------------------------------------------------------------------------------

-- Time Management

--------------------------------------------------------------------------------

------ API call: TIMED_WAIT

-- set API call parameter

channel TIMED_WAIT_set_delay_time : MOD.PART.PROC.delay_t

-- perform API call with previously given parameter


channel API_call_TIMED_WAIT : MOD.PART.PROC

------ API call: PERIODIC_WAIT

-- perform API call

channel API_call_PERIODIC_WAIT : MOD.PART.PROC

------ API call: GET_TIME

-- perform API call

channel API_call_GET_TIME : MOD.PART.PROC

------ API call: REPLENISH

-- set API call parameter

channel REPLENISH_set_budget_time : MOD.PART.PROC.budget_t

-- perform API call with previously given parameter

channel API_call_REPLENISH : MOD.PART.PROC

--------------------------------------------------------------------------------

-- Inter-Partition Communication

--------------------------------------------------------------------------------

--

-- Sampling Ports

--

------ API call: CREATE_SAMPLING_PORT

-- set API call parameters

channel CREATE_SAMPLING_PORT_set_sampling_port_name : MOD.PART.PROC.sampling_port_idx_t

channel CREATE_SAMPLING_PORT_set_max_message_size : MOD.PART.PROC.sampling_port_msg_size_t

channel CREATE_SAMPLING_PORT_set_port_direction : MOD.PART.PROC.port_direction_t

channel CREATE_SAMPLING_PORT_set_refresh_period : MOD.PART.PROC.refresh_period_t

-- perform API call with previously given parameters

channel API_call_CREATE_SAMPLING_PORT : MOD.PART.PROC

------ API call: WRITE_SAMPLING_MESSAGE

-- set API call parameters

-- Note: the parameters define only the message size and the encoded sequence ID,

-- the message to be sent is generated by a helper function in the TA.

channel WRITE_SAMPLING_MESSAGE_set_sampling_port_id : MOD.PART.PROC.sampling_port_idx_t

channel WRITE_SAMPLING_MESSAGE_set_msg_size : MOD.PART.PROC.sampling_port_msg_size_t

channel WRITE_SAMPLING_MESSAGE_set_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

-- perform API call with previously given parameters

channel API_call_WRITE_SAMPLING_MESSAGE : MOD.PART.PROC

------ API call: READ_SAMPLING_MESSAGE

-- set API call parameter

channel READ_SAMPLING_MESSAGE_set_sampling_port_id : MOD.PART.PROC.sampling_port_idx_t

-- perform API call with previously given parameter

channel API_call_READ_SAMPLING_MESSAGE : MOD.PART.PROC

------ API call: GET_SAMPLING_PORT_ID

-- set API call parameter

channel GET_SAMPLING_PORT_ID_set_sampling_port_name : MOD.PART.PROC.sampling_port_idx_t

-- perform API call with previously given parameter

channel API_call_GET_SAMPLING_PORT_ID : MOD.PART.PROC

------ API call: GET_SAMPLING_PORT_STATUS

-- set API call parameter

channel GET_SAMPLING_PORT_STATUS_set_sampling_port_id : MOD.PART.PROC.sampling_port_idx_t

-- perform API call with previously given parameter

channel API_call_GET_SAMPLING_PORT_STATUS : MOD.PART.PROC

--

-- Queuing Ports


--

------ API call: CREATE_QUEUING_PORT

-- set API call parameters

channel CREATE_QUEUING_PORT_set_queuing_port_name : MOD.PART.PROC.queuing_port_idx_t

channel CREATE_QUEUING_PORT_set_max_message_size : MOD.PART.PROC.queuing_port_msg_size_t

channel CREATE_QUEUING_PORT_set_max_nb_message : MOD.PART.PROC.queuing_port_msg_range_t

channel CREATE_QUEUING_PORT_set_port_direction : MOD.PART.PROC.port_direction_t

channel CREATE_QUEUING_PORT_set_queuing_discipline : MOD.PART.PROC.queuing_discipline_t

-- perform API call with previously given parameters

channel API_call_CREATE_QUEUING_PORT : MOD.PART.PROC

------ API call: SEND_QUEUING_MESSAGE

-- set API call parameters

-- Note: the parameters define only the message size and the encoded sequence ID,

-- the message to be sent is generated by a helper function in the TA

channel SEND_QUEUING_MESSAGE_set_queuing_port_id : MOD.PART.PROC.queuing_port_idx_t

channel SEND_QUEUING_MESSAGE_set_msg_size : MOD.PART.PROC.queuing_port_msg_size_t

channel SEND_QUEUING_MESSAGE_set_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

channel SEND_QUEUING_MESSAGE_set_time_out : MOD.PART.PROC.msg_timeout_t

-- perform API call with previously given parameters

channel API_call_SEND_QUEUING_MESSAGE : MOD.PART.PROC

------ API call: RECEIVE_QUEUING_MESSAGE

-- set API call parameters

channel RECEIVE_QUEUING_MESSAGE_set_queuing_port_id : MOD.PART.PROC.queuing_port_idx_t

channel RECEIVE_QUEUING_MESSAGE_set_time_out : MOD.PART.PROC.msg_timeout_t

-- perform API call with previously given parameters

channel API_call_RECEIVE_QUEUING_MESSAGE : MOD.PART.PROC

------ API call: GET_QUEUING_PORT_ID

-- set API call parameter

channel GET_QUEUING_PORT_ID_set_queuing_port_name : MOD.PART.PROC.queuing_port_idx_t

-- perform API call with previously given parameter

channel API_call_GET_QUEUING_PORT_ID : MOD.PART.PROC

------ API call: GET_QUEUING_PORT_STATUS

-- set API call parameter

channel GET_QUEUING_PORT_STATUS_set_queuing_port_id : MOD.PART.PROC.queuing_port_idx_t

-- perform API call with previously given parameter

channel API_call_GET_QUEUING_PORT_STATUS : MOD.PART.PROC

--------------------------------------------------------------------------------

-- Intra-Partition Communication

--------------------------------------------------------------------------------

--

-- Buffer

--

------ API call: CREATE_BUFFER

-- set API call parameters

channel CREATE_BUFFER_set_buffer_name : MOD.PART.PROC.buffer_idx_t

channel CREATE_BUFFER_set_max_message_size : MOD.PART.PROC.buffer_msg_size_t

channel CREATE_BUFFER_set_max_nb_message : MOD.PART.PROC.buffer_msg_range_t

channel CREATE_BUFFER_set_queuing_discipline : MOD.PART.PROC.queuing_discipline_t

-- perform API call with previously given parameters

channel API_call_CREATE_BUFFER : MOD.PART.PROC

------ API call: SEND_BUFFER

-- set API call parameters

-- Note: the parameters define only the message size and the encoded sequence ID,

-- the message to be sent is generated by a helper function in the TA

channel SEND_BUFFER_set_buffer_id : MOD.PART.PROC.buffer_idx_t

channel SEND_BUFFER_set_msg_size : MOD.PART.PROC.buffer_msg_size_t

channel SEND_BUFFER_set_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

channel SEND_BUFFER_set_time_out : MOD.PART.PROC.msg_timeout_t


-- perform API call with previously given parameters

channel API_call_SEND_BUFFER : MOD.PART.PROC

------ API call: RECEIVE_BUFFER

-- set API call parameters

channel RECEIVE_BUFFER_set_buffer_id : MOD.PART.PROC.buffer_idx_t

channel RECEIVE_BUFFER_set_time_out : MOD.PART.PROC.msg_timeout_t

-- perform API call with previously given parameters

channel API_call_RECEIVE_BUFFER : MOD.PART.PROC

------ API call: GET_BUFFER_ID

-- set API call parameter

channel GET_BUFFER_ID_set_buffer_name : MOD.PART.PROC.buffer_idx_t

-- perform API call with previously given parameter

channel API_call_GET_BUFFER_ID : MOD.PART.PROC

------ API call: GET_BUFFER_STATUS

-- set API call parameter

channel GET_BUFFER_STATUS_set_buffer_id : MOD.PART.PROC.buffer_idx_t

-- perform API call with previously given parameter

channel API_call_GET_BUFFER_STATUS : MOD.PART.PROC

--

-- Blackboards

--

------ API call: CREATE_BLACKBOARD

-- set API call parameters

channel CREATE_BLACKBOARD_set_blackboard_name : MOD.PART.PROC.blackboard_idx_t

channel CREATE_BLACKBOARD_set_max_message_size : MOD.PART.PROC.blackboard_msg_size_t

-- perform API call with previously given parameters

channel API_call_CREATE_BLACKBOARD : MOD.PART.PROC

------ API call: DISPLAY_BLACKBOARD

-- set API call parameters

-- Note: the parameters define only the message size and the encoded sequence ID,

-- the message to be sent is generated by a helper function in the TA

channel DISPLAY_BLACKBOARD_set_blackboard_id : MOD.PART.PROC.blackboard_idx_t

channel DISPLAY_BLACKBOARD_set_msg_size : MOD.PART.PROC.blackboard_msg_size_t

channel DISPLAY_BLACKBOARD_set_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

-- perform API call with previously given parameters

channel API_call_DISPLAY_BLACKBOARD : MOD.PART.PROC

------ API call: READ_BLACKBOARD

-- set API call parameters

channel READ_BLACKBOARD_set_blackboard_id : MOD.PART.PROC.blackboard_idx_t

channel READ_BLACKBOARD_set_time_out : MOD.PART.PROC.msg_timeout_t

-- perform API call with previously given parameters

channel API_call_READ_BLACKBOARD : MOD.PART.PROC

------ API call: CLEAR_BLACKBOARD

-- set API call parameter

channel CLEAR_BLACKBOARD_set_blackboard_id : MOD.PART.PROC.blackboard_idx_t

-- perform API call with previously given parameter

channel API_call_CLEAR_BLACKBOARD : MOD.PART.PROC

------ API call: GET_BLACKBOARD_ID

-- set API call parameter

channel GET_BLACKBOARD_ID_set_blackboard_name : MOD.PART.PROC.blackboard_idx_t

-- perform API call with previously given parameter

channel API_call_GET_BLACKBOARD_ID : MOD.PART.PROC

------ API call: GET_BLACKBOARD_STATUS


-- set API call parameter

channel GET_BLACKBOARD_STATUS_set_blackboard_id : MOD.PART.PROC.blackboard_idx_t

-- perform API call with previously given parameter

channel API_call_GET_BLACKBOARD_STATUS : MOD.PART.PROC

--

-- Semaphores

--

------ API call: CREATE_SEMAPHORE

-- set API call parameters

channel CREATE_SEMAPHORE_set_semaphore_name : MOD.PART.PROC.semaphore_idx_t

channel CREATE_SEMAPHORE_set_current_value : MOD.PART.PROC.semaphore_value_t

channel CREATE_SEMAPHORE_set_maximum_value : MOD.PART.PROC.semaphore_value_t

channel CREATE_SEMAPHORE_set_queuing_discipline : MOD.PART.PROC.queuing_discipline_t

-- perform API call with previously given parameters

channel API_call_CREATE_SEMAPHORE : MOD.PART.PROC

------ API call: WAIT_SEMAPHORE

-- set API call parameters

channel WAIT_SEMAPHORE_set_semaphore_id : MOD.PART.PROC.semaphore_idx_t

channel WAIT_SEMAPHORE_set_time_out : MOD.PART.PROC.semaphore_timeout_t

-- perform API call with previously given parameters

channel API_call_WAIT_SEMAPHORE : MOD.PART.PROC

------ API call: SIGNAL_SEMAPHORE

-- set API call parameter

channel SIGNAL_SEMAPHORE_set_semaphore_id : MOD.PART.PROC.semaphore_idx_t

-- perform API call with previously given parameter

channel API_call_SIGNAL_SEMAPHORE : MOD.PART.PROC

------ API call: GET_SEMAPHORE_ID

-- set API call parameter

channel GET_SEMAPHORE_ID_set_semaphore_name : MOD.PART.PROC.semaphore_idx_t

-- perform API call with previously given parameter

channel API_call_GET_SEMAPHORE_ID : MOD.PART.PROC

------ API call: GET_SEMAPHORE_STATUS

-- set API call parameter

channel GET_SEMAPHORE_STATUS_set_semaphore_id : MOD.PART.PROC.semaphore_idx_t

-- perform API call with previously given parameter

channel API_call_GET_SEMAPHORE_STATUS : MOD.PART.PROC

--

-- Events

--

------ API call: CREATE_EVENT

-- set API call parameter

channel CREATE_EVENT_set_event_name : MOD.PART.PROC.event_idx_t

-- perform API call with previously given parameter

channel API_call_CREATE_EVENT : MOD.PART.PROC

------ API call: SET_EVENT

-- set API call parameter

channel SET_EVENT_set_event_id : MOD.PART.PROC.event_idx_t

-- perform API call with previously given parameter

channel API_call_SET_EVENT : MOD.PART.PROC

------ API call: RESET_EVENT

-- set API call parameter

channel RESET_EVENT_set_event_id : MOD.PART.PROC.event_idx_t

-- perform API call with previously given parameter


channel API_call_RESET_EVENT : MOD.PART.PROC

------ API call: WAIT_EVENT

-- set API call parameters

channel WAIT_EVENT_set_event_id : MOD.PART.PROC.event_idx_t

channel WAIT_EVENT_set_time_out : MOD.PART.PROC.event_timeout_t

-- perform API call with previously given parameters

channel API_call_WAIT_EVENT : MOD.PART.PROC

------ API call: GET_EVENT_ID

-- set API call parameter

channel GET_EVENT_ID_set_event_name : MOD.PART.PROC.event_idx_t

-- perform API call with previously given parameter

channel API_call_GET_EVENT_ID : MOD.PART.PROC

------ API call: GET_EVENT_STATUS

-- set API call parameter

channel GET_EVENT_STATUS_set_event_id : MOD.PART.PROC.event_idx_t

-- perform API call with previously given parameter

channel API_call_GET_EVENT_STATUS : MOD.PART.PROC

--------------------------------------------------------------------------------

-- Health Monitoring

--------------------------------------------------------------------------------

------ API call: REPORT_APPLICATION_MESSAGE

-- set API call parameters

channel REPORT_APPLICATION_MESSAGE_set_msg_size : MOD.PART.PROC.app_msg_size_t

channel REPORT_APPLICATION_MESSAGE_set_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

-- perform API call with previously given parameters

channel API_call_REPORT_APPLICATION_MESSAGE : MOD.PART.PROC

------ API call: CREATE_ERROR_HANDLER

-- set API call parameter

channel CREATE_ERROR_HANDLER_set_stack_size : MOD.PART.PROC.stack_size_t

-- perform API call with previously given parameter

channel API_call_CREATE_ERROR_HANDLER : MOD.PART.PROC

------ API call: GET_ERROR_STATUS

-- perform API call

channel API_call_GET_ERROR_STATUS : MOD.PART.PROC

------ API call: RAISE_APPLICATION_ERROR

-- set API call parameters

channel RAISE_APPLICATION_ERROR_set_error_code : MOD.PART.PROC.error_code_value_type_t

channel RAISE_APPLICATION_ERROR_set_error_msg_size : MOD.PART.PROC.error_msg_size_t

-- perform API call with previously given parameters

channel API_call_RAISE_APPLICATION_ERROR : MOD.PART.PROC
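-- Illustrative sketch: raising an application error with a 16-byte error
-- message. The process name is hypothetical; ec_APPLICATION_ERROR is the
-- corresponding element of error_code_value_type_t.
CMD_RAISE_APP_ERROR(tapid) =
    RAISE_APPLICATION_ERROR_set_error_code.tapid.ec_APPLICATION_ERROR ->
    RAISE_APPLICATION_ERROR_set_error_msg_size.tapid.16 ->
    API_call_RAISE_APPLICATION_ERROR.tapid ->
    SKIP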

--==============================================================================

--------------------------------------------------------------------------------

--

-- Channels for commanding scenarios

--

-- Scenarios can have up to 10 parameters; the commanding channels carry

-- * the scenario number

-- * the scenario parameters

--

--------------------------------------------------------------------------------

------ Scenario: Set scenario parameter

-- set scenario parameters (values in the range 0..1000)

channel SCENARIO_set_parameter : MOD.PART.PROC.scen_parameter_num_t.scen_parameter_value_t


-- set scenario parameters (values defined in a specific set of allowed values)

channel SCENARIO_set_ext_parameter : MOD.PART.PROC.scen_parameter_num_t.scen_ext_parameter_value_t

-- perform scenario call

channel SCENARIO_activate : MOD.PART.PROC.scen_number_t
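-- Illustrative sketch: commanding scenario number 42 after setting its
-- first parameter to 500; the process name and the chosen values are
-- placeholders within the declared type ranges.
CMD_EXAMPLE_SCENARIO(tapid) =
    SCENARIO_set_parameter.tapid.1.500 ->
    SCENARIO_activate.tapid.42 ->
    SKIP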

--------------------------------------------------------------------------------

IMA_output_channels.csp:

--------------------------------------------------------------------------------

--

-- Channels for receiving the output parameter values of triggered API calls

--

-- API calls can have two types of output parameters, which is reflected by two

-- types of channels:

-- * the channel for the return code of the API function

-- * channels for the output parameters of the API function (only generated if

-- the return code is NO_ERROR)

--

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- General Monitoring Channels

--------------------------------------------------------------------------------

-- Test application is up and running in the returned operating mode

channel TAP_proc_mode : MOD.PART.PROC.operating_mode_t

--------------------------------------------------------------------------------

-- Partition Management

--------------------------------------------------------------------------------

------ GET_PARTITION_STATUS

-- get return code

channel API_out_GET_PARTITION_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_PARTITION_STATUS_identifier : MOD.PART.PROC.part_ident_t

channel API_out_GET_PARTITION_STATUS_period : MOD.PART.PROC.part_period_t

channel API_out_GET_PARTITION_STATUS_duration : MOD.PART.PROC.part_duration_t

channel API_out_GET_PARTITION_STATUS_lock_level : MOD.PART.PROC.lock_level_t

channel API_out_GET_PARTITION_STATUS_operating_mode : MOD.PART.PROC.operating_mode_t
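-- Illustrative sketch: observing the outputs of a triggered
-- GET_PARTITION_STATUS call; the output parameter channels only occur if
-- the return code is NO_ERROR. The process name and the retcode_t element
-- rc_NO_ERROR are assumptions.
OBS_GET_PARTITION_STATUS(tapid) =
    API_out_GET_PARTITION_STATUS_ret_code.tapid?rc ->
    if rc == rc_NO_ERROR then
        API_out_GET_PARTITION_STATUS_identifier.tapid?id ->
        API_out_GET_PARTITION_STATUS_period.tapid?p ->
        API_out_GET_PARTITION_STATUS_duration.tapid?d ->
        API_out_GET_PARTITION_STATUS_lock_level.tapid?l ->
        API_out_GET_PARTITION_STATUS_operating_mode.tapid?m -> SKIP
    else SKIP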

------ SET_PARTITION_MODE

-- get return code

-- Note: Usually, the return code is NOT received, since a successful partition

-- mode change always leads to deactivation of the current process.

-- Expected return values are only NO_ACTION and INVALID_PARAM in case of

-- error situations.

channel API_out_SET_PARTITION_MODE_ret_code : MOD.PART.PROC.retcode_t

--------------------------------------------------------------------------------

-- Process Management

--------------------------------------------------------------------------------

------ GET_PROCESS_ID

-- get return code

channel API_out_GET_PROCESS_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_PROCESS_ID_process_id : MOD.PART.PROC.process_number_idx_rob_t

------ GET_PROCESS_STATUS

-- get return code

channel API_out_GET_PROCESS_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

-- Note: Data of PROCESS_ATTRIBUTE_TYPE is not returned but checked within the

-- TA against the definition of the addressed process. Returning the

-- correct process index means that the entries are correct.


channel API_out_GET_PROCESS_STATUS_process_attributes : MOD.PART.PROC.process_number_idx_rob_t

channel API_out_GET_PROCESS_STATUS_current_priority : MOD.PART.PROC.priority_t

-- Note: Output parameter deadline_time not relevant outside the TA.

channel API_out_GET_PROCESS_STATUS_process_status : MOD.PART.PROC.process_status_t

------ CREATE_PROCESS

-- get return code

channel API_out_CREATE_PROCESS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

-- Note: Output parameter process_id is not relevant outside the TA and therefore

-- stored in the TA’s internal mapping table.

------ SET_PRIORITY

-- get return code

channel API_out_SET_PRIORITY_ret_code : MOD.PART.PROC.retcode_t

------ SUSPEND_SELF

-- get return code

-- Note: The return code is usually not received immediately, but when the process

-- is resumed by another process. In case of error situations, the return codes

-- are returned immediately.

channel API_out_SUSPEND_SELF_ret_code : MOD.PART.PROC.retcode_t

------ SUSPEND

-- get return code

channel API_out_SUSPEND_ret_code : MOD.PART.PROC.retcode_t

------ RESUME

-- get return code

channel API_out_RESUME_ret_code : MOD.PART.PROC.retcode_t

------ API call: STOP_SELF

-- no return code

------ STOP

-- get return code

channel API_out_STOP_ret_code : MOD.PART.PROC.retcode_t

------ START

-- get return code

channel API_out_START_ret_code : MOD.PART.PROC.retcode_t

------ DELAYED_START

-- get return code

channel API_out_DELAYED_START_ret_code : MOD.PART.PROC.retcode_t

------ LOCK_PREEMPTION

-- get return code

channel API_out_LOCK_PREEMPTION_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_LOCK_PREEMPTION_lock_level : MOD.PART.PROC.lock_level_t

------ UNLOCK_PREEMPTION

-- get return code

channel API_out_UNLOCK_PREEMPTION_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_UNLOCK_PREEMPTION_lock_level : MOD.PART.PROC.lock_level_t

------ GET_MY_ID

-- get return code

channel API_out_GET_MY_ID_ret_code : MOD.PART.PROC.retcode_t


-- get output parameter

channel API_out_GET_MY_ID_process_id : MOD.PART.PROC.process_number_idx_rob_t

--------------------------------------------------------------------------------

-- Time Management

--------------------------------------------------------------------------------

------ TIMED_WAIT

-- get return code

channel API_out_TIMED_WAIT_ret_code : MOD.PART.PROC.retcode_t

------ PERIODIC_WAIT

-- get return code

channel API_out_PERIODIC_WAIT_ret_code : MOD.PART.PROC.retcode_t

------ GET_TIME

-- get return code

channel API_out_GET_TIME_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

-- Note: The output parameter SYSTEM_TIME cannot be represented as a CSP channel.

------ REPLENISH

-- get return code

channel API_out_REPLENISH_ret_code : MOD.PART.PROC.retcode_t

--------------------------------------------------------------------------------

-- Inter-Partition Communication

--------------------------------------------------------------------------------

--

-- Sampling Ports

--

------ CREATE_SAMPLING_PORT

-- get return code

channel API_out_CREATE_SAMPLING_PORT_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

-- Note: Output parameter sampling_port_id is not relevant outside the TA

-- and therefore stored in the TA’s internal mapping table.

------ WRITE_SAMPLING_MESSAGE

-- get return code

channel API_out_WRITE_SAMPLING_MESSAGE_ret_code : MOD.PART.PROC.retcode_t

------ READ_SAMPLING_MESSAGE

-- get return code

channel API_out_READ_SAMPLING_MESSAGE_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_READ_SAMPLING_MESSAGE_msg_size : MOD.PART.PROC.sampling_port_msg_size_t

channel API_out_READ_SAMPLING_MESSAGE_validity : MOD.PART.PROC.validity_type_t

-- Note: The output message is not returned here. Instead information provided

-- when sending the message is extracted and provided here.

channel API_out_READ_SAMPLING_MESSAGE_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

channel API_out_READ_SAMPLING_MESSAGE_src_mod : MOD.PART.PROC.MOD

channel API_out_READ_SAMPLING_MESSAGE_src_part : MOD.PART.PROC.avionics_part_t

channel API_out_READ_SAMPLING_MESSAGE_src_proc : MOD.PART.PROC.process_number_idx_rob_t

------ GET_SAMPLING_PORT_ID

-- get return code

channel API_out_GET_SAMPLING_PORT_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_SAMPLING_PORT_ID_sampling_port_id : MOD.PART.PROC.sampling_port_idx_t

------ GET_SAMPLING_PORT_STATUS

-- get return code

channel API_out_GET_SAMPLING_PORT_STATUS_ret_code : MOD.PART.PROC.retcode_t


-- get output parameter

-- Note: The status data max_message_size, port_direction, refresh_period are

-- checked within TA against the information stored in the mapping tables.

channel API_out_GET_SAMPLING_PORT_STATUS_last_message_validity : MOD.PART.PROC.validity_type_t

--

-- Queuing Ports

--

------ CREATE_QUEUING_PORT

-- get return code

channel API_out_CREATE_QUEUING_PORT_ret_code : MOD.PART.PROC.retcode_t

-- Note: Output parameter queuing_port_id is not relevant outside the TA

-- and therefore stored in the TA’s internal mapping table.

------ SEND_QUEUING_MESSAGE

-- get return code

channel API_out_SEND_QUEUING_MESSAGE_ret_code : MOD.PART.PROC.retcode_t

------ RECEIVE_QUEUING_MESSAGE

-- get return code

channel API_out_RECEIVE_QUEUING_MESSAGE_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_RECEIVE_QUEUING_MESSAGE_msg_size : MOD.PART.PROC.queuing_port_msg_size_t

-- Note: The output message is not returned here. Instead information provided

-- when sending the message is extracted and provided here.

channel API_out_RECEIVE_QUEUING_MESSAGE_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

channel API_out_RECEIVE_QUEUING_MESSAGE_src_mod : MOD.PART.PROC.MOD

channel API_out_RECEIVE_QUEUING_MESSAGE_src_part : MOD.PART.PROC.avionics_part_t

channel API_out_RECEIVE_QUEUING_MESSAGE_src_proc : MOD.PART.PROC.process_number_idx_rob_t

------ GET_QUEUING_PORT_ID

-- get return code

channel API_out_GET_QUEUING_PORT_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_QUEUING_PORT_ID_queuing_port_id : MOD.PART.PROC.queuing_port_idx_t

------ GET_QUEUING_PORT_STATUS

-- get return code

channel API_out_GET_QUEUING_PORT_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

-- Note: The status data max_nb_msg, max_msg_size, port_direction are

-- checked within TA against the information stored in the mapping tables.

channel API_out_GET_QUEUING_PORT_STATUS_nb_message : MOD.PART.PROC.queuing_port_msg_range_t

channel API_out_GET_QUEUING_PORT_STATUS_waiting_processes : MOD.PART.PROC.process_number_idx_rob_t

--------------------------------------------------------------------------------

-- Intra-Partition Communication

--------------------------------------------------------------------------------

--

-- Buffer

--

------ CREATE_BUFFER

-- get return code

channel API_out_CREATE_BUFFER_ret_code : MOD.PART.PROC.retcode_t

-- Note: Output parameter buffer_id is not relevant outside the TA

-- and therefore stored in the TA’s internal mapping table.

------ SEND_BUFFER

-- get return code

channel API_out_SEND_BUFFER_ret_code : MOD.PART.PROC.retcode_t

------ RECEIVE_BUFFER


-- get return code

channel API_out_RECEIVE_BUFFER_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_RECEIVE_BUFFER_msg_size : MOD.PART.PROC.buffer_msg_size_t

-- Note: The output message is not returned here. Instead information provided

-- when sending the message is extracted and provided here.

channel API_out_RECEIVE_BUFFER_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

channel API_out_RECEIVE_BUFFER_src_mod : MOD.PART.PROC.MOD

channel API_out_RECEIVE_BUFFER_src_part : MOD.PART.PROC.PART

channel API_out_RECEIVE_BUFFER_src_proc : MOD.PART.PROC.process_number_idx_rob_t

------ GET_BUFFER_ID

-- get return code

channel API_out_GET_BUFFER_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_BUFFER_ID_buffer_id : MOD.PART.PROC.buffer_idx_t

------ GET_BUFFER_STATUS

-- get return code

channel API_out_GET_BUFFER_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_BUFFER_STATUS_nb_message : MOD.PART.PROC.buffer_msg_range_t

-- Note: The status data max_nb_message, max_message_size are

-- checked within TA against the information stored in the mapping tables.

channel API_out_GET_BUFFER_STATUS_waiting_processes : MOD.PART.PROC.process_number_idx_rob_t

--

-- Blackboards

--

------ CREATE_BLACKBOARD

-- get return code

channel API_out_CREATE_BLACKBOARD_ret_code : MOD.PART.PROC.retcode_t

-- Note: Output parameter blackboard_id is not relevant outside the TA

-- and therefore stored in the TA’s internal mapping table.

------ DISPLAY_BLACKBOARD

-- get return code

channel API_out_DISPLAY_BLACKBOARD_ret_code : MOD.PART.PROC.retcode_t

------ READ_BLACKBOARD

-- get return code

channel API_out_READ_BLACKBOARD_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_READ_BLACKBOARD_msg_size : MOD.PART.PROC.blackboard_msg_size_t

-- Note: The output message is not returned here. Instead information provided

-- when sending the message is extracted and provided here.

channel API_out_READ_BLACKBOARD_msg_seq_id : MOD.PART.PROC.msg_seq_id_t

channel API_out_READ_BLACKBOARD_src_mod : MOD.PART.PROC.MOD

channel API_out_READ_BLACKBOARD_src_part : MOD.PART.PROC.PART

channel API_out_READ_BLACKBOARD_src_proc : MOD.PART.PROC.process_number_idx_rob_t

------ CLEAR_BLACKBOARD

-- get return code

channel API_out_CLEAR_BLACKBOARD_ret_code : MOD.PART.PROC.retcode_t

------ GET_BLACKBOARD_ID

-- get return code

channel API_out_GET_BLACKBOARD_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_BLACKBOARD_ID_blackboard_id : MOD.PART.PROC.blackboard_idx_t

------ GET_BLACKBOARD_STATUS

-- get return code

channel API_out_GET_BLACKBOARD_STATUS_ret_code : MOD.PART.PROC.retcode_t


-- get output parameter

channel API_out_GET_BLACKBOARD_STATUS_empty_indicator : MOD.PART.PROC.empty_indicator_type_t

-- Note: The status data max_message_size is checked within TA against the

-- information stored in the mapping tables.

channel API_out_GET_BLACKBOARD_STATUS_waiting_processes : MOD.PART.PROC.process_number_idx_rob_t

--

-- Semaphores

--

------ CREATE_SEMAPHORE

-- get return code

channel API_out_CREATE_SEMAPHORE_ret_code : MOD.PART.PROC.retcode_t

-- Note: Output parameter semaphore_id is not relevant outside the TA

-- and therefore stored in the TA’s internal mapping table.

------ WAIT_SEMAPHORE

-- get return code

channel API_out_WAIT_SEMAPHORE_ret_code : MOD.PART.PROC.retcode_t

------ SIGNAL_SEMAPHORE

-- get return code

channel API_out_SIGNAL_SEMAPHORE_ret_code : MOD.PART.PROC.retcode_t

------ GET_SEMAPHORE_ID

-- get return code

channel API_out_GET_SEMAPHORE_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_SEMAPHORE_ID_semaphore_id : MOD.PART.PROC.semaphore_idx_t

------ GET_SEMAPHORE_STATUS

-- get return code

channel API_out_GET_SEMAPHORE_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_SEMAPHORE_STATUS_current_value : MOD.PART.PROC.semaphore_value_t

-- Note: The status data max_value is checked within TA against the information

-- stored in the mapping tables.

channel API_out_GET_SEMAPHORE_STATUS_waiting_processes : MOD.PART.PROC.process_number_idx_rob_t

--

-- Events

--

------ CREATE_EVENT

-- get return code

channel API_out_CREATE_EVENT_ret_code : MOD.PART.PROC.retcode_t

-- Note: Output parameter event_id is not relevant outside the TA

-- and therefore stored in the TA’s internal mapping table.

------ SET_EVENT

-- get return code

channel API_out_SET_EVENT_ret_code : MOD.PART.PROC.retcode_t

------ RESET_EVENT

-- get return code

channel API_out_RESET_EVENT_ret_code : MOD.PART.PROC.retcode_t

------ WAIT_EVENT

-- get return code

channel API_out_WAIT_EVENT_ret_code : MOD.PART.PROC.retcode_t

------ GET_EVENT_ID

-- get return code

channel API_out_GET_EVENT_ID_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter


channel API_out_GET_EVENT_ID_event_id : MOD.PART.PROC.event_idx_t

------ GET_EVENT_STATUS

-- get return code

channel API_out_GET_EVENT_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_EVENT_STATUS_event_state : MOD.PART.PROC.event_state_type_t

channel API_out_GET_EVENT_STATUS_waiting_processes : MOD.PART.PROC.process_number_idx_rob_t

--------------------------------------------------------------------------------

-- Health Monitoring

--------------------------------------------------------------------------------

------ REPORT_APPLICATION_MESSAGE

-- get return code

channel API_out_REPORT_APPLICATION_MESSAGE_ret_code : MOD.PART.PROC.retcode_t

------ CREATE_ERROR_HANDLER

-- get return code

channel API_out_CREATE_ERROR_HANDLER_ret_code : MOD.PART.PROC.retcode_t

------ GET_ERROR_STATUS

-- get return code

channel API_out_GET_ERROR_STATUS_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

channel API_out_GET_ERROR_STATUS_error_code : MOD.PART.PROC.error_code_value_type_t

-- Note: The output message is not returned here. Instead the size of the error

-- message is provided here.

channel API_out_GET_ERROR_STATUS_error_msg_size : MOD.PART.PROC.error_msg_size_t

channel API_out_GET_ERROR_STATUS_failed_process : MOD.PART.PROC.process_number_idx_rob_t

------ RAISE_APPLICATION_ERROR

-- get return code

channel API_out_RAISE_APPLICATION_ERROR_ret_code : MOD.PART.PROC.retcode_t

--==============================================================================

--------------------------------------------------------------------------------

--

-- Channels for receiving the scenario output values

--

--------------------------------------------------------------------------------

------ Scenario

-- get return code

-- Note: The return code is issued by the scenario and depends on internal

-- checking results (e.g., when all triggered API calls have failed

-- as expected, the return code can still be NO_ERROR).

channel SCENARIO_out_ret_code : MOD.PART.PROC.retcode_t

-- get output parameter

-- Note: The scenario output parameters can either be in the range of type

-- scen_ret_parameter_value_t or scen_ext_ret_parameter_value_t. The channel

-- used depends on that.

channel SCENARIO_out_ret_value : MOD.PART.PROC.scen_parameter_num_t.scen_ret_parameter_value_t

channel SCENARIO_out_ext_ret_value : MOD.PART.PROC.scen_parameter_num_t.scen_ext_ret_parameter_value_t
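-- Illustrative sketch: receiving a scenario result, here the return code
-- followed by return parameter 1; the process name is hypothetical.
OBS_SCENARIO_RESULT(tapid) =
    SCENARIO_out_ret_code.tapid?rc ->
    SCENARIO_out_ret_value.tapid.1?val ->
    SKIP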

--------------------------------------------------------------------------------


B.2.2 CSP Channels for Communication Flow Scenario

IMA_com_flow_in.csp:

--------------------------------------------------------------------------------

--

-- Channels for generating and sending communication flow messages

--

-- Communication flow message parameters:

-- * message size

-- * message sequence identifier

-- * receiver array defining one port index and port type for each receiver

--   (possible receivers are queuing ports, sampling ports, buffers, and blackboards)

--

-- Types of channels:

-- * for setting the message parameters

-- * for sending a previously generated message

-- * for deleting a previously generated message

--

--------------------------------------------------------------------------------

-- channel to set the message size

channel AFDX_com_flow_message_set_msg_size : com_flow_msg_number_t.com_flow_msg_size_t

-- channel to set the sequence identifier

channel AFDX_com_flow_set_seqID : com_flow_msg_number_t.com_flow_sequence_id_t

-- channels to add a new receiver to the receiver array (port_idx and

-- port_type are set by one channel because the port_type is coded in

-- the channel name)

channel AFDX_com_flow_message_add_queuing_port : com_flow_msg_number_t.queuing_port_idx_t

channel AFDX_com_flow_message_add_sampling_port : com_flow_msg_number_t.sampling_port_idx_t

channel AFDX_com_flow_message_add_buffer : com_flow_msg_number_t.buffer_idx_t

channel AFDX_com_flow_message_add_blackboard : com_flow_msg_number_t.blackboard_idx_t

-- channel to send the message via an AFDX connection

channel AFDX_com_flow_send_message : com_flow_msg_number_t.afdx_port_idx_t

-- channel to clear the previously defined message

channel AFDX_com_flow_clear_message : com_flow_msg_number_t
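-- Illustrative sketch: defining, sending, and clearing communication flow
-- message 1 with two receivers. The process name and the chosen indices are
-- placeholders within the declared type ranges; the AFDX port index 1 is
-- assumed to be an element of afdx_port_idx_t.
EXAMPLE_COM_FLOW_SEND =
    AFDX_com_flow_message_set_msg_size.1.512 ->
    AFDX_com_flow_set_seqID.1.3 ->
    AFDX_com_flow_message_add_queuing_port.1.7 ->
    AFDX_com_flow_message_add_blackboard.1.5 ->
    AFDX_com_flow_send_message.1.1 ->
    AFDX_com_flow_clear_message.1 ->
    SKIP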

--------------------------------------------------------------------------------

IMA_com_flow_out.csp:

--------------------------------------------------------------------------------

--

-- Channels for denoting the reception of communication flow messages

--

-- Types of channels:

-- * for denoting if the received message was correctly transmitted

-- * for denoting the message parameters

--

--------------------------------------------------------------------------------

-- channel which denotes the reception of a communication flow message

-- Note: The correctness of the message is checked using the CRC and denoted by

-- the boolean parameter.

channel AFDX_com_flow_receive_message : afdx_port_idx_t.Bool

-- channel which denotes the message size of the received message

-- Note: This channel is only generated if the communication flow message was

-- correctly received.

channel AFDX_com_flow_receive_message_msg_size : afdx_port_idx_t.com_flow_msg_size_t

-- channel which denotes the sequence id of the received message

-- Note: This channel is only generated if the communication flow message was

-- correctly received.

channel AFDX_com_flow_receive_message_seq_id : afdx_port_idx_t.com_flow_sequence_id_t
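-- Illustrative sketch: the expected observations after a correct
-- transmission of the message defined above on AFDX port 1 (assumed
-- element of afdx_port_idx_t); the process name is hypothetical.
EXAMPLE_COM_FLOW_RECEIVE =
    AFDX_com_flow_receive_message.1.true ->
    AFDX_com_flow_receive_message_msg_size.1.512 ->
    AFDX_com_flow_receive_message_seq_id.1.3 ->
    SKIP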

--------------------------------------------------------------------------------


B.3 CSP Macros

B.3.1 CSP Macros for Commanding of API Calls

IMA_API_handling.csp:

--------------------------------------------------------------------------------

-- Macro process for commanding an API call with given parameters

--

-- Process parameters:

-- "tapid": module.partition.process to perform the API call

-- others: input parameters of the respective API service

-- Note: TA maps index to ID or name (using the mapping tables)

--

-- sequence of API calls according to ARINC 653 specification

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Partition Management

--------------------------------------------------------------------------------

-- perform GET_PARTITION_STATUS call

GET_PARTITION_STATUS(tapid) =

API_call_GET_PARTITION_STATUS.tapid ->

SKIP

-- set parameter and perform SET_PARTITION_MODE call

SET_PARTITION_MODE(tapid,mode) =

SET_PARTITION_MODE_set_operating_mode.tapid.mode ->

API_call_SET_PARTITION_MODE.tapid ->

SKIP

--------------------------------------------------------------------------------

-- Process Management

--------------------------------------------------------------------------------

-- set parameter and perform GET_PROCESS_ID call

-- (TA mapping table: process index -> process name)

GET_PROCESS_ID (tapid,process_idx) =

GET_PROCESS_ID_set_process_name.tapid.process_idx ->

API_call_GET_PROCESS_ID.tapid ->

SKIP

-- set parameter and perform GET_PROCESS_STATUS

-- (TA mapping table: process index -> process ID)

GET_PROCESS_STATUS(tapid, process_idx) =

GET_PROCESS_STATUS_set_process_name.tapid.process_idx ->

API_call_GET_PROCESS_STATUS.tapid ->

SKIP

-- set parameters and perform CREATE_PROCESS call

-- (TA mapping table: process index -> process name)

CREATE_PROCESS (tapid, process_idx, stack_size, base_priority,

period, time_capacity, deadline) =

CREATE_PROCESS_attribute_set_process_name.tapid.process_idx ->

CREATE_PROCESS_attribute_set_stack_size.tapid.stack_size ->

CREATE_PROCESS_attribute_set_base_priority.tapid.base_priority ->

CREATE_PROCESS_attribute_set_period.tapid.period ->

CREATE_PROCESS_attribute_set_time_capacity.tapid.time_capacity ->

CREATE_PROCESS_attribute_set_deadline.tapid.deadline ->

API_call_CREATE_PROCESS.tapid ->

SKIP

-- set parameter and perform SET_PRIORITY call

-- (TA mapping table: process index -> process ID)

SET_PRIORITY(tapid, process_idx, priority) =

SET_PRIORITY_set_process_id.tapid.process_idx ->

SET_PRIORITY_set_priority.tapid.priority ->

API_call_SET_PRIORITY.tapid ->

SKIP


-- set parameter and perform SUSPEND_SELF call

SUSPEND_SELF(tapid, timeout) =

SUSPEND_SELF_set_time_out.tapid.timeout ->

API_call_SUSPEND_SELF.tapid ->

SKIP

-- set parameter and perform SUSPEND call

-- (TA mapping table: process index -> process ID)

SUSPEND(tapid, process_id) =

SUSPEND_set_process_id.tapid.process_id ->

API_call_SUSPEND.tapid ->

SKIP

-- set parameter and perform RESUME call

-- (TA mapping table: process index -> process ID)

RESUME(tapid, process_id) =

RESUME_set_process_id.tapid.process_id ->

API_call_RESUME.tapid ->

SKIP

-- perform STOP_SELF call

STOP_SELF(tapid) =

API_call_STOP_SELF.tapid ->

SKIP

-- set parameter and perform STOP call

-- (TA mapping table: process index -> process ID)

-- Note: the macro process name does not conform to the naming convention,

-- since STOP is reserved within CSP

STOP_PROCESS(tapid, process_idx) =

STOP_set_process_id.tapid.process_idx ->

API_call_STOP.tapid ->

SKIP

-- set parameter and perform START call

-- (TA mapping table: process index -> process ID)

START(tapid, process_idx) =

START_set_process_id.tapid.process_idx ->

API_call_START.tapid ->

SKIP

-- set parameter and perform DELAYED_START call

-- (TA mapping table: process index -> process ID)

DELAYED_START(tapid, process_idx, delay_time) =

DELAYED_START_set_process_id.tapid.process_idx ->

DELAYED_START_set_delay_time.tapid.delay_time ->

API_call_DELAYED_START.tapid ->

SKIP

-- perform LOCK_PREEMPTION call

LOCK_PREEMPTION(tapid) =

API_call_LOCK_PREEMPTION.tapid ->

SKIP

-- perform UNLOCK_PREEMPTION call

UNLOCK_PREEMPTION(tapid) =

API_call_UNLOCK_PREEMPTION.tapid ->

SKIP

-- perform GET_MY_ID call

GET_MY_ID(tapid) =

API_call_GET_MY_ID.tapid ->

SKIP
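-- Illustrative sketch: since every macro terminates with SKIP, macros can
-- be composed sequentially into test steps; the process name is
-- hypothetical.
EXAMPLE_TEST_STEP(tapid) =
    GET_PARTITION_STATUS(tapid);
    GET_MY_ID(tapid);
    SKIP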

--------------------------------------------------------------------------------

-- Time Management

--------------------------------------------------------------------------------

-- set parameter and perform TIMED_WAIT call

TIMED_WAIT(tapid, delay_time) =

TIMED_WAIT_set_delay_time.tapid.delay_time ->

API_call_TIMED_WAIT.tapid ->


SKIP

-- perform PERIODIC_WAIT call

PERIODIC_WAIT(tapid) =

API_call_PERIODIC_WAIT.tapid ->

SKIP

-- perform GET_TIME call

GET_TIME(tapid) =

API_call_GET_TIME.tapid ->

SKIP

-- set parameter and perform REPLENISH call

REPLENISH(tapid, budget_time) =

REPLENISH_set_budget_time.tapid.budget_time ->

API_call_REPLENISH.tapid ->

SKIP

--------------------------------------------------------------------------------

-- Inter-Partition Communication

--------------------------------------------------------------------------------

--

-- Sampling Ports

--

-- set parameters and perform CREATE_SAMPLING_PORT call

-- (TA mapping table: sampling port index -> sampling port name)

CREATE_SAMPLING_PORT(tapid, sp_idx, max_msg_size, port_dir, refresh_period) =

CREATE_SAMPLING_PORT_set_sampling_port_name.tapid.sp_idx ->

CREATE_SAMPLING_PORT_set_max_message_size.tapid.max_msg_size ->

CREATE_SAMPLING_PORT_set_port_direction.tapid.port_dir ->

CREATE_SAMPLING_PORT_set_refresh_period.tapid.refresh_period ->

API_call_CREATE_SAMPLING_PORT.tapid ->

SKIP

-- set parameters and perform WRITE_SAMPLING_MESSAGE call

-- Note: the parameters define only the message size and the encoded sequence ID,

-- the message to be sent is generated by a helper function in the TA

-- (TA mapping table: sampling port index -> sampling port ID)

WRITE_SAMPLING_MESSAGE(tapid, sampling_port_idx, msg_size, seq_id) =

WRITE_SAMPLING_MESSAGE_set_sampling_port_id.tapid.sampling_port_idx ->

WRITE_SAMPLING_MESSAGE_set_msg_size.tapid.msg_size ->

WRITE_SAMPLING_MESSAGE_set_msg_seq_id.tapid.seq_id ->

API_call_WRITE_SAMPLING_MESSAGE.tapid ->

SKIP

-- set parameter and perform READ_SAMPLING_MESSAGE call

-- (TA mapping table: sampling port index -> sampling port ID)

READ_SAMPLING_MESSAGE(tapid, sampling_port_idx) =

READ_SAMPLING_MESSAGE_set_sampling_port_id.tapid.sampling_port_idx ->

API_call_READ_SAMPLING_MESSAGE.tapid ->

SKIP

-- set parameters and perform GET_SAMPLING_PORT_ID call

-- (TA mapping table: sampling port index -> sampling port name)

GET_SAMPLING_PORT_ID(tapid, sp_idx) =

GET_SAMPLING_PORT_ID_set_sampling_port_name.tapid.sp_idx ->

API_call_GET_SAMPLING_PORT_ID.tapid ->

SKIP

-- set parameters and perform GET_SAMPLING_PORT_STATUS call

-- (TA mapping table: sampling port index -> sampling port ID)

GET_SAMPLING_PORT_STATUS(tapid, sp_idx) =

GET_SAMPLING_PORT_STATUS_set_sampling_port_id.tapid.sp_idx ->

API_call_GET_SAMPLING_PORT_STATUS.tapid ->

SKIP

--

-- Queuing Ports

--


-- set parameters and perform CREATE_QUEUING_PORT call

-- (TA mapping table: queuing port index -> queuing port name)

CREATE_QUEUING_PORT(tapid, qp_idx, max_msg_size, max_nb_msg, port_dir,

queuing_discpl) =

CREATE_QUEUING_PORT_set_queuing_port_name.tapid.qp_idx ->

CREATE_QUEUING_PORT_set_max_message_size.tapid.max_msg_size ->

CREATE_QUEUING_PORT_set_max_nb_message.tapid.max_nb_msg ->

CREATE_QUEUING_PORT_set_port_direction.tapid.port_dir ->

CREATE_QUEUING_PORT_set_queuing_discipline.tapid.queuing_discpl ->

API_call_CREATE_QUEUING_PORT.tapid ->

SKIP

-- set parameters and perform SEND_QUEUING_MESSAGE call

-- Note: the parameters define only the message size and the encoded sequence ID,

-- the message to be sent is generated by a helper function in the TA

-- (TA mapping table: queuing port index -> queuing port ID)

SEND_QUEUING_MESSAGE(tapid, qp_idx, msg_size, seq_id, timeout) =

SEND_QUEUING_MESSAGE_set_queuing_port_id.tapid.qp_idx ->

SEND_QUEUING_MESSAGE_set_msg_size.tapid.msg_size ->

SEND_QUEUING_MESSAGE_set_msg_seq_id.tapid.seq_id ->

SEND_QUEUING_MESSAGE_set_time_out.tapid.timeout ->

API_call_SEND_QUEUING_MESSAGE.tapid ->

SKIP

-- set parameters and perform RECEIVE_QUEUING_MESSAGE call

-- (TA mapping table: queuing port index -> queuing port ID)

RECEIVE_QUEUING_MESSAGE(tapid, qp_idx, timeout) =

RECEIVE_QUEUING_MESSAGE_set_queuing_port_id.tapid.qp_idx ->

RECEIVE_QUEUING_MESSAGE_set_time_out.tapid.timeout ->

API_call_RECEIVE_QUEUING_MESSAGE.tapid ->

SKIP

-- set parameter and perform GET_QUEUING_PORT_ID call

-- (TA mapping table: queuing port index -> queuing port name)

GET_QUEUING_PORT_ID(tapid, qp_idx) =

GET_QUEUING_PORT_ID_set_queuing_port_name.tapid.qp_idx ->

API_call_GET_QUEUING_PORT_ID.tapid ->

SKIP

-- set parameter and perform GET_QUEUING_PORT_STATUS call

-- (TA mapping table: queuing port index -> queuing port ID)

GET_QUEUING_PORT_STATUS(tapid, qp_idx) =

GET_QUEUING_PORT_STATUS_set_queuing_port_id.tapid.qp_idx ->

API_call_GET_QUEUING_PORT_STATUS.tapid ->

SKIP

--------------------------------------------------------------------------------

-- Intra-Partition Communication

--------------------------------------------------------------------------------

--

-- Buffer

--

-- set parameters and perform CREATE_BUFFER call

-- (TA mapping table: buffer index -> buffer name)

CREATE_BUFFER(tapid, buffer_idx, max_msg_size, max_nb_msg,

queuing_discpl) =

CREATE_BUFFER_set_buffer_name.tapid.buffer_idx ->

CREATE_BUFFER_set_max_message_size.tapid.max_msg_size ->

CREATE_BUFFER_set_max_nb_message.tapid.max_nb_msg ->

CREATE_BUFFER_set_queuing_discipline.tapid.queuing_discpl ->

API_call_CREATE_BUFFER.tapid ->

SKIP

-- set parameters and perform SEND_BUFFER call

-- Note: the parameters define only the message size and the encoded sequence ID;

-- the message to be sent is generated by a helper function in the TA

-- (TA mapping table: buffer index -> buffer ID)

SEND_BUFFER(tapid, buffer_idx, msg_size, seq_id, timeout) =

SEND_BUFFER_set_buffer_id.tapid.buffer_idx ->


SEND_BUFFER_set_msg_size.tapid.msg_size ->

SEND_BUFFER_set_msg_seq_id.tapid.seq_id ->

SEND_BUFFER_set_time_out.tapid.timeout ->

API_call_SEND_BUFFER.tapid ->

SKIP

-- set parameters and perform RECEIVE_BUFFER call

-- (TA mapping table: buffer index -> buffer ID)

RECEIVE_BUFFER(tapid, buffer_idx, timeout) =

RECEIVE_BUFFER_set_buffer_id.tapid.buffer_idx ->

RECEIVE_BUFFER_set_time_out.tapid.timeout ->

API_call_RECEIVE_BUFFER.tapid ->

SKIP

-- set parameter and perform GET_BUFFER_ID call

-- (TA mapping table: buffer index -> buffer name)

GET_BUFFER_ID(tapid, buffer_idx) =

GET_BUFFER_ID_set_buffer_name.tapid.buffer_idx ->

API_call_GET_BUFFER_ID.tapid ->

SKIP

-- set parameter and perform GET_BUFFER_STATUS call

-- (TA mapping table: buffer index -> buffer ID)

GET_BUFFER_STATUS(tapid, buffer_idx) =

GET_BUFFER_STATUS_set_buffer_id.tapid.buffer_idx ->

API_call_GET_BUFFER_STATUS.tapid ->

SKIP

--

-- Blackboards

--

-- set parameters and perform CREATE_BLACKBOARD call

-- (TA mapping table: blackboard index -> blackboard name)

CREATE_BLACKBOARD(tapid, bb_idx, max_msg_size) =

CREATE_BLACKBOARD_set_blackboard_name.tapid.bb_idx ->

CREATE_BLACKBOARD_set_max_message_size.tapid.max_msg_size ->

API_call_CREATE_BLACKBOARD.tapid ->

SKIP

-- set parameters and perform DISPLAY_BLACKBOARD call

-- Note: the parameters define only the message size and the encoded sequence ID;

-- the message to be sent is generated by a helper function in the TA

-- (TA mapping table: blackboard index -> blackboard ID)

DISPLAY_BLACKBOARD(tapid, bb_idx, msg_size, seq_id) =

DISPLAY_BLACKBOARD_set_blackboard_id.tapid.bb_idx ->

DISPLAY_BLACKBOARD_set_msg_size.tapid.msg_size ->

DISPLAY_BLACKBOARD_set_msg_seq_id.tapid.seq_id ->

API_call_DISPLAY_BLACKBOARD.tapid ->

SKIP

-- set parameter and perform READ_BLACKBOARD call

-- (TA mapping table: blackboard index -> blackboard ID)

READ_BLACKBOARD(tapid, bb_idx, timeout) =

READ_BLACKBOARD_set_blackboard_id.tapid.bb_idx ->

READ_BLACKBOARD_set_time_out.tapid.timeout ->

API_call_READ_BLACKBOARD.tapid ->

SKIP

-- set parameter and perform CLEAR_BLACKBOARD call

-- (TA mapping table: blackboard index -> blackboard ID)

CLEAR_BLACKBOARD(tapid, bb_idx) =

CLEAR_BLACKBOARD_set_blackboard_id.tapid.bb_idx ->

API_call_CLEAR_BLACKBOARD.tapid ->

SKIP

-- set parameter and perform GET_BLACKBOARD_ID call

-- (TA mapping table: blackboard index -> blackboard name)

GET_BLACKBOARD_ID(tapid, blackboard_idx) =

GET_BLACKBOARD_ID_set_blackboard_name.tapid.blackboard_idx ->

API_call_GET_BLACKBOARD_ID.tapid ->


SKIP

-- set parameter and perform GET_BLACKBOARD_STATUS call

-- (TA mapping table: blackboard index -> blackboard ID)

GET_BLACKBOARD_STATUS(tapid, blackboard_idx) =

GET_BLACKBOARD_STATUS_set_blackboard_id.tapid.blackboard_idx ->

API_call_GET_BLACKBOARD_STATUS.tapid ->

SKIP

--

-- Semaphores

--

-- set parameters and perform CREATE_SEMAPHORE call

-- (TA mapping table: semaphore index -> semaphore name)

CREATE_SEMAPHORE(tapid, semaphore_idx, curr_val, max_val, queuing_discpl) =

CREATE_SEMAPHORE_set_semaphore_name.tapid.semaphore_idx ->

CREATE_SEMAPHORE_set_current_value.tapid.curr_val ->

CREATE_SEMAPHORE_set_maximum_value.tapid.max_val ->

CREATE_SEMAPHORE_set_queuing_discipline.tapid.queuing_discpl ->

API_call_CREATE_SEMAPHORE.tapid ->

SKIP

-- set parameters and perform WAIT_SEMAPHORE call

-- (TA mapping table: semaphore index -> semaphore ID)

WAIT_SEMAPHORE(tapid, semaphore_idx, timeout) =

WAIT_SEMAPHORE_set_semaphore_id.tapid.semaphore_idx ->

WAIT_SEMAPHORE_set_time_out.tapid.timeout ->

API_call_WAIT_SEMAPHORE.tapid ->

SKIP

-- set parameter and perform SIGNAL_SEMAPHORE call

-- (TA mapping table: semaphore index -> semaphore ID)

SIGNAL_SEMAPHORE(tapid, semaphore_idx) =

SIGNAL_SEMAPHORE_set_semaphore_id.tapid.semaphore_idx ->

API_call_SIGNAL_SEMAPHORE.tapid ->

SKIP

-- set parameter and perform GET_SEMAPHORE_ID call

-- (TA mapping table: semaphore index -> semaphore name)

GET_SEMAPHORE_ID(tapid, semaphore_idx) =

GET_SEMAPHORE_ID_set_semaphore_name.tapid.semaphore_idx ->

API_call_GET_SEMAPHORE_ID.tapid ->

SKIP

-- set parameter and perform GET_SEMAPHORE_STATUS call

-- (TA mapping table: semaphore index -> semaphore ID)

GET_SEMAPHORE_STATUS(tapid, semaphore_idx) =

GET_SEMAPHORE_STATUS_set_semaphore_id.tapid.semaphore_idx ->

API_call_GET_SEMAPHORE_STATUS.tapid ->

SKIP

--

-- Events

--

-- set parameters and perform CREATE_EVENT call

-- (TA mapping table: event index -> event name)

CREATE_EVENT(tapid, event_idx) =

CREATE_EVENT_set_event_name.tapid.event_idx ->

API_call_CREATE_EVENT.tapid ->

SKIP

-- set parameter and perform SET_EVENT call

-- (TA mapping table: event index -> event ID)

SET_EVENT(tapid, event_idx) =

SET_EVENT_set_event_id.tapid.event_idx ->

API_call_SET_EVENT.tapid ->

SKIP


-- set parameter and perform RESET_EVENT call

-- (TA mapping table: event index -> event ID)

RESET_EVENT(tapid, event_idx) =

RESET_EVENT_set_event_id.tapid.event_idx ->

API_call_RESET_EVENT.tapid ->

SKIP

-- set parameters and perform WAIT_EVENT call

-- (TA mapping table: event index -> event ID)

WAIT_EVENT(tapid, event_idx, timeout) =

WAIT_EVENT_set_event_id.tapid.event_idx ->

WAIT_EVENT_set_time_out.tapid.timeout ->

API_call_WAIT_EVENT.tapid ->

SKIP

-- set parameter and perform GET_EVENT_ID call

-- (TA mapping table: event index -> event name)

GET_EVENT_ID(tapid, event_idx) =

GET_EVENT_ID_set_event_name.tapid.event_idx ->

API_call_GET_EVENT_ID.tapid ->

SKIP

-- set parameter and perform GET_EVENT_STATUS call

-- (TA mapping table: event index -> event ID)

GET_EVENT_STATUS(tapid, event_idx) =

GET_EVENT_STATUS_set_event_id.tapid.event_idx ->

API_call_GET_EVENT_STATUS.tapid ->

SKIP

--------------------------------------------------------------------------------

-- Health Monitoring

--------------------------------------------------------------------------------

-- set parameters and perform REPORT_APPLICATION_MESSAGE call

REPORT_APPLICATION_MESSAGE(tapid, msg_size, seq_id) =

REPORT_APPLICATION_MESSAGE_set_msg_size.tapid.msg_size ->

REPORT_APPLICATION_MESSAGE_set_msg_seq_id.tapid.seq_id ->

API_call_REPORT_APPLICATION_MESSAGE.tapid ->

SKIP

-- set parameter and perform CREATE_ERROR_HANDLER call

CREATE_ERROR_HANDLER(tapid, stack_size) =

CREATE_ERROR_HANDLER_set_stack_size.tapid.stack_size ->

API_call_CREATE_ERROR_HANDLER.tapid ->

SKIP

-- perform GET_ERROR_STATUS call

GET_ERROR_STATUS(tapid) =

API_call_GET_ERROR_STATUS.tapid ->

SKIP

-- set parameters and perform RAISE_APPLICATION_ERROR call

RAISE_APPLICATION_ERROR(tapid, error_code, error_msg_size) =

RAISE_APPLICATION_ERROR_set_error_code.tapid.error_code ->

RAISE_APPLICATION_ERROR_set_error_msg_size.tapid.error_msg_size ->

API_call_RAISE_APPLICATION_ERROR.tapid ->

SKIP

--------------------------------------------------------------------------------


IMA_API_macros.csp:

--------------------------------------------------------------------------------

-- Macro process for commanding an API call with given parameters and

-- for checking the respective result

--

-- Process parameters:

-- "tapid": module.partition.process to perform the API call

-- others: * input parameters of the respective API service

-- * expected return code and expected output parameters

--

-- The macros are based on the macros in IMA_API_handling.

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Partition Management

--------------------------------------------------------------------------------

-- perform SET_PARTITION_MODE and check return values

-- check_SET_PARTITION_MODE (tapid, op_mode, ret_code)

-- SET_PARTITION_MODE has a return code but normally the calling process will

-- never get the return values.

-- perform GET_PARTITION_STATUS and check return values

check_GET_PARTITION_STATUS (tapid, ret_code, lock_level, op_mode) =

-- trigger API call

GET_PARTITION_STATUS(tapid);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_PARTITION_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_PARTITION_STATUS_ret_code.tapid.ret_code,

API_out_GET_PARTITION_STATUS_identifier.tapid.IMA_Conf_PARTITION_ID,

API_out_GET_PARTITION_STATUS_period.tapid.IMA_Conf_PARTITION_PERIOD,

API_out_GET_PARTITION_STATUS_duration.tapid.part_duration,

API_out_GET_PARTITION_STATUS_lock_level.tapid.lock_level,

API_out_GET_PARTITION_STATUS_operating_mode.tapid.op_mode>);

SKIP
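
-- Example (added for illustration; the process identifier and the expected
-- lock level and operating mode constants are hypothetical placeholders):
--
--   check_GET_PARTITION_STATUS(mod_1.part_1.proc_1, ret_NO_ERROR,
--                              lock_level_0, op_mode_NORMAL)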

--------------------------------------------------------------------------------

-- Process Management

--------------------------------------------------------------------------------

-- perform GET_PROCESS_ID and check return values

check_GET_PROCESS_ID (tapid, process_idx, ret_code) =

-- trigger API call

GET_PROCESS_ID (tapid, process_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_PROCESS_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_PROCESS_ID_ret_code.tapid.ret_code,

API_out_GET_PROCESS_ID_process_id.tapid.process_idx>);

SKIP

-- perform GET_PROCESS_STATUS and check return values

check_GET_PROCESS_STATUS (tapid, process_idx, ret_code, priority, process_status) =


-- trigger API call

GET_PROCESS_STATUS(tapid, process_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_PROCESS_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_PROCESS_STATUS_ret_code.tapid.ret_code,

API_out_GET_PROCESS_STATUS_process_attributes.tapid.process_idx,

API_out_GET_PROCESS_STATUS_current_priority.tapid.priority,

API_out_GET_PROCESS_STATUS_process_status.tapid.process_status>);

SKIP

-- perform CREATE_PROCESS and check return values

check_CREATE_PROCESS (tapid, process_idx, stack_size, base_priority, period,

time_capacity, deadline, ret_code) =

-- trigger API call

CREATE_PROCESS(tapid,

process_idx,

stack_size,

base_priority,

period,

time_capacity,

deadline);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_PROCESS_ret_code.tapid.ret_code>);

SKIP

-- perform SET_PRIORITY and check return values

check_SET_PRIORITY (tapid, process_idx, priority, ret_code) =

-- trigger API call

SET_PRIORITY(tapid, process_idx, priority);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SET_PRIORITY_ret_code.tapid.ret_code>);

SKIP

-- perform SUSPEND_SELF and check return values

check_SUSPEND_SELF (tapid, timeout, ret_code) =

-- trigger API call

SUSPEND_SELF(tapid, timeout);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SUSPEND_SELF_ret_code.tapid.ret_code>);

SKIP

-- perform SUSPEND and check return values

check_SUSPEND (tapid, process_id, ret_code) =

-- trigger API call

SUSPEND(tapid, process_id);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SUSPEND_ret_code.tapid.ret_code>);

SKIP

-- perform RESUME and check return values


check_RESUME (tapid, process_id, ret_code) =

-- trigger API call

RESUME(tapid, process_id);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_RESUME_ret_code.tapid.ret_code>);

SKIP

-- perform STOP_SELF and check return values

-- check_STOP_SELF (tapid)

-- STOP_SELF has a return code but normally the calling process will never

-- get the return values.

-- perform STOP_PROCESS and check return values

check_STOP_PROCESS (tapid, process_idx, ret_code) =

-- trigger API call

STOP_PROCESS(tapid, process_idx);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_STOP_ret_code.tapid.ret_code>);

SKIP

-- perform START and check return values

check_START (tapid, process_idx, ret_code) =

-- trigger API call

START(tapid, process_idx);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_START_ret_code.tapid.ret_code>);

SKIP

-- perform DELAYED_START and check return values

check_DELAYED_START (tapid, process_idx, delay_time, ret_code) =

-- trigger API call

DELAYED_START(tapid, process_idx, delay_time);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_DELAYED_START_ret_code.tapid.ret_code>);

SKIP

-- perform LOCK_PREEMPTION and check return values

check_LOCK_PREEMPTION (tapid, ret_code, lock_level) =

-- trigger API call

LOCK_PREEMPTION(tapid);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_LOCK_PREEMPTION_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_LOCK_PREEMPTION_ret_code.tapid.ret_code,

API_out_LOCK_PREEMPTION_lock_level.tapid.lock_level>);

SKIP

-- perform UNLOCK_PREEMPTION and check return values

check_UNLOCK_PREEMPTION (tapid, ret_code, lock_level) =

-- trigger API call

UNLOCK_PREEMPTION(tapid);


if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_UNLOCK_PREEMPTION_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_UNLOCK_PREEMPTION_ret_code.tapid.ret_code,

API_out_UNLOCK_PREEMPTION_lock_level.tapid.lock_level>);

SKIP

-- perform GET_MY_ID and check return values

check_GET_MY_ID (tapid, ret_code, expected_process_idx) =

-- trigger API call

GET_MY_ID(tapid);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_MY_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_MY_ID_ret_code.tapid.ret_code,

API_out_GET_MY_ID_process_id.tapid.expected_process_idx>);

SKIP

--------------------------------------------------------------------------------

-- Time Management

--------------------------------------------------------------------------------

-- perform TIMED_WAIT and check return values

check_TIMED_WAIT (tapid, delay_time, ret_code) =

-- trigger API call

TIMED_WAIT(tapid, delay_time);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_TIMED_WAIT_ret_code.tapid.ret_code>);

SKIP

-- perform PERIODIC_WAIT and check return values

check_PERIODIC_WAIT (tapid, ret_code) =

-- trigger API call

PERIODIC_WAIT(tapid);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_PERIODIC_WAIT_ret_code.tapid.ret_code>);

SKIP

-- perform GET_TIME and check return values

check_GET_TIME (tapid, ret_code) =

-- trigger API call

GET_TIME(tapid);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_TIME_ret_code.tapid.ret_code>);

SKIP


-- perform REPLENISH and check return values

check_REPLENISH (tapid, budget_time, ret_code) =

-- trigger API call

REPLENISH (tapid, budget_time);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_REPLENISH_ret_code.tapid.ret_code>);

SKIP

--------------------------------------------------------------------------------

-- Inter-Partition Communication

--------------------------------------------------------------------------------

--

-- Sampling Ports

--

-- perform CREATE_SAMPLING_PORT and check return values

check_CREATE_SAMPLING_PORT (tapid, sp_idx, max_msg_size, port_dir, refresh_period,

ret_code) =

-- trigger API call

CREATE_SAMPLING_PORT(tapid,

sp_idx,

max_msg_size,

port_dir,

refresh_period);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_SAMPLING_PORT_ret_code.tapid.ret_code>);

SKIP

-- perform WRITE_SAMPLING_MESSAGE and check return values

check_WRITE_SAMPLING_MESSAGE (tapid, sampling_port_idx, msg_size, msg_seq_id, ret_code) =

-- trigger API call

WRITE_SAMPLING_MESSAGE(tapid,

sampling_port_idx,

msg_size,

msg_seq_id);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_WRITE_SAMPLING_MESSAGE_ret_code.tapid.ret_code>);

SKIP

-- perform READ_SAMPLING_MESSAGE and check return values

check_READ_SAMPLING_MESSAGE (tapid, sampling_port_idx, ret_code, msg_size,

msg_seq_id, src_mod.src_part.src_proc, validity) =

-- trigger API call

READ_SAMPLING_MESSAGE(tapid, sampling_port_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_READ_SAMPLING_MESSAGE_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

(if (msg_size >= 4)

then (-- the message is big enough to contain a sequence identifier and

-- information about the sender

WAITFORSEQ(TM_RETVAL,

<API_out_READ_SAMPLING_MESSAGE_ret_code.tapid.ret_code,

API_out_READ_SAMPLING_MESSAGE_msg_size.tapid.msg_size,

API_out_READ_SAMPLING_MESSAGE_validity.tapid.validity,

API_out_READ_SAMPLING_MESSAGE_msg_seq_id.tapid.msg_seq_id,


API_out_READ_SAMPLING_MESSAGE_src_mod.tapid.src_mod,

API_out_READ_SAMPLING_MESSAGE_src_part.tapid.src_part,

API_out_READ_SAMPLING_MESSAGE_src_proc.tapid.src_proc>);

SKIP)

else (-- if msg_size is less than 4 bytes it is not possible to encode

-- the sender of the message and a sequence identifier

WAITFORSEQ(TM_RETVAL,

<API_out_READ_SAMPLING_MESSAGE_ret_code.tapid.ret_code,

API_out_READ_SAMPLING_MESSAGE_msg_size.tapid.msg_size,

API_out_READ_SAMPLING_MESSAGE_validity.tapid.validity>);

SKIP));

SKIP

-- perform GET_SAMPLING_PORT_ID and check return values

check_GET_SAMPLING_PORT_ID (tapid, sp_idx, ret_code) =

-- trigger API call

GET_SAMPLING_PORT_ID(tapid, sp_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_SAMPLING_PORT_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_SAMPLING_PORT_ID_ret_code.tapid.ret_code,

API_out_GET_SAMPLING_PORT_ID_sampling_port_id.tapid.sp_idx>);

SKIP

-- perform GET_SAMPLING_PORT_STATUS and check return values

check_GET_SAMPLING_PORT_STATUS (tapid, sp_idx, ret_code, validity) =

-- trigger API call

GET_SAMPLING_PORT_STATUS(tapid, sp_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_SAMPLING_PORT_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_SAMPLING_PORT_STATUS_ret_code.tapid.ret_code,

API_out_GET_SAMPLING_PORT_STATUS_last_message_validity.tapid.validity>);

SKIP

--

-- Queuing Ports

--

-- perform CREATE_QUEUING_PORT and check return values

check_CREATE_QUEUING_PORT (tapid, qp_idx, max_msg_size, max_nb_msg, port_dir,

queuing_discpl, ret_code) =

-- trigger API call

CREATE_QUEUING_PORT(tapid,

qp_idx,

max_msg_size,

max_nb_msg,

port_dir,

queuing_discpl);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_QUEUING_PORT_ret_code.tapid.ret_code>);

SKIP


-- perform SEND_QUEUING_MESSAGE and check return values

check_SEND_QUEUING_MESSAGE (tapid, qp_idx, msg_size, seq_id, time_out, ret_code) =

-- trigger API call

SEND_QUEUING_MESSAGE(tapid,

qp_idx,

msg_size,

seq_id,

time_out);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SEND_QUEUING_MESSAGE_ret_code.tapid.ret_code>);

SKIP

-- perform RECEIVE_QUEUING_MESSAGE and check return values

check_RECEIVE_QUEUING_MESSAGE (tapid, qp_idx, time_out, ret_code, msg_size,

msg_seq_id, src_mod.src_part.src_proc) =

-- trigger API call

RECEIVE_QUEUING_MESSAGE(tapid, qp_idx, time_out);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_RECEIVE_QUEUING_MESSAGE_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

(if (msg_size >= 4)

then (-- the message is big enough to contain a sequence identifier and

-- information about the sender

WAITFORSEQ(TM_RETVAL,

<API_out_RECEIVE_QUEUING_MESSAGE_ret_code.tapid.ret_code,

API_out_RECEIVE_QUEUING_MESSAGE_msg_size.tapid.msg_size,

API_out_RECEIVE_QUEUING_MESSAGE_msg_seq_id.tapid.msg_seq_id,

API_out_RECEIVE_QUEUING_MESSAGE_src_mod.tapid.src_mod,

API_out_RECEIVE_QUEUING_MESSAGE_src_part.tapid.src_part,

API_out_RECEIVE_QUEUING_MESSAGE_src_proc.tapid.src_proc>);

SKIP)

else (-- if msg_size is less than 4 bytes it is not possible to encode

-- the sender of the message and a sequence identifier

WAITFORSEQ(TM_RETVAL,

<API_out_RECEIVE_QUEUING_MESSAGE_ret_code.tapid.ret_code,

API_out_RECEIVE_QUEUING_MESSAGE_msg_size.tapid.msg_size>);

SKIP));

SKIP

-- perform GET_QUEUING_PORT_ID and check return values

check_GET_QUEUING_PORT_ID (tapid, qp_idx, ret_code) =

-- trigger API call

GET_QUEUING_PORT_ID(tapid, qp_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_QUEUING_PORT_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_QUEUING_PORT_ID_ret_code.tapid.ret_code,

API_out_GET_QUEUING_PORT_ID_queuing_port_id.tapid.qp_idx>);

SKIP


-- perform GET_QUEUING_PORT_STATUS and check return values

check_GET_QUEUING_PORT_STATUS (tapid, qp_idx, ret_code, nb_messages, nb_waiting_procs) =

-- trigger API call

GET_QUEUING_PORT_STATUS(tapid, qp_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_QUEUING_PORT_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_QUEUING_PORT_STATUS_ret_code.tapid.ret_code,

API_out_GET_QUEUING_PORT_STATUS_nb_message.tapid.nb_messages,

API_out_GET_QUEUING_PORT_STATUS_waiting_processes.tapid.nb_waiting_procs>);

SKIP

--------------------------------------------------------------------------------

-- Intra-Partition Communication

--------------------------------------------------------------------------------

--

-- Buffer

--

-- perform CREATE_BUFFER and check return values

check_CREATE_BUFFER (tapid, buffer_idx, max_msg_size, max_nb_msg, queuing_discpl, ret_code) =

-- trigger API call

CREATE_BUFFER(tapid,

buffer_idx,

max_msg_size,

max_nb_msg,

queuing_discpl);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_BUFFER_ret_code.tapid.ret_code>);

SKIP

-- perform SEND_BUFFER and check return values

check_SEND_BUFFER (tapid, buffer_idx, msg_size, seq_id, time_out, ret_code) =

-- trigger API call

SEND_BUFFER(tapid,

buffer_idx,

msg_size,

seq_id,

time_out);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SEND_BUFFER_ret_code.tapid.ret_code>);

SKIP

-- perform RECEIVE_BUFFER and check return values

check_RECEIVE_BUFFER (tapid, buffer_idx, time_out, ret_code, msg_size, msg_seq_id,

src_mod.src_part.src_proc) =

-- trigger API call

RECEIVE_BUFFER(tapid, buffer_idx, time_out);

-- check that return code and output parameters are as expected

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_RECEIVE_BUFFER_ret_code.tapid.ret_code>);

SKIP)


else

(if (msg_size >= 4)

then (-- the message is big enough to contain a sequence identifier and

-- information about the sender

WAITFORSEQ(TM_RETVAL,

<API_out_RECEIVE_BUFFER_ret_code.tapid.ret_code,

API_out_RECEIVE_BUFFER_msg_size.tapid.msg_size,

API_out_RECEIVE_BUFFER_msg_seq_id.tapid.msg_seq_id,

API_out_RECEIVE_BUFFER_src_mod.tapid.src_mod,

API_out_RECEIVE_BUFFER_src_part.tapid.src_part,

API_out_RECEIVE_BUFFER_src_proc.tapid.src_proc>);

SKIP)

else (-- if msg_size is less than 4 bytes it is not possible to encode

-- the sender of the message and a sequence identifier

WAITFORSEQ(TM_RETVAL,

<API_out_RECEIVE_BUFFER_ret_code.tapid.ret_code,

API_out_RECEIVE_BUFFER_msg_size.tapid.msg_size>);

SKIP));

SKIP

-- perform GET_BUFFER_ID and check return values

check_GET_BUFFER_ID (tapid, buffer_idx, ret_code) =

-- trigger API call

GET_BUFFER_ID(tapid, buffer_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_BUFFER_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_BUFFER_ID_ret_code.tapid.ret_code,

API_out_GET_BUFFER_ID_buffer_id.tapid.buffer_idx>);

SKIP

-- perform GET_BUFFER_STATUS and check return values

check_GET_BUFFER_STATUS (tapid, buffer_idx, ret_code, nb_messages, nb_waiting_procs) =

-- trigger API call

GET_BUFFER_STATUS(tapid, buffer_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_BUFFER_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_BUFFER_STATUS_ret_code.tapid.ret_code,

API_out_GET_BUFFER_STATUS_nb_message.tapid.nb_messages,

API_out_GET_BUFFER_STATUS_waiting_processes.tapid.nb_waiting_procs>);

SKIP

--

-- Blackboards

--

-- perform CREATE_BLACKBOARD and check return values

check_CREATE_BLACKBOARD (tapid, bb_idx, max_msg_size, ret_code) =

-- trigger API call

CREATE_BLACKBOARD(tapid, bb_idx, max_msg_size);

-- check that return code is as expected


WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_BLACKBOARD_ret_code.tapid.ret_code>);

SKIP

-- perform DISPLAY_BLACKBOARD and check return values

check_DISPLAY_BLACKBOARD (tapid, bb_idx, msg_size, seq_id, ret_code) =

-- trigger API call

DISPLAY_BLACKBOARD(tapid,

bb_idx,

msg_size,

seq_id);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_DISPLAY_BLACKBOARD_ret_code.tapid.ret_code>);

SKIP

-- perform READ_BLACKBOARD and check return values

check_READ_BLACKBOARD (tapid, bb_idx, timeout, ret_code, msg_size,

msg_seq_id, src_mod.src_part.src_proc) =

-- trigger API call

READ_BLACKBOARD(tapid, bb_idx, timeout);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_READ_BLACKBOARD_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

(if (msg_size >= 4)

then (-- the message is big enough to contain a sequence identifier and

-- information about the sender

WAITFORSEQ(TM_RETVAL,

<API_out_READ_BLACKBOARD_ret_code.tapid.ret_code,

API_out_READ_BLACKBOARD_msg_size.tapid.msg_size,

API_out_READ_BLACKBOARD_msg_seq_id.tapid.msg_seq_id,

API_out_READ_BLACKBOARD_src_mod.tapid.src_mod,

API_out_READ_BLACKBOARD_src_part.tapid.src_part,

API_out_READ_BLACKBOARD_src_proc.tapid.src_proc>);

SKIP)

else (-- if msg_size is less than 4 bytes it is not possible to encode

-- the sender of the message or the sequence identifier

WAITFORSEQ(TM_RETVAL,

<API_out_READ_BLACKBOARD_ret_code.tapid.ret_code,

API_out_READ_BLACKBOARD_msg_size.tapid.msg_size>);

SKIP));

SKIP

-- perform CLEAR_BLACKBOARD and check return values

check_CLEAR_BLACKBOARD (tapid, bb_idx, ret_code) =

-- trigger API call

CLEAR_BLACKBOARD(tapid, bb_idx);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CLEAR_BLACKBOARD_ret_code.tapid.ret_code>);

SKIP

-- perform GET_BLACKBOARD_ID and check return values

check_GET_BLACKBOARD_ID (tapid, bb_idx, ret_code) =

-- trigger API call

GET_BLACKBOARD_ID(tapid, bb_idx);

if (ret_code != ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted


-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_BLACKBOARD_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_BLACKBOARD_ID_ret_code.tapid.ret_code,

API_out_GET_BLACKBOARD_ID_blackboard_id.tapid.bb_idx>);

SKIP

-- perform GET_BLACKBOARD_STATUS and check return values

check_GET_BLACKBOARD_STATUS (tapid, bb_idx, ret_code, empty_indicator, nb_waiting_procs) =

-- trigger API call

GET_BLACKBOARD_STATUS(tapid, bb_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_BLACKBOARD_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_BLACKBOARD_STATUS_ret_code.tapid.ret_code,

API_out_GET_BLACKBOARD_STATUS_empty_indicator.tapid.empty_indicator,

API_out_GET_BLACKBOARD_STATUS_waiting_processes.tapid.nb_waiting_procs>);

SKIP

--

-- Semaphores

--

-- perform CREATE_SEMAPHORE and check return values

check_CREATE_SEMAPHORE (tapid, semaphore_idx, curr_val, max_val, queuing_discpl, ret_code) =

-- trigger API call

CREATE_SEMAPHORE(tapid,

semaphore_idx,

curr_val,

max_val,

queuing_discpl);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_SEMAPHORE_ret_code.tapid.ret_code>);

SKIP

-- perform WAIT_SEMAPHORE and check return values

check_WAIT_SEMAPHORE (tapid, semaphore_idx, timeout, ret_code) =

-- trigger API call

WAIT_SEMAPHORE(tapid, semaphore_idx, timeout);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_WAIT_SEMAPHORE_ret_code.tapid.ret_code>);

SKIP

-- perform SIGNAL_SEMAPHORE and check return values

check_SIGNAL_SEMAPHORE (tapid, semaphore_idx, ret_code) =

-- trigger API call

SIGNAL_SEMAPHORE(tapid, semaphore_idx);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SIGNAL_SEMAPHORE_ret_code.tapid.ret_code>);

SKIP


-- perform GET_SEMAPHORE_ID and check return values

check_GET_SEMAPHORE_ID (tapid, semaphore_idx, ret_code) =

-- trigger API call

GET_SEMAPHORE_ID(tapid, semaphore_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_SEMAPHORE_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_SEMAPHORE_ID_ret_code.tapid.ret_code,

API_out_GET_SEMAPHORE_ID_semaphore_id.tapid.semaphore_idx>);

SKIP

-- perform GET_SEMAPHORE_STATUS and check return values

check_GET_SEMAPHORE_STATUS (tapid, semaphore_idx, ret_code, sem_val, nb_waiting_procs) =

-- trigger API call

GET_SEMAPHORE_STATUS(tapid, semaphore_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_SEMAPHORE_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_SEMAPHORE_STATUS_ret_code.tapid.ret_code,

API_out_GET_SEMAPHORE_STATUS_current_value.tapid.sem_val,

API_out_GET_SEMAPHORE_STATUS_waiting_processes.tapid.nb_waiting_procs>);

SKIP

--

-- Events

--

-- perform CREATE_EVENT and check return values

check_CREATE_EVENT (tapid, event_idx, ret_code) =

-- trigger API call

CREATE_EVENT(tapid, event_idx);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_EVENT_ret_code.tapid.ret_code>);

SKIP

-- perform SET_EVENT and check return values

check_SET_EVENT (tapid, event_idx, ret_code) =

-- trigger API call

SET_EVENT(tapid, event_idx);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_SET_EVENT_ret_code.tapid.ret_code>);

SKIP

-- perform RESET_EVENT and check return values

check_RESET_EVENT (tapid, event_idx, ret_code) =

-- trigger API call

RESET_EVENT(tapid, event_idx);


-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_RESET_EVENT_ret_code.tapid.ret_code>);

SKIP

-- perform WAIT_EVENT and check return values

check_WAIT_EVENT (tapid, event_idx, timeout, ret_code) =

-- trigger API call

WAIT_EVENT(tapid, event_idx, timeout);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_WAIT_EVENT_ret_code.tapid.ret_code>);

SKIP

-- perform GET_EVENT_ID and check return values

check_GET_EVENT_ID (tapid, event_idx, ret_code) =

-- trigger API call

GET_EVENT_ID(tapid, event_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_EVENT_ID_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameter are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_EVENT_ID_ret_code.tapid.ret_code,

API_out_GET_EVENT_ID_event_id.tapid.event_idx>);

SKIP

-- perform GET_EVENT_STATUS and check return values

check_GET_EVENT_STATUS (tapid, event_idx, ret_code, event_state, nb_waiting_procs) =

-- trigger API call

GET_EVENT_STATUS(tapid, event_idx);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_EVENT_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_EVENT_STATUS_ret_code.tapid.ret_code,

API_out_GET_EVENT_STATUS_event_state.tapid.event_state,

API_out_GET_EVENT_STATUS_waiting_processes.tapid.nb_waiting_procs>);

SKIP

--------------------------------------------------------------------------------

-- Health Monitoring

--------------------------------------------------------------------------------

-- perform REPORT_APPLICATION_MESSAGE and check return values

check_REPORT_APPLICATION_MESSAGE (tapid, msg_size, seq_id, ret_code) =

-- trigger API call

REPORT_APPLICATION_MESSAGE(tapid, msg_size, seq_id);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_REPORT_APPLICATION_MESSAGE_ret_code.tapid.ret_code>);

SKIP


-- perform CREATE_ERROR_HANDLER and check return values

check_CREATE_ERROR_HANDLER (tapid, stack_size, ret_code) =

-- trigger API call

CREATE_ERROR_HANDLER(tapid, stack_size);

-- check that return code is as expected

WAITFORSEQ(TM_RETVAL,

<API_out_CREATE_ERROR_HANDLER_ret_code.tapid.ret_code>);

SKIP

-- perform GET_ERROR_STATUS and check return values

check_GET_ERROR_STATUS (tapid, ret_code, error_code, error_msg_size, process_num) =

-- trigger API call

GET_ERROR_STATUS(tapid);

if (ret_code!=ret_NO_ERROR)

then -- wait for return code only, since the remaining parameters

-- do not contain new values and are therefore not extracted

-- by the IFM

(WAITFORSEQ(TM_RETVAL,

<API_out_GET_ERROR_STATUS_ret_code.tapid.ret_code>);

SKIP)

else

-- check that return code and output parameters are as expected

WAITFORSEQ(TM_RETVAL,

<API_out_GET_ERROR_STATUS_ret_code.tapid.ret_code,

API_out_GET_ERROR_STATUS_error_code.tapid.error_code,

API_out_GET_ERROR_STATUS_error_msg_size.tapid.error_msg_size,

API_out_GET_ERROR_STATUS_failed_process.tapid.process_num>);

SKIP

-- perform RAISE_APPLICATION_ERROR and check return values

-- check_RAISE_APPLICATION_ERROR (tapid, error_code, error_msg_size, ret_code)

-- RAISE_APPLICATION_ERROR has a return code but normally the calling process will

-- never get the return values.

--------------------------------------------------------------------------------

-- Macros to generate standard test applications processes and

-- their necessary command list buffers

--

-- The values for standard TA processes are defined in the configuration data

-- extracts for each partition (IMA_Conf_PTx.csp).

--------------------------------------------------------------------------------

CREATE_STANDARD_APERIODIC_TA (tapid, proc_idx)=

CREATE_TA(tapid,

proc_idx,

TA_APERIODIC_STACK_SIZE,

TA_APERIODIC_BASE_PRIORITY,

TA_APERIODIC_PERIOD,

TA_APERIODIC_TIME_CAPACITY,

TA_APERIODIC_DEADLINE);

SKIP

CREATE_STANDARD_PERIODIC_TA (tapid, proc_idx) =

CREATE_TA(tapid,

proc_idx,

TA_PERIODIC_STACK_SIZE,

TA_PERIODIC_BASE_PRIORITY,

TA_PERIODIC_PERIOD,

TA_PERIODIC_TIME_CAPACITY,

TA_PERIODIC_DEADLINE);

SKIP

CREATE_TA (tapid, proc_idx, stack_size, priority, period, time_capacity, deadline) =

-- create corresponding process first and fetch return value of CREATE_PROCESS.

-- It is assumed that TA creation works correctly in the STANDARD macro.

check_CREATE_PROCESS (tapid,


proc_idx,

stack_size,

priority,

period,

time_capacity,

deadline,

ret_NO_ERROR);

-- create additional buffer for internal message routing

-- (messages received via AFDX port which are not for the receiving TA are

-- redirected to the corresponding process’ buffer)

-- Remark: buff_idx == proc_idx

check_CREATE_BUFFER (tapid,

proc_idx,

TA_BUF_MSG_SIZE,

TA_BUF_MSG_NUM,

qd_FIFO,

ret_NO_ERROR);

-- start process immediately

check_START (tapid, proc_idx, ret_NO_ERROR);

SKIP

CREATE_STANDARD_ERROR_HANDLER (tapid) =

check_CREATE_ERROR_HANDLER (tapid,

ERR_STACK_SIZE,

ret_NO_ERROR);

SKIP
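
-- Example (added for illustration; the commanding test application ID is a
-- hypothetical placeholder): create a standard periodic TA process with
-- index 2 together with a standard error handler:
--
--   CREATE_STANDARD_PERIODIC_TA(mod_1.part_1.proc_0, 2);
--   CREATE_STANDARD_ERROR_HANDLER(mod_1.part_1.proc_0);
--   SKIP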

--------------------------------------------------------------------------------


B.3.2 CSP Macros for Communication Flow Scenario

IMA_com_flow_handling.csp:

--------------------------------------------------------------------------------

--

-- CSP macros for handling of communication flow messages

--

-- Provided macros:

-- * for generating a communication flow message

-- * for sending a previously generated communication flow message

-- * for receiving a communication flow message

-- * for starting the communication flow scenario in the involved test

-- application processes

--

--------------------------------------------------------------------------------

-- create a message for communication flow testing.

-- Note1: The parameter receivers is a sequence of tuples, each of which

-- contains one receive port denoted by its index and the port type.

-- Note2: It is possible to reuse a message

AFDX_COM_FLOW_CREATE_MESSAGE(msg_num, msg_size, receivers, message_seq) =

(

AFDX_com_flow_clear_message.msg_num ->

AFDX_com_flow_message_set_msg_size.msg_num.msg_size ->

AFDX_COM_FLOW_SET_RECEIVERS(msg_num, receivers) ;

AFDX_com_flow_set_seqID.msg_num.message_seq ->

SKIP

)
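
-- Example (added for illustration; all indices are hypothetical): create
-- message 1 with size 20 and sequence ID 5 for two receivers (queuing
-- port 2 and blackboard 1), then send it via port 3:
--
--   AFDX_COM_FLOW_CREATE_MESSAGE(1, 20, <(2, port_QUEUING_PORT),
--                                        (1, port_BLACKBOARD)>, 5);
--   AFDX_COM_FLOW_SEND_MESSAGE(1, 3)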

-- helper function: set all receivers in the communication flow message

AFDX_COM_FLOW_SET_RECEIVERS(msg_num, receivers) =

if (null(receivers))

then SKIP

else

((let

(port_idx, port_type) = head(receivers)

within

if (port_type == port_QUEUING_PORT)

then AFDX_com_flow_message_add_queuing_port.msg_num.port_idx -> SKIP

else if (port_type == port_SAMPLING_PORT)

then AFDX_com_flow_message_add_sampling_port.msg_num.port_idx -> SKIP

else if (port_type == port_BUFFER)

then AFDX_com_flow_message_add_buffer.msg_num.port_idx -> SKIP

else if (port_type == port_BLACKBOARD)

then AFDX_com_flow_message_add_blackboard.msg_num.port_idx -> SKIP

else SKIP) ;

AFDX_COM_FLOW_SET_RECEIVERS(msg_num, tail(receivers)) )

-- send the previously generated communication flow message

-- Note: It is possible to generate a message once and re-send it several times.

AFDX_COM_FLOW_SEND_MESSAGE(msg_num, port) =

(

AFDX_com_flow_send_message.msg_num.port -> SKIP

)

-- receive a communication flow message

-- Note: The test application can listen on one port only!

AFDX_COM_FLOW_RECEIVE_MESSAGE(port, message_state_correct, message_size,

message_seq) =

(

if (message_state_correct)

then

WAITFORSEQ(TM_COM_FLOW,

<AFDX_com_flow_receive_message.port.ok,

AFDX_com_flow_receive_message_msg_size.port.message_size,

AFDX_com_flow_receive_message_seq_id.port.message_seq>)

else

-- wait for "return code" (i.e., incorrect message state) only since


-- the remaining parameters are not trustworthy and therefore not

-- extracted by the IFM

WAITFORSEQ(TM_COM_FLOW,

<AFDX_com_flow_receive_message.port.ok>)

)

-- start communication flow scenario in an involved test application process

-- (TA process listens on all ports in the sequence port_seq which consists

-- of tuples containing the port index and the port type)

-- Note: The port sequence cannot contain more than five tuples since

-- the scenario is started using a normal scenario command message

-- which contains only 10 parameters.

START_CF_SCENARIO(tapid, port_seq) =

(

-- add the first listening port tuple to the scenario command message and

-- then continue recursively

START_CF_SCENARIO1(tapid, 1, port_seq) ;

-- trigger the scenario start

-- Note: Identifier of the communication flow scenario is 40.

SCENARIO_activate.tapid.40 ->

SKIP

)

-- add listening port tuple ’num’ of the port sequence to the scenario command

-- message

START_CF_SCENARIO1(tapid, num, port_seq) =

(

if (null(port_seq))

then (-- no further listening ports have to be set

if (num <= 9)

then SCENARIO_set_parameter.tapid.num.0 -> SKIP

else SKIP )

else (

let

-- extract the tuple from the sequence

(port_idx, port_type) = head(port_seq)

i_port_type = if (port_type == port_QUEUING_PORT) then 0

else if (port_type == port_SAMPLING_PORT) then 1

else if (port_type == port_BUFFER) then 2

else if (port_type == port_BLACKBOARD) then 3

else -1

within

(-- set the port index

(if (member(port_idx, scen_parameter_value_t))

then SCENARIO_set_parameter.tapid.num.port_idx -> SKIP

else SCENARIO_set_ext_parameter.tapid.num.port_idx -> SKIP) ;

-- set the port type

SCENARIO_set_parameter.tapid.num+1.i_port_type -> SKIP

) ;

-- continue recursively

START_CF_SCENARIO1(tapid, num + 2, tail(port_seq))

)

)
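
-- Example (added for illustration; the TA process ID and port indices are
-- hypothetical): let the TA process listen on queuing port 1 and buffer 2:
--
--   START_CF_SCENARIO(mod_1.part_2.proc_1, <(1, port_QUEUING_PORT),
--                                           (2, port_BUFFER)>)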

--------------------------------------------------------------------------------

--


B.3.3 General CSP Macros

IMA_macros.csp:

--------------------------------------------------------------------------------

-- general CSP macros

--------------------------------------------------------------------------------

-- process RUN(M) accepts all events from set M

RUN(M) = ([] e:M @ e -> RUN(M))

-- extended RUN-process which is superior to RUN(M) because it can handle large

-- sets of events

-- Note: To use ACCEPT(M), it is necessary to insert a channel declaration

-- ’channel RUNany’ in the pragma AM_INTERNAL section of the test

-- specification.

ACCEPT(M) = (let R = RUNany -> R

within R[[ RUNany <- x | x<-M ]])

-- wait for duration t

WAIT(t) = setTimer!t -> elapsedTimer.t -> SKIP

-- Wait for all events from set M to occur in arbitrary order within

-- time duration defined by timer t. If events do not occur in time,

-- warning.t is generated.

-- Note: The warning channel must be declared in the pragma AM_WARNING section

-- of the CSP specification.

WAITFOR(t,M) = setTimer!t -> WF(t,M)

WF(t,M) =

not(empty(M)) & elapsedTimer.t -> warning.t -> SKIP

[]

not(empty(M)) & ([] e:M @ e -> WF(t,diff(M,{e})))

[]

empty(M) & SKIP
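
-- Example (added for illustration; timer and event names are hypothetical):
-- expect ev_a and ev_b in arbitrary order before timer t_CHECK elapses:
--
--   WAITFOR(t_CHECK, {ev_a, ev_b})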

-- Wait for all events from set M to occur once in arbitrary order.

WAITFOR_NO_TIMEOUT(M) = WF_NT(M)

WF_NT(M) =

not(empty(M)) & ([] e:M @ e -> WF_NT(diff(M,{e})))

[]

empty(M) & SKIP

-- Wait for all events from sequence S to occur in given order within

-- time duration defined by timer t. If events do not occur in time,

-- warning.t is generated.

-- Note1: The warning channel must be declared in the pragma AM_WARNING section

-- of the CSP specification.

--

-- Note2: If the order of specific events is determined, WAITFORSEQ can be used

-- instead of the WAITFOR macro and produces a much smaller transition system.

WAITFORSEQ(t,S) = setTimer!t -> WFSEQ(t,S)

WFSEQ(t,S) =

not(null(S)) & elapsedTimer.t -> warning.t -> SKIP

[]

not(null(S)) & (head(S) -> WFSEQ(t,tail(S)))

[]

null(S) & SKIP
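
-- Example (added for illustration; this is the pattern used by the check
-- macros above): wait for the expected return code of a GET_TIME call
-- within the return value timeout TM_RETVAL:
--
--   WAITFORSEQ(TM_RETVAL, <API_out_GET_TIME_ret_code.tapid.ret_code>)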

-- Wait for all events from sequence S to occur in given order within

-- time duration defined by timer t. If events do not occur in time,

-- warning.t is generated. Sequence S is a sequence of sets and from each

-- set exactly one event has to occur.

-- Note: The warning channel must be declared in the pragma AM_WARNING section

-- of the CSP specification.


WAITFORSEQ_SINGLE_EVENT(t,S) = setTimer!t -> WFSEQSE(t,S)

WFSEQSE(t,S) =

not(null(S)) & elapsedTimer.t -> warning.t -> SKIP

[]

not(null(S)) & ([] e:head(S) @ e -> WFSEQSE(t,tail(S)))

[]

null(S) & SKIP

-- Wait for a single event from set M within time duration defined by

-- timer t. If events do not occur in time, warning.t is generated.

-- Note: The warning channel must be declared in the pragma AM_WARNING section

-- of the CSP specification.

WAITFORSINGLE_EV(t,M) = setTimer!t -> WFS(t,M)

WFS(t,M) =

not(empty(M)) & elapsedTimer.t -> warning.t -> SKIP

[]

not(empty(M)) & ([] e:M @ e -> SKIP)

[]

empty(M) & SKIP

-- Wait for a single event from set M without any time restriction.

WAITFORSINGLE_EV_NO_TIMEOUT(M) =

not(empty(M)) & ([] e:M @ e -> SKIP)

[]

empty(M) & SKIP

-- Execute a random sequence of events from set M; each event is used exactly

-- once; a wait of duration t is inserted between any two consecutive events.

RANDOM_TRACE(t,M) =

(if ( empty(M) )

then SKIP

else (|~| e:M @ WAIT(t); e -> RANDOM_TRACE(t,diff(M,{e}))))
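
-- Example (added for illustration; timer and event names are hypothetical):
-- perform ev_a, ev_b and ev_c exactly once each in random order, waiting
-- t_GAP between any two of them:
--
--   RANDOM_TRACE(t_GAP, {ev_a, ev_b, ev_c})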

-- Check for occurrence of a set of events within time interval t1,t2.

-- A warning event will be generated if one of the events occurs too early,

-- another warning if the event occurs too late.

-- After the too-late-warning the process terminates without waiting further

-- for the remaining events.

-- Note: The warning channels must be declared in the pragma AM_WARNING section

-- of the CSP specification.

WAITFOR_WITHIN(t1,t2,EVENT_SET,earlyWarning,lateWarning) =

setTimer!t1 -> setTimer!t2 -> WFW1(t1,t2,EVENT_SET,earlyWarning,lateWarning)

WFW1(t1,t2,M,earlyWarning,lateWarning) =

not(empty(M))&([] e:M @

e -> earlyWarning

-> WFW1(t1,t2,

diff(M,{e}),

earlyWarning,lateWarning))

[]

not(empty(M))&elapsedTimer.t1 -> WFW2(t1,t2,M,earlyWarning,lateWarning)

[]

empty(M)& SKIP

WFW2(t1,t2,M,earlyWarning,lateWarning) =

not(empty(M))&([] e:M @

e -> WFW2(t1,t2,

diff(M,{e}),

earlyWarning,lateWarning))

[]

not(empty(M))&elapsedTimer.t2 -> lateWarning -> SKIP

[]

empty(M)& SKIP
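
-- Example (added for illustration; all names are hypothetical): ev_done
-- must occur after timer t_MIN has elapsed but before timer t_MAX elapses;
-- otherwise warn_early or warn_late is generated:
--
--   WAITFOR_WITHIN(t_MIN, t_MAX, {ev_done}, warn_early, warn_late)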

-- Check that no events defined in the set M occur within a given time


-- interval denoted by timer t. If an event of the set occurs, it is

-- rejected and an error event is generated.

-- Note: The error channel must be declared in the pragma AM_ERROR section of

-- the test specification.

REJECT(t,M,er) = setTimer!t -> RJ(t,M,er)

RJ(t,M,er) =

not(empty(M)) & elapsedTimer.t -> SKIP

[]

not(empty(M)) & ([] e:M @ e -> er -> RJ(t,M,er))

[]

empty(M) & SKIP
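
-- Example (added for illustration; all names are hypothetical): no message
-- may arrive on the spare port while timer t_SILENT is running:
--
--   REJECT(t_SILENT, {ev_spare_port_msg}, error_unexpected_msg)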

-- ==============================================================================

--

-- Sequence access functions

--

-- Sequence access function: get (n)th element of sequence s

--

-- simulates array access s[n] (with n from 1 to ...).

-- Note: Reading the (n+1)th element of a sequence with n elements causes a

-- compilation error and is therefore not explicitly checked, since

-- the return value in case of such an error is unclear.

get_nth_elem(n, s) =

if (n==1)

then head(s)

else get_nth_elem(n-1, tail(s))
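
-- Example (added for illustration):
--   get_nth_elem(2, <5,7,9>) == 7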

-- Sequence access function: get_matching_elem(value, seq1, seq2)

--

-- determines the element number n of ’value’ in ’seq1’ (i.e., seq1[n] == value)

-- and returns seq2[n]

--

-- Precondition: elem(value, seq1) and (length(seq1) <= length(seq2))

--

-- Example: get_matching_elem(4,<3,2,4,7,5>,<a,b,c,d,e>) == c

get_matching_elem(value, seq1, seq2) =

if (head(seq1) == value)

then head(seq2)

else get_matching_elem(value, tail(seq1), tail(seq2))

-- Sequence access function: get_highest_valid_val (seq, value)

--

-- returns the highest value of the non-empty ordered sequence ’seq’ which is

-- less than or equal to value (value is the upper bound).

-- Function is used to get a possible value of the sequence of restricted values

-- which is as close as possible to ’value’. For example,

-- get_highest_valid_val(sampling_port_size_seq,

-- head(IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE))

-- determines the highest value that can be communicated when setting the

-- message size for the first defined sampling port.

-- Note: The sequence has to contain at least one element which is less than or

-- equal to value. Otherwise, a compile error occurs.

get_highest_valid_val(seq, value) =

-- check if head is only element of the given list

if (null(tail(seq)))

then -- head represents largest value within seq

(if (head(seq) <= value)

then head(seq)

else head(tail(seq)) -- this produces an error during compile time

-- and thus indicates that there is no valid

-- value less than or equal to ’value’.

)

else -- check if next element is larger than value

if (head(tail(seq)) > value)

then -- head represents largest possible value within seq

head(seq)


else -- continue checking recursively

get_highest_valid_val(tail(seq),value)
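
-- Example (illustrative): with the ordered sequence <8,16,32,64> and upper
-- bound 40, get_highest_valid_val(<8,16,32,64>, 40) evaluates to 32.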

-- Sequence access function: get_lowest_valid_val (seq, value)

--

-- returns the lowest value of the non-empty ordered sequence ’seq’ which is

-- greater than or equal to ’value’ (value is the lower bound).

-- Note: The sequence has to contain at least one element which is greater

-- than or equal to value. Otherwise, a compile error occurs.

get_lowest_valid_val(seq, value) =
   -- check if head is the only element of the given list
   if (null(tail(seq)))
   then -- head represents the lowest value within seq
        (if (head(seq) >= value)
         then head(seq)
         else head(tail(seq)) -- this produces an error during compile time
                              -- and thus indicates that there is no valid
                              -- value greater than or equal to 'value'.
        )
   else -- check if the current element is greater than or equal to value
        if (head(seq) >= value)
        then -- head represents the lowest possible value within seq
             head(seq)
        else -- continue checking recursively
             get_lowest_valid_val(tail(seq), value)
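
-- Example (illustrative): with the ordered sequence <8,16,32,64> and lower
-- bound 40, get_lowest_valid_val(<8,16,32,64>, 40) evaluates to 64.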

-- ==============================================================================

--

-- Set access functions

--

-- Set access function: get_highest_valid_elem (set, value)

--

-- returns the highest value of the non-empty set ’set’ which is less than or

-- equal to value (value is the upper bound).

-- Note: The set has to contain at least one element which is less than or

-- equal to value.

get_highest_valid_elem(set, value) =

-- check if value is element of set

if (member(value,set))

then value

else

-- check if value-1 is element of set (continue recursively)

get_highest_valid_elem(set, value-1)
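
-- Example (illustrative): get_highest_valid_elem({8,16,32}, 20) == 16; the
-- recursion probes 20, 19, ..., 16 until a set member is found.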

-- Set access function: get_lowest_valid_elem (set, value)

--

-- returns the lowest value of the non-empty set ’set’ which is greater than or

-- equal to value (value is the lower bound).

-- Note: The set has to contain at least one element which is greater than or

-- equal to value.

get_lowest_valid_elem(set, value) =

-- check if value is element of set

if (member(value,set))

then value

else

-- check if value+1 is element of set (continue recursively)

get_lowest_valid_elem(set, value+1)
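
-- Example (illustrative): get_lowest_valid_elem({8,16,32}, 20) == 32; the
-- recursion probes 20, 21, ..., 32 until a set member is found.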

-- ==============================================================================

--

-- Other helper functions

--

-- generate an ordered sequence from a non-empty set
--
-- set:     non-empty set
-- min_val: lowest value to be added to the sequence if member of the set
-- max_val: highest value to be added to the sequence if member of the set

ordered_seq (set, min_val, max_val) =

if (min_val > max_val)

then -- end of recursion

<>

else -- check if current value ’min_val’ shall be added

if (member(min_val, set))

then -- use min_val and continue recursively

<min_val>^ordered_seq(set, min_val+1, max_val)

else -- continue recursively

ordered_seq(set, min_val+1, max_val)
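
-- Example (illustrative): ordered_seq({3,1,4}, 1, 4) == <1,3,4>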

--------------------------------------------------------------------------------

-- sum of sequence elements

--

-- seq: sequence of integers

sum_of_seq (seq) =

if (null (seq))

then 0

else sum_of_seq1 (0, seq)

-- sum of sequence elements

-- base: base value of sum

-- seq: sequence of integers

sum_of_seq1 (base, seq) =

if (null (seq))

then -- end of recursion

base

else -- continue recursively

sum_of_seq1( (base+head(seq)), tail(seq))
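
-- Example (illustrative): sum_of_seq(<2,4,6>) == 12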

--------------------------------------------------------------------------------

-- calculate the partition duration based on the extracts from the

-- scheduling configuration

-- The partition duration is the sum of all configured scheduling window

-- durations.

-- Note: A partition duration of 0 indicates that the configuration of the
--       partition is incorrect and inconsistent.

part_duration = part_duration1(IMA_Conf_SEQ_SCHED_WINDOW_OFFSET,

IMA_Conf_SEQ_SCHED_WINDOW_DURATION)

part_duration1(sched_window_offset_seq, sched_window_duration_seq) =

let

sched_window_offset = head(sched_window_offset_seq)

sched_window_duration = head(sched_window_duration_seq)

within

if (sched_window_offset >= IMA_Conf_PARTITION_PERIOD)

then 0

else sched_window_duration + part_duration1(tail(sched_window_offset_seq),

tail(sched_window_duration_seq))
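
-- Example (hypothetical values): with IMA_Conf_PARTITION_PERIOD = 500, an
-- offset sequence <0, 500> and a duration sequence <200, 0>, the first window
-- contributes 200 and the recursion stops at the second entry because its
-- offset reaches the partition period; part_duration then evaluates to 200.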

--------------------------------------------------------------------------------


Appendix C

IMA Configuration Library – Examples

C.1 Configuration TEMPLATE01

The configuration library contains a set of configuration templates which can be used as the basis for other configurations by copying the configuration tables of the template into the new configuration. Configuration templates consist either of a complete set of configuration tables on module and partition level (usually without specifying the command and result ports of the avionics partitions), or provide only a subset of the configuration tables. The template configuration TEMPLATE01 provides all configuration tables on module level as well as the partition-level configuration tables for two avionics partitions. The partition configurations are very basic in that no communication links, busses, lines, and API ports are specified for communication.

The configuration tables are depicted in table format. Table C.1 and Table C.2 provide the module-level configuration tables. The partition-level configuration tables for avionics partition 1 are shown in Table C.3 and Table C.4, and for avionics partition 2 in Table C.5 and Table C.6. The configuration tables for the system partitions are not provided.

The equivalent csv format with a semicolon as the column separator is omitted because the csv format is not easily readable by humans.


GLOBAL_DATA
PARTITION_NB | SYS_PARTITION_NB | MAF_DURATION | CACHE_CONFIG | RAM_BEGIN | RAM_SIZE | CFG_AREA_BEGIN | CFG_AREA_SIZE | MAC_ADDRESS | MODULE_LOCATION
2 | 4 | 500 | ... | 0x00D00000 | 13631488 | 0x00600000 | 2097152 | ... | ...

HM_SYSTEM
ERROR_SOURCE | RECOVERY_LEVEL
CONFIG_ERROR | module
INIT_ERROR | module
DEADLINE_MISSED | partition
APPLICATION_ERROR | partition
NUMERIC_ERROR | partition
ILLEGAL_REQUEST | partition
STACK_OVERFLOW | partition
MEMORY_VIOLATION | partition
HARDWARE_FAULT | module
POWER_FAIL | module

HM_MODULE
ERROR_SOURCE | RECOVERY_ACTION
CONFIG_ERROR | reset
INIT_ERROR | reset
DEADLINE_MISSED | reset
APPLICATION_ERROR | reset
NUMERIC_ERROR | reset
ILLEGAL_REQUEST | reset
STACK_OVERFLOW | reset
MEMORY_VIOLATION | reset
HARDWARE_FAULT | reset
POWER_FAIL | reset

AFDX_OUTPUT_VL
VL_NAME | VL_ID | NETWORK | PORT_ID | PORT_CHARAC | PORT_TRANS_TYPE | VL_DATA | IP_ADDR | UDP_PORT
(no entries)

AFDX_INPUT_VL
VL_NAME | VL_ID | NETWORK | PORT_ID | PORT_CHARAC | VL_DATA | IP_ADDR | UDP_PORT
(no entries)

Table C.1: TEMPLATE01: Module-level configuration tables


A429_OUTPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

A429_INPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

CAN_OUTPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

CAN_INPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

DISCRETE_OUTPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

DISCRETE_INPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

ANALOGUE_OUTPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

ANALOGUE_INPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

Table C.2: TEMPLATE01: Module-level configuration tables (continued)


GLOBAL_PARTITION_DATA
PARTITION_ID | PARTITION_NAME | APPLICATION_NAME | APPLICATION_ID | CACHE_CONFIG | MAIN_STACK_SIZE | MAIN_ADDR | PROCESS_STACK_SIZE | MMU_CONFIG
1 | PARTITION_1 | TA | 1 | ... | 32768 | main | 524288 | ...

TEMPORAL_ALLOCATION
PARTITION_ID | PARTITION_PERIOD | SCHED_WINDOW_POS | SCHED_WINDOW_OFFSET | SCHED_WINDOW_DURATION
1 | 500 | 0 | 0 | 200

SPATIAL_ALLOCATION
PARTITION_ID | CODE_AREA_BEGIN | CODE_AREA_SIZE | DATA_AREA_BEGIN | DATA_AREA_SIZE
1 | 0x03600000 | 1048576 | 0x01900000 | 2097152

HM_PARTITION
PARTITION_ID | ERROR_SOURCE | RECOVERY_ACTION | HANDLER_RECOVERY
1 | CONFIG_ERROR | cold restart | false
1 | INIT_ERROR | cold restart | false
1 | DEADLINE_MISSED | cold restart | true
1 | APPLICATION_ERROR | cold restart | true
1 | NUMERIC_ERROR | cold restart | true
1 | ILLEGAL_REQUEST | cold restart | true
1 | STACK_OVERFLOW | cold restart | true
1 | MEMORY_VIOLATION | cold restart | true
1 | HARDWARE_FAULT | cold restart | false
1 | POWER_FAIL | cold restart | false

AFDX_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
(no entries)

AFDX_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
(no entries)

Table C.3: TEMPLATE01: Partition-level configuration tables for partition 1


A429_OUTPUT_LABEL
PARTITION_ID | ASSOCIATED_A429_BUS | A429_LABEL_NAME | A429_LABEL_NUMBER | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

A429_INPUT_LABEL
PARTITION_ID | ASSOCIATED_A429_BUS | A429_LABEL_NAME | A429_LABEL_NUMBER | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

CAN_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_CAN_BUS | CAN_MSG_NAME | CAN_MSG_ID | CAN_MSG_PAYLOAD | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

CAN_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_CAN_BUS | CAN_MSG_NAME | CAN_MSG_ID | CAN_MSG_PAYLOAD | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

DISCRETE_OUTPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | DEFAULT_VALUE
(no entries)

DISCRETE_INPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | LOGIC
(no entries)

ANALOGUE_OUTPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | TYPE_DATA
(no entries)

ANALOGUE_INPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | TYPE_DATA
(no entries)

OUTPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
(no entries)

INPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
(no entries)

Table C.4: TEMPLATE01: Partition-level configuration tables for partition 1 (continued)


GLOBAL_PARTITION_DATA
PARTITION_ID | PARTITION_NAME | APPLICATION_NAME | APPLICATION_ID | CACHE_CONFIG | MAIN_STACK_SIZE | MAIN_ADDR | PROCESS_STACK_SIZE | MMU_CONFIG
2 | PARTITION_2 | TA | 1 | ... | 32768 | main | 524288 | ...

TEMPORAL_ALLOCATION
PARTITION_ID | PARTITION_PERIOD | SCHED_WINDOW_POS | SCHED_WINDOW_OFFSET | SCHED_WINDOW_DURATION
2 | 500 | 1 | 200 | 200

SPATIAL_ALLOCATION
PARTITION_ID | CODE_AREA_BEGIN | CODE_AREA_SIZE | DATA_AREA_BEGIN | DATA_AREA_SIZE
2 | 0x03700000 | 1048576 | 0x01B00000 | 2097152

HM_PARTITION
PARTITION_ID | ERROR_SOURCE | RECOVERY_ACTION | HANDLER_RECOVERY
2 | CONFIG_ERROR | cold restart | false
2 | INIT_ERROR | cold restart | false
2 | DEADLINE_MISSED | cold restart | true
2 | APPLICATION_ERROR | cold restart | true
2 | NUMERIC_ERROR | cold restart | true
2 | ILLEGAL_REQUEST | cold restart | true
2 | STACK_OVERFLOW | cold restart | true
2 | MEMORY_VIOLATION | cold restart | true
2 | HARDWARE_FAULT | cold restart | false
2 | POWER_FAIL | cold restart | false

AFDX_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
(no entries)

AFDX_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
(no entries)

Table C.5: TEMPLATE01: Partition-level configuration tables for partition 2


A429_OUTPUT_LABEL
PARTITION_ID | ASSOCIATED_A429_BUS | A429_LABEL_NAME | A429_LABEL_NUMBER | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

A429_INPUT_LABEL
PARTITION_ID | ASSOCIATED_A429_BUS | A429_LABEL_NAME | A429_LABEL_NUMBER | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

CAN_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_CAN_BUS | CAN_MSG_NAME | CAN_MSG_ID | CAN_MSG_PAYLOAD | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

CAN_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_CAN_BUS | CAN_MSG_NAME | CAN_MSG_ID | CAN_MSG_PAYLOAD | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

DISCRETE_OUTPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | DEFAULT_VALUE
(no entries)

DISCRETE_INPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | LOGIC
(no entries)

ANALOGUE_OUTPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | TYPE_DATA
(no entries)

ANALOGUE_INPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | TYPE_DATA
(no entries)

OUTPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
(no entries)

INPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
(no entries)

Table C.6: TEMPLATE01: Partition-level configuration tables for partition 2 (continued)


C.2 Configuration Config0001

Configuration Config0001 is a small configuration based on TEMPLATE01 which contains, in addition, the necessary command and result ports for each avionics partition. This is reflected in the configuration rules file (provided in Appendix C.2.1) which is used by the configuration data generator to generate configuration tables. The resulting configuration tables are provided in Appendix C.2.2. The test-relevant configuration data extracts for this configuration are provided in Appendix C.2.3.

C.2.1 Configuration Rules

Config0001/rules.igr:

#-------------------------------------------------------------------------------

# Begin Configuration Template

# use a configuration template

COPY CONFIG "TEMPLATE01"

# End Configuration Template

#-------------------------------------------------------------------------------

# Begin Template Definition

# End Template Definition

#-------------------------------------------------------------------------------

# Begin Partition Creation/Deletion

# End Partition Creation/Deletion

#-------------------------------------------------------------------------------

# Begin Port Creation/Deletion

# generate command ports for all partitions

GENERATE COMMAND PORTS

# End Port Creation/Deletion

#-------------------------------------------------------------------------------

# Begin Partition Modifications

# End Partition Modifications

#-------------------------------------------------------------------------------

# Begin Port Modifications

# End Port Modifications

#-------------------------------------------------------------------------------

C.2.2 Resulting Configuration Tables

To focus on the differences between TEMPLATE01 and Config0001, Table C.7, Table C.8 and Table C.9 only show those configuration tables which have changed with respect to the configuration tables of TEMPLATE01.


AFDX_OUTPUT_VL
VL_NAME | VL_ID | NETWORK | PORT_ID | PORT_CHARAC | PORT_TRANS_TYPE | VL_DATA | IP_ADDR | UDP_PORT
RESULT_VL_P1 | 10018 | A&B | 10018 | queuing | multicast | ... | ... | ...
RESULT_VL_P2 | 10020 | A&B | 10020 | queuing | multicast | ... | ... | ...

AFDX_INPUT_VL
VL_NAME | VL_ID | NETWORK | PORT_ID | PORT_CHARAC | VL_DATA | IP_ADDR | UDP_PORT
COMMAND_VL_P1 | 10019 | A&B | 10019 | queuing | ... | ... | ...
COMMAND_VL_P2 | 10021 | A&B | 10021 | queuing | ... | ... | ...

Table C.7: Config0001: Module-level configuration tables (only changes with respect to Table C.1)


AFDX_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
1 | RESULT_VL_P1 | 10018 | ...

AFDX_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
1 | COMMAND_VL_P1 | 10019 | ...

OUTPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
1 | RESULT_PORT | queuing | 56 | 10 | AFDX | 10018 | ...

INPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
1 | COMMAND_PORT | queuing | 56 | 10 | AFDX | 10019 | ...

Table C.8: Config0001: Partition-level configuration tables for partition 1 (only changes with respect to Table C.3)


AFDX_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
2 | RESULT_VL_P2 | 10020 | ...

AFDX_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
2 | COMMAND_VL_P2 | 10021 | ...

OUTPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
2 | RESULT_PORT | queuing | 56 | 10 | AFDX | 10020 | ...

INPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
2 | COMMAND_PORT | queuing | 56 | 10 | AFDX | 10021 | ...

Table C.9: Config0001: Partition-level configuration tables for partition 2 (only changes with respect to Table C.5)


C.2.3 Resulting Test-Relevant Configuration Data Extracts

C.2.3.1 Extract of Global Configuration Data

Config0001/1/config_extracts/IMA_Conf_Globals.csp:

--------------------------------------------------------------------------------

-- extract of global configuration data

-- to be used by all test specifications of a test procedure

-- (contains no configuration data relevant only for one partition)

--------------------------------------------------------------------------------

-- memory settings for configuration area

IMA_Conf_CFG_AREA_BEGIN = 6291456

IMA_Conf_CFG_AREA_SIZE = 2097152
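
-- (6291456 == 0x00600000 and 2097152 == 0x00200000; these match the
--  CFG_AREA_BEGIN and CFG_AREA_SIZE columns of Table C.1)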

-- global scheduling settings

IMA_Conf_MAF_DURATION = 500

-- set of avionics partition identifiers

IMA_Conf_SET_AVIONICS_PARTITIONS = { 1, 2 }

-- sequence of sets grouping the partitions belonging to the same application

IMA_Conf_SEQ_APPL_PART_ASSIGN = < { 1, 2 } >

C.2.3.2 Extract of Partition 1 Configuration Data

Config0001/1/config_extracts/IMA_Conf_PT1.csp:

--------------------------------------------------------------------------------
-- extract of partition-relevant configuration data
-- to be used by those test specifications of a test procedure
-- dealing with the respective partition
-- (contains no configuration data relevant for all partitions)
--------------------------------------------------------------------------------

IMA_Conf_APPLICATION_ID = 1

-- APPLICATION_NAME: TA

IMA_Conf_PARTITION_ID = 1

IMA_Conf_PARTITION_INDEX = 1

-- PARTITION_NAME: PARTITION_1

IMA_Conf_PROCESS_STACK_SIZE = 524288

IMA_Conf_MAIN_STACK_SIZE = 32768

-- temporal allocation

IMA_Conf_PARTITION_PERIOD = 500

IMA_Conf_SEQ_SCHED_WINDOW_POS = < 0 >

IMA_Conf_SEQ_SCHED_WINDOW_OFFSET = < 0 >

IMA_Conf_SEQ_SCHED_WINDOW_DURATION = < 200 >

-- spatial allocation (code)

IMA_Conf_CODE_AREA_BEGIN = 56623104

IMA_Conf_CODE_AREA_SIZE = 1048576

IMA_Conf_CODE_AREA_BEGIN_TUPLE = < (3, 96, 0, 0) >

IMA_Conf_CODE_AREA_SIZE_TUPLE = < (0, 16, 0, 0) >

-- spatial allocation (data)

IMA_Conf_DATA_AREA_BEGIN = 26214400

IMA_Conf_DATA_AREA_SIZE = 2097152

IMA_Conf_DATA_AREA_BEGIN_TUPLE = < (1, 144, 0, 0) >

IMA_Conf_DATA_AREA_SIZE_TUPLE = < (0, 32, 0, 0) >
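
-- Note: the *_TUPLE sequences appear to encode the same values as four
-- 8-bit components, most significant byte first; e.g. (3, 96, 0, 0)
-- corresponds to 0x03600000 == 56623104 == IMA_Conf_CODE_AREA_BEGIN.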

--------------------------------------------------------------------------------

-- API ports

--------------------------------------------------------------------------------

-- sampling ports


IMA_Conf_NUM_SAMPLING_PORTS = 0

IMA_Conf_SET_SAMPLING_PORT_INDICES = { }

IMA_Conf_SEQ_SAMPLING_PORT_INDICES = < >

IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE = < >

IMA_Conf_SEQ_SAMPLING_PORT_DIRECTIONS = < >

-- queuing ports including command ports

IMA_Conf_NUM_ALL_QUEUING_PORTS = 2

IMA_Conf_SET_ALL_QUEUING_PORT_INDICES = { 1, 2 }

IMA_Conf_SEQ_ALL_QUEUING_PORT_INDICES = < 1, 2 >

IMA_Conf_SEQ_ALL_QUEUING_PORTS_MAX_MSG_SIZE = < 56, 56 >

IMA_Conf_SEQ_ALL_QUEUING_PORTS_MAX_NB_MSG = < 10, 10 >

IMA_Conf_SEQ_ALL_QUEUING_PORT_DIRECTIONS = < dir_DESTINATION, dir_SOURCE >
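
-- (queuing port index 1 (dir_DESTINATION) corresponds to the command port
--  and index 2 (dir_SOURCE) to the result port; cf. the INPUT_DATA and
--  OUTPUT_DATA rows of Table C.8)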

-- queuing ports without command ports

IMA_Conf_NUM_QUEUING_PORTS = 0

IMA_Conf_SET_QUEUING_PORTS = { }

IMA_Conf_SEQ_QUEUING_PORT_INDICES = < >

IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE = < >

IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG = < >

IMA_Conf_SEQ_QUEUING_PORT_DIRECTIONS = < >

--------------------------------------------------------------------------------

-- RAM

--------------------------------------------------------------------------------

-- sequence of RAM output sampling ports (sequence of port indices)

IMA_Conf_SEQ_RAM_SP_OUT = < >

-- sequence of RAM sampling output port communication partner tuples

-- (for all ports in IMA_Conf_SEQ_RAM_SP_OUT)

IMA_Conf_SEQ_RAM_SP_OUT_CONFIG = < >

-- sequence of RAM output queuing ports (sequence of port indices)

IMA_Conf_SEQ_RAM_QP_OUT = < >

-- sequence of RAM queuing output port communication partner tuples

-- (for all ports in IMA_Conf_SEQ_RAM_QP_OUT)

IMA_Conf_SEQ_RAM_QP_OUT_CONFIG = < >

-- sequence of RAM input sampling ports (sequence of port indices)

IMA_Conf_SEQ_RAM_SP_IN = < >

-- sequence of RAM input queuing ports (sequence of port indices)

IMA_Conf_SEQ_RAM_QP_IN = < >

--------------------------------------------------------------------------------

-- AFDX

--------------------------------------------------------------------------------

-- sequence of AFDX output sampling ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_SP_OUT = < >

-- sequence of AFDX output message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_OUT)

IMA_Conf_SEQ_AFDX_SP_OUT_CONFIG = < >

-- sequence of configured output VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_OUT)

IMA_Conf_SEQ_AFDX_SP_OUT_VL = < >

-- sequence of AFDX output queuing ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_QP_OUT = < 2 >

-- sequence of AFDX output message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_OUT)

IMA_Conf_SEQ_AFDX_QP_OUT_CONFIG = < (10018, net_A_B, trans_multicast) >

-- sequence of configured output VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_OUT)

IMA_Conf_SEQ_AFDX_QP_OUT_VL = < 10018 >


-- sequence of AFDX input sampling ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_SP_IN = < >

-- sequence of AFDX input message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_IN)

IMA_Conf_SEQ_AFDX_SP_IN_CONFIG = < >

-- sequence of configured input VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_IN)

IMA_Conf_SEQ_AFDX_SP_IN_VL = < >

-- sequence of AFDX input queuing ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_QP_IN = < 1 >

-- sequence of AFDX input message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_IN)

IMA_Conf_SEQ_AFDX_QP_IN_CONFIG = < (10019, net_A_B) >

-- sequence of configured input VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_IN)

IMA_Conf_SEQ_AFDX_QP_IN_VL = < 10019 >

-- set of all configured AFDX output ports including command ports

IMA_Conf_SET_ALL_AFDX_OUT_IDS = { 10018 }

-- set of all configured AFDX output ports without command ports

IMA_Conf_SET_AFDX_OUT_IDS = { }

-- set of all configured AFDX input ports including command ports

IMA_Conf_SET_ALL_AFDX_IN_IDS = { 10019 }

-- set of all configured AFDX input ports without command ports

IMA_Conf_SET_AFDX_IN_IDS = { }

--------------------------------------------------------------------------------

-- A429

--------------------------------------------------------------------------------

-- sequence of A429 output sampling ports (sequence of port indices)

IMA_Conf_SEQ_A429_OUT = < >

-- sequence of A429 output label configuration tuples

-- (for all ports in IMA_Conf_SEQ_A429_OUT)

IMA_Conf_SEQ_A429_OUT_CONFIG = < >

-- set of A429 output busses

-- (for all ports in IMA_Conf_SEQ_A429_OUT)

IMA_Conf_SET_A429_OUT_BUSSES = { }

-- sequence of A429 input sampling ports (sequence of port indices)

IMA_Conf_SEQ_A429_IN = < >

-- sequence of A429 input label configuration tuples

-- (for all ports in IMA_Conf_SEQ_A429_IN)

IMA_Conf_SEQ_A429_IN_CONFIG = < >

-- set of A429 input busses

-- (for all ports in IMA_Conf_SEQ_A429_IN)

IMA_Conf_SET_A429_IN_BUSSES = { }

--------------------------------------------------------------------------------

-- CAN

--------------------------------------------------------------------------------

-- sequence of CAN output sampling ports (sequence of port indices)

IMA_Conf_SEQ_CAN_OUT = < >

-- sequence of CAN output label configuration tuples

-- (for all ports in IMA_Conf_SEQ_CAN_OUT)

IMA_Conf_SEQ_CAN_OUT_CONFIG = < >


-- set of CAN output busses

-- (for all ports in IMA_Conf_SEQ_CAN_OUT)

IMA_Conf_SET_CAN_OUT_BUSSES = { }

-- sequence of CAN input sampling ports (sequence of port indices)

IMA_Conf_SEQ_CAN_IN = < >

-- sequence of CAN input label configuration tuples

-- (for all ports in IMA_Conf_SEQ_CAN_IN)

IMA_Conf_SEQ_CAN_IN_CONFIG = < >

-- set of CAN input busses

-- (for all ports in IMA_Conf_SEQ_CAN_IN)

IMA_Conf_SET_CAN_IN_BUSSES = { }

--------------------------------------------------------------------------------

-- DISCRETE

--------------------------------------------------------------------------------

-- sequence of discrete output sampling ports (sequence of port indices)

IMA_Conf_SEQ_DISC_OUT = < >

-- sequence of discrete output signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_DISC_OUT)

IMA_Conf_SEQ_DISC_OUT_CONFIG = < >

-- sequence of discrete input sampling ports (sequence of port indices)

IMA_Conf_SEQ_DISC_IN = < >

-- sequence of discrete input signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_DISC_IN)

IMA_Conf_SEQ_DISC_IN_CONFIG = < >

--------------------------------------------------------------------------------

-- ANALOGUE

--------------------------------------------------------------------------------

-- sequence of analogue output sampling ports (sequence of port indices)

IMA_Conf_SEQ_ANALOG_OUT = < >

-- sequence of analogue output signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_ANALOG_OUT)

IMA_Conf_SEQ_ANALOG_OUT_CONFIG = < >

-- sequence of analogue input sampling ports (sequence of port indices)

IMA_Conf_SEQ_ANALOG_IN = < >

-- sequence of analogue input signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_ANALOG_IN)

IMA_Conf_SEQ_ANALOG_IN_CONFIG = < >

--------------------------------------------------------------------------------

--

--------------------------------------------------------------------------------

-- process attributes for standard periodic TAs

-- (compliant to partition attributes)

TA_PERIODIC_STACK_SIZE = 32768

TA_PERIODIC_BASE_PRIORITY = 3

TA_PERIODIC_PERIOD = 50000

TA_PERIODIC_TIME_CAPACITY = 50000

TA_PERIODIC_DEADLINE = deadline_HARD

-- process attributes for standard aperiodic TAs

-- (compliant to partition attributes)

TA_APERIODIC_STACK_SIZE = 32768

TA_APERIODIC_BASE_PRIORITY = 3

TA_APERIODIC_PERIOD = -1

TA_APERIODIC_TIME_CAPACITY = 50000


TA_APERIODIC_DEADLINE = deadline_HARD

-- process attributes for standard error handler process

ERR_STACK_SIZE = 32768

C.2.3.3 Extract of Partition 2 Configuration Data

Config0001/1/config_extracts/IMA_Conf_PT2.csp:

--------------------------------------------------------------------------------
-- extract of partition-relevant configuration data
-- to be used by those test specifications of a test procedure
-- dealing with the respective partition
-- (contains no configuration data relevant for all partitions)
--------------------------------------------------------------------------------

IMA_Conf_APPLICATION_ID = 1

-- APPLICATION_NAME: TA

IMA_Conf_PARTITION_ID = 2

IMA_Conf_PARTITION_INDEX = 2

-- PARTITION_NAME: PARTITION_2

IMA_Conf_PROCESS_STACK_SIZE = 524288

IMA_Conf_MAIN_STACK_SIZE = 32768

-- temporal allocation

IMA_Conf_PARTITION_PERIOD = 500

IMA_Conf_SEQ_SCHED_WINDOW_POS = < 1 >

IMA_Conf_SEQ_SCHED_WINDOW_OFFSET = < 200 >

IMA_Conf_SEQ_SCHED_WINDOW_DURATION = < 200 >

-- spatial allocation (code)

IMA_Conf_CODE_AREA_BEGIN = 57671680

IMA_Conf_CODE_AREA_SIZE = 1048576

IMA_Conf_CODE_AREA_BEGIN_TUPLE = < (3, 112, 0, 0) >

IMA_Conf_CODE_AREA_SIZE_TUPLE = < (0, 16, 0, 0) >

-- spatial allocation (data)

IMA_Conf_DATA_AREA_BEGIN = 28311552

IMA_Conf_DATA_AREA_SIZE = 2097152

IMA_Conf_DATA_AREA_BEGIN_TUPLE = < (1, 176, 0, 0) >

IMA_Conf_DATA_AREA_SIZE_TUPLE = < (0, 32, 0, 0) >

--------------------------------------------------------------------------------

-- API ports

--------------------------------------------------------------------------------

-- sampling ports

IMA_Conf_NUM_SAMPLING_PORTS = 0

IMA_Conf_SET_SAMPLING_PORT_INDICES = { }

IMA_Conf_SEQ_SAMPLING_PORT_INDICES = < >

IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE = < >

IMA_Conf_SEQ_SAMPLING_PORT_DIRECTIONS = < >

-- queuing ports including command ports

IMA_Conf_NUM_ALL_QUEUING_PORTS = 2

IMA_Conf_SET_ALL_QUEUING_PORT_INDICES = { 1, 2 }

IMA_Conf_SEQ_ALL_QUEUING_PORT_INDICES = < 1, 2 >

IMA_Conf_SEQ_ALL_QUEUING_PORTS_MAX_MSG_SIZE = < 56, 56 >

IMA_Conf_SEQ_ALL_QUEUING_PORTS_MAX_NB_MSG = < 10, 10 >

IMA_Conf_SEQ_ALL_QUEUING_PORT_DIRECTIONS = < dir_DESTINATION, dir_SOURCE >

-- queuing ports without command ports

IMA_Conf_NUM_QUEUING_PORTS = 0

IMA_Conf_SET_QUEUING_PORTS = { }

IMA_Conf_SEQ_QUEUING_PORT_INDICES = < >

IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE = < >

IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG = < >

IMA_Conf_SEQ_QUEUING_PORT_DIRECTIONS = < >


--------------------------------------------------------------------------------

-- RAM

--------------------------------------------------------------------------------

-- sequence of RAM output sampling ports (sequence of port indices)

IMA_Conf_SEQ_RAM_SP_OUT = < >

-- sequence of RAM sampling output port communication partner tuples

-- (for all ports in IMA_Conf_SEQ_RAM_SP_OUT)

IMA_Conf_SEQ_RAM_SP_OUT_CONFIG = < >

-- sequence of RAM output queuing ports (sequence of port indices)

IMA_Conf_SEQ_RAM_QP_OUT = < >

-- sequence of RAM queuing output port communication partner tuples

-- (for all ports in IMA_Conf_SEQ_RAM_QP_OUT)

IMA_Conf_SEQ_RAM_QP_OUT_CONFIG = < >

-- sequence of RAM input sampling ports (sequence of port indices)

IMA_Conf_SEQ_RAM_SP_IN = < >

-- sequence of RAM input queuing ports (sequence of port indices)

IMA_Conf_SEQ_RAM_QP_IN = < >

--------------------------------------------------------------------------------

-- AFDX

--------------------------------------------------------------------------------

-- sequence of AFDX output sampling ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_SP_OUT = < >

-- sequence of AFDX output message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_OUT)

IMA_Conf_SEQ_AFDX_SP_OUT_CONFIG = < >

-- sequence of configured output VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_OUT)

IMA_Conf_SEQ_AFDX_SP_OUT_VL = < >

-- sequence of AFDX output queuing ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_QP_OUT = < 2 >

-- sequence of AFDX output message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_OUT)

IMA_Conf_SEQ_AFDX_QP_OUT_CONFIG = < (10020, net_A_B, trans_multicast) >

-- sequence of configured output VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_OUT)

IMA_Conf_SEQ_AFDX_QP_OUT_VL = < 10020 >

-- sequence of AFDX input sampling ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_SP_IN = < >

-- sequence of AFDX input message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_IN)

IMA_Conf_SEQ_AFDX_SP_IN_CONFIG = < >

-- sequence of configured input VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_SP_IN)

IMA_Conf_SEQ_AFDX_SP_IN_VL = < >

-- sequence of AFDX input queuing ports (sequence of port indices)

IMA_Conf_SEQ_AFDX_QP_IN = < 1 >

-- sequence of AFDX input message configuration tuples

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_IN)

IMA_Conf_SEQ_AFDX_QP_IN_CONFIG = < (10021, net_A_B) >

-- sequence of configured input VL identifiers

-- (for all ports in IMA_Conf_SEQ_AFDX_QP_IN)


IMA_Conf_SEQ_AFDX_QP_IN_VL = < 10021 >

-- set of all configured AFDX output ports including command ports

IMA_Conf_SET_ALL_AFDX_OUT_IDS = { 10020 }

-- set of all configured AFDX output ports without command ports

IMA_Conf_SET_AFDX_OUT_IDS = { }

-- set of all configured AFDX input ports including command ports

IMA_Conf_SET_ALL_AFDX_IN_IDS = { 10021 }

-- set of all configured AFDX input ports without command ports

IMA_Conf_SET_AFDX_IN_IDS = { }

--------------------------------------------------------------------------------

-- A429

--------------------------------------------------------------------------------

-- sequence of A429 output sampling ports (sequence of port indices)

IMA_Conf_SEQ_A429_OUT = < >

-- sequence of A429 output label configuration tuples

-- (for all ports in IMA_Conf_SEQ_A429_OUT)

IMA_Conf_SEQ_A429_OUT_CONFIG = < >

-- set of A429 output busses

-- (for all ports in IMA_Conf_SEQ_A429_OUT)

IMA_Conf_SET_A429_OUT_BUSSES = { }

-- sequence of A429 input sampling ports (sequence of port indices)

IMA_Conf_SEQ_A429_IN = < >

-- sequence of A429 input label configuration tuples

-- (for all ports in IMA_Conf_SEQ_A429_IN)

IMA_Conf_SEQ_A429_IN_CONFIG = < >

-- set of A429 input busses

-- (for all ports in IMA_Conf_SEQ_A429_IN)

IMA_Conf_SET_A429_IN_BUSSES = { }

--------------------------------------------------------------------------------

-- CAN

--------------------------------------------------------------------------------

-- sequence of CAN output sampling ports (sequence of port indices)

IMA_Conf_SEQ_CAN_OUT = < >

-- sequence of CAN output label configuration tuples

-- (for all ports in IMA_Conf_SEQ_CAN_OUT)

IMA_Conf_SEQ_CAN_OUT_CONFIG = < >

-- set of CAN output busses

-- (for all ports in IMA_Conf_SEQ_CAN_OUT)

IMA_Conf_SET_CAN_OUT_BUSSES = { }

-- sequence of CAN input sampling ports (sequence of port indices)

IMA_Conf_SEQ_CAN_IN = < >

-- sequence of CAN input label configuration tuples

-- (for all ports in IMA_Conf_SEQ_CAN_IN)

IMA_Conf_SEQ_CAN_IN_CONFIG = < >

-- set of CAN input busses

-- (for all ports in IMA_Conf_SEQ_CAN_IN)

IMA_Conf_SET_CAN_IN_BUSSES = { }

--------------------------------------------------------------------------------

-- DISCRETE

--------------------------------------------------------------------------------


-- sequence of discrete output sampling ports (sequence of port indices)

IMA_Conf_SEQ_DISC_OUT = < >

-- sequence of discrete output signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_DISC_OUT)

IMA_Conf_SEQ_DISC_OUT_CONFIG = < >

-- sequence of discrete input sampling ports (sequence of port indices)

IMA_Conf_SEQ_DISC_IN = < >

-- sequence of discrete input signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_DISC_IN)

IMA_Conf_SEQ_DISC_IN_CONFIG = < >

--------------------------------------------------------------------------------

-- ANALOGUE

--------------------------------------------------------------------------------

-- sequence of analogue output sampling ports (sequence of port indices)

IMA_Conf_SEQ_ANALOG_OUT = < >

-- sequence of analogue output signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_ANALOG_OUT)

IMA_Conf_SEQ_ANALOG_OUT_CONFIG = < >

-- sequence of analogue input sampling ports (sequence of port indices)

IMA_Conf_SEQ_ANALOG_IN = < >

-- sequence of analogue input signal configuration tuples

-- (for all ports in IMA_Conf_SEQ_ANALOG_IN)

IMA_Conf_SEQ_ANALOG_IN_CONFIG = < >

--------------------------------------------------------------------------------

--

--------------------------------------------------------------------------------

-- process attributes for standard periodic TAs

-- (compliant to partition attributes)

TA_PERIODIC_STACK_SIZE = 32768

TA_PERIODIC_BASE_PRIORITY = 3

TA_PERIODIC_PERIOD = 50000

TA_PERIODIC_TIME_CAPACITY = 50000

TA_PERIODIC_DEADLINE = deadline_HARD

-- process attributes for standard aperiodic TAs

-- (compliant to partition attributes)

TA_APERIODIC_STACK_SIZE = 32768

TA_APERIODIC_BASE_PRIORITY = 3

TA_APERIODIC_PERIOD = -1

TA_APERIODIC_TIME_CAPACITY = 50000

TA_APERIODIC_DEADLINE = deadline_HARD

-- process attributes for standard error handler process

ERR_STACK_SIZE = 32768


C.3 Configuration Config0040

Configuration Config0040 provides two configurations named Config0040_1 and Config0040_2 which are both generated using the configuration rules file provided in Appendix C.3.1. The two configurations have four avionics partitions, and each partition has one input and one output AFDX port in addition to the command and result port. The difference between the two configurations is the queue length (parameter PORT_MAX_MSG_NB) of the AFDX ports. All further changes with respect to TEMPLATE01 are also described by configuration rules, for example, the adjustment of the partition scheduling. This includes a change of the MAF duration and, as a consequence, also of the partition periods. These changes are necessary to allow scheduling of all four partitions without decreasing the partition durations. The resulting configuration tables are depicted in Appendix C.3.2.1 (for Config0040_1) and Appendix C.3.2.2 (for Config0040_2). The output of the configuration data parser, i.e., the configuration data extract files, is omitted.

C.3.1 Configuration Rules

Config0040/rules.igr:

#-------------------------------------------------------------------------------

# Begin Configuration Template

# use a configuration template

COPY CONFIG "TEMPLATE01"

# End Configuration Template

#-------------------------------------------------------------------------------

# Begin Template Definition

# template for AFDX output VLs

BEGIN TEMPLATE "AFDX_OUTPUT_VL_TMPL"

"VL_IDENTIFIER" = 100

"VL_NAME" = "AFDX_VL_100"

"NETWORK" = "A&B"

"PORT_ID" = 100

"PORT_CHARAC" = "queuing"

"PORT_TRANS_TYPE" = "multicast"

END TEMPLATE

# template for AFDX input VLs

BEGIN TEMPLATE "AFDX_INPUT_VL_TMPL"

"VL_IDENTIFIER" = 200

"VL_NAME" = "AFDX_VL_200"

"NETWORK" = "A&B"

"PORT_ID" = 200

"PORT_CHARAC" = "queuing"

END TEMPLATE

# template for AFDX output messages

BEGIN TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

"ASSOCIATED_VL_NAME" = "undef"

"ASSOCIATED_AFDX_PORT_ID" = "undef"

END TEMPLATE

# template for AFDX input messages

BEGIN TEMPLATE "AFDX_INPUT_MSG_TMPL"

"ASSOCIATED_VL_NAME" = "undef"

"ASSOCIATED_AFDX_PORT_ID" = "undef"

END TEMPLATE

# End Template Definition

#-------------------------------------------------------------------------------

# Begin Partition Creation/Deletion

# generate partition 3 and 4 by cloning partition 1


# (modifies automatically module-level configuration parameter PARTITION_NB in

# table GLOBAL_DATA)

CLONE PARTITION [3,4] FROM PARTITION 1

# End Partition Creation/Deletion

#-------------------------------------------------------------------------------

# Begin Port Creation/Deletion

# generate command ports for all partitions (i.e., also for the new ones)

GENERATE COMMAND PORTS

# generate AFDX VLs using the above defined templates

# (one VL in each direction for each partition)

# for partition 1:

INSERT MODULE TABLE "AFDX_OUTPUT_VL" ROW FROM TEMPLATE "AFDX_OUTPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

INSERT MODULE TABLE "AFDX_INPUT_VL" ROW FROM TEMPLATE "AFDX_INPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

# for partition 2:

INSERT MODULE TABLE "AFDX_OUTPUT_VL" ROW FROM TEMPLATE "AFDX_OUTPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

INSERT MODULE TABLE "AFDX_INPUT_VL" ROW FROM TEMPLATE "AFDX_INPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

# for partition 3:

INSERT MODULE TABLE "AFDX_OUTPUT_VL" ROW FROM TEMPLATE "AFDX_OUTPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

INSERT MODULE TABLE "AFDX_INPUT_VL" ROW FROM TEMPLATE "AFDX_INPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

# for partition 4:

INSERT MODULE TABLE "AFDX_OUTPUT_VL" ROW FROM TEMPLATE "AFDX_OUTPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

INSERT MODULE TABLE "AFDX_INPUT_VL" ROW FROM TEMPLATE "AFDX_INPUT_VL_TMPL" \

AUTOINC "VL_IDENTIFIER" "VL_NAME" "PORT_ID"

# generate AFDX messages using the above defined templates

# output messages for partitions 1 and 2

INSERT PARTITION 1 TABLE "AFDX_OUTPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

MODIFY PARTITION 1 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_100"

MODIFY PARTITION 1 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 100

INSERT PARTITION 2 TABLE "AFDX_OUTPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

MODIFY PARTITION 2 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_101"

MODIFY PARTITION 2 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 101

# output messages for partitions 3 and 4

INSERT PARTITION 3 TABLE "AFDX_OUTPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

MODIFY PARTITION 3 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_102"

MODIFY PARTITION 3 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 102

INSERT PARTITION 4 TABLE "AFDX_OUTPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_OUTPUT_MSG_TMPL"

MODIFY PARTITION 4 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_103"

MODIFY PARTITION 4 TABLE "AFDX_OUTPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 103

# input messages for partitions 1 and 2

INSERT PARTITION 1 TABLE "AFDX_INPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_INPUT_MSG_TMPL"

MODIFY PARTITION 1 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_200"

MODIFY PARTITION 1 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 200

INSERT PARTITION 2 TABLE "AFDX_INPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_INPUT_MSG_TMPL"

MODIFY PARTITION 2 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_201"

MODIFY PARTITION 2 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 201


# input messages for partitions 3 and 4

INSERT PARTITION 3 TABLE "AFDX_INPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_INPUT_MSG_TMPL"

MODIFY PARTITION 3 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_202"

MODIFY PARTITION 3 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET"ASSOCIATED_AFDX_PORT_ID" = 202

INSERT PARTITION 4 TABLE "AFDX_INPUT_MESSAGE" ROW FROM TEMPLATE "AFDX_INPUT_MSG_TMPL"

MODIFY PARTITION 4 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_VL_NAME" = "undef" \

SET "ASSOCIATED_VL_NAME" = "AFDX_VL_203"

MODIFY PARTITION 4 TABLE "AFDX_INPUT_MESSAGE" WHERE "ASSOCIATED_AFDX_PORT_ID" = "undef" \

SET "ASSOCIATED_AFDX_PORT_ID" = 203

# generate new rows for table OUTPUT_DATA by creating API ports and

# connecting them to the AFDX ports

CONNECT PORTS PARTITION 1 OUTPUT QUEUEING PORT 1 TO AFDX PORT 100 \

PORT_MAX_MSG_SIZE = 1024 PORT_MAX_MSG_NB = 1

CONNECT PORTS PARTITION 2 OUTPUT QUEUEING PORT 1 TO AFDX PORT 101 \

PORT_MAX_MSG_SIZE = 512 PORT_MAX_MSG_NB = 1

CONNECT PORTS PARTITION 3 OUTPUT QUEUEING PORT 1 TO AFDX PORT 102 \

PORT_MAX_MSG_SIZE = 2048 PORT_MAX_MSG_NB = 1

CONNECT PORTS PARTITION 4 OUTPUT QUEUEING PORT 1 TO AFDX PORT 103 \

PORT_MAX_MSG_SIZE = 8192 PORT_MAX_MSG_NB = 1

# generate new rows for table INPUT_DATA by creating API ports and

# connecting them to the AFDX ports

CONNECT PORTS PARTITION 1 INPUT QUEUEING PORT 2 TO AFDX PORT 200 \

PORT_MAX_MSG_SIZE = 1024 PORT_MAX_MSG_NB = 1

CONNECT PORTS PARTITION 2 INPUT QUEUEING PORT 2 TO AFDX PORT 201 \

PORT_MAX_MSG_SIZE = 512 PORT_MAX_MSG_NB = 1

CONNECT PORTS PARTITION 3 INPUT QUEUEING PORT 2 TO AFDX PORT 202 \

PORT_MAX_MSG_SIZE = 2048 PORT_MAX_MSG_NB = 1

CONNECT PORTS PARTITION 4 INPUT QUEUEING PORT 2 TO AFDX PORT 203 \

PORT_MAX_MSG_SIZE = 8192 PORT_MAX_MSG_NB = 1

# End Port Creation/Deletion

#-------------------------------------------------------------------------------

# Begin Partition Modifications

# set the application identifier

MODIFY PARTITION 3 TABLE "GLOBAL_PARTITION_DATA" SET "APPLICATION_ID" = 3

MODIFY PARTITION 4 TABLE "GLOBAL_PARTITION_DATA" SET "APPLICATION_ID" = 4

# set the temporal allocation of partition 3

MODIFY PARTITION 3 TABLE "TEMPORAL_ALLOCATION" SET "WINDOW_ID" = 2

MODIFY PARTITION 3 TABLE "TEMPORAL_ALLOCATION" SET "WINDOW_OFFSET" = 400

# set the spatial allocation of partition 3

MODIFY PARTITION 3 TABLE "SPATIAL_ALLOCATION" SET "CODE_AREA_BEGIN" = "0x3800000"

MODIFY PARTITION 3 TABLE "SPATIAL_ALLOCATION" SET "DATA_AREA_BEGIN" = "0x1D00000"

# set the temporal allocation of partition 4

MODIFY PARTITION 4 TABLE "TEMPORAL_ALLOCATION" SET "WINDOW_ID" = 3

MODIFY PARTITION 4 TABLE "TEMPORAL_ALLOCATION" SET "WINDOW_OFFSET" = 600

# set the spatial allocation of partition 4

MODIFY PARTITION 4 TABLE "SPATIAL_ALLOCATION" SET "CODE_AREA_BEGIN" = "0x3900000"

MODIFY PARTITION 4 TABLE "SPATIAL_ALLOCATION" SET "DATA_AREA_BEGIN" = "0x1F00000"

# set the new partition period for all partitions

MODIFY PARTITION * TABLE "TEMPORAL_ALLOCATION" SET "PARTITION_PERIOD" = 1000

# change the corresponding MAF duration of the module as well

MODIFY MODULE TABLE "GLOBAL_DATA" SET "MAF_DURATION" = 1000

# End Partition Modifications

#-------------------------------------------------------------------------------

# Begin Port Modifications


# generate two configurations.

# first configuration: all queue lengths which have previously been 1 stay 1

# second configuration: all queue lengths which have previously been 1 are set to 10

MODIFY PARTITION * TABLE OUTPUT_DATA WHERE PORT_MAX_MSG_NB = 1 \

SET PORT_MAX_MSG_NB = [1..10] (2)

MODIFY PARTITION * TABLE INPUT_DATA WHERE PORT_MAX_MSG_NB = 1 \

SET PORT_MAX_MSG_NB = [1..10] (2)

# End Port Modifications

#-------------------------------------------------------------------------------

C.3.2 Resulting Configuration Tables

According to the above rules, two different configurations are generated by the configuration data generator; both are depicted in the following subsections: the first configuration is provided in Appendix C.3.2.1 and the second one in Appendix C.3.2.2.

C.3.2.1 Config0040_1

The configuration tables for configuration Config0040_1 are provided in the following tables:

• Module-level configuration tables: Table C.10 and Table C.11
• Partition-level configuration tables for partition 1: Table C.12 and Table C.13
• Partition-level configuration tables for partition 2: Table C.14 and Table C.15
• Partition-level configuration tables for partition 3: Table C.16 and Table C.17
• Partition-level configuration tables for partition 4: Table C.18 and Table C.19


GLOBAL_DATA
PARTITION_NB | SYS_PARTITION_NB | MAF_DURATION | CACHE_CONFIG | RAM_BEGIN | RAM_SIZE | CFG_AREA_BEGIN | CFG_AREA_SIZE | MAC_ADDRESS | MODULE_LOCATION
4 | 4 | 1000 | ... | 0x00D00000 | 13631488 | 0x00600000 | 2097152 | ... | ...

HM_SYSTEM
ERROR_SOURCE | RECOVERY_LEVEL
CONFIG_ERROR | module
INIT_ERROR | module
DEADLINE_MISSED | partition
APPLICATION_ERROR | partition
NUMERIC_ERROR | partition
ILLEGAL_REQUEST | partition
STACK_OVERFLOW | partition
MEMORY_VIOLATION | partition
HARDWARE_FAULT | module
POWER_FAIL | module

HM_MODULE
ERROR_SOURCE | RECOVERY_ACTION
CONFIG_ERROR | reset
INIT_ERROR | reset
DEADLINE_MISSED | reset
APPLICATION_ERROR | reset
NUMERIC_ERROR | reset
ILLEGAL_REQUEST | reset
STACK_OVERFLOW | reset
MEMORY_VIOLATION | reset
HARDWARE_FAULT | reset
POWER_FAIL | reset

AFDX_OUTPUT_VL
VL_NAME | VL_ID | NETWORK | PORT_ID | PORT_CHARAC | PORT_TRANS_TYPE | VL_DATA | IP_ADDR | UDP_PORT
RESULT_VL_P1 | 10018 | A&B | 10018 | queuing | multicast | ... | ... | ...
RESULT_VL_P2 | 10020 | A&B | 10020 | queuing | multicast | ... | ... | ...
RESULT_VL_P3 | 10022 | A&B | 10022 | queuing | multicast | ... | ... | ...
RESULT_VL_P4 | 10024 | A&B | 10024 | queuing | multicast | ... | ... | ...
AFDX_VL_100 | 100 | A&B | 100 | queuing | multicast | ... | ... | ...
AFDX_VL_101 | 101 | A&B | 101 | queuing | multicast | ... | ... | ...
AFDX_VL_102 | 102 | A&B | 102 | queuing | multicast | ... | ... | ...
AFDX_VL_103 | 103 | A&B | 103 | queuing | multicast | ... | ... | ...

Table C.10: Config0040_1: Module-level configuration tables


AFDX_INPUT_VL
VL_NAME | VL_ID | NETWORK | PORT_ID | PORT_CHARAC | VL_DATA | IP_ADDR | UDP_PORT
COMMAND_VL_P1 | 10019 | A&B | 10019 | queuing | ... | ... | ...
COMMAND_VL_P2 | 10021 | A&B | 10021 | queuing | ... | ... | ...
COMMAND_VL_P3 | 10023 | A&B | 10023 | queuing | ... | ... | ...
COMMAND_VL_P4 | 10025 | A&B | 10025 | queuing | ... | ... | ...
AFDX_VL_200 | 200 | A&B | 200 | queuing | ... | ... | ...
AFDX_VL_201 | 201 | A&B | 201 | queuing | ... | ... | ...
AFDX_VL_202 | 202 | A&B | 202 | queuing | ... | ... | ...
AFDX_VL_203 | 203 | A&B | 203 | queuing | ... | ... | ...

A429_OUTPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

A429_INPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

CAN_OUTPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

CAN_INPUT_BUS
BUS_NAME | CONNECTOR
(no entries)

DISCRETE_OUTPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

DISCRETE_INPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

ANALOGUE_OUTPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

ANALOGUE_INPUT_LINE
LINE_NAME | CONNECTOR
(no entries)

Table C.11: Config0040_1: Module-level configuration tables (continued)


GLOBAL_PARTITION_DATA
PARTITION_ID | PARTITION_NAME | APPLICATION_NAME | APPLICATION_ID | CACHE_CONFIG | MAIN_STACK_SIZE | MAIN_ADDR | PROCESS_STACK_SIZE | MMU_CONFIG
1 | PARTITION_1 | TA | 1 | ... | 32768 | main | 524288 | ...

TEMPORAL_ALLOCATION
PARTITION_ID | PARTITION_PERIOD | SCHED_WINDOW_POS | SCHED_WINDOW_OFFSET | SCHED_WINDOW_DURATION
1 | 1000 | 0 | 0 | 200

SPATIAL_ALLOCATION
PARTITION_ID | CODE_AREA_BEGIN | CODE_AREA_SIZE | DATA_AREA_BEGIN | DATA_AREA_SIZE
1 | 0x03600000 | 1048576 | 0x01900000 | 2097152

HM_PARTITION
PARTITION_ID | ERROR_SOURCE | RECOVERY_ACTION | HANDLER_RECOVERY
1 | CONFIG_ERROR | cold restart | false
1 | INIT_ERROR | cold restart | false
1 | DEADLINE_MISSED | cold restart | true
1 | APPLICATION_ERROR | cold restart | true
1 | NUMERIC_ERROR | cold restart | true
1 | ILLEGAL_REQUEST | cold restart | true
1 | STACK_OVERFLOW | cold restart | true
1 | MEMORY_VIOLATION | cold restart | true
1 | HARDWARE_FAULT | cold restart | false
1 | POWER_FAIL | cold restart | false

AFDX_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
1 | RESULT_VL_P1 | 10018 | ...
1 | AFDX_VL_100 | 100 | ...

AFDX_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
1 | COMMAND_VL_P1 | 10019 | ...
1 | AFDX_VL_200 | 200 | ...

Table C.12: Config0040_1: Partition-level configuration tables for partition 1


A429_OUTPUT_LABEL
PARTITION_ID | ASSOCIATED_A429_BUS | A429_LABEL_NAME | A429_LABEL_NUMBER | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

A429_INPUT_LABEL
PARTITION_ID | ASSOCIATED_A429_BUS | A429_LABEL_NAME | A429_LABEL_NUMBER | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

CAN_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_CAN_BUS | CAN_MSG_NAME | CAN_MSG_ID | CAN_MSG_PAYLOAD | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

CAN_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_CAN_BUS | CAN_MSG_NAME | CAN_MSG_ID | CAN_MSG_PAYLOAD | SIGNAL_LSB | SIGNAL_MSB | TYPE_DATA
(no entries)

DISCRETE_OUTPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | DEFAULT_VALUE
(no entries)

DISCRETE_INPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | LOGIC
(no entries)

ANALOGUE_OUTPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | TYPE_DATA
(no entries)

ANALOGUE_INPUT_SIGNAL
PARTITION_ID | ASSOCIATED_LINE | SIGNAL_NAME | TYPE_DATA
(no entries)

OUTPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
1 | RESULT_PORT | queuing | 56 | 10 | AFDX | 10018 | ...
1 | QP003 | queuing | 1024 | 1 | AFDX | 100 | ...

INPUT_DATA
PARTITION_ID | PORT_NAME | PORT_CHARAC | PORT_MAX_MSG_SIZE | PORT_MAX_MSG_NB | MEDIUM_TYPE | ASSOCIATED_PORT | TYPE_DATA
1 | COMMAND_PORT | queuing | 56 | 10 | AFDX | 10019 | ...
1 | QP004 | queuing | 1024 | 1 | AFDX | 200 | ...

Table C.13: Config0040_1: Partition-level configuration tables for partition 1 (continued)


GLOBAL_PARTITION_DATA
PARTITION_ID | PARTITION_NAME | APPLICATION_NAME | APPLICATION_ID | CACHE_CONFIG | MAIN_STACK_SIZE | MAIN_ADDR | PROCESS_STACK_SIZE | MMU_CONFIG
2 | PARTITION_2 | TA | 2 | ... | 32768 | main | 524288 | ...

TEMPORAL_ALLOCATION
PARTITION_ID | PARTITION_PERIOD | SCHED_WINDOW_POS | SCHED_WINDOW_OFFSET | SCHED_WINDOW_DURATION
2 | 1000 | 1 | 200 | 200

SPATIAL_ALLOCATION
PARTITION_ID | CODE_AREA_BEGIN | CODE_AREA_SIZE | DATA_AREA_BEGIN | DATA_AREA_SIZE
2 | 0x03700000 | 1048576 | 0x01B00000 | 2097152

HM_PARTITION
PARTITION_ID | ERROR_SOURCE | RECOVERY_ACTION | HANDLER_RECOVERY
2 | CONFIG_ERROR | cold restart | false
2 | INIT_ERROR | cold restart | false
2 | DEADLINE_MISSED | cold restart | true
2 | APPLICATION_ERROR | cold restart | true
2 | NUMERIC_ERROR | cold restart | true
2 | ILLEGAL_REQUEST | cold restart | true
2 | STACK_OVERFLOW | cold restart | true
2 | MEMORY_VIOLATION | cold restart | true
2 | HARDWARE_FAULT | cold restart | false
2 | POWER_FAIL | cold restart | false

AFDX_OUTPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
2 | RESULT_VL_P2 | 10020 | ...
2 | AFDX_VL_101 | 101 | ...

AFDX_INPUT_MESSAGE
PARTITION_ID | ASSOCIATED_VL_NAME | ASSOCIATED_AFDX_PORT_ID | TYPE_DATA
2 | COMMAND_VL_P2 | 10021 | ...
2 | AFDX_VL_201 | 201 | ...

Table C.14: Config0040_1: Partition-level configuration tables for partition 2


A429OUTPUTLABEL

PARTITION ID | ASSOCIATED A429 BUS | A429 LABEL NAME | A429 LABEL NUMBER | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

A429INPUTLABEL

PARTITION ID | ASSOCIATED A429 BUS | A429 LABEL NAME | A429 LABEL NUMBER | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

CANOUTPUTMESSAGE

PARTITION ID | ASSOCIATED CAN BUS | CAN MSG NAME | CAN MSG ID | CAN MSG PAYLOAD | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

CANINPUTMESSAGE

PARTITION ID | ASSOCIATED CAN BUS | CAN MSG NAME | CAN MSG ID | CAN MSG PAYLOAD | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

DISCRETEOUTPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | DEFAULT VALUE
(no entries)

DISCRETEINPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | LOGIC
(no entries)

ANALOGUEOUTPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | TYPE DATA
(no entries)

ANALOGUEINPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | TYPE DATA
(no entries)

OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
2            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10020           | ...
2            | QP003      | queuing     | 512               | 1               | AFDX        | 101             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
2            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10021           | ...
2            | QP004       | queuing     | 512               | 1               | AFDX        | 201             | ...

Table C.15: Config0040_1: Partition-level configuration tables for partition 2 (continued)


GLOBALPARTITIONDATA

PARTITION ID | PARTITION NAME | APPLICATION NAME | APPLICATION ID | CACHE CONFIG | MAIN STACK SIZE | MAIN ADDR | PROCESS STACK SIZE | MMU CONFIG
3            | PARTITION3     | TA               | 2              | ...          | 32768           | main      | 524288             | ...

TEMPORALALLOCATION

PARTITION ID | PARTITION PERIOD | SCHED WINDOW POS | SCHED WINDOW OFFSET | SCHED WINDOW DURATION
3            | 1000             | 2                | 400                 | 200

SPATIALALLOCATION

PARTITION ID | CODE AREA BEGIN | CODE AREA SIZE | DATA AREA BEGIN | DATA AREA SIZE
3            | 0x03800000      | 1048576        | 0x01D00000      | 2097152

HMPARTITION

PARTITION ID | ERROR SOURCE     | RECOVERY ACTION | HANDLER RECOVERY
3            | CONFIGERROR      | coldrestart     | false
3            | INITERROR        | coldrestart     | false
3            | DEADLINEMISSED   | coldrestart     | true
3            | APPLICATIONERROR | coldrestart     | true
3            | NUMERICERROR     | coldrestart     | true
3            | ILLEGALREQUEST   | coldrestart     | true
3            | STACKOVERFLOW    | coldrestart     | true
3            | MEMORYVIOLATION  | coldrestart     | true
3            | HARDWAREFAULT    | coldrestart     | false
3            | POWERFAIL        | coldrestart     | false

AFDXOUTPUTMESSAGE

PARTITION ID | ASSOCIATED VL NAME | ASSOCIATED AFDX PORT ID | TYPE DATA
3            | RESULTVLP2         | 10020                   | ...
3            | AFDXVL102          | 102                     | ...

AFDXINPUTMESSAGE

PARTITION ID | ASSOCIATED VL NAME | ASSOCIATED AFDX PORT ID | TYPE DATA
3            | COMMANDVLP2        | 10021                   | ...
3            | AFDXVL202          | 202                     | ...

Table C.16: Config0040_1: Partition-level configuration tables for partition 3


A429OUTPUTLABEL

PARTITION ID | ASSOCIATED A429 BUS | A429 LABEL NAME | A429 LABEL NUMBER | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

A429INPUTLABEL

PARTITION ID | ASSOCIATED A429 BUS | A429 LABEL NAME | A429 LABEL NUMBER | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

CANOUTPUTMESSAGE

PARTITION ID | ASSOCIATED CAN BUS | CAN MSG NAME | CAN MSG ID | CAN MSG PAYLOAD | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

CANINPUTMESSAGE

PARTITION ID | ASSOCIATED CAN BUS | CAN MSG NAME | CAN MSG ID | CAN MSG PAYLOAD | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

DISCRETEOUTPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | DEFAULT VALUE
(no entries)

DISCRETEINPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | LOGIC
(no entries)

ANALOGUEOUTPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | TYPE DATA
(no entries)

ANALOGUEINPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | TYPE DATA
(no entries)

OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
3            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10020           | ...
3            | QP003      | queuing     | 2048              | 1               | AFDX        | 102             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
3            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10021           | ...
3            | QP004       | queuing     | 2048              | 1               | AFDX        | 202             | ...

Table C.17: Config0040_1: Partition-level configuration tables for partition 3 (continued)


GLOBALPARTITIONDATA

PARTITION ID | PARTITION NAME | APPLICATION NAME | APPLICATION ID | CACHE CONFIG | MAIN STACK SIZE | MAIN ADDR | PROCESS STACK SIZE | MMU CONFIG
4            | PARTITION4     | TA               | 4              | ...          | 32768           | main      | 524288             | ...

TEMPORALALLOCATION

PARTITION ID | PARTITION PERIOD | SCHED WINDOW POS | SCHED WINDOW OFFSET | SCHED WINDOW DURATION
4            | 1000             | 3                | 600                 | 200

SPATIALALLOCATION

PARTITION ID | CODE AREA BEGIN | CODE AREA SIZE | DATA AREA BEGIN | DATA AREA SIZE
4            | 0x03900000      | 1048576        | 0x01F00000      | 2097152

HMPARTITION

PARTITION ID | ERROR SOURCE     | RECOVERY ACTION | HANDLER RECOVERY
4            | CONFIGERROR      | coldrestart     | false
4            | INITERROR        | coldrestart     | false
4            | DEADLINEMISSED   | coldrestart     | true
4            | APPLICATIONERROR | coldrestart     | true
4            | NUMERICERROR     | coldrestart     | true
4            | ILLEGALREQUEST   | coldrestart     | true
4            | STACKOVERFLOW    | coldrestart     | true
4            | MEMORYVIOLATION  | coldrestart     | true
4            | HARDWAREFAULT    | coldrestart     | false
4            | POWERFAIL        | coldrestart     | false

AFDXOUTPUTMESSAGE

PARTITION ID | ASSOCIATED VL NAME | ASSOCIATED AFDX PORT ID | TYPE DATA
4            | RESULTVLP2         | 10020                   | ...
4            | AFDXVL103          | 103                     | ...

AFDXINPUTMESSAGE

PARTITION ID | ASSOCIATED VL NAME | ASSOCIATED AFDX PORT ID | TYPE DATA
4            | COMMANDVLP2        | 10021                   | ...
4            | AFDXVL203          | 203                     | ...

Table C.18: Config0040_1: Partition-level configuration tables for partition 4
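Read together, the TEMPORALALLOCATION rows of Tables C.14, C.16, and C.18 describe one module-level schedule: within a partition period of 1000 time units, partitions 2, 3, and 4 own consecutive 200-unit scheduling windows at offsets 200, 400, and 600 (the corresponding window of partition 1 is listed in Table C.12). The following hypothetical Python sketch, not part of the configuration library, rebuilds that timeline from the rows shown above and checks that no two windows overlap.

# Hypothetical sketch: rebuild the module-level schedule from the
# TEMPORALALLOCATION rows of Tables C.14, C.16, and C.18.

rows = [
    # (partition id, period, window pos, window offset, window duration)
    (2, 1000, 1, 200, 200),   # Table C.14
    (3, 1000, 2, 400, 200),   # Table C.16
    (4, 1000, 3, 600, 200),   # Table C.18
]

windows = sorted((off, off + dur, pid) for (pid, _, _, off, dur) in rows)
for (s1, e1, p1), (s2, e2, p2) in zip(windows, windows[1:]):
    assert e1 <= s2, "windows of partitions %d and %d overlap" % (p1, p2)
# windows == [(200, 400, 2), (400, 600, 3), (600, 800, 4)]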


A429OUTPUTLABEL

PARTITION ID | ASSOCIATED A429 BUS | A429 LABEL NAME | A429 LABEL NUMBER | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

A429INPUTLABEL

PARTITION ID | ASSOCIATED A429 BUS | A429 LABEL NAME | A429 LABEL NUMBER | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

CANOUTPUTMESSAGE

PARTITION ID | ASSOCIATED CAN BUS | CAN MSG NAME | CAN MSG ID | CAN MSG PAYLOAD | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

CANINPUTMESSAGE

PARTITION ID | ASSOCIATED CAN BUS | CAN MSG NAME | CAN MSG ID | CAN MSG PAYLOAD | SIGNAL LSB | SIGNAL MSB | TYPE DATA
(no entries)

DISCRETEOUTPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | DEFAULT VALUE
(no entries)

DISCRETEINPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | LOGIC
(no entries)

ANALOGUEOUTPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | TYPE DATA
(no entries)

ANALOGUEINPUTSIGNAL

PARTITION ID | ASSOCIATED LINE | SIGNAL NAME | TYPE DATA
(no entries)

OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
4            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10020           | ...
4            | QP003      | queuing     | 8192              | 1               | AFDX        | 103             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
4            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10021           | ...
4            | QP004       | queuing     | 8192              | 1               | AFDX        | 203             | ...

Table C.19: Config0040_1: Partition-level configuration tables for partition 4 (continued)


C.3.2.2 Config0040_2

To focus on the differences between Config0040_1 and Config0040_2, Table C.20, Table C.21, Table C.22 and Table C.23 only show those configuration tables which have changed with respect to the configuration tables provided in Appendix C.3.2.1: in each partition, only the OUTPUTDATA and INPUTDATA tables differ, since the PORT MAX MSG NB of the ports QP003 and QP004 is raised from 1 to 10.


OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
1            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10018           | ...
1            | QP003      | queuing     | 1024              | 10              | AFDX        | 100             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
1            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10019           | ...
1            | QP004       | queuing     | 1024              | 10              | AFDX        | 200             | ...

Table C.20: Config0040_2: Partition-level configuration tables for partition 1 which are different with respect to Config0040_1 (Table C.13)

OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
2            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10020           | ...
2            | QP003      | queuing     | 512               | 10              | AFDX        | 101             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
2            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10021           | ...
2            | QP004       | queuing     | 512               | 10              | AFDX        | 201             | ...

Table C.21: Config0040_2: Partition-level configuration tables for partition 2 which are different with respect to Config0040_1 (Table C.15)


OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
3            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10020           | ...
3            | QP003      | queuing     | 2048              | 10              | AFDX        | 102             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
3            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10021           | ...
3            | QP004       | queuing     | 2048              | 10              | AFDX        | 202             | ...

Table C.22: Config0040_2: Partition-level configuration tables for partition 3 which are different with respect to Config0040_1 (Table C.17)

OUTPUTDATA

PARTITION ID | PORT NAME  | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
4            | RESULTPORT | queuing     | 56                | 10              | AFDX        | 10020           | ...
4            | QP003      | queuing     | 8192              | 10              | AFDX        | 103             | ...

INPUTDATA

PARTITION ID | PORT NAME   | PORT CHARAC | PORT MAX MSG SIZE | PORT MAX MSG NB | MEDIUM TYPE | ASSOCIATED PORT | TYPE DATA
4            | COMMANDPORT | queuing     | 56                | 10              | AFDX        | 10021           | ...
4            | QP004       | queuing     | 8192              | 10              | AFDX        | 203             | ...

Table C.23: Config0040_2: Partition-level configuration tables for partition 4 which are different with respect to Config0040_1 (Table C.19)


Appendix D

IMA Test Specification Template Library – Examples

D.1 Partitioning Tests

Partitioning tests shall verify the partition segregation with respect to memory, scheduling, operational modes, and communication. In this appendix section, one test procedure template is listed which contributes to test objective TO_PART_003. This test objective checks that a partition's failure resulting in a partition reset or shutdown does not affect other partitions and their operating mode. Test procedure template test_procedure_template_03, which has been selected for inclusion in this thesis, verifies that resetting a partition does not affect the other partitions and their scheduling, communication behavior, or operational mode. The test design uses a so-called communication flow scenario which is executed simultaneously in four partitions to show that the behavior of each partition is uniform – even if another partition is reset.
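The test idea can be summarized in a few lines. The following sketch is a hypothetical Python model of the design described above, not part of the RT-Tester templates listed below: each partition repeatedly round-trips a sequence-numbered message, partition 1 is reset partway through, and the oracle requires that the traces of partitions 2 to 4 remain complete and in order.

# Hypothetical Python model of the partitioning test design (illustration
# only; the actual test procedure is given by the CSP templates below).

def run_partition(rounds, reset_at=None):
    """Return the sequence IDs received back by one simulated partition."""
    received = []
    for seq in range(1, rounds + 1):
        if reset_at is not None and seq == reset_at:
            received.append(None)   # this round trip is lost during the reset
            continue
        received.append(seq)        # ideal round trip: the message comes back
    return received

rounds = 10
traces = {
    1: run_partition(rounds, reset_at=5),   # partition 1 is reset mid-test
    2: run_partition(rounds),
    3: run_partition(rounds),
    4: run_partition(rounds),
}

# Oracle: the traces of partitions 2-4 must be complete and in order,
# i.e. the reset of partition 1 had no observable effect on them.
for part in (2, 3, 4):
    assert traces[part] == list(range(1, rounds + 1))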

D.1.1 TO_PART_003/test_procedure_template_03

The test procedure template consists of three types of files: test specification templates, a list of possible configurations, and an RT-Tester configuration template, which are included in the following subsections.

D.1.1.1 Test Specifications

Since it is necessary to simultaneously address four partitions, commanding and controlling of each partition is handled by a separate test specification template (main_pt1.csp.t, main_pt2.csp.t, main_pt3.csp.t, and main_pt4.csp.t), each of which includes the test procedure-specific macro processes defined in aux_procs.csp.t.

specs/main_pt1.csp.t:

--------------------------------------------------------------------------------

--

-- Description:

--

-- Test Objective: TO_PART_003

-- Test Procedure: test_procedure_03

--

-- Tag: PART_008_TR001

--

--

-- Test, if the reset of an application has no effects on other


-- applications (partitions)

--

-- Behavior of partition 1

--

--------------------------------------------------------------------------------

-- include IMA type definitions

#include "IMA_types4.csp"

-- include general macro process definitions

#include "IMA_macros.csp"

-- include general IMA API handling processes

#include "IMA_API_handling.csp"

-- include general processes to trigger IMA API calls including

-- checking the respective results

#include "IMA_API_macros.csp"

-- include constants etc. from the configuration tables

-- (include file generated by configuration data parser)

--

-- partition related data (here for Partition 1, postfix PT1)

#include "IMA_Conf_PT1.csp"

-- global (module related) data for all partitions

--#include "IMA_Conf_Globals.csp"

-- include types and macro process definitions for AFDX and the

-- communication flow scenario

#include "IMA_AFDX_types.csp"

#include "IMA_AFDX_handling.csp"

#include "IMA_com_flow_types.csp"

#include "IMA_com_flow_handling.csp"

--------------------------------------------------------------------------------

-- set (type) declarations

--------------------------------------------------------------------------------

-- Test Application Module ID

-- for distinction of test applications running on different IMA modules

MOD = {1}

TAMOD = 1

-- Test Application Partition Numbers

-- for distinction of test applications running on different partitions

PART = {1}

-- Test Application Process Numbers

-- for distinction of test applications processes within one partition

PROC = { 1, 2, 3, 129, 130 }

-- set of possible timer IDs

TIMER = {0..9}

--------------------------------------------------------------------------------

-- local settings and definitions

--------------------------------------------------------------------------------

-- aperiodic process is process i on partition 1 on module 1

-- myTAPERPID = modulenumber.partitionnumber.processnumber

toPR1 = 1.1.1

PR1 = 1

toPR2 = 1.1.2

PR2 = 2

toPR3 = 1.1.3

PR3 = 3


-- MAIN process on partition 1 on module 1 has process number 130

toPRmain = 1.1.130

-- Error handler process on partition 1 on module 1 has process number 129

toPRerr = 1.1.129

PRerr = 129

-- timer assignments

TM_WAIT = 0

TM_RETVAL = 1

TM_STARTUP = 2

TM_DISC_ACT = 3

TM_COM_FLOW = 4

TM_LOOP_DURATION = 5

TM_TEST_DURATION = 6

TM_RESET_PART = 7

COM_FLOW_MSG_SIZE = 512

COM_FLOW_MAX_MSG_NB = 1

--------------------------------------------------------------------------------

-- Channel declarations

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Errors generated by the test specification

--------------------------------------------------------------------------------

pragma AM_ERROR

channel error : TIMER

--------------------------------------------------------------------------------

-- Warnings generated by the test specification

--------------------------------------------------------------------------------

pragma AM_WARNING

channel warning : TIMER

WARNING_INVALID_CONFIG = 9

--------------------------------------------------------------------------------

-- Timers (set/elapsed/reset)

-- The durations associated with each timer are defined in config.rtt

--------------------------------------------------------------------------------

pragma AM_SET_TIMER

channel setTimer : TIMER

pragma AM_ELAPSED_TIMER

channel elapsedTimer : TIMER

pragma AM_RESET_TIMER

channel resetTimer : TIMER

--------------------------------------------------------------------------------

-- Inputs to the test application (TA)

--------------------------------------------------------------------------------

pragma AM_OUTPUT

#include "IMA_input_channels.csp"

#include "IMA_DISC_in.csp"

#include "IMA_com_flow_in.csp"

--

-- Events to the other test specifications

--

-- channel for synchronization between the test specifications

channel reset_completed : MOD

--------------------------------------------------------------------------------

-- Outputs of the test application (TA)


--------------------------------------------------------------------------------

pragma AM_INPUT

#include "IMA_output_channels.csp"

#include "IMA_DISC_out.csp"

#include "IMA_com_flow_out.csp"

--

-- Events from the other test specifications

--

-- channel for synchronization between the test specifications

channel PT2_finished

channel PT3_finished

channel PT4_finished

--------------------------------------------------------------------------------

-- Internal Channels

--------------------------------------------------------------------------------

pragma AM_INTERNAL

--

-- Requirement Tag channels

--

channel PART_008_TR001

--------------------------------------------------------------------------------

-- Test Termination

--------------------------------------------------------------------------------

pragma AM_TERMINATE_TEST

channel test_finished

--==============================================================================

-- Process definitions

--==============================================================================

--------------------------------------------------------------------------------

-- Top-level process

--------------------------------------------------------------------------------

MAIN_PT1 =

(

-- find suitable ports for communication flow scenario in the partition’s

-- configuration

let

source_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_SOURCE)

dest_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_DESTINATION)

within

(

-- check that suitable ports are available

if ((source_port == -1) or (dest_port == -1))

then warning.WARNING_INVALID_CONFIG -> SKIP

else

(

-- reset IMA module and inform other partitions after reset

IMA_RESET(TAMOD);

WAIT(TM_STARTUP);

reset_completed.TAMOD ->

-- initialize the partition (create processes, buffers, ports) and,

-- after switching to normal mode, start the TA-internal communication

-- flow scenario in all started processes

PART_INIT(source_port, dest_port) ;


setTimer.TM_RESET_PART ->

-- start sending of communication flow messages for a predefined time

SEND_RECV_LOOP(TM_RESET_PART,

dest_port,

source_port,

COM_FLOW_MSG_SIZE,

com_flow_sequence_id_t,

com_flow_sequence_id_t) ;

-- reset the partition

SET_PARTITION_MODE(toPR1, op_WARM_START) ;

-- wait for other partitions to finish

((PT2_finished -> SKIP)

|||

(PT3_finished -> SKIP)

|||

(PT4_finished -> SKIP)) ;

-- test requirement passed if no errors in the test execution log

PART_008_TR001 ->

test_finished ->

SKIP

)

)

)

--------------------------------------------------------------------------------

-- Macro processes

--------------------------------------------------------------------------------

-- test procedure specific macro processes for all test specifications (main_pt*.csp)

#include "aux_procs.csp"

--------------------------------------------------------------------------------

specs/main_pt2.csp.t:

--------------------------------------------------------------------------------

--

-- Description:

--

-- Test Objective: TO_PART_003

-- Test Procedure: test_procedure_03

--

-- Tag: PART_008_TR001

--

--

-- Test, if the reset of an application has no effects on other

-- applications (partitions)

--

-- Behavior of partition 2

--

--------------------------------------------------------------------------------

-- include IMA type definitions

#include "IMA_types4.csp"

-- include general macro process definitions

#include "IMA_macros.csp"

-- include general IMA API handling processes

#include "IMA_API_handling.csp"

-- include general processes to trigger IMA API calls including

-- checking the respective results

#include "IMA_API_macros.csp"

-- include constants etc. from the configuration tables


-- (include file generated by configuration data parser)

--

-- partition related data (here for Partition 2, postfix PT2)

#include "IMA_Conf_PT2.csp"

-- global (module related) data for all partitions

--#include "IMA_Conf_Globals.csp"

-- include types and macro process definitions for AFDX and the

-- communication flow scenario

#include "IMA_AFDX_types.csp"

#include "IMA_AFDX_handling.csp"

#include "IMA_com_flow_types.csp"

#include "IMA_com_flow_handling.csp"

--------------------------------------------------------------------------------

-- set (type) declarations

--------------------------------------------------------------------------------

-- Test Application Module ID

-- for distinction of test applications running on different IMA modules

MOD = {1}

TAMOD = 1

-- Test Application Partition Numbers

-- for distinction of test applications running on different partitions

PART = {2}

-- Test Application Process Numbers

-- for distinction of test applications processes within one partition

PROC = { 1, 2, 3, 129, 130 }

-- set of possible timer IDs

TIMER = {0..9}

--------------------------------------------------------------------------------

-- local settings and definitions

--------------------------------------------------------------------------------

-- aperiodic process is process i on partition 2 on module 1

-- myTAPERPID = modulenumber.partitionnumber.processnumber

toPR1 = 1.2.1

PR1 = 1

toPR2 = 1.2.2

PR2 = 2

toPR3 = 1.2.3

PR3 = 3

-- MAIN process on partition 2 on module 1 has process number 130

toPRmain = 1.2.130

-- Error handler process on partition 2 on module 1 has process number 129

toPRerr = 1.2.129

PRerr = 129

-- timer assignments

TM_WAIT = 0

TM_RETVAL = 1

TM_STARTUP = 2

TM_DISC_ACT = 3

TM_COM_FLOW = 4

TM_LOOP_DURATION = 5

TM_TEST_DURATION = 6

COM_FLOW_MSG_SIZE = 512

COM_FLOW_MAX_MSG_NB = 1

--------------------------------------------------------------------------------


-- Channel declarations

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Errors generated by the test specification

--------------------------------------------------------------------------------

pragma AM_ERROR

channel error : TIMER

--------------------------------------------------------------------------------

-- Warnings generated by the test specification

--------------------------------------------------------------------------------

pragma AM_WARNING

channel warning : TIMER

WARNING_INVALID_CONFIG = 9

--------------------------------------------------------------------------------

-- Timers (set/elapsed/reset)

-- The durations associated with each timer are defined in config.rtt

--------------------------------------------------------------------------------

pragma AM_SET_TIMER

channel setTimer : TIMER

pragma AM_ELAPSED_TIMER

channel elapsedTimer : TIMER

pragma AM_RESET_TIMER

channel resetTimer : TIMER

--------------------------------------------------------------------------------

-- Inputs to the test application (TA)

--------------------------------------------------------------------------------

pragma AM_OUTPUT

#include "IMA_input_channels.csp"

#include "IMA_DISC_in.csp"

#include "IMA_com_flow_in.csp"

--

-- Events to the other test specifications

--

-- channel for synchronization between the test specifications

channel PT2_finished

--------------------------------------------------------------------------------

-- Outputs of the test application (TA)

--------------------------------------------------------------------------------

pragma AM_INPUT

#include "IMA_output_channels.csp"

#include "IMA_DISC_out.csp"

#include "IMA_com_flow_out.csp"

--

-- Events from the other test specifications

--

-- channel for synchronization between the test specifications

channel reset_completed : MOD

--------------------------------------------------------------------------------

-- Internal Channels

--------------------------------------------------------------------------------

pragma AM_INTERNAL

--------------------------------------------------------------------------------


-- Test Termination

--------------------------------------------------------------------------------

pragma AM_TERMINATE_TEST

--==============================================================================

-- Process definitions

--==============================================================================

--------------------------------------------------------------------------------

-- Top-level process

--------------------------------------------------------------------------------

MAIN_PT2 =

(

-- find suitable ports for communication flow scenario in the partition’s

-- configuration

let

source_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_SOURCE)

dest_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_DESTINATION)

within

(

-- check that suitable ports are available

if ((source_port == -1) or (dest_port == -1))

then warning.WARNING_INVALID_CONFIG -> SKIP

else

(

-- wait for reset of IMA module

reset_completed.TAMOD ->

-- initialize the partition (create processes, buffers, ports) and,

-- after switching to normal mode, start the TA-internal communication

-- flow scenario in all started processes

PART_INIT(source_port, dest_port) ;

setTimer.TM_TEST_DURATION ->

-- start sending of communication flow messages for a predefined time

SEND_RECV_LOOP(TM_TEST_DURATION,

dest_port,

source_port,

COM_FLOW_MSG_SIZE,

com_flow_sequence_id_t,

com_flow_sequence_id_t) ;

-- synchronization between the test specifications

PT2_finished ->

SKIP

)

)

)

--------------------------------------------------------------------------------

-- Macro processes

--------------------------------------------------------------------------------

-- test procedure specific macro processes for all test specifications (main_pt*.csp)

#include "aux_procs.csp"

--------------------------------------------------------------------------------


specs/main_pt3.csp.t:

--------------------------------------------------------------------------------

--

-- Description:

--

-- Test Objective: TO_PART_003

-- Test Procedure: test_procedure_03

--

-- Tag: PART_008_TR001

--

--

-- Test, if the reset of an application has no effects on other

-- applications (partitions)

--

-- Behavior of partition 3

--

--------------------------------------------------------------------------------

-- include IMA type definitions

#include "IMA_types4.csp"

-- include general macro process definitions

#include "IMA_macros.csp"

-- include general IMA API handling processes

#include "IMA_API_handling.csp"

-- include general processes to trigger IMA API calls including

-- checking the respective results

#include "IMA_API_macros.csp"

-- include constants etc. from the configuration tables

-- (include file generated by configuration data parser)

--

-- partition related data (here for Partition 3, postfix PT3)

#include "IMA_Conf_PT3.csp"

-- global (module related) data for all partitions

--#include "IMA_Conf_Globals.csp"

-- include types and macro process definitions for AFDX and the

-- communication flow scenario

#include "IMA_AFDX_types.csp"

#include "IMA_AFDX_handling.csp"

#include "IMA_com_flow_types.csp"

#include "IMA_com_flow_handling.csp"

--------------------------------------------------------------------------------

-- set (type) declarations

--------------------------------------------------------------------------------

-- Test Application Module ID

-- for distinction of test applications running on different IMA modules

MOD = {1}

TAMOD = 1

-- Test Application Partition Numbers

-- for distinction of test applications running on different partitions

PART = {3}

-- Test Application Process Numbers

-- for distinction of test applications processes within one partition

PROC = { 1, 2, 3, 129, 130 }

-- set of possible timer IDs

TIMER = {0..9}

--------------------------------------------------------------------------------

-- local settings and definitions

--------------------------------------------------------------------------------


-- aperiodic process is process i on partition 3 on module 1

-- myTAPERPID = modulenumber.partitionnumber.processnumber

toPR1 = 1.3.1

PR1 = 1

toPR2 = 1.3.2

PR2 = 2

toPR3 = 1.3.3

PR3 = 3

-- MAIN process on partition 3 on module 1 has process number 130

toPRmain = 1.3.130

-- Error handler process on partition 3 on module 1 has process number 129

toPRerr = 1.3.129

PRerr = 129

-- timer assignments

TM_WAIT = 0

TM_RETVAL = 1

TM_STARTUP = 2

TM_DISC_ACT = 3

TM_COM_FLOW = 4

TM_LOOP_DURATION = 5

TM_TEST_DURATION = 6

COM_FLOW_MSG_SIZE = 512

COM_FLOW_MAX_MSG_NB = 1

--------------------------------------------------------------------------------

-- Channel declarations

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Errors generated by the test specification

--------------------------------------------------------------------------------

pragma AM_ERROR

channel error : TIMER

--------------------------------------------------------------------------------

-- Warnings generated by the test specification

--------------------------------------------------------------------------------

pragma AM_WARNING

channel warning : TIMER

WARNING_INVALID_CONFIG = 9

--------------------------------------------------------------------------------

-- Timers (set/elapsed/reset)

-- The durations associated with each timer are defined in config.rtt

--------------------------------------------------------------------------------

pragma AM_SET_TIMER

channel setTimer : TIMER

pragma AM_ELAPSED_TIMER

channel elapsedTimer : TIMER

pragma AM_RESET_TIMER

channel resetTimer : TIMER

--------------------------------------------------------------------------------

-- Inputs to the test application (TA)

--------------------------------------------------------------------------------

pragma AM_OUTPUT

#include "IMA_input_channels.csp"

#include "IMA_DISC_in.csp"

#include "IMA_com_flow_in.csp"


--

-- Events to the other test specifications

--

-- channel for synchronization between the test specifications

channel PT3_finished

--------------------------------------------------------------------------------

-- Outputs of the test application (TA)

--------------------------------------------------------------------------------

pragma AM_INPUT

#include "IMA_output_channels.csp"

#include "IMA_DISC_out.csp"

#include "IMA_com_flow_out.csp"

--

-- Events from the other test specifications

--

-- channel for synchronization between the test specifications

channel reset_completed : MOD

--------------------------------------------------------------------------------

-- Internal Channels

--------------------------------------------------------------------------------

pragma AM_INTERNAL

--------------------------------------------------------------------------------

-- Test Termination

--------------------------------------------------------------------------------

pragma AM_TERMINATE_TEST

--==============================================================================

-- Process definitions

--==============================================================================

--------------------------------------------------------------------------------

-- Top-level process

--------------------------------------------------------------------------------

MAIN_PT3 =

(

-- find suitable ports for communication flow scenario in the partition’s

-- configuration

let

source_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_SOURCE)

dest_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_DESTINATION)

within

(

-- check that suitable ports are available

if ((source_port == -1) or (dest_port == -1))

then warning.WARNING_INVALID_CONFIG -> SKIP

else

(

-- wait for reset of IMA module

reset_completed.TAMOD ->

-- initialize the partition (create processes, buffers, ports) and,

-- after switching to normal mode, start the TA-internal communication

-- flow scenario in all started processes

PART_INIT(source_port, dest_port) ;

setTimer.TM_TEST_DURATION ->


-- start sending of communication flow messages for a predefined time

SEND_RECV_LOOP(TM_TEST_DURATION,

dest_port,

source_port,

COM_FLOW_MSG_SIZE,

com_flow_sequence_id_t,

com_flow_sequence_id_t) ;

-- synchronization between the test specifications

PT3_finished ->

SKIP

)

)

)

--------------------------------------------------------------------------------

-- Macro processes

--------------------------------------------------------------------------------

-- test procedure specific macro processes for all test specifications (main_pt*.csp)

#include "aux_procs.csp"

--------------------------------------------------------------------------------

specs/main_pt4.csp.t:

--------------------------------------------------------------------------------

--

-- Description:

--

-- Test Objective: TO_PART_003

-- Test Procedure: test_procedure_03

--

-- Tag: PART_008_TR001

--

--

-- Test, if the reset of an application has no effects on other

-- applications (partitions)

--

-- Behavior of partition 4

--

--------------------------------------------------------------------------------

-- include IMA type definitions

#include "IMA_types4.csp"

-- include general macro process definitions

#include "IMA_macros.csp"

-- include general IMA API handling processes

#include "IMA_API_handling.csp"

-- include general processes to trigger IMA API calls including

-- checking the respective results

#include "IMA_API_macros.csp"

-- include constants etc. from the configuration tables

-- (include file generated by configuration data parser)

--

-- partition related data (here for Partition 4, postfix PT4)

#include "IMA_Conf_PT4.csp"

-- global (module related) data for all partitions

--#include "IMA_Conf_Globals.csp"

-- include types and macro process definitions for AFDX and the

-- communication flow scenario

#include "IMA_AFDX_types.csp"

#include "IMA_AFDX_handling.csp"


#include "IMA_com_flow_types.csp"

#include "IMA_com_flow_handling.csp"

--------------------------------------------------------------------------------

-- set (type) declarations

--------------------------------------------------------------------------------

-- Test Application Module ID

-- for distinction of test applications running on different IMA modules

MOD = {1}

TAMOD = 1

-- Test Application Partition Numbers

-- for distinction of test applications running on different partitions

PART = {4}

-- Test Application Process Numbers

-- for distinction of test applications processes within one partition

PROC = { 1, 2, 3, 129, 130 }

-- set of possible timer IDs

TIMER = {0..9}

--------------------------------------------------------------------------------

-- local settings and definitions

--------------------------------------------------------------------------------

-- aperiodic process is process i on partition 4 on module 1

-- myTAPERPID = modulenumber.partitionnumber.processnumber

toPR1 = 1.4.1

PR1 = 1

toPR2 = 1.4.2

PR2 = 2

toPR3 = 1.4.3

PR3 = 3

-- MAIN process on partition 4 on module 1 has process number 130

toPRmain = 1.4.130

-- Error handler process on partition 4 on module 1 has process number 129

toPRerr = 1.4.129

PRerr = 129

-- timer assignments

TM_WAIT = 0

TM_RETVAL = 1

TM_STARTUP = 2

TM_DISC_ACT = 3

TM_COM_FLOW = 4

TM_LOOP_DURATION = 5

TM_TEST_DURATION = 6

COM_FLOW_MSG_SIZE = 512

COM_FLOW_MAX_MSG_NB = 1

--------------------------------------------------------------------------------

-- Channel declarations

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Errors generated by the test specification

--------------------------------------------------------------------------------

pragma AM_ERROR

channel error : TIMER

--------------------------------------------------------------------------------

-- Warnings generated by the test specification


--------------------------------------------------------------------------------

pragma AM_WARNING

channel warning : TIMER

WARNING_INVALID_CONFIG = 9

--------------------------------------------------------------------------------

-- Timers (set/elapsed/reset)

-- The durations associated with each timer are defined in config.rtt

--------------------------------------------------------------------------------

pragma AM_SET_TIMER

channel setTimer : TIMER

pragma AM_ELAPSED_TIMER

channel elapsedTimer : TIMER

pragma AM_RESET_TIMER

channel resetTimer : TIMER

--------------------------------------------------------------------------------

-- Inputs to the test application (TA)

--------------------------------------------------------------------------------

pragma AM_OUTPUT

#include "IMA_input_channels.csp"

#include "IMA_DISC_in.csp"

#include "IMA_com_flow_in.csp"

--

-- Events to the other test specifications

--

-- channel for synchronization between the test specifications

channel PT4_finished

--------------------------------------------------------------------------------

-- Outputs of the test application (TA)

--------------------------------------------------------------------------------

pragma AM_INPUT

#include "IMA_output_channels.csp"

#include "IMA_DISC_out.csp"

#include "IMA_com_flow_out.csp"

--

-- Events from the other test specifications

--

-- channel for synchronization between the test specifications

channel reset_completed : MOD

--------------------------------------------------------------------------------

-- Internal Channels

--------------------------------------------------------------------------------

pragma AM_INTERNAL

--------------------------------------------------------------------------------

-- Test Termination

--------------------------------------------------------------------------------

pragma AM_TERMINATE_TEST

--==============================================================================

-- Process definitions

--==============================================================================

--------------------------------------------------------------------------------

-- Top-level process

--------------------------------------------------------------------------------


MAIN_PT4 =

(

-- find suitable ports for communication flow scenario in the partition’s

-- configuration

let

source_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_SOURCE)

dest_port = get_port_index(COM_FLOW_MSG_SIZE,

COM_FLOW_MAX_MSG_NB,

port_QUEUING_PORT,

dir_DESTINATION)

within

(

-- check that suitable ports are available

if ((source_port == -1) or (dest_port == -1))

then warning.WARNING_INVALID_CONFIG -> SKIP

else

(

-- wait for reset of IMA module

reset_completed.TAMOD ->

-- initialize the partition (create processes, buffers, ports) and,

-- after switching to normal mode, start the TA-internal communication

-- flow scenario in all started processes

PART_INIT(source_port, dest_port) ;

setTimer.TM_TEST_DURATION ->

-- start sending of communication flow messages for a predefined time

SEND_RECV_LOOP(TM_TEST_DURATION,

dest_port,

source_port,

COM_FLOW_MSG_SIZE,

com_flow_sequence_id_t,

com_flow_sequence_id_t) ;

-- synchronization between the test specifications

PT4_finished ->

SKIP

)

)

)

--------------------------------------------------------------------------------

-- Macro processes

--------------------------------------------------------------------------------

-- test procedure specific macro processes for all test specifications (main_pt*.csp)

#include "aux_procs.csp"

--------------------------------------------------------------------------------

specs/aux_procs.csp.t:

--------------------------------------------------------------------------------

--

-- Description:

--

-- Test Objective: TO_PART_003

-- Test Procedure: test_procedure_03

--

-- Tag: PART_008_TR001

--

--

-- Test, if the reset of an application has no effects on other

-- applications (partitions)

--

-- Common auxiliary functions/processes


--

--------------------------------------------------------------------------------

--

-- initialize the partition: create processes, error handler, buffers, queuing ports

-- after switching to normal mode, start communication flow scenario in each

-- started process

PART_INIT(source_port_idx, dest_port_idx) =

(

-- create error handler

-- (macro defined in IMA_API_macros.csp,

-- attributes used for the macro are defined in IMA_Conf_PTx.csp)

CREATE_STANDARD_ERROR_HANDLER(toPRmain) ;

-- create three standard aperiodic processes

CREATE_STANDARD_APERIODIC_TA(toPRmain, PR1) ;

CREATE_STANDARD_APERIODIC_TA(toPRmain, PR2) ;

CREATE_STANDARD_APERIODIC_TA(toPRmain, PR3) ;

(

let

-- retrieve the port parameters from the configuration data

-- (IMA_Conf_PTx.csp)

msg_size_src = get_matching_elem(source_port_idx,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE)

msg_size_dst = get_matching_elem(dest_port_idx,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE)

msg_size = (if (msg_size_src < msg_size_dst)

then msg_size_src

else msg_size_dst)

msg_nb_src = get_matching_elem(source_port_idx,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG)

msg_nb_dst = get_matching_elem(dest_port_idx,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG)

msg_dir_src = get_matching_elem(source_port_idx,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORT_DIRECTIONS)

msg_dir_dst = get_matching_elem(dest_port_idx,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORT_DIRECTIONS)

buffer_msg_nb = 10

within

(

-- create the buffer for communication between process PR1 and PR2

check_CREATE_BUFFER(toPRmain,

11,

msg_size,

buffer_msg_nb,

qd_FIFO,

ret_NO_ERROR) ;

-- create the buffer for communication between process PR2 and PR3

check_CREATE_BUFFER(toPRmain,

12,

msg_size,

buffer_msg_nb,

qd_FIFO,

ret_NO_ERROR) ;

-- create the port to receive the messages from the test specifications

check_CREATE_QUEUING_PORT(toPRmain,

dest_port_idx,

msg_size_dst,

msg_nb_dst,

msg_dir_dst,

qd_FIFO,

ret_NO_ERROR) ;

-- create the port to send the message back to the test specification


check_CREATE_QUEUING_PORT(toPRmain,

source_port_idx,

msg_size_src,

msg_nb_src,

msg_dir_src,

qd_FIFO,

ret_NO_ERROR) ;

-- switch to normal mode

SET_PARTITION_MODE(toPRmain, op_NORMAL) ;

-- start communication flow scenario in all aperiodic processes

-- PR1 listens on the queuing port

START_CF_SCENARIO(toPR1,

< (dest_port_idx, port_QUEUING_PORT) >) ;

-- PR2 listens on the first buffer

START_CF_SCENARIO(toPR2,

< (11, port_BUFFER) >) ;

-- PR3 listens on the second buffer

START_CF_SCENARIO(toPR3,

< (12, port_BUFFER) >)

)

)

)

SEND_RECV_LOOP(timer, snd_port_idx, rcv_port_idx, size, seq_nums, usable) =

(

let

{ part } = TAPARTNUM

within

(

|~| seq_num:usable @

(

AFDX_COM_FLOW_CREATE_MESSAGE(part,

COM_FLOW_MSG_SIZE,

<(11, port_BUFFER),

(12, port_BUFFER),

(rcv_port_idx, port_QUEUING_PORT)>,

seq_num) ;

AFDX_COM_FLOW_SEND_MESSAGE(part,

get_afdx_port(snd_port_idx,

port_QUEUING_PORT)) ;

setTimer.TM_LOOP_DURATION ->

AFDX_COM_FLOW_RECEIVE_MESSAGE(get_afdx_port(rcv_port_idx,

port_QUEUING_PORT),

true,

size,

seq_num) ;

(

(elapsedTimer.timer -> SKIP)

[]

(

elapsedTimer.TM_LOOP_DURATION ->

SEND_RECV_LOOP(timer, snd_port_idx, rcv_port_idx, size, seq_nums, seq_nums)

)

)

)

)

)

-- get the index of a port with the given direction, a MAX_MSG_SIZE >=

-- msg_size and MAX_NB_MSG >= msg_nb.

-- If no suitable port is found, get_port_index will return -1

get_port_index(msg_size, msg_nb, type, dir) =

(

if (type == port_QUEUING_PORT)

then get_port_index1((if (dir == dir_SOURCE)


then IMA_Conf_SEQ_AFDX_QP_OUT

else IMA_Conf_SEQ_AFDX_QP_IN),

msg_size, msg_nb,

IMA_Conf_SEQ_QUEUING_PORT_INDICES,

IMA_Conf_SEQ_QUEUING_PORTS_MAX_MSG_SIZE,

IMA_Conf_SEQ_QUEUING_PORTS_MAX_NB_MSG)

else if (type == port_SAMPLING_PORT)

then get_port_index1((if (dir == dir_SOURCE)

then IMA_Conf_SEQ_AFDX_SP_OUT

else IMA_Conf_SEQ_AFDX_SP_IN),

msg_size, 0,

IMA_Conf_SEQ_SAMPLING_PORT_INDICES,

IMA_Conf_SEQ_SAMPLING_PORTS_MAX_MSG_SIZE,

<>)

else -1

)

get_port_index1(seq, msg_size, msg_nb, seq_ind, seq_msg_size, seq_msg_nb) =

(

if (null(seq))

then -1

else (

if ((elem(head(seq), seq_ind)) and

(get_matching_elem(head(seq), seq_ind, seq_msg_size) >= msg_size) and

((msg_nb == -1) or (get_matching_elem(head(seq), seq_ind, seq_msg_nb) >= msg_nb)))

then head(seq)

else get_port_index1(tail(seq),

msg_size, msg_nb,

seq_ind, seq_msg_size, seq_msg_nb)

)

)

--------------------------------------------------------------------------------
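The recursive search in get_port_index1 deserves a remark: it walks through the sequence of candidate port indices and returns the first one whose configured maximum message size and maximum message number are sufficient. The following hypothetical Python rendering makes the logic explicit (assuming that get_matching_elem returns the attribute stored at the position of the given port index, as its uses above suggest); it is an illustration, not part of the template library.

# Hypothetical Python rendering of the CSP port search above.

def get_matching_elem(idx, indices, values):
    # attribute stored at the same position as idx in the index sequence
    return values[indices.index(idx)]

def get_port_index1(seq, msg_size, msg_nb, seq_ind, seq_msg_size, seq_msg_nb):
    for idx in seq:   # corresponds to the head/tail recursion in the CSP
        if (idx in seq_ind
                and get_matching_elem(idx, seq_ind, seq_msg_size) >= msg_size
                and (msg_nb == -1
                     or get_matching_elem(idx, seq_ind, seq_msg_nb) >= msg_nb)):
            return idx   # first suitable port wins
    return -1            # no suitable port in this configuration

# Example with the queuing ports of Config0040_1, partition 1:
# port indices 100 and 200, MAX_MSG_SIZE 1024 and MAX_NB_MSG 1 each.
assert get_port_index1([100, 200], 512, 1, [100, 200], [1024, 1024], [1, 1]) == 100
assert get_port_index1([100, 200], 2048, 1, [100, 200], [1024, 1024], [1, 1]) == -1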

D.1.1.2 Possible Configurations

possible_configs:

# Configuration requirements:

# - configuration with >= 4 partitions each usable by the test application

# - reasonable scheduling of each partition (i.e., scheduling windows are long

# enough for receiving and sending a message)

# - process stack size allows more than 3 standard processes per partition

# - each partition has 2 AFDX queuing ports (one for input, one for output)

# - each partition has enough data area for two buffers

Config0040_1

Config0040_2

D.1.1.3 RT-Tester Configuration Template

config.rtt.t:

################################################################################

# General Configuration Data

################################################################################

TESTOBJECTIVE TO_PART_003

TESTPROCEDURE test_procedure_03_--CONFIG--_--VAR--

################################################################################

# Abstract Machines

################################################################################


#-------------------------------------------------------------------------------

# Partition 1 handling

#-------------------------------------------------------------------------------

AM 101

AMPROCESS MAIN_PT1

FILE specs/main_pt1

TIMER 0:1000:- # Short wait between consecutive calls etc.

1:2000:- # Wait for API call return value

2:30000:- # Wait for startup event of current Partition

3:200:- # General discrete toggle duration

4:1000:- # Maximum round trip time for com flow msg

5:5000:10000 # Distance between two messages (normal I/O rate)

6:240000:- # Test duration

7:120000:- # Time after which to reset partition 1

END

#-------------------------------------------------------------------------------

# Partition 2 handling

#-------------------------------------------------------------------------------

AM 201

AMPROCESS MAIN_PT2

FILE specs/main_pt2

TIMER 0:1000:- # Short wait between consecutive calls etc.

1:2000:- # Wait for API call return value

2:30000:- # Wait for startup event of current Partition

3:200:- # General discrete toggle duration

4:1000:- # Maximum round trip time for com flow msg

5:5000:10000 # Distance between two messages (normal I/O rate)

6:240000:- # Test duration

7:120000:- # Time after which to reset partition 1

END

#-------------------------------------------------------------------------------

# Partition 3 handling

#-------------------------------------------------------------------------------

AM 301

AMPROCESS MAIN_PT3

FILE specs/main_pt3

TIMER 0:1000:- # Short wait between consecutive calls etc.

1:2000:- # Wait for API call return value

2:30000:- # Wait for startup event of current Partition

3:200:- # General discrete toggle duration

4:1000:- # Maximum round trip time for com flow msg

5:5000:10000 # Distance between two messages (normal I/O rate)

6:240000:- # Test duration

7:120000:- # Time after which to reset partition 1

END

#-------------------------------------------------------------------------------

# Partition 4 handling

#-------------------------------------------------------------------------------

AM 401

AMPROCESS MAIN_PT4

FILE specs/main_pt4

TIMER 0:1000:- # Short wait between consecutive calls etc.

1:2000:- # Wait for API call return value

2:30000:- # Wait for startup event of current Partition

3:200:- # General discrete toggle duration

4:1000:- # Maximum round trip time for com flow msg

5:5000:10000 # Distance between two messages (normal I/O rate)

6:240000:- # Test duration

7:120000:- # Time after which to reset partition 1

END

################################################################################

# Interface Modules (IFM)

################################################################################


#-------------------------------------------------------------------------------

# IFM for connection to TA

#-------------------------------------------------------------------------------

IFM TA 4

#-------------------------------------------------------------------------------

# IFM for discrete signals

#-------------------------------------------------------------------------------

IFM DISC

#-------------------------------------------------------------------------------

# IFM for AFDX communication

#-------------------------------------------------------------------------------

IFM AFDX
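The suffix .t marks all of the above files as templates: before execution, the placeholders --CONFIG-- and --VAR-- are expanded once for each entry of the possible_configs list (and for each test variant). The generator itself is not part of this listing; the following hypothetical Python sketch only illustrates the substitution step, and the output directory layout is an assumption made for the illustration.

# Hypothetical sketch of the template instantiation step.

from pathlib import Path

def instantiate(template: Path, config: str, variant: int) -> Path:
    """Expand one *.t template for one configuration/variant pair."""
    text = template.read_text()
    text = text.replace("--CONFIG--", config).replace("--VAR--", str(variant))
    out_dir = Path(config)                  # one output directory per configuration
    out_dir.mkdir(exist_ok=True)
    target = out_dir / template.name[:-2]   # strip the trailing ".t"
    target.write_text(text)
    return target

for config in ("Config0040_1", "Config0040_2"):   # entries of possible_configs
    instantiate(Path("config.rtt.t"), config, 1)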


D.2 Intra-Partition Communication Tests

Intra-partition communication testing shall verify that the partition-internal communication means like buffers, blackboards, semaphores, and events can be used as specified. In this appendix section, one test procedure template is listed which contributes to test objective TO_PARCOM_002. This test objective checks that message exchange through buffers is possible by all processes of a partition and that no messages are lost, corrupted, or delivered in wrong sequence. Test procedure template test_procedure_template_01, which has been selected for inclusion in this thesis, verifies that it is possible to write and then read the maximum number of messages without corrupting the messages or their sequence. It further tests that it is not possible to write more messages than allowed or read more messages than have been written.
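The expected buffer semantics behind these checks can be stated compactly. The following is a hypothetical Python model of the properties described above, not the IMA API itself: a buffer of capacity max_nb_msg accepts exactly max_nb_msg messages, refuses a further write, delivers the messages uncorrupted in FIFO order, and refuses a read when it is empty.

# Hypothetical Python model of the buffer properties checked by the
# test procedure below (illustration only).

from collections import deque

class Buffer:
    def __init__(self, max_nb_msg):
        self.max_nb_msg = max_nb_msg
        self.queue = deque()

    def send(self, msg):
        if len(self.queue) >= self.max_nb_msg:
            return False                # overflow: the write is refused
        self.queue.append(msg)
        return True

    def receive(self):
        # underflow: a read from an empty buffer is refused (None)
        return self.queue.popleft() if self.queue else None

buf = Buffer(max_nb_msg=31)             # one of the buffer sizes used below
msgs = ["msg%d" % i for i in range(31)]
assert all(buf.send(m) for m in msgs)   # filling the buffer succeeds
assert not buf.send("one too many")     # writing beyond capacity fails
assert [buf.receive() for _ in msgs] == msgs   # FIFO order, no corruption
assert buf.receive() is None            # reading the empty buffer fails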

D.2.1 TO_PARCOM_002/test_procedure_template_01

The test procedure template consists of three types of files: a test specification template, a list of possible configurations, and an RT-Tester configuration template, which are included in the following subsections.

D.2.1.1 Test Specifications

specs/main_pt1.csp.t:

--------------------------------------------------------------------------------

--

-- Description:

--

-- Test Objective: TO_PARCOM_002

-- Test Procedure: test_procedure_01

--

-- Tag: PARCOM_007_TR001

--

--

-- Check buffers (overflows and sequencing)

--

--------------------------------------------------------------------------------

-- include IMA type definitions

#include "IMA_types1.csp"

-- include general macro process definitions

#include "IMA_macros.csp"

-- include general IMA API handling processes

#include "IMA_API_handling.csp"

-- include general processes to trigger IMA API calls including

-- checking the respective results

#include "IMA_API_macros.csp"

-- include constants etc. generated by the IMA configuration parser

-- Use includes for Partition 1 here (postfix PT1)

#include "IMA_Conf_PT1.csp"

-- include global data from ICD, valid for all partitions

#include "IMA_Conf_Globals.csp"

--------------------------------------------------------------------------------

-- set (type) declarations

--------------------------------------------------------------------------------

-- Test Application Module ID

-- for distinction of test applications running on different IMA modules

MOD = {1}


TAMOD = 1

-- Test Application Partition Numbers

-- for distinction of test applications running on different

-- partitions

PART = {1}

-- Test Application Process Numbers

-- for distinction of test application processes within one partition

PROC = { 1, 2, 130}

-- set of possible timer IDs

TIMER = {0..9}

--------------------------------------------------------------------------------

-- local settings and definitions

--------------------------------------------------------------------------------

-- periodic process is process i on partition 1 on module 1

-- myTAPERPID = modulenumber.partitionnumber.processnumber

toPR1 = 1.1.1

PR1 = 1

toPR2 = 1.1.2

PR2 = 2

-- MAIN process on partition 1 on module 1 has process number 130

toPRmain = 1.1.130

PRmain = 130

-- Index of first buffer is 131 since lower IDs are potentially used internally

buf1 = 131

-- sets for selecting different buffer configurations to be used

buf1_max_msg_size_set = {64,1024,8192}

buf1_max_nb_msg = {5,31}

-- maximum seq ID number to be used (using seq IDs {1..max_seq_num})

max_seq_num = 3

-- standard timer assignments

TM_WAIT = 0

TM_RETVAL = 1

TM_STARTUP = 2

TM_DISC_ACT = 3

-- private timer definitions

TM_RESTART = 5

--------------------------------------------------------------------------------

-- Channel declarations

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Errors generated by the test specification

--------------------------------------------------------------------------------

pragma AM_ERROR

channel error : TIMER

--------------------------------------------------------------------------------

-- Warnings generated by the test specification

--------------------------------------------------------------------------------

pragma AM_WARNING

channel warning : TIMER

--------------------------------------------------------------------------------

-- Timers (set/elapsed/reset)

-- The durations associated with each timer are defined in config.rtt

--------------------------------------------------------------------------------


pragma AM_SET_TIMER

channel setTimer : TIMER

pragma AM_ELAPSED_TIMER

channel elapsedTimer : TIMER

pragma AM_RESET_TIMER

channel resetTimer : TIMER

--------------------------------------------------------------------------------

-- Inputs to the test application (TA)

--------------------------------------------------------------------------------

pragma AM_OUTPUT

#include "IMA_input_channels.csp"

#include "IMA_DISC_in.csp"

--------------------------------------------------------------------------------

-- Outputs of the test application (TA)

--------------------------------------------------------------------------------

pragma AM_INPUT

#include "IMA_output_channels.csp"

#include "IMA_DISC_out.csp"

--------------------------------------------------------------------------------

-- Internal Channels

--------------------------------------------------------------------------------

pragma AM_INTERNAL

--

-- Requirement Tag channels

--

channel PARCOM_007_TR001

--==============================================================================

--==============================================================================

--------------------------------------------------------------------------------

-- Process definitions

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

-- Top-level process

--------------------------------------------------------------------------------

MAIN_PT1 =

-- reset IMA module

IMA_RESET(TAMOD);

CHECK_BUFFER_HANDLING

--------------------------------------------------------------------------------

-- Macro processes

--------------------------------------------------------------------------------

-- CHECK_BUFFER_HANDLING is restarted repeatedly and selects

-- different buffer sizes and number of messages after each restart

CHECK_BUFFER_HANDLING =

-- partition started in COLD_START or WARM_START mode

-- create error handler, two test application processes, and a buffer

-- switch to normal mode afterwards


CREATE_STANDARD_ERROR_HANDLER(toPRmain);

CREATE_STANDARD_PERIODIC_TA(toPRmain, PR1);

CREATE_STANDARD_PERIODIC_TA(toPRmain, PR2);

-- select a possible combination of buffer parameters from the given sets

|~| buf_size : buf1_max_msg_size_set @

|~| buf_nb_msg : buf1_max_nb_msg @

-- create buffer with these parameters

(check_CREATE_BUFFER(toPRmain, buf1, buf_size, buf_nb_msg,

qd_FIFO, ret_NO_ERROR);

-- switch to NORMAL mode

SET_PARTITION_MODE(toPRmain, op_NORMAL);

WAIT(TM_WAIT);

-- start timer for restarting the test after a random time

-- (after restart, a new parameter combination is selected)

setTimer.TM_RESTART ->

-- write to and read from currently empty buffer

WRITE_READ_BUF(buf1,buf_size,buf_nb_msg,false)

)

--------------------------------------------------------------------------------

-- write to buffer until it is full and then try to write one more message,

-- then read until buffer is empty again

-- repeat until timer is elapsed, then restart partition and

-- continue with other buffer parameters

WRITE_READ_BUF(buf,buf_size,buf_nb_msg,full) =

((not full) & (-- buffer is empty, process PR1 writes new items to buffer

-- until it is full

WRITE_ITEMS(buf,buf_size,buf_nb_msg,0,1);

WRITE_READ_BUF(buf,buf_size,buf_nb_msg,true)))

[]

((full) & (-- buffer is full, process PR2 reads all messages from the buffer

-- until it is empty again

READ_ITEMS(buf,buf_size,buf_nb_msg,0,1);

-- buffer was filled and then emptied again,

-- so requirement tag is checked

PARCOM_007_TR001 ->

-- repeat until timer is elapsed

WRITE_READ_BUF(buf,buf_size,buf_nb_msg,false)))

[]

(elapsedTimer.TM_RESTART ->

-- switch to COLD_START or WARM_START mode

((SET_PARTITION_MODE(toPR1, op_COLD_START); WAIT(TM_WAIT); SKIP)

|~|

(SET_PARTITION_MODE(toPR1, op_WARM_START); WAIT(TM_WAIT); SKIP));

CHECK_BUFFER_HANDLING)

--------------------------------------------------------------------------------

-- write to buffer buf until it is full and then try to write one more message

-- (recursive function)

WRITE_ITEMS(buf,buf_size,buf_nb_msg,curr_msg,wr_seq) =

((curr_msg < buf_nb_msg) & (-- buffer is not yet full

-- write message into buffer

check_SEND_BUFFER(toPR1,buf,buf_size,

wr_seq,10000,

ret_NO_ERROR);

-- continue recursively

WRITE_ITEMS(buf,buf_size,buf_nb_msg,

curr_msg+1,

((wr_seq)%max_seq_num)+1)))

[]

((curr_msg >= buf_nb_msg) & (-- buffer is full

-- try to write another message

-- without time_out


check_SEND_BUFFER(toPR1,buf,buf_size,

wr_seq,0,

ret_NOT_AVAILABLE);

-- stop recursion

SKIP))

[]

((curr_msg >= buf_nb_msg) & (-- buffer is full

-- try to write another message

-- with short time_out

check_SEND_BUFFER(toPR1,buf,buf_size,

wr_seq,10,

ret_TIMED_OUT);

-- stop recursion

SKIP))

--------------------------------------------------------------------------------

-- read from buffer until it is empty and then try to read one more message

-- (recursive function)

READ_ITEMS(buf,buf_size,buf_nb_msg,curr_msg,rd_seq) =

((curr_msg < buf_nb_msg) & (-- buffer not yet empty

-- read message from buffer

check_RECEIVE_BUFFER(toPR2,buf,0,

ret_NO_ERROR,

buf_size,rd_seq,

toPR1);

-- continue recursively

READ_ITEMS(buf,buf_size,buf_nb_msg,

curr_msg+1,

((rd_seq)%max_seq_num)+1)))

[]

((curr_msg >= buf_nb_msg) & (-- buffer is empty

-- try to read another message

-- while no message available, no timeout

check_RECEIVE_BUFFER(toPR2,buf,0,

ret_NOT_AVAILABLE,

buf_size,rd_seq,

toPR1);

-- stop recursion

SKIP))

[]

((curr_msg >= buf_nb_msg) & (-- buffer is empty

-- try to read another message

-- while no message available, short timeout

check_RECEIVE_BUFFER(toPR2,buf,10,

ret_TIMED_OUT,

buf_size,rd_seq,

toPR1);

-- stop recursion

SKIP))

--------------------------------------------------------------------------------

D.2.1.2 Possible Configurations

possible_configs:

# Configuration requirements:

# - configuration with >= 1 partitions each usable by the test application

# - reasonable scheduling of each partition

# - process stack size allows >= 2 standard periodic processes

# - the partition’s data area size allows creation of a buffer

Config0001_1

# ...


D.2.1.3 RT-Tester Configuration Template

config.rtt.t:

################################################################################

# General Configuration Data

################################################################################

TESTOBJECTIVE TO_PARCOM_002

TESTPROCEDURE test_procedure_01_--CONFIG--_--VAR--

DURATION 500

################################################################################

# Abstract Machines

################################################################################

#-------------------------------------------------------------------------------

# Partition 1 handling

#-------------------------------------------------------------------------------

AM 101

AMPROCESS MAIN_PT1

FILE specs/main_pt1

TIMER 0:1000:- # Short wait between consecutive calls etc.

1:2000:- # Wait for API call return value

2:5000:- # Wait for startup event of current Partition

3:200:- # General discrete toggle duration

4:100:- # Unused

5:50000:200000 # Restart test with modified buffer size

END

################################################################################

# Interface Modules (IFM)

################################################################################

#-------------------------------------------------------------------------------

# IFM for connection to TA

#-------------------------------------------------------------------------------

IFM TA 1

#-------------------------------------------------------------------------------

# IFM for discrete signals

#-------------------------------------------------------------------------------

IFM DISC
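The template files above (postfix .t) still contain placeholders such as --CONFIG-- and --VAR--, which must be resolved before a concrete test procedure can be executed. The following Python sketch illustrates one possible instantiation step; only the placeholder names are taken from config.rtt.t above, while the file names and the instantiate helper are assumptions for illustration, not part of the RT-Tester tooling.

from pathlib import Path

def instantiate(template, target, config, variant):
    # replace the --CONFIG--/--VAR-- placeholders of a .t template file
    text = Path(template).read_text()
    text = text.replace("--CONFIG--", config).replace("--VAR--", variant)
    Path(target).write_text(text)

# e.g. derive one concrete configuration per entry of the possible_configs list
instantiate("config.rtt.t", "config.rtt", config="Config0001_1", variant="01")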


Appendix E

Communication Schedule Examples

E.1 Example of a Communication Schedule Set

The following communication schedule set contains all possible communication schedules of length 3 time units (generated without applying any restriction rules) for the example network described by Fig. 7.2, Fig. 7.3, and Table 7.1. For further details on this representation format see Sect. 7.1.3.
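As a reading aid, the first schedule of the set can be rendered in Python as follows. The tuple-of-pairs encoding is an assumption made for illustration only; the action strings W(...), R(...), B(n), and T(n) are copied verbatim from the listing, with their meaning defined in Sect. 7.1.3.

# one schedule = a sequence of (time unit, {node: action}) pairs;
# this mirrors the first element of the set below
schedule = [
    (0, {"A": "W(A1p1)", "B": "W(B1p1)"}),
    (1, {"A": "B(1)",    "B": "B(3)"}),
    (2, {"A": "W(A2p1)", "B": "B(2)"}),
]

for t, actions in schedule:
    print(f"t={t}: " + ", ".join(f"{n}:{a}" for n, a in sorted(actions.items())))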

{ < (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,


< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,


< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:T(1), B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:T(1), B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:T(1), B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:T(1), B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:T(1), B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:T(1), B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:T(1), B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:T(1), B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:T(1), B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:T(1), B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:T(1), B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,


< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >,

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) > }


E.2 Example of a Sorted Communication Schedule Sequence

The following communication schedule sequence contains all possible communication schedules of length 3 time units (generated without applying any restriction rules) for the example network described by Fig. 7.2, Fig. 7.3, and Table 7.1. The weight has been calculated by applying the heuristic function heuristic1, which has been described in Sect. 7.2.2.2. For further details on this representation format see Sect. 7.1.3.
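The sorting step itself is straightforward once every schedule has been assigned a weight, as the Python sketch below shows. Note that the weight function used here is a stand-in that merely counts write and read actions; it is not heuristic1 of Sect. 7.2.2.2, which is not reproduced in this appendix.

def weight(schedule):
    # stand-in weight: number of W/R actions; NOT heuristic1 of Sect. 7.2.2.2
    return sum(1 for _, actions in schedule
                 for act in actions.values()
                 if act.startswith(("W(", "R(")))

def sort_schedules(schedules):
    # stable sort, highest weight first, as in the sequence below
    return sorted(schedules, key=weight, reverse=True)

Applied to the schedule set of Sect. E.1, this yields a sequence of the kind listed below, where the weights actually computed by heuristic1 are given after the '#'.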

< < (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 55

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 55

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >, # 50

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >, # 50

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >, # 50

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >, # 50

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 50

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 50

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 50

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 50

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 50

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 50

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 50

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 50

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 45

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 45

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 45

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 45

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 45

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 45

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 45

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >, # 45

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >, # 45

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >, # 45

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >, # 45

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >, # 45

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 45

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 45

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 45

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >, # 45

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >, # 45

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >, # 45

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >, # 45


< (0,{A:T(1), B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >, # 40

< (0,{A:T(1), B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >, # 40

< (0,{A:T(1), B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >, # 40

< (0,{A:T(1), B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >, # 40

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 40

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 40

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 40

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 40

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >, # 40

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >, # 40

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:T(1), B:B(1)}) >, # 40

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >, # 40

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >, # 40

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:T(1), B:B(2)}) >, # 40

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >, # 40

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >, # 40

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >, # 40

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >, # 40

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >, # 40

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >, # 40

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 40

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 40

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 40

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 40

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 40

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 40

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 40

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 40

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >, # 35

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >, # 35

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >, # 35

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >, # 35

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >, # 35

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 35

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 35

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >, # 35

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >, # 35

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 35

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 35

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 35

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 35


< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 35

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 35

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 35

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 35

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 35

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 35

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 35

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 35

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 35

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 35

< (0,{A:W(A1p1),B:R(B1p2)}), (1,{A:B(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >, # 35

< (0,{A:W(A1p1),B:W(B1p1)}), (1,{A:B(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >, # 35

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 30

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 30

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 30

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 30

< (0,{A:T(1), B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p2),B:B(1)}) >, # 30

< (0,{A:T(1), B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:W(A2p1),B:B(1)}) >, # 30

< (0,{A:T(1), B:R(B1p2)}), (1,{A:R(A1p2),B:B(2)}), (2,{A:T(1), B:B(1)}) >, # 30

< (0,{A:T(1), B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p2),B:B(2)}) >, # 30

< (0,{A:T(1), B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:W(A2p1),B:B(2)}) >, # 30

< (0,{A:T(1), B:W(B1p1)}), (1,{A:R(A1p2),B:B(3)}), (2,{A:T(1), B:B(2)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 30

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 30

< (0,{A:R(A1p2),B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >, # 30

< (0,{A:R(A1p2),B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >, # 30

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >, # 30

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >, # 30

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >, # 30

< (0,{A:W(A1p1),B:R(B1p4)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 30

< (0,{A:W(A1p1),B:W(B1p3)}), (1,{A:B(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 30

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p2),B:B(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:W(A2p1),B:B(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p2),B:B(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:W(A2p1),B:B(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p2),B:B(2)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:W(A2p1),B:B(2)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >, # 25

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 25

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 25

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 25

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 25

< (0,{A:T(1), B:R(B1p4)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 25

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 25

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 25

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 25

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 25

< (0,{A:T(1), B:W(B1p3)}), (1,{A:R(A1p2),B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 25

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >, # 25

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >, # 25

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >, # 25

< (0,{A:R(A1p2),B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 25

< (0,{A:R(A1p2),B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 25

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 25

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 25


< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 25

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 25

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:R(B1p4)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:W(B1p3)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:R(B1p4)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:W(B1p3)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 20

< (0,{A:T(1), B:R(B1p2)}), (1,{A:T(1), B:B(2)}), (2,{A:T(1), B:B(1)}) >, # 20

< (0,{A:T(1), B:W(B1p1)}), (1,{A:T(1), B:B(3)}), (2,{A:T(1), B:B(2)}) >, # 20

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 20

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 20

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 20

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 20

< (0,{A:R(A1p2),B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 20

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p4)}), (2,{A:T(1), B:B(1)}) >, # 15

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:W(B1p3)}), (2,{A:T(1), B:B(1)}) >, # 15

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:R(B1p2)}), (2,{A:T(1), B:B(2)}) >, # 15

< (0,{A:T(1), B:R(B1p4)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 15

< (0,{A:T(1), B:W(B1p3)}), (1,{A:T(1), B:B(1)}), (2,{A:T(1), B:T(1)}) >, # 15

< (0,{A:W(A1p1),B:T(1)}), (1,{A:B(1), B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 15

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:R(B1p4)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:W(B1p3)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p2),B:T(1)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:W(A2p1),B:T(1)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:R(A1p2),B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 10

< (0,{A:R(A1p2),B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) >, # 10

< (0,{A:T(1), B:T(1)}), (1,{A:T(1), B:T(1)}), (2,{A:T(1), B:T(1)}) > > # 0


Appendix F

Results of the Generation Algorithm's Prototype Implementation

The following tables summarize the execution results of the generation algorithm's prototype implementation, which has been executed on a 2.2 GHz 4-CPU PC with 3 960 MB RAM. The upper bounds, which depend on the specific network configuration, are given by the limitations of the prototype implementation. For obvious reasons, the set of communication schedules generated during the execution is not contained in the tables.

The longest communication schedules (duration 17 time units) can be generated for network1 while applying restriction function restriction2. The highest number of communication schedules (9 765 625) is generated for network9 when all communication schedules of length 5 are generated with restriction function restriction0. The longest execution time of the prototype has been measured while generating the communication schedules of length 11 for network3 (with restriction function restriction1). These ‘highlights’ are marked with an asterisk in Table F.2, Table F.5, and Table F.3, respectively. However, these values do not denote the boundaries of the algorithm but of the respective prototype implementation when executed on this specific PC.
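For the unrestricted case (restriction0), several entries of the tables are consistent with pure exponential growth k**n in the schedule length n, where k would be the number of admissible per-time-unit action combinations of the respective network. This is an observation about the tabulated numbers only, not a property stated by the algorithm, and it does not hold for all networks (network1, for instance, yields 210 schedules for both lengths 3 and 4). The following Python lines verify the reading for some entries:

# restriction0 entries read as k**n (k = action combinations per time unit)
assert 9 ** 3 == 729 and 9 ** 7 == 4_782_969        # network8, lengths 3 and 7
assert 25 ** 3 == 15_625 and 25 ** 5 == 9_765_625   # network9, lengths 3 and 5
assert 121 ** 3 == 1_771_561                        # network5/7/12, length 3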

Example     Applied         Schedule    Generated     Execution
network     restriction     length      schedules     time

network1    restriction0        3           210    0.002 sec
network1    restriction0        4           210    0.003 sec
network1    restriction0        5         1 890    0.007 sec
network1    restriction0        6        13 230    0.040 sec
network1    restriction0        7        68 040    0.224 sec
network1    restriction0        8       567 000    1.658 sec
network1    restriction0        9     2 721 600    8.472 sec
network1    restriction0       10     4 762 800   24.642 sec
network1    restriction1        3            33    0.002 sec
network1    restriction1        4            33    0.002 sec
network1    restriction1        5           132    0.002 sec
network1    restriction1        6           339    0.003 sec
network1    restriction1        7           625    0.005 sec
network1    restriction1        8         3 064    0.013 sec

Table F.1: Summary of the results (part 1)


Example     Applied         Schedule    Generated     Execution
network     restriction     length      schedules     time

network1    restriction1        9        13 148    0.049 sec
network1    restriction1       10        18 335    0.115 sec
network1    restriction1       11        35 781    0.230 sec
network1    restriction1       12       107 081    0.543 sec
network1    restriction1       13       323 956    1.558 sec
network1    restriction1       14       565 006    3.326 sec
network1    restriction1       15     6 150 297   19.013 sec
network1    restriction1       16     9 344 961   50.457 sec
network1    restriction2        3            30    0.002 sec
network1    restriction2        4            30    0.002 sec
network1    restriction2        5            88    0.002 sec
network1    restriction2        6           180    0.003 sec
network1    restriction2        7           340    0.004 sec
network1    restriction2        8         1 420    0.008 sec
network1    restriction2        9         5 900    0.023 sec
network1    restriction2       10         8 000    0.052 sec
network1    restriction2       11        13 766    0.101 sec
network1    restriction2       12        41 050    0.224 sec
network1    restriction2       13       128 534    0.627 sec
network1    restriction2       14       231 960    1.361 sec
network1    restriction2       15     2 319 261    7.369 sec
network1    restriction2       16     3 452 127   18.594 sec
network1    restriction2       17     8 244 396   49.014 sec  *
network2    restriction0        3           210    0.003 sec
network2    restriction0        4           210    0.004 sec
network2    restriction0        5         1 890    0.010 sec
network2    restriction0        6        13 230    0.063 sec
network2    restriction0        7        68 040    0.350 sec
network2    restriction0        8       567 000    2.573 sec
network2    restriction0        9     2 721 600   12.498 sec
network2    restriction0       10     4 762 800   37.159 sec
network2    restriction1        3            33    0.002 sec
network2    restriction1        4            33    0.002 sec
network2    restriction1        5           132    0.003 sec
network2    restriction1        6           339    0.004 sec
network2    restriction1        7           625    0.007 sec
network2    restriction1        8         3 064    0.018 sec
network2    restriction1        9        13 148    0.070 sec
network2    restriction1       10        18 335    0.165 sec
network2    restriction1       11        35 781    0.354 sec
network2    restriction1       12       107 081    0.832 sec
network2    restriction1       13       323 956    2.374 sec
network2    restriction1       14       565 006    5.414 sec
network2    restriction1       15     6 150 297   28.179 sec

* highlighted entry (see text)

Table F.2: Summary of the results (part 2)


Example     Applied         Schedule    Generated     Execution
network     restriction     length      schedules     time

network2    restriction2        3            30    0.002 sec
network2    restriction2        4            30    0.002 sec
network2    restriction2        5            88    0.002 sec
network2    restriction2        6           180    0.003 sec
network2    restriction2        7           340    0.005 sec
network2    restriction2        8         1 420    0.010 sec
network2    restriction2        9         5 900    0.033 sec
network2    restriction2       10         8 000    0.079 sec
network2    restriction2       11        13 766    0.152 sec
network2    restriction2       12        41 050    0.357 sec
network2    restriction2       13       128 534    0.916 sec
network2    restriction2       14       231 960    2.204 sec
network2    restriction2       15     2 319 261   10.896 sec
network2    restriction2       16     3 452 127   30.187 sec
network3    restriction0        3         1 760    0.007 sec
network3    restriction0        4         1 760    0.014 sec
network3    restriction0        5        44 000    0.131 sec
network3    restriction0        6       748 000    2.259 sec
network3    restriction0        7     8 580 000   32.464 sec
network3    restriction1        3           111    0.002 sec
network3    restriction1        4           111    0.003 sec
network3    restriction1        5         1 409    0.006 sec
network3    restriction1        6         8 561    0.032 sec
network3    restriction1        7        32 378    0.154 sec
network3    restriction1        8       238 494    0.891 sec
network3    restriction1        9     1 413 572    5.565 sec
network3    restriction1       10     2 849 010   17.141 sec
network3    restriction1       11     8 819 110   58.056 sec  *
network3    restriction2        3            99    0.002 sec
network3    restriction2        4            99    0.003 sec
network3    restriction2        5           918    0.005 sec
network3    restriction2        6         4 536    0.019 sec
network3    restriction2        7        17 406    0.083 sec
network3    restriction2        8       125 690    0.475 sec
network3    restriction2        9       764 515    2.892 sec
network3    restriction2       10     1 542 158    9.270 sec
network3    restriction2       11     3 921 167   27.901 sec

* highlighted entry (see text)

Table F.3: Summary of the results (part 3)


Example     Applied         Schedule    Generated     Execution
network     restriction     length      schedules     time

network4    restriction0        3        46 656    0.132 sec
network4    restriction0        4     1 679 616    4.782 sec
network4    restriction1        3         1 680    0.007 sec
network4    restriction1        4        19 600    0.068 sec
network4    restriction1        5       235 200    0.761 sec
network4    restriction1        6     2 744 000    9.435 sec
network4    restriction2        3         1 092    0.005 sec
network4    restriction2        4         8 281    0.031 sec
network4    restriction2        5        99 372    0.345 sec
network4    restriction2        6       753 571    2.896 sec
network4    restriction2        7     8 579 116   33.477 sec
network5    restriction0        3     1 771 561    7.701 sec
network5    restriction1        3        46 296    0.221 sec
network5    restriction1        4     1 653 796    7.764 sec
network5    restriction2        3        34 596    0.170 sec
network5    restriction2        4       923 521    4.494 sec
network6    restriction0        3        46 656    0.130 sec
network6    restriction0        4     1 679 616    4.796 sec
network6    restriction1        3         2 520    0.009 sec
network6    restriction1        4        44 100    0.140 sec
network6    restriction1        5       529 200    1.787 sec
network6    restriction1        6     9 261 000   31.781 sec
network6    restriction2        3         1 728    0.007 sec
network6    restriction2        4        20 736    0.071 sec
network6    restriction2        5       248 832    0.842 sec
network6    restriction2        6     2 985 984   10.168 sec
network7    restriction0        3     1 771 561    7.698 sec
network7    restriction1        3        60 516    0.294 sec
network7    restriction1        4     2 825 761   13.143 sec
network7    restriction2        3        46 656    0.223 sec
network7    restriction2        4     1 679 616    7.903 sec

Table F.4: Summary of the results (part 4)


Example     Applied         Schedule    Generated     Execution
network     restriction     length      schedules     time

network8    restriction0        3           729    0.003 sec
network8    restriction0        4         6 561    0.017 sec
network8    restriction0        5        59 049    0.146 sec
network8    restriction0        6       531 441    1.310 sec
network8    restriction0        7     4 782 969   11.670 sec
network8    restriction1        3           190    0.002 sec
network8    restriction1        4         1 579    0.005 sec
network8    restriction1        5        13 696    0.035 sec
network8    restriction1        6       121 213    0.307 sec
network8    restriction1        7     1 082 722    2.672 sec
network8    restriction1        8     9 711 727   24.260 sec
network8    restriction2        3            64    0.002 sec
network8    restriction2        4           256    0.003 sec
network8    restriction2        5         1 600    0.006 sec
network8    restriction2        6        12 544    0.034 sec
network8    restriction2        7       107 584    0.275 sec
network8    restriction2        8       952 576    2.398 sec
network8    restriction2        9     8 516 032   21.358 sec
network9    restriction0        3        15 625    0.041 sec
network9    restriction0        4       390 625    1.000 sec
network9    restriction0        5     9 765 625   25.095 sec  *
network9    restriction1        3         2 005    0.007 sec
network9    restriction1        4        37 925    0.103 sec
network9    restriction1        5       784 521    1.990 sec
network9    restriction2        3           729    0.004 sec
network9    restriction2        4         6 561    0.020 sec
network9    restriction2        5        88 209    0.258 sec
network9    restriction2        6     1 520 289    4.187 sec
network10   restriction0        3       117 649    0.341 sec
network10   restriction0        4     5 764 801   21.293 sec
network10   restriction1        3         9 958    0.030 sec
network10   restriction1        4       315 727    0.972 sec
network10   restriction2        3         4 096    0.014 sec
network10   restriction2        4        65 536    0.221 sec
network10   restriction2        5     1 478 656    4.523 sec
network11   restriction0        3       531 441    1.744 sec
network11   restriction1        3        34 153    0.116 sec
network11   restriction1        4     1 590 553    5.245 sec
network11   restriction2        3        15 625    0.058 sec
network11   restriction2        4       390 625    1.466 sec
network12   restriction0        3     1 771 561    7.441 sec
network12   restriction1        3        93 526    0.416 sec
network12   restriction2        3        46 656    0.216 sec
network12   restriction2        4     1 679 616    7.749 sec

* highlighted entry (see text)

Table F.5: Summary of the results (part 5)
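To make the mechanism behind these counts concrete, the following is a minimal, hypothetical Python sketch of a bounded enumeration of communication schedules with a pluggable restriction predicate; it is not the dissertation's prototype, and count_schedules, steps, and restriction are illustrative names. Without a restriction, b possible communication steps yield exactly b**L schedules of length L, matching the restriction0 rows above.

    from time import perf_counter

    def count_schedules(steps, length, restriction=lambda prefix, step: True):
        """Count all communication schedules of the given length over `steps`,
        extending a schedule prefix only where `restriction` permits it."""
        count = 0
        stack = [()]  # schedule prefixes that still have to be extended
        while stack:
            prefix = stack.pop()
            if len(prefix) == length:
                count += 1
                continue
            for step in steps:
                if restriction(prefix, step):
                    stack.append(prefix + (step,))
        return count

    # Unrestricted enumeration over b = 9 steps reproduces the network8 /
    # restriction0 counts: 9**3 = 729, ..., 9**7 = 4 782 969.
    start = perf_counter()
    print(count_schedules(range(9), 5))          # 59 049
    print(f"{perf_counter() - start:.3f} sec")   # machine-dependent

A real restriction function would reject extensions that violate the network's communication constraints; that pruning is what separates the restriction1 and restriction2 rows from the unrestricted counts, while leaving the overall growth exponential in the schedule length.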


Appendix G

Acronyms for Systems and System Components

ACF     Air Conditioning Function
ACMF    Aircraft Condition Monitoring Function
ACR     Avionics Computing Resource
ADIRU   Air Data Inertial Reference Unit
ATC     Air Traffic Control
BS      Braking and Steering
CBM     Circuit Breaker Monitor
CIDS    Cabin Intercommunication Data System
EEC     Electronic Engine Control
EHM     Engine Health Management
ELC     External Lights Controller
ELM     Electrical Load Management
FCDC    Flight Control Data Concentrator
FCGC    Flight Control and Guidance Computer
FCSC    Flight Control Secondary Computer
FDIF    Flight Data Interface Function
FM      Flight Management
FW      Flight Warning
IOM     I/O Module
IPCU    Ice Protection Control Unit
IRDC    Interface Remote Data Concentrator
LG      Landing Gears
PWCU    Potable Water Control Unit
SCI     Secured Communication Interface
SFCC    Slat/Flap Control Computer
SPDB    Secondary Power Distribution Box
TP      Tyre Pressure
VSC     Vacuum System Controller


Bibliography

[ABD100] Airbus Industrie. Airbus Directives (ABD) and Procedures: Equipment Design / General Requirements for Suppliers, December 2000. Confidential document.

[ABD200] Airbus Industrie. Airbus Directives (ABD) and Procedures: Requirements and Guidelines for the System Designer, June 2000. Confidential document.

[AC20-145] Federal Aviation Administration. Advisory Circular AC 20-145: Guidance for Integrated Modular Avionics (IMA) that Implement TSO-C153 Authorized Hardware Elements. Available at http://www.faa.gov/aircraft/, February 2003.

[ACH+95] R. Alur, C. Courcoubetis, T. Henzinger, P. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine. The Algorithmic Analysis of Hybrid Systems. Theoretical Computer Science, 138:3–34, 1995. A preliminary version appeared in the proceedings of The 11th International Conference on Analysis and Optimization of Systems: Discrete Event Systems (Springer LNCIS 199).

[AD94] Rajeev Alur and David L. Dill. A Theory of Timed Automata. Theoretical Computer Science, 126:183–235, 1994.

[ADE+01] R. Alur, T. Dang, J. Esposito, R. Fierro, Y. Hur, F. Ivancic, V. Kumar, I. Lee, P. Mishra, G. Pappas, and O. Sokolsky. Hierarchical Hybrid Modeling of Embedded Systems. Lecture Notes in Computer Science, 2211:14–31, 2001.

[AGLS01] Rajeev Alur, Radu Grosu, Insup Lee, and Oleg Sokolsky. Compositional Refinement for Hierarchical Hybrid Systems. In Proceedings of the 4th International Workshop on Hybrid Systems: Computation and Control, volume 2034 of Lecture Notes in Computer Science, pages 33–48, 2001.

[Air04] Airbus Deutschland GmbH. Cabin Domain – Tool Chain Concept: Development and Testing Tools. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Cabin/731DA_040518-6_PA_Cabin_Tool_Chain_poster.pdf.

[AIR5428] SAE International. SAE AIR 5428: Utility System Characterization, an Overview. SAE Aerospace Information Report, September 2000.

[APEX-WG] Airlines Electronic Engineering Committee – APEX Working Group. Current draft documents are available at http://www.arinc.com/aeec/projects/apex/.


[ARINC429] Airlines Electronic Engineering Committee. ARINC Specification 429: Digital Information Transfer System. Aeronautical Radio Inc., October 2001.

[ARINC629P1-5] Airlines Electronic Engineering Committee. ARINC Specification 629P1-5: Multi-Transmitter Data Bus, Part 1 – Technical Description. Aeronautical Radio Inc., March 1999.

[ARINC629P2-2] Airlines Electronic Engineering Committee. ARINC Specification 629P2-2: Multi-Transmitter Data Bus, Part 2 – Application Guide. Aeronautical Radio Inc., February 1999.

[ARINC653] Airlines Electronic Engineering Committee. ARINC Specification 653: Avionics Application Software Standard Interface. Aeronautical Radio Inc., January 1997.

[ARINC653P1] Airlines Electronic Engineering Committee. ARINC Specification 653-1: Avionics Application Software Standard Interface. Aeronautical Radio Inc., October 2003.

[ARINC653P1-2] Airlines Electronic Engineering Committee. ARINC Specification 653P1-2: Avionics Application Software Standard Interface, Part 1 – Required Services. Aeronautical Radio Inc., March 2006.

[ARINC653P1 d4s2] Airlines Electronic Engineering Committee. Draft 4 of Supplement 2 to ARINC Specification 653-1: Avionics Application Software Standard Interface. Aeronautical Radio Inc., August 2005. Circulation prior to adoption.

[ARINC653P2] Airlines Electronic Engineering Committee. ARINC Specification 653P2: Avionics Application Software Standard Interface, Part 2 – Extended Services. Aeronautical Radio Inc., January 2007.

[ARINC653P3] Airlines Electronic Engineering Committee. ARINC Specification 653P3: Avionics Application Software Standard Interface, Part 3 – Conformity Test Specification. Aeronautical Radio Inc., October 2006.

[ARINC653P3d3] Airlines Electronic Engineering Committee. Draft 3 of ARINC Project Paper 653: Avionics Application Software Standard Interface, Part 3 – Conformity Test Specification. Aeronautical Radio Inc., August 2005. Circulation prior to adoption.

[ARINC664] Airlines Electronic Engineering Committee. ARINC Specification 664: Aircraft Data Network. Aeronautical Radio Inc. This standard is defined in separate parts: [ARINC664P1-1], [ARINC664P2-1], [ARINC664P3-1], [ARINC664P4-1], [ARINC664P5], [ARINC664P7], [ARINC664P8].

[ARINC664P1-1] Airlines Electronic Engineering Committee. ARINC Specification 664P1-1: Aircraft Data Network, Part 1 – Systems Concepts and Overview. Aeronautical Radio Inc., June 2006.

[ARINC664P2-1] Airlines Electronic Engineering Committee. ARINC Specification 664P2-1: Aircraft Data Network, Part 2 – Ethernet Physical and Data Link Layer Specification. Aeronautical Radio Inc., June 2006.


[ARINC664P3-1] Airlines Electronic Engineering Committee. ARINC Specification 664P3-1: Aircraft Data Network, Part 3 – Internet-Based Protocols and Services. Aeronautical Radio Inc., December 2004.

[ARINC664P4-1] Airlines Electronic Engineering Committee. ARINC Specification 664P4-1: Aircraft Data Network, Part 4 – Internet Based Address Structures and Assigned Numbers. Aeronautical Radio Inc., December 2004.

[ARINC664P5] Airlines Electronic Engineering Committee. ARINC Specification 664P5: Aircraft Data Network, Part 5 – Network Domain Characteristics and Interconnection. Aeronautical Radio Inc., April 2005.

[ARINC664P7] Airlines Electronic Engineering Committee. ARINC Specification 664P7: Aircraft Data Network, Part 7 – Avionics Full Duplex Switched Ethernet (AFDX) Network. Aeronautical Radio Inc., June 2005.

[ARINC664P7d] Airlines Electronic Engineering Committee. Draft 4 of ARINC Project Paper 664: Aircraft Data Network, Part 7 – Avionics Full-Duplex Switched Ethernet Network. Aeronautical Radio Inc., February 2005. Available at http://www.arinc.com/aeec/draft_documents/664p7_d4.pdf.

[ARINC664P8] Airlines Electronic Engineering Committee. ARINC Specification 664P8: Aircraft Data Network, Part 8 – Interoperation with Non-IP Protocols and Services. Aeronautical Radio Inc., April 2005.

[ARP4754] SAE International. SAE ARP 4754: Certification Considerations for Highly-Integrated or Complex Aircraft Systems. SAE Aerospace Recommended Practice, November 1996.

[AV04a] Airbus Deutschland GmbH and VICTORIA Cabin Domain Team. Cabin Domain Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Cabin/731DA_040518-25_PA_Cabin_Domain_Architecture_poster.pdf.

[AV04b] Airbus Deutschland GmbH and VICTORIA PCES Domain Team. PCES Domain Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/PCES/731DA_040518-24_PA_PCES_Domain_Architecture_poster.pdf.

[AV04c] Airbus France and VICTORIA Cockpit Domain Team. Cockpit Domain Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Cockpit/731AMB_040518-6_PA_Cockpit_Domain_Architecture_poster.pdf.

[AV04d] Airbus France and VICTORIA Energy Domain Team. Energy Domain Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Energy/731AMB_040518-6_PA_Energy_Domain_Architecture_poster.pdf.

[AV04e] Airbus France and VICTORIA OIS Domain Team. OIS Domain Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/OIS/731AMB_040518-7_PA_OIS_Domain_Architecture_poster.pdf.

[AV04f] Airbus UK and VICTORIA Utilities Domain Team. Utilities Domain Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Utilities/731AUL_040518-3_PA_Utilities_Domain_Architecture_poster.pdf.

[Avi04] Special Report of Avionics Magazine: Real-Time Operating Systems – Versatility Plus Security. http://www.avionicsmagazine.com/RTOS.pdf, March 2004.

[Bal98] Helmut Balzert. Lehrbuch der Software-Technik: Software-Management, Software-Qualitätssicherung, Unternehmensmodellierung. Spektrum Akademischer Verlag GmbH, 1998.

[BBF+01] Béatrice Bérard, Michel Bidoit, Alain Finkel, François Laroussinie, Antoine Petit, Laure Petrucci, Philippe Schnoebelen, and Pierre McKenzie. Systems and Software Verification: Model Checking Techniques and Tools. Springer, 2001.

[BBHP03] Kirsten Berkenkötter, Stefan Bisanz, Ulrich Hannemann, and Jan Peleska. HybridUML Profile for UML 2.0. SVERTS Workshop at the «UML» 2003 Conference, October 2003. http://www-verimag.imag.fr/EVENTS/2003/SVERTS/.

[BBHP04] Kirsten Berkenkötter, Stefan Bisanz, Ulrich Hannemann, and Jan Peleska. Executable HybridUML and its Application to Train Control Systems. In H. Ehrig, W. Damm, J. Desel, M. Große-Rhode, W. Reif, E. Schnieder, and E. Westkämper, editors, Integration of Software Specification Techniques for Applications in Engineering, volume 3147 of Lecture Notes in Computer Science, pages 145–173. Springer Verlag, September 2004. ISBN 3-540-23135-8.

[BFPT06] Bahareh Badban, Martin Fränzle, Jan Peleska, and Tino Teige. Test Automation for Hybrid Systems. In Neelam Gupta, Yves Ledru, and Johannes Mayer, editors, Proceedings of the Third International Workshop on Software Quality Assurance (SOQUA 2006), co-located with the Fourteenth ACM SIGSOFT Symposium on Foundations of Software Engineering (ACM SIGSOFT 2006 / FSE 14), November 6, 2006, Portland, OR, USA, 2006. Extended version of the submission at SOQUA 2006.

[Bis05] Stefan Bisanz. Executable HybridUML Semantics. A Transformation Definition. PhD thesis, University of Bremen, Faculty 03: Mathematics / Computer Science, December 2005.

[Bol99] Louis Bolduc. Verifying Modern Processors in Integrated Modular Avionics Systems. In Proc. of The 10th International Symposium on Software Reliability Engineering (ISSRE99), Boca Raton, Florida, USA, November 1999.

[BT02a] Stefan Bisanz and Aliki Tsiolakis. Test Development in Virtual Environments. In Dominik Haneberg, Gerhard Schellhorn, and Wolfgang Reif, editors, Proc. of FM-TOOLS 2002 – The 5th Workshop on Tools for System Design and Verification, pages 65–69, Augsburg, June 2002. Universität Augsburg, Institut für Informatik. Technical Report 2002-11.

[BT02b] Stefan Bisanz and Aliki Tsiolakis. Using a Virtual Reality Environment to Generate Test Specifications. In Rob Hierons and Thierry Jéron, editors, Proc. of Formal Approaches to Testing of Software. FATES'02 – A Satellite Workshop of CONCUR'02, pages 121–135. INRIA, August 2002.


[Bur] Alan Burns. The Ravenscar Profile. Revision of [Bur99].

[Bur99] Alan Burns. The Ravenscar Profile. ACM Ada Letters, XIX(4):49–52, December 1999.

[But05] Henning Butz. Integrated Modular Avionic (IMA): State of the Art and future Development Road Map at Airbus Deutschland. In Automation, Assistance and Embedded Real Time Platforms for Transportation (AAET 2005), Braunschweig, Germany, February 2005.

[BZL04] Stefan Bisanz, Paul Ziemann, and Arne Lindow. Integrated Specification, Validation and Verification with HybridUML and OCL applied to the BART Case Study. In Eckehard Schnieder and Géza Tarnai, editors, FORMS/FORMAT 2004. Formal Methods for Automation and Safety in Railway and Automotive Systems, pages 191–203, Braunschweig, December 2004. Proceedings of Symposium FORMS/FORMAT 2004, Braunschweig, Germany.

[CAN] ISO 11898 Road vehicles – Controller Area Network (CAN). Available at http://www.iso.org. This standard is defined in separate parts: [CAN-P1], [CAN-P2], [CAN-P3d], [CAN-P4].

[CAN-P1] ISO 11898-1:2003 Road vehicles – Controller Area Network (CAN), Part 1: Data link layer and physical signalling. Available at http://www.iso.org, November 2003.

[CAN-P2] ISO 11898-2:2003 Road vehicles – Controller Area Network (CAN), Part 2: High-speed medium access unit. Available at http://www.iso.org, November 2003.

[CAN-P3d] ISO 11898-3:2006 Road vehicles – Controller Area Network (CAN), Part 3: Low-speed, fault-tolerant, medium dependent interface. Available at http://www.iso.org, May 2006.

[CAN-P4] ISO 11898-4:2004 Road vehicles – Controller Area Network (CAN), Part 4: Time-triggered communication. Available at http://www.iso.org, August 2004.

[CM01] Philippa Conmy and John McDermid. High level failure analysis for Integrated Modular Avionics. In P. Lindsay, editor, 6th Australian Workshop on Safety Critical Systems and Software (SCS '01), Brisbane, Australia, volume 3. Conferences in Research and Practice in Information Technology, 2001.

[CO00] Rachel Cardell-Oliver. Conformance Tests for Real-Time Systems with Timed Automata Specifications. Formal Aspects of Computing, 12(5):350–371, 2000.

[Coo03] Jim Cooling. Software Engineering for Real-Time Systems. Pearson Education Limited, 2003.

[DGG04] A. Denise, M.-C. Gaudel, and S.-D. Gouraud. A Generic Method for Statistical Testing. In 15th International Symposium on Software Reliability Engineering (ISSRE'04), pages 25–34, 2004.


[Die04a] Diehl Avionik Systeme. Cabin CPIOM. Leaflet at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/leaflets/Cabin/731DAU_040518-2_TA_CABIN-CPIOM_Leaflet.pdf.

[Die04b] Diehl Avionik Systeme. Cabin Domain – Shared Resources: The Central Processing and I/O Module (CPIOM). Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Cabin/731DAU_040518-3_PA_Cabin_CPIOM_Poster.pdf.

[Diehl04] Diehl Avionik Systeme Frankfurt. Cabin Domain – Doors and Slides Management System, CAN Bus Evaluation and Simulation. Leaflet at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/leaflets/Cabin/731DAF_040518-1_TA_DSMS_Leaflet.pdf.

[DJHP98] Werner Damm, Bernhard Josko, Hardi Hungar, and Amir Pnueli. A Compositional Real-Time Semantics of STATEMATE Designs. Lecture Notes in Computer Science, 1536:186–238, 1998.

[DMP00] Markus Dahlweid, Oliver Meyer, and Jan Peleska. Automated Testing with RT-Tester – Theoretical Issues Driven by Practical Needs. In Gerhard Schellhorn and Wolfgang Reif, editors, Proc. of FM-TOOLS 2000 – The 4th Workshop on Tools for System Design and Verification, pages 65–70, Ulm, July 2000. Universität Ulm, Fakultät für Informatik. Technical Report 2000-07.

[DO-160E] Environmental Conditions and Test Procedures for Airborne Equipment. RTCA/DO-160E, EUROCAE/ED-14E, September 2004.

[DO-178B] Software Considerations in Airborne Systems and Equipment Certification. RTCA/DO-178B, December 1992.

[DP04] Werner Damm and Thomas Peikenkamp. Model Based Safety Analysis. Presentation at Informatik Ringvorlesung, Humboldt Universität Berlin, July 1, 2004. http://www.informatik.hu-berlin.de/Ringvorlesung/ss04/1jul.pdf, July 2004.

[DS04] Markus Dahlweid and Uwe Schulze. High Level Transition Systems of CSP Specifications and their Application in Automated Testing. PhD thesis, University of Bremen, Faculty 03: Mathematics / Computer Science, February 2004.

[DW96] Jim Davies and Jim Woodcock. Using Z: Specification, Refinement and Proof. International Series in Computer Science. Prentice Hall, 1996.

[Efk05] Christof Efkemann. Development and evaluation of a hard real-time scheduling modification for Linux 2.6. Diploma thesis, University of Bremen, Faculty 03: Mathematics / Computer Science, April 2005. Available at http://www.informatik.uni-bremen.de/agbs/qualifikationsarbeiten/diplomarbeiten/2005_efkemann.pdf.

[FDR] Formal Systems (Europe) Ltd. Failures-Divergence Refinement: FDR2 User Manual, 5th edition, 2000.

[Fil03] Bill Filmer. Open Systems Avionics Architectures Considerations. IEEE Aerospace and Electronic Systems Magazine, 18(9):3–10, September 2003.


[Gas05] Marty Gasiorowski. Differences and Similarities in DO-178B Compliance in an IMA System vs a Federated Product. http://www.faa.gov/aircraft/air_cert/design_approvals/air_software/conference/fy2005_conf/ima/media/IMA-Federated-Gasiorowski.pdf, July 2005. FAA National Software Conference 2005.

[GHR+03] Jens Grabowski, Dieter Hogrefe, György Réthy, Ina Schieferdecker, Anthony Wiles, and Colin Willcock. An Introduction into the Testing and Test Control Notation (TTCN-3). Computer Networks, 42(3):375–403, June 2003.

[Har87] David Harel. Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, 8(3):231–274, June 1987.

[HaRTLinC] Student Project HaRTLinC. HaRTLinC – Projektbericht. Final project report, University of Bremen, Faculty 03: Mathematics / Computer Science, December 2003. Available at http://www.informatik.uni-bremen.de/agbs/hartlinc/projektbericht.pdf.

[HdMR04] Grégoire Hamon, Leonardo de Moura, and John Rushby. Generating Efficient Test Sets with a Model Checker. Technical report, Computer Science Laboratory, SRI International, Menlo Park CA 94025 USA, May 2004.

[Hen96] Thomas A. Henzinger. The Theory of Hybrid Automata. In Proceedings of the 11th Annual Symposium on Logic in Computer Science (LICS), pages 278–292. IEEE Computer Society Press, 1996.

[HMN00] Paul Hollow, John McDermid, and Mark Nicholson. Approaches to Certification of Reconfigurable IMA Systems. In INCOSE 2000, Minneapolis, USA, July 2000.

[HN96] David Harel and Amnon Naamad. The STATEMATE Semantics of Statecharts. ACM Transactions on Software Engineering and Methodology, 5(4):293–333, October 1996.

[Hoa85] Charles Anthony Richard Hoare. Communicating Sequential Processes. Prentice Hall International, Hemel Hempstead, 1985.

[HPSS87] D. Harel, A. Pnueli, J. P. Schmidt, and R. Sherman. On the formal semantics of statecharts. In Proceedings, Symposium on Logic in Computer Science, pages 54–64. The Computer Society of the IEEE, June 1987. Extended abstract.

[Kro04] Jim Krodel. Commercial Off-The-Shelf Real-Time Operating System and Architectural Considerations. Technical Report DOT/FAA/AR-03/77, U.S. Department of Transportation, Federal Aviation Administration, February 2004.

[Lab04] Jean Labrulerie. All Clear for the A380. Planet AeroSpace, (17):4–8, 2004.

[Liu00] Jane W. S. Liu. Real-Time Systems. Prentice Hall, Inc., 2000.

[LR03] John Lewis and Leanna Rierson. Certification Concerns with Integrated Modular Avionics (IMA) Projects. In The 22nd Digital Avionics Systems Conference (DASC '03), volume 1, pages 1.A.3-1–1.A.3-9, 2003.


[LV92] Nancy Lynch and Frits Vaandrager. Forward and backward simulations for timing-based systems. In Proceedings of Real-Time: Theory in Practice (REX Workshop, Mook, The Netherlands), volume 600 of Lecture Notes in Computer Science, pages 397–446. Springer, 1992.

[Mey01] Oliver Meyer. Structural Decomposition of Timed CSP and its Application in Real-Time Testing. PhD thesis, University of Bremen, Faculty 03: Mathematics / Computer Science, December 2001.

[Moo01] Jim Moore. The Avionics Handbook, chapter Advanced Distributed Architectures, pages 33-1–33-13. In Spitzer [Spi01], 2001.

[Mor01] Michael J. Morgan. The Avionics Handbook, chapter Boeing B-777, pages 29-1–29-8. In Spitzer [Spi01], 2001.

[MP04] Oliver Meyer and Julien Pfefferkorn. Automated IMA Module Configuration Testing, 2004.

[MS03] Ian Moir and Allan Seabridge. Civil Avionics Systems. AIAA Education Series, 2003.

[MTB+04] Oliver Meyer, Aliki Tsiolakis, Sven-Olaf Berkhahn, Josef Kruse, and Dirk Martinen. Automated Testing of Aircraft Controller Modules. In Proc. of the 5th International Conference on Software Testing, Düsseldorf, April 2004.

[OH05] Aliki Ott and Tobias Hartmann. Domain Specific V&V Strategies for Aircraft Applications. In Proc. of the 6th ICSTEST International Conference on Software Testing, Düsseldorf, April 2005.

[Ott05] Aliki Ott. Comments to Draft 1 of ARINC Project Paper 653 Part 3, Conformity Test Suite. Presented at APEX Working Group Meeting, Lisbon, June 2005.

[PAD+98] Jan Peleska, Peter Amthor, Sabine Dick, Oliver Meyer, Michael Siegel, and Cornelia Zahlten. Testing Reactive Real-Time Systems. Tutorial, held at the FTRTFT '98, Denmark Technical University, Lyngby, Denmark, 1998. Updated revision. Available as http://www.informatik.uni-bremen.de/agbs/jp/papers/ftrtft98.ps.

[Pel96] Jan Peleska. Formal Methods and the Development of Dependable Systems. Number 9612. Christian-Albrechts-Universität Kiel, Institut für Informatik und Praktische Mathematik, December 1996. Habilitationsschrift.

[Pel02a] Jan Peleska. Formal Methods for Test Automation – Hard Real-Time Testing of Controllers for the Airbus Aircraft Family. In Proc. of the Sixth International Conference on Integrated Design and Process Technology (IDPT2002), Pasadena, California, USA. Society for Design and Process Science, June 2002.

[Pel02b] Jan Peleska. Hardware/Software Integration Testing for the new Airbus Aircraft Families. In A. Wolisz, I. Schieferdecker, and H. König, editors, Testing of Communicating Systems XIV – Application to Internet Technologies and Services, pages 335–351. Kluwer Academic Publishers, March 2002.


[Pel03] Jan Peleska. Automated Testsuites for Modern Aircraft Controllers. In Rolf Drechsler, editor, Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen, pages 1–10, Aachen, 2003. Shaker.

[PLK07] Jan Peleska, Helge Löding, and Tatiana Kotas. Test Automation Meets Static Analysis. In Proceedings of the Workshop on Applied Program Analysis 2007, co-located with INFORMATIK 2007, Bremen, September 2007.

[POAK05] P. Peti, R. Obermaisser, A. Ademaj, and H. Kopetz. A Maintenance-Oriented Fault Model for the DECOS Integrated Diagnostic Architecture. In Proc. of The 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05) – Workshop 2, Denver, Colorado, USA, page 128b, 2005.

[POT+05] P. Peti, R. Obermaisser, F. Tagliabo, A. Marino, and S. Cerchio. An Integrated Architecture for Future Car Generations. In Proc. of The 8th IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'05), Seattle, Washington, USA, pages 2–13, 2005.

[PT02] Jan Peleska and Aliki Tsiolakis. Automated Integration Testing for Avionics Systems. In Proc. of the 3rd International Conference on Software Testing, Düsseldorf, April 2002.

[pUML] The precise UML group. http://www.cs.york.ac.uk/puml/index.html.

[PZ03] Jan Peleska and Cornelia Zahlten. Hard Real-Time Test Tools – Concepts and Implementation. In Proc. of the 4th International Conference on Software Testing (ICSTEST 2003), Cologne, April 2003.

[Rag04] Camille Ragi. VICTORIA: Validation platform for Integration of standardised Components, Technologies and Tools in an Open, modulaR and Improved Aircraft electronic system. In 2nd European Congress ERTS, Embedded Real Time Software, Toulouse, France, January 2004.

[Rav95] A. P. Ravn. Design of Embedded Real-time Computing Systems. Technical Report ID-TR 1995-170, ID/DTU, Lyngby, Denmark, October 1995. dr. techn. dissertation.

[Ret05] Eric E. Retko. Integrated Modular Avionics (IMA) Trends and Challenges. http://www.faa.gov/aircraft/air_cert/design_approvals/air_software/conference/fy2005_conf/ima/media/IMA-Retko.pdf, July 2005. FAA Software Conference 2005.

[Ros98] A. W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1998.

[RT-Tester 5] Verified Systems International GmbH. RT-Tester 5.1: User Manual, 2005. Further information available at http://www.verified.de.

[RT-Tester 6.0] Verified Systems International GmbH. RT-Tester 6.0: User Manual, 2005. Further information available at http://www.verified.de.

[RT-Tester 6.2] Verified Systems International GmbH. RT-Tester 6.2: User Manual, 2007. Further information available at http://www.verified.de.


[Rus99] John Rushby. Partitioning in Avionics Architectures: Requirements, Mechanisms, and Assurance. Technical report, Computer Science Laboratory, SRI International, Menlo Park CA 94025 USA, March 1999. Also to be issued by the NASA Langley Research Center as CR-1999-209347 and by the FAA as DOT/FAA/AR-99/58.

[Rus02a] John Rushby. A Comparison of Bus Architectures for Safety-Critical Embedded Systems. Technical report, Computer Science Laboratory, SRI International, Menlo Park CA 94025 USA, June 2002.

[Rus02b] John Rushby. An Overview of Formal Verification for the Time-Triggered Architecture. In Werner Damm and Ernst-Rüdiger Olderog, editors, Formal Techniques in Real-Time and Fault-Tolerant Systems, volume 2469 of Lecture Notes in Computer Science, pages 83–105, Oldenburg, Germany, September 2002. Springer-Verlag.

[Sch95] Steve Schneider. An Operational Semantics for Timed CSP. Information and Computation, 116(2):193–213, February 1995.

[Sch04] Jan Schwerdter. The A380 Spreads Its Wings in Toulouse. Planet AeroSpace, (16):10–13, 2004.

[SL03] Andreas Spillner and Tilo Linz. Basiswissen Softwaretest: Aus- und Weiterbildung zum Certified Tester. dpunkt-Verlag, 2003.

[SLS07] Andreas Spillner, Tilo Linz, and Hans Schaefer. Software Testing Foundations: A Study Guide for the Certified Tester Exam. Rocky Nook Inc., 2nd edition, 2007.

[Smi04] Smiths. The Central Processing Module. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Energy/731SIA_040518-5_PA_CPM_poster.pdf.

[SPC03] Miguel A. Sánchez-Puebla and Jesús Carretero. A new approach for distributed computing in avionics systems. In 1st International Symposium on Information and Communication Technologies, pages 579–584. ACM International Conference Proceeding Series, 2003.

[Spi92] Mike Spivey. The Z Notation: A Reference Manual. International Series in Computer Science. Prentice Hall, 2nd edition, 1992.

[Spi01] Cary R. Spitzer, editor. The Avionics Handbook. CRC Press, 2001.

[Sto96] Neil Storey. Safety-Critical Computer Systems. Addison Wesley Longman, 1996.

[Sto99] Neil Storey. Design for Safety. In Towards System Safety: Proc. 7th Safety-Critical Systems Symposium, Huntingdon, UK, pages 1–25, February 1999.

[Sto04] Neil Storey. The Importance of Data in Safety-Critical Systems. Safety Systems – The Safety-Critical Systems Club Newsletter, 13(2):1–4, 2004.

[Tal98] Nancy Talbert. The Cost of COTS – Interview with John McDermid. COMPUTER, 31(6):46–52, June 1998.

[Tan01] Andrew S. Tanenbaum. Modern Operating Systems. Prentice Hall, Inc., second edition, 2001.


[TD04a] Thales Avionics and Diehl Avionik. Thales/Diehl Development Environment in VICTORIA. Leaflet at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/leaflets/Tools/320THA_021202-1_TA_THA-DAv_Dev_Env_leaflet.pdf.

[TD04b] Thales Avionics and Diehl Avionik Systeme. Thales/Diehl IME Tool Chain. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Tools/731THA_040518-14_PA_THAV-DAV_IME_Tool_Chain_poster.pdf.

[Tec04] Avionics Magazine Tech Report: Testing Technology for the Validation and Integration of Avionics Systems. http://www.avionicsmagazine.com/TechSATreport0904dj4.pdf, September 2004.

[Tha04a] Thales Avionics. COTAGE. Leaflet at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/leaflets/Tools/320THA_021202-2_TA_COTAGE_leaflet.pdf.

[Tha04b] Thales Avionics. THA-CPIOM. Leaflet at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/leaflets/Utilities/433THA_030922-1_TA_D4-3-3_THA_CPIOM_Leaflet.pdf.

[Tha04c] Thales Avionics. Utilities Domain CPIOM – Central Processing and I/O Module. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/Utilities/731THA_040518-11_PA_CPIOM_Poster.pdf.

[Tha04d] Thales Avionics. VICTORIA Platform Architecture. Poster at VICTORIA Forum 2004, Athens, Greece, May 2004. http://www.netdev.gr/vic04/posters/731THA_040518-6_PB_VICTORIA_Architecture_Poster.pdf.

[TMM+03] Aliki Tsiolakis, Dirk Meyer, Oliver Meyer, Hans-Jürgen Ficker, Christof Efkemann, and Jan Peleska. Concept for and Realisation of Automated IMA Module Testing. User Manual and Tutorial. Universität Bremen and Verified Systems International GmbH, December 2003. Internal document within the European project VICTORIA.

[TPLB03] Aliki Tsiolakis, Søren Prehn, Henrik Lauritzen, and Carsten Bergemann. Heterogeneous Interoperation (HI). Technical Note version 1.5, Universität Bremen and Terma A/S, January 2003. Internal document within the European project VICTORIA.

[Tsi04] Aliki Tsiolakis. Model-based Test Data Generation for Testing Integrated Modular Avionics. In Dagstuhl Seminar Perspectives of Model-based Testing, September 2004.

[TSO-C153] Federal Aviation Administration. Technical Standard Order TSO-C153: Integrated Modular Avionics Hardware Elements. Available at http://www.faa.gov/aircraft/, May 2002.

[TTT] TTTech Computertechnik GmbH. DECOS Project Provides Future Solutions for Integrated Architecture. http://www.tttech.com/technology/docs/general/TTTech-DECOS.pdf.

[UML] Unified Modeling Language. http://www.uml.org.


[V-Modell-XT] V-Modell XT (Version 1.0). http://www.v-modell-xt.de/, 2004. The development standard for IT systems of the German federal administration.

[vdB94] M. von der Beeck. A Comparison of StateCharts Variants. In Proc. of Formal Techniques in Real Time and Fault Tolerant Systems, pages 128–148, Berlin, 1994. Springer-Verlag.

[Ver05] Verocel, Inc. Partitioned Configuration Files for Partitioned Systems. Presented at APEX Meeting in Lisbon, Portugal, June 2005.

[VICTORIA] VICTORIA Web Site. http://www.euproject-victoria.org/, January 2005.

[ZB05] Peter-Michael Ziegler and Benjamin Benz. Fliegendes Rechnernetz: IT-Technik an Bord des Airbus A380. c't Magazin, 17:84–91, August 2005.

[ZRH93] Chaochen Zhou, A. P. Ravn, and M. R. Hansen. An extended Duration Calculus for Hybrid Real-time systems. In R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rischel, editors, Hybrid Systems, volume 763 of Lecture Notes in Computer Science, pages 36–59. The Computer Society of the IEEE, 1993. Extended abstract.