
Deliverable D6.1 Current test condition and Benchmarking report

X2Rail-1
Action Full Title: Start-up activities for Advanced Signalling and Automation Systems
Starting date: 01/09/2016
Duration in months: 36
Call identifier: H2020-S2RJU-CFM-2015-01-1
Grant agreement no: 730640

Due date of deliverable: Month 06
Submission date 1st Version: 31-03-2017
Submission date Updated Version: 15-11-2018
Organization name of lead contractor for this deliverable: 8 – DB
Dissemination level: PU
Revision: R2_Updated version


Authors

Author(s)

Deutsche Bahn AG (DB): Dr. Fabian Schober, Bernd Eberts
- Task leader
- Leading the development of the questionnaire
- Evaluation of the questionnaire
- Creating the data base
- Main author of the deliverable

Asociacion Centro Tecnologico CEIT-IK4 (CEIT): Iñigo Adin, Jaizki Mendizabal
- Active participation within the discussions (F2F and group work)
- Participating in the development of the questionnaire
- Writing Chapter 6
- Review of the deliverable

Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR): Oliver Röwer
- Active participation within the discussions (F2F and group work)
- Participating in the development of the questionnaire
- Writing Chapter 5
- Review of the deliverable

Mer Mec SPA (MM): Giulio Mongelli
- Active participation within the discussions (F2F and group work)
- Writing Chapter 5
- Review of the deliverable

Siemens AG (SIE): Birgit Jeschka
- Active participation within the discussions (F2F and group work)
- Writing Chapter 5
- Review of the deliverable

Société Nationale des Chemins de fer Français (SNCF-R): François Fleuret (Systra), Catherine Maton (Systra)
- Active participation within the discussions (F2F and group work)
- Writing Chapter 7
- Review of the deliverable

Trafikverket (TRV): Laura Mayer, Andreas Westerberg
- Active participation within the discussions (F2F and group work)
- Evaluation of the questionnaire
- Writing Chapter 7
- Review of the deliverable

Contributor(s)

Alstom Transport S.A. (ALS): Fernando Mejia
- Active participation within the discussions (F2F and group work)
- Review of the deliverable

Ansaldo STS S.p.A. (ASTS): Luigi Velardi
- Active participation within the discussions (F2F and group work)
- Evaluation of the questionnaire
- Creating the data base
- Review of the deliverable

AŽD Praha SRO (AZD): Dušan Vokoun
- Participation within the discussions (F2F and group work)
- Review of the deliverable

Bombardier Transportation Sweden (BTSE): Paolo Girolami, Thiemo Roehrig
- Active participation within the discussions (F2F and group work)
- Review of the deliverable

Kapsch Carriercom AG (KCC): Benoît Faup
- Active participation within the discussions (F2F and group work)
- Participating in the development of the questionnaire
- Evaluating the questionnaire
- Creating the data base
- Review of the deliverable

Network Rail (NR): Edwin Morton
- Active participation within the discussions (F2F and group work)
- Review of the deliverable

Société Nationale des Chemins de fer Français (SNCF-R): Juliette Fournier, Kawtar Hadjadj
- Active participation within the discussions (F2F and group work)
- Writing Chapter 7
- Review of the deliverable

Thales Deutschland GmbH (TTS): Lisa-Marleen Scheile, Bettina Dötsch
- Active participation within the discussions (F2F and group work)
- Participating in the development of the questionnaire
- Evaluating the questionnaire
- Creating the data base
- Review of the deliverable


1 Executive Summary

The present document constitutes the first issue of Deliverable D6.1 “Current test condition and benchmarking report” in the framework of the project titled “Start-up activities for Advanced Signalling and Automation Systems” (Project Acronym: X2Rail-1; Grant Agreement No 730640).

The key objective of zero on-site testing is to perform functional and non-functional tests (component tests, integration tests and system tests) in the laboratory instead of on-site, in order to save time and costs without compromising safety.

A status-quo analysis within the railway sector has been performed by means of a questionnaire sent to suppliers, infrastructure managers and research institutes, in order to get an overview of today’s testing practice. Due to the focus on signalling systems, no Railway Undertakings participated in the questionnaire. In this analysis, areas of improvement regarding testing have been identified, even though there is no harmonised list of tests that should be shifted from on-site to the laboratory, as the system is too complex. Component tests are mainly done in laboratories, where the required test environment is comparably easy to build up, whereas system tests are done on-site, as the real environmental conditions, including human behaviour, are important, e.g. for acceptance tests.

Overall, laboratory testing can enhance the quality of the product, improve bug fixing and speed up development, as it enables parallelisation of different activities and increases trust in the safety level.

Moreover, a benchmarking with safety-critical industries outside the railway sector has been performed, in order to compare against lessons learnt for railway system applications and to give them a wider context. For this purpose, another questionnaire has been elaborated and sent out via the project partners. It has been discovered that there are big differences between sectors in how far testing is a relevant input to the standardisation of approval processes. For various reasons, nearly all sectors perform tests both in a laboratory environment and on-site. Building up a laboratory environment with realistic input data for the simulations is quite time- and cost-consuming, but it allows testing worst-case scenarios, which may not be possible in a real trackside environment (without safety impacts) and which is necessary to fully understand the products and services to be developed. As a result, quite good test coverage is achieved.


Table of Contents

1 Executive Summary ..... 5
2 Abbreviations and Acronyms ..... 8
3 Background ..... 10
4 Objectives and Report Structure ..... 11
5 Test Activities within Railway Sector ..... 12
  5.1 Cluster of answering companies ..... 12
  5.1.1 Description of the form distributed ..... 13
  5.2 Experiences of shifting tests to laboratory ..... 15
  5.3 Test scope ..... 16
  5.4 Test strategy ..... 18
  5.4.1 In-house vs. external tests ..... 20
  5.4.2 Reasons for performing laboratory tests ..... 22
  5.4.3 Executing on-site tests ..... 23
  5.4.4 Test automation ..... 23
  5.4.5 Effort of tests ..... 24
  5.4.6 Creation of tests ..... 24
  5.5 Reasons for laboratory tests ..... 27
  5.6 Gain trust and quality of test ..... 29
  5.7 Use of formal method verification ..... 30
  5.8 Shift test from on-site to laboratory ..... 31
  5.8.1 Benefits executing laboratory test and limits for shifting ..... 32
  5.8.2 Plans to shift test into laboratory ..... 33
  5.9 Implementation of an impartial laboratory ..... 33
  5.10 Main conclusions of railway sector analysis ..... 34
6 Test Activities outside Railway Sector (Benchmarking) ..... 35
  6.1 Description of the analysis done ..... 35
  6.1.1 Aim of the benchmarking ..... 35
  6.1.2 Description of the form distributed ..... 36
  6.1.3 Distribution of company types ..... 37
  6.1.4 Cluster of the safety-critical industries addressed ..... 38
  6.2 Results of the analysis ..... 39
  6.2.1 Types of tests currently performed ..... 39
  6.2.2 Tests in laboratory vs. tests on-site ..... 41
  6.2.3 Laboratory testing ..... 44
  6.2.4 Impact of shifting tests from on-site to laboratory ..... 46
  6.2.5 Explanation of the approval process & need for harmonisation by sector ..... 47
  6.3 Main conclusions of the benchmarking analysis ..... 49
  6.4 Recommendations ..... 50
7 Harmonisation and Authorisation Activities ..... 53
  7.1 Existing harmonisation activities ..... 53
  7.2 Different approval processes in different countries ..... 56
  7.3 Challenges of European approval processes ..... 57
  7.4 Main conclusion of harmonisation and authorisation analysis ..... 60
8 Cooperation with VITE ..... 62
9 Conclusion ..... 63
10 References ..... 65
11 Glossary ..... 66
12 Appendices ..... 67
  12.1 Questionnaire Part A – Status quo analyses in railway sector ..... 67
  12.2 Questionnaire Part B – Benchmarking with safety-critical industries ..... 76


2 Abbreviations and Acronyms

Abbreviation / Acronym – Description

3GPP – 3rd Generation Partnership Project, collaboration between groups of telecommunications associations
4thRP – 4th Railway Package
APIS – Authorisation for Placing Into Service
AsBo – Assessment Bodies
CTC – Centralized Traffic Control
CEF – Connecting Europe Facility
CENELEC – Comité Européen de Normalisation Électrotechnique (European Committee for Electrotechnical Standardization)
CER – Community of European Railway and Infrastructure Companies
CSM – Common Safety Methods
CCS – Control Command and Signalling
DeBo – Designated Bodies
DMI – Driver Machine Interface
EASA – European Aviation Safety Agency
EDOR – ETCS Data Only Radio
EIRENE – European Integrated Railway Radio Enhanced Network
EMC – Electromagnetic Compatibility
ERTMS – European Rail Traffic Management System
ESA – European Space Agency
ETCS – European Train Control System
EU – European Union
EUAR / ERA – European Union Agency for Railways, formerly known as European Railway Agency
EUG – ERTMS Users Group
EURATOM – European Atomic Energy Community
EVC – European Vital Computer
FAA – Federal Aviation Administration (US)
FAT – Factory Acceptance Test
GPRS – General Packet Radio Service
GSM-R – Global System for Mobile Communication Rail
HHT – Handheld Terminal
IAEA – International Atomic Energy Agency
IEC – International Electrotechnical Commission
IEEE – Institute of Electrical and Electronics Engineers
IM – Infrastructure Manager
IOP – Interoperability
IOT – Interoperability testing
IP – Innovation Programme
IXL – Interlocking
KPI – Key Performance Indicator
NoBo – Notified Bodies
LEU – Lineside Electronic Unit
NSA – National Safety Authorities
NVIOT – Network Vendors Interoperability Testing
OBU – On Board Unit
RBC – Radio Block Centre
RU – Railway Undertaking
S2R – Shift2Rail
SERA – Single European Railway Area
SIT – System- and Integration Test
STM – Specific Transmission Module
SUT – System under test
TEN-T – Trans-European Transport Network
TMS – Traffic Management System
TSI – Technical Specifications for Interoperability
UIC – Union internationale des chemins de fer (International Union of Railways)
UNISIG – Union Industry of Signalling
VITE – Virtualisation of the testing environment
WENRA – Western European Nuclear Regulators Association
WP – Work Package


3 Background

The present document constitutes the first issue of Deliverable D6.1 “Current test condition and benchmarking report” in the framework of the project titled “Start-up activities for Advanced Signalling and Automation Systems” (Project Acronym: X2Rail-1; Grant Agreement No 730640).

Shift2Rail (S2R) is the first joint European rail technology initiative to seek focused research and innovation (R&I) and market-driven solutions. This can be achieved by accelerating the integration of new and advanced technologies into innovative rail product solutions. Shift2Rail will promote the competitiveness of the European rail industry and will meet the changing EU transport needs. The R&I activities are carried out under the Horizon 2020 initiative and will develop the necessary technology to complete the Single European Railway Area (SERA). Further information can be found at http://shift2rail.org/.

The X2Rail-1 project aims to research and develop six selected key technologies to foster innovations in the field of railway signalling and automation systems. The project is part of a longer term Shift2Rail IP2 strategy towards a flexible, real-time, intelligent traffic management and decision support system.

In particular, Work Package 6 (WP6) “Zero on-site Testing” focuses on testing activities to be standardised within SERA. System- and Integration Test (SIT) is a fundamental method of system verification across a wide range of industrial sectors. Various experiences show that the SIT accounts for 30 % up to 50 % of the project costs and time. Due to the complexity of signalling systems and the differences between sites, a large number of tests must be carried out on-site, which takes about 5 to 10 times the effort of similar laboratory tests. Reducing on-site tests for signalling systems is hence a reasonable approach to reducing testing costs. WP6, as part of the X2Rail-1 project, will make further improvements, and the results will be disseminated on a European level.
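To illustrate the order of magnitude involved, a purely indicative calculation is sketched below. Only the 30–50 % SIT share and the 5–10 times on-site effort factor come from the figures above; the project budget, the on-site share of the SIT effort and the share of tests assumed to be shiftable are invented for the example.

```python
# Indicative estimate of the saving potential when on-site tests are shifted to
# the laboratory. Only the 30-50 % SIT share and the 5-10x on-site effort factor
# come from the report; every other number is an assumption for illustration.

project_cost = 10_000_000   # assumed total project cost in EUR
sit_share = 0.40            # SIT share of project cost (report: 30-50 %)
onsite_share_of_sit = 0.60  # assumed fraction of SIT effort spent on-site
onsite_factor = 7           # on-site vs. laboratory effort (report: 5-10x)
shiftable = 0.50            # assumed fraction of on-site tests that can be shifted

sit_cost = project_cost * sit_share
onsite_cost = sit_cost * onsite_share_of_sit
# A shifted test costs roughly 1/onsite_factor of its former on-site execution.
saving = onsite_cost * shiftable * (1 - 1 / onsite_factor)

print(f"SIT cost:          {sit_cost:,.0f} EUR")
print(f"On-site test cost: {onsite_cost:,.0f} EUR")
print(f"Estimated saving:  {saving:,.0f} EUR ({saving / project_cost:.1%} of project cost)")
```

Under these assumptions, roughly a tenth of the total project cost could be saved; this is only an illustration of why the reduction of on-site tests is worth pursuing, not a prediction.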

This report is the deliverable of Task 6.2 (Assessment of status quo in field testing and benchmarking) and therefore, the first deliverable of WP6.


4 Objectives and report structure

Within Task 6.2 “Assessment of status quo in field test and benchmarking”, the current field test activities have been assessed in order to identify work packages that could be shifted to laboratory testing. The expectation has been, on the one hand, to identify areas where lead time and cost of field testing can be reduced and, on the other hand, to further improve the quality of delivered solutions. A benchmarking of rail signalling and telecom activities with other safety-critical industries like avionics, medical and automotive was an important part of this task.

Data about today’s requirements for tests and verification / validation procedures has been collected, taking into account specifications in the European railway networks. Moreover, harmonisation activities and approval processes of different countries have been analysed, and testing experience has been gained by assessing the telecom sector. Furthermore, a benchmarking with safety-critical industries (automotive, aviation, space, medical, nuclear) has been performed. To this end, two different questionnaires have been answered by WP6 members and by the benchmarked industries. The results and conclusions can be found in this report.

In chapter 5, the test activities within the railway sector are described with focus on test strategy and reasons for laboratory tests and on-site tests. Moreover, there is a section concerning shifting tests from on-site to laboratory environment.

Test activities outside the railway sector have been investigated and the results can be found in chapter 6. Furthermore, some recommendations for improving test activities are presented.

Harmonisation and authorisation activities within the railway sector are summarized in chapter 7.


5 Test activities within railway sector

This chapter consists of a status quo analysis of testing activities within the railway sector in order to get an overview of common practices and lessons learned in the field of testing. For this purpose, a questionnaire has been provided and distributed among the members of WP6. The questionnaire can be found in Appendix 12.1.

Special focus is set on the test strategies and on the possibility to shift tests from on-site to laboratory in order to reduce lead time and costs for testing and therefore, to enable an accelerated approval process.

5.1 Cluster of answering companies

18 companies have answered the questionnaire concerning the status quo analysis within the railway sector. Figure 5.1 shows the distribution of the companies participating in the questionnaire.

A split of the outcomes between customers and suppliers was made. The focus of the following analysis is the Control Command and Signalling (CCS) system. Therefore, customer replies came from Infrastructure Manager (IM) only, and not from Railway Undertakings (RU).

Figure 5.1 – Split of replies by company type

From section 5.6 onwards, companies which identified themselves as validators have been classified in the customer category, while companies which identified themselves as a research centre or as an impartial test laboratory are included in the supplier category.

(Data shown in Figure 5.1: Supplier 50 %, Customer 33 %, Validator 6 %, Research centre 6 %, Impartial test laboratory 5 %)


5.1.1 Description of the form distributed

An electronic questionnaire was created and distributed via the X2RAIL-1 WP6 partners. Answers were given anonymously and all questionnaires have been collected and evaluated confidentially.

The questionnaire starts with general questions about the company and personal information of the replier:

• Type of your company (e.g. supplier, infrastructure manager, independent assessor or validator)
• Technical area of your company
• Your personal area of expertise (e.g. product manager, system engineer, test manager)

The questionnaire continued with topic-related questions, starting with ETCS:

1. ETCS test cases have already been specified on a European level (ERA subset-076 [3]).
   1.1. Do you think these test sequences and test cases are sufficient regarding interoperability, safety, operational rules etc.?
   1.2. If not, where does the need for further test cases come from?
   1.3. What are the risks?
   1.4. Which problems occur during operation? Which subsystems were affected? Please explain the unexpected effects on the system.
2. What products/services do you test beside the ones already mentioned in question 1 (subset-076 [3] ETCS)? Please list the most important tests.
3. Which type of tests do you perform in order to evaluate your product/service?
   3.1. Do you perform laboratory environment or on-site / on real infrastructure / real environment tests?
   3.2. If laboratory environment: Are you performing the tests in an in-house laboratory or at an external company/institute? If external, please give the name of the organization.
   3.3. If laboratory environment: Why do you perform tests in the laboratory?
   3.4. If on-site: real track, test track or temporary track?
   3.5. Which of the following test procedures are automated in your company?
   3.6. What is the effort of these types of test?
   3.7. How are the types of test created?
4. Why do you perform tests in the laboratory (e.g. legal regulations, time saving, cost saving)? Please specify the effects.
5. Do you use on-site testing as fall back, if laboratory testing was not finished in time?


6. Do you see on-site testing as complementary to laboratory testing, with a specific interest?
7. If your product is updated, do you repeat the full tests? Do you have a strategy to avoid that?
8. What are you doing to gain trust in the tests executed in the laboratory?
9. Can you think of any ways to increase the quality of your tests? Which ones?
10. Do you think formal verification techniques can replace testing?
11. If yes, which formal verification techniques may replace testing?
12. Do you recommend shifting any further tests from on-site to laboratory? What is stopping you from doing these tests in the laboratory today?
13. Which tests do you plan to shift? And what are the necessary test system limits and borders for your shifted tests?
14. Which tests do you think can be replaced by formal verifications?
15. How can you make sure to perform only meaningful laboratory tests, which can serve as a confirmation of product/service quality and safety level?
16. How do you make sure that at least all necessary meaningful tests are performed in laboratory?
17. What are the tests which cannot be shifted and will have to be performed on-site?

The next questions are about the experience and lessons learnt concerning shifting tests to laboratories:

18. Please provide your experience about shifting tests to laboratory environment.
    18.1. What have been the pains or the challenges you had to deal with before shifting tests to the laboratory environment?
    18.2. What system boundaries have to be taken into account?
    18.3. What has helped you solve these challenges? What is the effort?
19. How does shifting tests from on-site to laboratory influence company-specific processes and the organisation of your company (e.g. in the fields of product management, development process, regulation)?
20. Would you like to participate in the implementation of an independent laboratory responsible for CCS testing during Shift2Rail (collaboration between different suppliers in a single laboratory, in remote laboratories, remote testing in distributed laboratories)?
    20.1. If yes: What competences and capacities would you like to yield in such a laboratory?

At the end, a few questions concerning harmonisation and approval activities have been asked. This was not the main focus of our status quo analysis, but should give an idea about the challenges of European approval processes.


21. Did you participate in any working groups regarding testing in the past, or do you participate in any today? Which ones?
22. Is there a further need to harmonise test strategies?
23. Explain the approval process in your home country. Please specify the approval process.
24. Is there a need for a standardised European approval process in the future? Why?
25. What will be the main advantages of a European approval process?

All the questions are intended to get a deeper understanding of the way testing is addressed in the different companies, focusing mainly on their evolution from field testing to laboratory testing.

5.2 Experiences of shifting tests to laboratory

Experiences in the railway industry of shifting tests to laboratories have been evaluated using the answers of the questionnaires. In this section, general conclusions are provided for shifting various types of tests to the laboratory environment. Shifting tests to the laboratory induced a partial update of the test processes (including guidance for testing and the adaptation of general rules on different projects), as well as of the test tools. The testability of the products and the necessary tools have to be considered during product development. Moreover, it seemed to be useful to automate testing in order to enable time and cost savings compared to on-site tests.

The organisation requires a team to manage the test environment, not only to perform the tests.

Shifting tests to the laboratory environment can enhance the quality of the product, improve bug fixing and speed up development, as it enables parallelisation of different activities and increases trust in the safety level (also known as confidence testing).

When shifting tests to the laboratory, the corresponding assessment processes have to be created and established; these constitute the proofs to the regulators, i.e. NoBo (notified bodies), DeBo (designated bodies) and AsBo (assessment bodies). In this context, more and easier laboratory certifications are needed. These roles may have to be reconsidered during the test process development.

The tests which cannot be shifted from on-site to the laboratory are those depending on special equipment only available on-site (e.g. an existing interlocking), or tests regarding the connectivity and the interactions with other equipment. These are directly related to geographical or environmental factors which are currently impractical to reproduce in a laboratory environment. On-site tests are those related to real time, timing and dynamic behaviour and interaction, e.g. some GSM-R (Global System for Mobile Communication Rail) tests relying on signal strength, odometry or inside-locomotive communication.


Some of the operational tests are needed for on-site acceptance by the customer, e.g. EMC tests.

The availability and the costs of appropriate laboratory test equipment are crucial as well.

Taking these factors into consideration, it is important to review the test strategy at the start of any new project, in order to select a very small set of tests that shall be tested on-site and plan to shift the remaining ones into the laboratory.

5.3 Test Scope

The survey shows that tests in the rail sector are carried out for different reasons and for various products. The reasons are always to avoid risks (both safety and operational/functional). To put a system into successful operation, compatibility has to be guaranteed. There is also the risk that the tests are not sufficient, because some operational scenarios are not covered by the generic tests.

Besides testing the conformity of the ETCS on-board equipment (subset 076 [3]), the main products and services to be tested are shown in Figure 5.2.

Figure 5.2 – Products tested beside subset 076 [3] (number of replies per product: RBC, IXL, field device, positioning system, TMS/CTC, LEU, balises, train interface, DMI, HW redundancy, HHT, GSM-R, STM)

Some completed and harmonised test specifications are already available. For example, in ETCS the so-called subset 076 [3] deals with the test specification for compliance with the system requirements specification (subset 026 [1]), and thus with the question of technical interoperability.

The requirements specifications form a good basis for creating test specifications, but there is a need to specify further tests.


Thus, a need for testing also results from the requirements of the respective project, for example in the case of operational procedures. Although railway undertakings (RU) and infrastructure managers (IM) are working on the harmonisation of operational procedures, there is still the need to define tests adapted to specific applications, e.g. to project- and country-specific operational procedures.

According to the replies from IM, common mistakes and failures which occurred during operation are as follows:

• Faults affecting safety or availability are possible during operation.
• Interaction with radio block centres (RBCs) or a different wayside design causes service or emergency brake applications or failures.
• Incorrect or irrelevant information displayed in the driver machine interface (DMI)
• Loss of communication between equipment
• Problems with timing issues
• RBC roll over (protection)
• Interoperability problems
• Misinterpretation of standards / specifications leading to incorrect implementations
• Issues related to the border of systems
• Problems caused by external influence (e.g. electromagnetic compatibility (EMC) of trackside devices and trains)

Tests performed follow a variety of test specifications. A large portion is based on product-specific system requirements specifications. For example, in ETCS, Subset-076 [3] defines test cases and test sequences to check the compliance of the ETCS on-board equipment with the System Requirements Specification (subset 026 [1]), and thus deals with the question of technical interoperability. Moreover, subset 085 (Eurobalise, [4]), subset 092 (Euroradio, [5]), subset 093 (GSM-R, [6]) and subset 110 [7] (in conjunction with subset 111 [8] and subset 112 [9]) have to be regarded as well.

In addition, there are customer, supplier or project specific tests:

• Operational rules (including human behaviour) and operational scenarios
• Compatibility tests between train and track, besides ETCS subset 076 [3]
• Project specific system requirements specifications
• Diagnostic systems

There are several types of tests (defined in the glossary in chapter 11) which are executed by most of the partners:

• Functional tests
• System tests
• Performance tests
• Stability tests
• Product tests
• Integration tests
• Additional testing of the above mentioned in the field of confidence testing

5.4 Test Strategy

The partners were asked to list the different types of tests they perform when evaluating their own product or service. In this way, 23 different types of tests were identified. Each type has its own split between laboratory and on-site execution.

Four types, namely acceptance, net access, validation and principle tests, are only executed on-site or in a real environment. Another five types of test are executed exclusively in a laboratory environment: component, ETCS subset 076 [3], factory acceptance, IOT and environmental tests. The remaining types are tested in both environments, with a slight advantage for the laboratory.

Since certain answers can be understood to address the same type of testing (e.g. interoperability, IOT, IOP), a more precise definition of the different test types will be provided during the test process definition task. Therefore, Figure 5.3 shows only the concrete answers given in the questionnaire by all types of companies, even if some of them may address the same topic.

Figure 5.3 – Split of laboratory and on-site test execution

In the following figures 5.4 to 5.8 the replies of each individual company type are shown:


Figure 5.4 – Split of laboratory and on-site test execution – Suppliers

Figure 5.5 – Split of laboratory and on-site test execution – Customers

Figure 5.6 – Split of laboratory and on-site test execution – Validators

Figure 5.7 – Split of laboratory and on-site test execution – Impartial Laboratories


Figure 5.8 – Split of laboratory and on-site test execution – Research Centres

For a detailed description of the terms and definitions, see the glossary in chapter 11.

5.4.1 In-house vs. external tests

Figure 5.9 shows where laboratory tests are executed. Of the tests executed in the laboratory, three types, (1) compatibility tests, (2) ETCS subset 076 [3] and (3) environmental tests, are fully executed at external companies or institutes. In contrast, component, data, functional, integration, IOP (subset 110 [7]), IOT, safety and system tests are fully performed in an in-house laboratory. The remaining types are mainly tested in-house and only occasionally given to external partners.

Figure 5.9 – Whether laboratory tests are executed in-house or external

For a more detailed view, the separation into the different company types is shown in the following figures 5.10 to 5.14:


Figure 5.10 – Whether laboratory tests are executed in-house or external - Suppliers

Figure 5.11 – Whether laboratory tests are executed in-house or external - Customers

Figure 5.12 – Whether laboratory tests are executed in-house or external - Validators


Figure 5.13 – Whether laboratory tests are executed in-house or external – Impartial Laboratories

Figure 5.14 – Whether laboratory tests are executed in-house or external – Research Centres

5.4.2 Reasons for performing laboratory tests

Regarding the question of why tests are performed in a laboratory environment, the participants of the survey replied as shown in Figure 5.15. The numbers in the bubbles indicate the number of answers given for that specific question; the same applies to Figure 5.16 up to Figure 5.18.

Contractual agreements are the main reason for performing factory acceptance, hardware tests, interoperability tests, IOP (subset 110 [7]), tests related to safety, site acceptance tests and software tests in the laboratory. ETCS subset 076 [3], hardware, IOT, software and type tests are required by the NoBo for certification, while compatibility, interoperability and IOP (subset 110 [7]) laboratory tests are of course necessary for inter-system interoperability. Product related activities have a decisive influence on the laboratory execution of component, hardware, product, safety and software tests. Last but not least, system-related activities are the main reason for performing data, functional, integration, IOP (subset 110 [7]) and system interface laboratory tests.


Figure 5.15 – Tests performed in the laboratory environment

5.4.3 Executing on-site tests

Taking a look at the on-site tests, most of them are performed on a real track (see Figure 5.16). This is especially true for functional, integration, site acceptance and system tests. There are also some test types which are executed on dedicated test tracks; these are built exclusively to perform on-site tests with real equipment and are not used as operational tracks for railway operation. Test tracks are used less than real tracks. The third kind of track is the temporary one, which is dedicated to specific tests. Temporary tracks are rarely used, and mostly for functional and integration tests.

Figure 5.16 – Tests performed on-site

5.4.4 Test automation

A main goal of this project is to shift tests from on-site to laboratory environments in order to simplify and speed up the whole testing process. An effective way of achieving this goal is the automation of the test procedures. Currently, the most automated part of testing among the different types is the execution phase, as shown in Figure 5.17.


While some test types feature the possibility of automated test case creation and analysis, the majority of the procedures has to be handled manually.

Figure 5.17 – Some test procedures are automated (phases considered: test case creation, selection, execution, analysis, evaluation)
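As an illustration of this split between phases (not taken from the questionnaire; all identifiers below are invented for the sketch), automating the execution phase alone already allows a catalogue of manually created test cases to be run and logged without operator interaction, while test case creation, analysis and evaluation remain manual steps:

```python
# Minimal sketch: only the execution phase of the test process is automated,
# while test case creation and result analysis/evaluation stay manual.
# TestCase, run_catalogue and the example cases are illustrative, not from the report.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestCase:
    identifier: str
    procedure: Callable[[], bool]   # manually created test procedure

def run_catalogue(catalogue: List[TestCase]) -> dict:
    """Automated execution phase: run every case and collect verdicts."""
    results = {}
    for case in catalogue:
        try:
            results[case.identifier] = "PASS" if case.procedure() else "FAIL"
        except Exception as error:          # defect in the SUT or the test bench
            results[case.identifier] = f"ERROR: {error}"
    return results

# Example catalogue with two trivial hand-written cases.
catalogue = [
    TestCase("TC-001 movement authority accepted", lambda: True),
    TestCase("TC-002 brake command on MA withdrawal", lambda: True),
]
print(run_catalogue(catalogue))   # analysing this log remains a manual activity
```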

5.4.5 Effort of tests

According to the answers in the questionnaire, the greatest test effort in resources such as time and money is needed for acceptance, compatibility, data, ETCS subset 076 [3], IOP (subset 110 [7]), IOT, system and validation tests. The least effort is required by EMC and principle tests. Figure 5.18 displays the grade of effort needed for the different types of tests, on a scale from 1 (low effort) up to 5 (huge effort).

Figure 5.18 – Effort of types of test (1: low effort, 5: huge effort), rated in terms of resources, time needed, test preparation and execution, and test relevance for service operation

5.4.6 Creation of Tests

Asked how the types of tests are created and on which sources they are based, the survey participants gave a wide range of answers, but some keywords appear repeatedly.


The main sources of test creation are requirements specifications of all types, especially ETCS subset 026 [1], the 3GPP & EIRENE (European Integrated Railway Radio Enhanced Network) standards, the EMC standard and other national standards. If already available, a test specification is used, e.g. ETCS subset 076 [3]. Further significant items are operational scenarios, test specifications of the customer and project specifications. It is also worth mentioning that hands-on experience can be an important source, too: tester experience, information from training modules or even knowledge about hazardous situations that have already occurred. Functional, product and system tests are also created using a model-based approach, which can significantly speed up the whole testing process. Table 5.1 lists the sources for the creation of the specific tests. For a detailed description of the terms and definitions, see the glossary in chapter 11.


The creation of tests for each type is based on:

Acceptance: Requirements
Compatibility: Test specification of the customer
Component: Requirements
Data: Requirements specifications, track layout
EMC: EMC standards
ETCS: ETCS subset 076 [3]
Factory Acceptance: Operational scenarios, test specifications, requirements, agreed test plan
Functional: Requirements specifications, SSRS, 3GPP & EIRENE standards, subset 093 [6], model, agreed test plan, subset 026 [1], project specifications
Hardware: Internal technical specifications, subset 076 [3], requirements specifications, system validation
Integration: Requirements specifications, interface specifications, operational procedures and scenarios, requirements, operational rules, training modules, hazards
Interoperability: 3GPP & EIRENE standards, subset 076 [3], trackside supplier, customer-defined test cases
IOP (subset 110): Operational procedures, subset 110 [7], subset 026 [1]
IOT: Agreed test plan
Net Access: Customer catalogue
Principle: National standards
Product: Subset 026 [1], requirements specifications, model-based approach, internal technical specification, customer catalogue, tester experience
Safety: National safety requirements, requirements specifications
Site Acceptance: System test specifications, EIRENE standards, subset 093 [6], operational procedures, operational scenarios
Software: Requirements, operational scenarios, subset 076 [3], tester experience, software requirement standards (SWRS), software accomplishment summary (SWAS), software design description (SWDD)
System: Subset 026 [1], requirements specifications, internal technical specification, 3GPP & EIRENE standards, tester experience, model-based approach, project specification
System Interface: System requirements specification (subset 026 [1]), subset 076 [3]
Environmental: Sources
Validation: Requirements specification, operational test scenarios

Table 5.1 – Sources of tests

Although there are many sources from which tests can be derived, as shown in Table 5.1, it seems to be challenging to clearly define exit criteria for the whole testing process.


Today they are mainly based on personal judgement or contractual agreements which make use of lessons learnt from previous test and project experiences.
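One possible way to make such exit criteria explicit is sketched below; the criteria and thresholds are invented for illustration and would in practice come from the test plan or from the contractual agreements mentioned above.

```python
# Illustrative, explicitly quantified exit criteria for a test campaign.
# The thresholds are assumptions; real values belong in the test plan/contract.

def exit_criteria_met(requirement_coverage, pass_rate, open_blocking_defects,
                      min_coverage=0.95, min_pass_rate=0.98):
    checks = {
        "requirement coverage": requirement_coverage >= min_coverage,
        "pass rate": pass_rate >= min_pass_rate,
        "no blocking defects": open_blocking_defects == 0,
    }
    for name, ok in checks.items():
        print(f"{name:21s}: {'met' if ok else 'NOT met'}")
    return all(checks.values())

# One open blocking defect keeps the campaign from being declared finished.
print(exit_criteria_met(requirement_coverage=0.97, pass_rate=0.99,
                        open_blocking_defects=1))
```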

5.5 Reasons for laboratory tests

The replies show that more than half of the partners perform laboratory tests mainly to save time. Secondly, with 54 %, they want to save costs. About a third of the partners perform their tests in the laboratory to be able to test specific situations. Several other reasons were mentioned sporadically by the partners. Most of the partners (about 80 %) see on-site testing as complementary to laboratory testing.

In Table 5.2, the reasons for performing tests in the laboratory are listed; the answers from customers and suppliers have been evaluated separately.

Reason                                                  Customer  Supplier  Total
Time saving                                              56 %      67 %     63 %
Cost saving                                              44 %      60 %     54 %
Enable specific situations                               22 %      47 %     38 %
Early detection of failures                               0 %      27 %     16 %
Safety reasons                                           22 %      27 %     25 %
Flexibility of test environment                           0 %      20 %     13 %
Availability of experts and time/effort to fix errors     0 %      20 %     13 %
Quality reasons                                            0 %      20 %     13 %
Contractual or legal regulation                          22 %      13 %     17 %
Availability of real environment                         11 %      13 %     13 %
Proof of fulfilling all requirements / assessment          0 %      13 %      8 %
Risk reduction                                           11 %       0 %      4 %
Integration of different suppliers' subsystems           11 %       0 %      4 %
Demonstration of interoperability                        11 %       0 %      4 %
Integration of hardware and software                     11 %       0 %      4 %
Data validation                                          11 %       0 %      4 %
Time flexibility                                         11 %       0 %      4 %

Table 5.2 – Reasons for testing in laboratory

Suppliers mentioned laboratory tests more often than customers, see Figure 5.19.


At the same time, the suppliers mainly enjoy the advantages of high availability, safety and flexibility in the laboratory environment. The customers, on the other hand, use the laboratory tests to ensure the integration of subsystems and interoperability.

Figure 5.19 – On-site as a complementary to laboratory testing?

The on-site fall back is only required as a last resort, e.g. if further laboratory tests were too expensive, and only if it does not avoid the execution of already planned on-site tests. The customers tend more towards on-site fall backs than the suppliers, see Figure 5.20.

Figure 5.20 – On-site testing as a fall back?

If a product is updated and a full test has already been made with the previous version, most of the partners would not repeat the full tests.


The detailed answers (in percentage) are shown in Figure 5.21. Instead, they rely on an impact analysis to select the tests to be repeated. According to five replies, for changes with minor impact a limited subset of the tests is chosen for execution, while for a major update complete regression tests are performed.

Figure 5.21 – Handling of updated products, regression tests

A high level of automation of the test environment can influence this decision by lowering testing costs and increasing the number of selected tests. On the other hand, increasing experience and confidence in the testing process may reduce the number of selected tests.
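A minimal sketch of such an impact analysis is given below; the component-to-test mapping and the "major update" rule are illustrative assumptions, not the partners' actual processes.

```python
# Illustrative impact analysis for regression test selection: minor updates
# re-run only the tests linked to the changed components, major updates re-run
# the complete regression suite. The mapping below is a made-up example.

TEST_MAP = {
    "odometry":      ["TC-010", "TC-011"],
    "dmi":           ["TC-020"],
    "rbc_interface": ["TC-030", "TC-031", "TC-032"],
}

def select_regression_tests(changed_components, major_update=False):
    all_tests = sorted({t for tests in TEST_MAP.values() for t in tests})
    if major_update:
        return all_tests                    # complete regression test
    selected = set()
    for component in changed_components:
        # Components without a mapping would need manual review in practice.
        selected.update(TEST_MAP.get(component, []))
    return sorted(selected)

print(select_regression_tests(["dmi"]))                      # minor change
print(select_regression_tests(["dmi"], major_update=True))   # major update
```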

5.6 Gain trust and quality of test

The answers given in the questionnaire show that suppliers check part of the customers’ requirements through tests executed in the laboratory; successful test execution can only be achieved with a high level of trust in, and quality of, the test activities.

Robust documentation is the basic requirement to increase quality: requirements documents of high quality are needed, and/or the early involvement of the supplier in writing requirements is important. Traceability between requirements (or any other input used to define tests) and tests must be well documented and must be kept up to date.
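As a sketch of how this traceability can be kept checkable (the requirement and test identifiers are invented; a real project would read them from its requirements and test management tools), a simple consistency check can flag requirements without tests and test cases with stale requirement links:

```python
# Illustrative traceability check between requirements and test cases.
# All identifiers are invented for the example.

requirements = {"REQ-1", "REQ-2", "REQ-3"}
trace = {                       # test case -> requirements it verifies
    "TC-100": {"REQ-1"},
    "TC-101": {"REQ-1", "REQ-2"},
    "TC-102": {"REQ-9"},        # stale link: REQ-9 no longer exists
}

covered = set().union(*trace.values())
uncovered_requirements = requirements - covered
stale_links = {tc for tc, reqs in trace.items() if not reqs <= requirements}

print("Requirements without tests:", sorted(uncovered_requirements))
print("Test cases with stale requirement links:", sorted(stale_links))
```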

Automatic and configurable simulators are another important aspect, because they can integrate the highest possible number of real or virtualised elements of the test environment (RBC, EVC, etc.).

0% 20% 40% 60% 80% 100%

Supplier

Customer

If your product is updated, do you repeat the full tests?

yes no depends on

Page 30: Deliverable D6.1 Current test condition and Benchmarking

Deliverable D6.1 Current test condition and Benchmarking report

GA 730640 Page 30 of 80

additional tools that allow performing tests according to a configurable sequence of automated commands, resulting in different test scenarios. In this way it is possible to obtain many benefits, like test reproducibility and multi-session based testing, so that some kind of faults is found earlier. Simulators increase the ability to compare the result obtained with the requirements and on the other hand the possibility to perform tests in degraded conditions.
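The "configurable sequence of automated commands" can be pictured as follows. This is a generic sketch under assumed command names; it does not describe any specific simulator product or interface mentioned by the partners.

```python
# Generic sketch of a configurable, automated test sequence for a simulator.
# Command names, parameters and the scenario content are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Scenario:
    name: str
    steps: List[Tuple[str, dict]]  # ordered (command, parameters) pairs


def run_scenario(scenario: Scenario, commands: Dict[str, Callable[..., bool]]):
    """Execute a scenario step by step and log the result of every command,
    so that the same sequence can be replayed for reproducibility."""
    log = []
    for command, params in scenario.steps:
        ok = commands[command](**params)
        log.append((scenario.name, command, params, "PASS" if ok else "FAIL"))
    return log


# Example: a degraded-condition scenario built from configurable steps.
degraded_radio = Scenario(
    name="degraded_radio_link",
    steps=[
        ("set_speed_limit", {"kmh": 80}),
        ("inject_radio_loss", {"duration_s": 10}),
        ("expect_brake_intervention", {"within_s": 15}),
    ],
)

# Stub commands standing in for a real simulator interface.
stub_commands = {
    "set_speed_limit": lambda kmh: True,
    "inject_radio_loss": lambda duration_s: True,
    "expect_brake_intervention": lambda within_s: True,
}

for entry in run_scenario(degraded_radio, stub_commands):
    print(entry)
```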

The use of external laboratories offers an independent view on the system, and such laboratories can help standardise test procedures and databases. Laboratories can also make use of and share their knowledge of operational rules used in different countries and of different engineering rules, as well as support cross-industry sharing of test results.

Finally, to really gain trust in test environments, trust in the test equipment and its capabilities is a key goal. The verification of methods, procedures and equipment by audits is very helpful.

In order to perform only meaningful laboratory tests, good quality requirements and specifications are needed, as well as requirements tracing. Furthermore, the adequacy and the completeness of the tests have to be verified.

The implementation of a well-defined test strategy and test plan, including the methods used, is very important. The quality of the deliverables has to be continuously evaluated and followed by improvement actions. The experience gained from previous tests also has to be considered.

The evaluation of previous projects improves the quality of subsequent projects; the experience with (and the relation to) previous tests (for example issues identified in on-site tests) should also be considered here. Factory Acceptance Tests (FAT) or agreements with the stakeholders can also be used. This gives rise to continuous improvement of the tools and processes. To make sure that all necessary and meaningful tests are performed in the laboratory, the tests are checked against the requirements so that the specification is covered completely. This is also defined and planned in the test strategy and test plan. This provides the required 'holistic approach'.

The process, as well as the methods used and the documentation of the different configurations, including a definition of specific test scenarios for each configuration (i.e. application specific), are very important.

5.7 Use of formal method verification

Based on the answers in the questionnaires, formal verification techniques are not considered as a way to replace testing in the railway industry. They can be used as a complement to test activities, simplifying some test specifications and reducing the test effort by reducing the number of tests in the laboratory, see Figure 5.22.


Figure 5.22 – Utility of formal verification

Indeed, even though a few companies (both customers and suppliers) claim that it is possible to replace tests with formal verification, their actual expectation is to reduce the total test effort or the number of tests in the laboratory, which is not a real replacement.

Testing and formal verification are complementary activities, and formal verification will not be able to completely replace testing, because only some portions of complex signalling systems can be covered by formal verification; it is fit for safety validation, not for performance. There is no unified opinion on whether formal verification is able to cover degraded modes, or whether these can only be tested by operational tests on site.

Some replies show that formal verification can be useful in the first steps of software testing, where generic low-level problems must be identified and corrected, such as memory usage, overflow, division by zero, out-of-bound array access, state machine verification and functional requirements verification. However, from the experience of recent projects, formal methods can also be applied at system level.
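As a small, self-contained illustration of the state machine verification mentioned above (a toy model invented for this section, not a method reported by any respondent), a safety property can be checked exhaustively over all reachable states of a simple interlocking-like model:

```python
# Toy illustration of state machine verification by exhaustive state exploration.
# The model (a block/signal pair where the signal must never show 'proceed'
# while the block is occupied) and its transition rules are invented.

def step(state, event):
    """Return the successor state for a given event (invented transition rules)."""
    block, signal = state
    if event == "train_enters":
        return ("occupied", "stop")  # entering a block forces the signal to stop
    if event == "train_leaves":
        return ("free", signal)
    if event == "clear_signal":
        # The interlocking only clears the signal when the block is free.
        return (block, "proceed" if block == "free" else "stop")
    return state


def violates_safety(state):
    block, signal = state
    return block == "occupied" and signal == "proceed"


# Explore every state reachable from the initial state and check the property.
initial = ("free", "stop")
reachable, frontier = {initial}, [initial]
while frontier:
    current = frontier.pop()
    for event in ("train_enters", "train_leaves", "clear_signal"):
        successor = step(current, event)
        if successor not in reachable:
            reachable.add(successor)
            frontier.append(successor)

assert not any(violates_safety(s) for s in reachable)
print(f"Checked {len(reachable)} reachable states: safety property holds.")
```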

5.8 Shift test from on-site to laboratory

In the railway sector, tests have historically been performed on-site and therefore on real tracks. Over the last decades, many of these tests have successfully been shifted to laboratories. Today, the question arises whether one can recommend shifting any further tests from on-site to the laboratory environment. As can be seen from Figure 5.23, it is not clear whether it would be useful to shift further tests to laboratories, for various reasons which will be discussed in the next sections.


Figure 5.23 – Recommendation of shifting further tests to laboratory environment

5.8.1 Benefits of executing laboratory tests and limits for shifting

The opportunity to shift tests into the laboratory puts more focus on the current laboratories to increase their capability and become able to reproduce scenarios as realistically as possible. One option to do this is to set up impartial laboratories, able to perform compatibility / integration tests with different on-board units and track-side equipment, which ensures a neutral view on the results. Interfaces between laboratories should be fully specified, in order to allow easy connection of external equipment (located in the supplier facilities).

Some replies stated that the impartial laboratory should make agreements with suppliers and IMs or RUs in order to have access to railway data and equipment, and thereby become able to reproduce operational scenarios with specific railway rules.

Current limitations to shift tests to laboratories are related to the environment simulation, which must be as accurate as possible in order to avoid a large volume of re-testing when the system is installed in the real environment.

Some companies reported additional problems related to the lack of confidence in ETCS rules and data in different countries (e.g. balises, RBC data and behaviour). From their point of view, it is therefore much better to perform the tests on-site instead of trying to reproduce the infrastructure and installation in the laboratory.

Further investigations on the limits and the benefits for shifting to laboratory will be performed in the test process definition.


5.8.2 Plans to shift tests into the laboratory

Plans to shift tests into the laboratory are mainly focused on specific test activities:

• Integration tests: In terms of integration between different subsystems (provided by same or different suppliers), protocols, GPS, preliminary system tests

• Functional tests: that means functional and operational test cases to prove the complete compatibility for railway operation

• Negative tests: in terms of abnormal end-to-end delay, errors on protocol level, fault injections

• GSM-R or future radio technology for signal characterisation: in order to reproduce on-site signal conditions as well as connection error ratio, connection loss, radio holes and network registration delay. Note: GSM-R end of life is planned for 2030 and the introduction of new technologies (4G/5G RAT) will start in 2022; considerations on a future test platform should take this evolution into account.

In any case, the choice between laboratory and site should be made by the project, depending on several aspects including the availability and cost of the laboratory and the site.

Site acceptance tests can also be reduced with:

• Standardized interfaces

• Railway line data availability

• Agreements between different suppliers aiming to prove compatibility between their equipment

5.9 Implementation of an impartial laboratory

Ten suppliers and four customers are interested in the implementation of an impartial laboratory responsible for Control Command & Signalling (CCS) testing during Shift2Rail. They bring in many competences, such as test architecture and laboratory design, experience with test tools and processes, simulation software development, experience in automated testing, IOP testing, GSM/GSM-R and GPRS testing, railway operation and operational planning, staff for laboratory and field testing, track design, and wired and wireless saboteurs for positioning and communication systems.


5.10 Main conclusions of railway sector analysis

In general, the greatest challenge when shifting tests to the laboratory is the interaction of the different products, i.e. creating an environment close to reality. The complexity of the overall system is especially important here.

Rework or update of the test environment may be necessary.

Building the laboratory test environment is very expensive and time consuming. Much effort has to be spent on creating a strategy and on buying or developing a test environment with various tools.

Another challenge is to build as much trust as possible in the laboratory (e.g. by testing in impartial laboratories), and therefore in the laboratory tests. It also seems to be challenging to clearly define exit criteria for testing.

System boundaries, especially the (external) interfaces and hardware as well as the GSM-R and network communication have to be considered thoroughly. Moreover, the human factor is very important in a dynamic and complex environment.

Mixed teams with different experts, together with tool harmonisation (new software tools, analysis tools, development of a test bench), are beneficial when shifting tests to the laboratory. An often-used procedure is the "record and replay" of real on-site scenarios and data.
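A minimal sketch of the "record and replay" idea is given below. The file format, field names and timing handling are assumptions made for illustration; they do not describe any partner's actual tooling.

```python
# Minimal "record and replay" sketch: on-site messages are stored with timestamps
# and later fed into the laboratory environment with the original relative timing.
# The JSON-lines format and the field names are illustrative assumptions.

import json
import time


def record(messages, path):
    """Append timestamped on-site messages to a JSON-lines log file."""
    with open(path, "a", encoding="utf-8") as log:
        for message in messages:
            log.write(json.dumps({"t": time.time(), "msg": message}) + "\n")


def replay(path, send, speedup=1.0):
    """Feed recorded messages to the system under test via `send`,
    preserving the relative timing of the original on-site recording."""
    with open(path, encoding="utf-8") as log:
        entries = [json.loads(line) for line in log]
    previous_t = entries[0]["t"] if entries else 0.0
    for entry in entries:
        time.sleep(max(0.0, (entry["t"] - previous_t) / speedup))
        send(entry["msg"])
        previous_t = entry["t"]


record([{"type": "position_report", "km": 12.4}], "onsite_run.jsonl")
replay("onsite_run.jsonl", send=print, speedup=10.0)
```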

Nevertheless, tests according to Subset-111 [8] and renting test capacity from the suppliers are also helpful.

The analysis raises the question of where accountabilities and liabilities will reside during the transfer of testing to the laboratory. This will be discussed and confirmed during the development of the test process.


6 Test activities outside railway sector (benchmarking)

In this chapter, our benchmarking process with industries providing safety-critical products or services is described. Some of them have already shifted tests to the laboratory environment and may provide useful lessons learnt for the railway sector. Moreover, approval processes may differ significantly with respect to harmonisation across Europe (or the world).

Therefore, a questionnaire has been prepared and distributed among different industries outside the railway sector. The questionnaire can be found in Appendix 11.2. Most of the questions asked of the aviation, space, nuclear, medical and other sectors are largely the same as those asked in the railway sector part. The answers are discussed below and grouped into different fields.

6.1 Description of the analysis done

This section describes the context in which the benchmarking activities have been carried out. First, the aim of the benchmarking is explained; then the distributed questionnaire is described; after that, the timeframe and the number of answers received are presented; and finally a clustering of the safety-critical industries is shown.

6.1.1 Aim of the benchmarking

The concept of shifting tests from on-site into laboratories using simulation is not railway specific. This process has already started in several industries and may be extended further. Since ideas from other safety-critical industries might be applied to the railway sector, as much information as possible about the current field test and laboratory test activities in other industries first has to be gathered, with the aim of identifying areas which can help towards the goal of zero on-site testing.

The major objectives of this benchmarking are:

• Identifying the types of tests performed in laboratory and on-site
• Getting information about the reasons to perform the tests in laboratory and on-site
• Gathering experiences made by shifting tests

As stated, the focus has been set on safety-critical industries which are comparable to the railway sector and especially to the command and control systems within the railway sector. Knowing that information about these internal processes is difficult to collect, the broad variety of X2Rail-1 partners has been used: every partner used its own network to get in contact with the relevant persons in other industries. The major advantage of increasing the return rate by distributing the questionnaire this way came along with the disadvantage of not having a complete list of the companies addressed and of those that replied to the questionnaire, due to confidentiality. Therefore, it is not easily possible to get in contact with the people who answered the questionnaire.

6.1.2 Description of the form distributed

An electronic benchmarking questionnaire was created and distributed via the X2RAIL-1 WP6 partners. Answers were given anonymously and all questionnaires have been collected and evaluated confidentially.

The questionnaire starts with general questions about company and personal information of the replier:

• Type of your company (e.g. supplier, infrastructure manager, independent assessor or validator)
• Technical area of your company
• Your personal area of expertise (e.g. product manager, system engineer, test manager)

The questionnaire proceeds with topic-related questions:

1. What tests do you perform in the laboratory and what tests in the real environment?

2. Why do you perform tests in the laboratory, e.g. legal regulations, time saving, cost saving and please specify the effects?

3. Please provide your experience about shifting tests from on-site to the laboratory environment.
3.1. What have been the pains or the challenges you had to deal with before shifting tests to the laboratory environment?
3.2. What system boundaries have to be taken into account?
3.3. What has helped you solve these challenges? What is the effort?

4. How does shifting tests from on-site to laboratory influence company-specific processes and the organisation of your company (e.g. in the fields of product management, development process, regulation)?

5. How has or will shifting tests from on-site to laboratory influence certification and/or approval processes and documents?

6. How can you make sure to perform only meaningful laboratory tests, which can serve as a confirmation of product/service quality and safety level?

7. What are the tests, which cannot be shifted and will have to be performed on-site? Why?

8. What are you doing to gain trust in the tests executed in the laboratory (in contrast to on-site tests)?

Page 37: Deliverable D6.1 Current test condition and Benchmarking

Deliverable D6.1 Current test condition and Benchmarking report

GA 730640 Page 37 of 80

All the questions intend to achieve a deeper understanding of the way of testing of the companies addressed, focusing mainly on their evolution from field testing to laboratory testing.

6.1.3 Distribution of company types

The questionnaire was distributed at the end of November 2016. The deadline for receiving answers was the middle of January 2017.

We have received 22 replies from 15 different companies. Some companies have sent answers from different internal departments or sectors. Most of the replies came from countries inside the European Union. However, there was also a reply from North America.

Figure 6.1 shows the distribution of answers related to the company types. A broad spectrum of industries can be identified among these company types and the replies are distributed among these industries:

• Supplier
• Regulator
• Data & Service Provider
• Consultancy
• Research Institute
• Manufacturer

Figure 6.1 – Split of replies divided into company types

(Figure 6.1 data: Supplier 27 %, Regulator 5 %, Data & service provider 9 %, Consultancy 14 %, Research institute 27 %, Manufacturer 18 %)


6.1.4 Cluster of the safety-critical industries addressed

The analysis of the replies in terms of areas of industry shows that the number of replies from the medical or defence sectors is very small, see Figure 6.2. Although the questionnaire was distributed to a large number of companies from these areas, the feedback was quite small. One reason for this is the fact that the benchmarking and the questionnaire deal with highly critical issues for these industries; these sectors therefore naturally have strong restrictions on replying to such questionnaires and on giving information to the public or to other industries or companies.

Figure 6.2 – Area of safety critical industries

Note: The answers coming from the communication industry are analysed in part A, as they are part of the railway related industry.

Due to the distribution procedure and the fact that more or less all members of X2RAIL-1 WP6 are from the transportation sector, the number of replies from that sector is very high, as can be seen in Figure 6.3. Other sectors like nuclear, medical devices, defence or consultancies for system engineering also participated, but only in a very limited way.

(Figure 6.2 data: Space 27 %, Aviation 18 %, Nuclear 9 %, System engineering & validation 9 %, Multimodal traffic management 9 %, Wireless communication & navigation 9 %, Automotive & truck manufacturing 9 %, Medical devices 5 %, Defense 5 %)


Figure 6.3 – Sector of industries

6.2 Results of the analysis

All the information collected has been analysed with the aim of drawing conclusions towards the objective of zero on-site testing. The analysis shows that on some topics there is broad agreement among the different sectors, while on other topics there is no such agreement.

The information is arranged as follows:

• Types of tests currently performed
• Tests in laboratory vs. tests on-site
• Laboratory testing
• Impact of shifting tests from on-site to laboratory
• Explanation of the approval process & need for harmonisation by sector
• Main conclusions of the analysis

6.2.1 Types of tests currently performed

The analysis of the data collected shows that all the companies are performing laboratory tests. However, not all of them are performing tests in a real environment. This is due to the fact that sometimes there is no access to the real environment, e.g. for components deployed in space or in the satellite communications industry.

6.2.1.1 Types of tests performed in laboratory

The laboratory tests can be split into three different types of test: subsystem tests, integration tests and system validation tests.

(Figure 6.3 data: Transportation 73 %, Nuclear 9 %, System Engineering 9 %, Medical Devices 4 %, Defense 5 %)


These types of tests are inter-dependent and follow a specific sequence. First, the subsystem tests are performed. Once the subsystems have been validated satisfactorily, integration tests can be performed. Finally, system validation tests are carried out when the integration tests have succeeded. Therefore, laboratory tests are not limited to subsystem tests; integration and even system tests may also be performed in the laboratory.

Besides functional validation tests, laboratory tests are also used to perform non-functional tests like safety, security and qualification tests (such as electromagnetic compatibility, temperature tests, etc.).

Figure 6.4 shows that most tests performed in the laboratory are subsystem tests, which are needed before the integration tests. Accordingly, the number of integration tests is greater than the number of system tests, which require the completion of the other types of tests.

Figure 6.4 – Split of types of test performed in the laboratory

6.2.1.2 Types of tests performed on-site

In contrast to laboratory testing, where three types of tests could be identified, on-site tests also cover other types of test, but the focus is mainly put on system validation tests. Moreover, no subsystem tests are performed on-site, as can be seen in Figure 6.5.

Besides functional system validation tests and integration tests, non-functional tests are also performed on-site.

Moreover, the vast majority of the answers state that on-site tests are carried out for activities that could not be reproduced in laboratory tests. This does not include acceptance tests, which can be considered as additional tests that might be done on-site due to customer request.

(Figure 6.4 data: Subsystem 50 %, Integration 30 %, System 20 %)


Figure 6.5 – Split of types of test performed on-site

6.2.2 Tests in laboratory vs. tests on-site

Laboratory tests have a number of benefits. First, laboratory testing is safer for humans, material and the environment than the equivalent on-site testing. Laboratory testing can be performed in early stages of the product lifecycle, when failures and features can be identified very early, so that costs and time can be saved. Since the laboratory is a controlled environment, in contrast to the on-site environment, it is possible to test worst-case scenarios and stress-test scenarios. Moreover, the repeatability of the tests is guaranteed. Furthermore, tests can be corrected and modified more easily (e.g. for regression tests) and various scenarios can be performed within a short time, both leading to shorter overall testing time. Finally, the reduction of testing time also comes from the higher availability of the laboratory for testing compared to on-site areas, i.e. on-site test areas might be restricted or not usable due to logistics or environmental conditions.

Considering all these advantages, there are a number of reasons that make the companies asked perform laboratory tests, as shown in Figure 6.6. These reasons can be classified into three groups. The main reason is cost and time savings, with 36 % of the answers. 27 % of the answers state that the laboratory tests are contractually defined or required by the authorities. Finally, functional reasons such as interoperability, product-related tests and worst-case scenarios (safety and security, reproducibility) cover 37 %.

(Figure 6.5 data: Subsystem 0 %, Integration 9 %, System 76 %, Non-functional 10 %, Acceptance 5 %)


Figure 6.6 – Distribution of the reasons for performing laboratory tests

6.2.2.1 Challenges of shifting tests from on-site to laboratory

Shifting from on-site tests to laboratory tests involves a number of challenges. These challenges can be grouped into three main topics: creating a laboratory simulation/test environment, technical issues, and gaining trust in laboratory tests.

A number of difficulties are found when creating a laboratory simulation/test environment. Usually it requires an expensive and long-term development of simulation systems and of the laboratory test setup. Sometimes it is necessary to add specific operational modes to the simulation environment with regard to company or country needs. In addition, sometimes the final test cannot even be performed in the laboratory (e.g. medical tests).

Technical issues might not allow laboratory testing due to technical limits of the laboratory itself. Where these limits can be overcome, there is a need to gain experience about the real environment conditions first, before transferring them to a laboratory environment. For performing laboratory tests, creating standardised testing interfaces is very useful for the target sector, so that different suppliers and different end-users can employ the same tests and test setup, and external impartial laboratories can also take part in the process.

(Figure 6.6 data: Contractually defined 11 %, Required by notified body 16 %, Safety and security 5 %, Reproducibility 9 %, Interoperability 13 %, Cost savings 19 %, Time savings 17 %, Product related tests 10 %)


Finally, in order to gain trust in laboratory tests, a "change in mindset" is required to rely on laboratory tests and their outcomes. Moreover, there is a need for the acceptance of laboratory test results for safety and security.

6.2.2.2 What has helped to solve those challenges?

With the aim of solving these challenges, the experience and lessons learnt can be examined. From there, a number of actions to be applied have been identified, for example: use simulation tools, make information available internally and from other actors, and define the benefits of laboratory testing.

Currently there are a number of new simulation tools available that are easy to adapt to many needs. Therefore, it is challenging to find the right tool for a specific application. For this, a review of current industry practices and the definition of the test plan together with the customer are really helpful. The contribution of the customer is key, mainly due to two aspects: the synergy created between the supplier and the customer, and the need for the on-site test data, which the customer has access to, for comparison with the laboratory test data. Moreover, not only the involvement of the customer is needed but also the involvement of employees with relevant experience, which will lead to the definition of the test plans and procedures. This should include the demonstration of the coverage of as many hazards as possible, showing, if needed, complete safety and security tests. Finally, external information coming from the exchange with regulators and experts (e.g. universities) to obtain good models, and the standardisation of tests, are also good practices towards overcoming the limitations of laboratory testing.

Among all these actions, the validation of the test site should be highlighted. For that, it is necessary to compare laboratory data with on-site data to ensure that the environment reproduced in the laboratory matches the on-site environment and that the behaviour of the system under test (SUT) in the laboratory matches the behaviour on-site.
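One simple way to picture this comparison is sketched below: corresponding laboratory and on-site samples must agree within an agreed tolerance. The signal names and tolerance values are invented; real projects would apply project-specific acceptance criteria.

```python
# Sketch of validating a laboratory environment against on-site recordings:
# corresponding samples must stay within an agreed per-signal tolerance.
# Signal names and tolerance values are illustrative assumptions.

def compare_runs(onsite, lab, tolerances):
    """onsite, lab: dicts mapping signal name -> list of sampled values.
    tolerances: dict mapping signal name -> allowed absolute deviation.
    Returns a list of (signal, sample index, onsite value, lab value) deviations."""
    deviations = []
    for signal, onsite_values in onsite.items():
        lab_values = lab.get(signal, [])
        allowed = tolerances.get(signal, 0.0)
        for i, (a, b) in enumerate(zip(onsite_values, lab_values)):
            if abs(a - b) > allowed:
                deviations.append((signal, i, a, b))
    return deviations


onsite_run = {"speed_kmh": [0, 38, 62, 80], "brake_pressure_bar": [5.0, 4.1, 3.2, 3.0]}
lab_run = {"speed_kmh": [0, 39, 61, 80], "brake_pressure_bar": [5.0, 4.0, 3.3, 2.4]}

for deviation in compare_runs(onsite_run, lab_run,
                              tolerances={"speed_kmh": 2.0, "brake_pressure_bar": 0.3}):
    print("Deviation:", deviation)  # flags brake_pressure_bar, sample 3 (3.0 vs 2.4)
```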

6.2.2.3 Boundaries of shifting to laboratory

In the process of shifting on-site tests to laboratory tests, a number of boundaries are found. These boundaries have to be overcome to improve laboratory testing with the aim of reducing on-site testing.

It has been stated that the real environment is the main boundary, since reproducing that environment in the laboratory is not straightforward. Data about the real environment is difficult to collect and it is not foreseen that it will ever be completely available. Additionally, costs, technology and organisational limits, with different responsibilities of factory and customer and different (on-site) environments, make it difficult to develop the laboratory testing site. Finally, human behaviour affecting the system needs to be simulated as well; therefore, this effect should also be included in the laboratory test site.

6.2.3 Laboratory testing

With the aim of learning how to identify the laboratory tests to be done, this subsection describes the way other sectors work to ensure that only meaningful tests are performed in the laboratory, the identification of tests that cannot be shifted to the laboratory, and the way to gain trust in those laboratory tests.

6.2.3.1 How to ensure that only meaningful tests are performed in the laboratory

The replies of the companies to the question of how to avoid performing non-useful tests in the laboratory varied. However, there was one common answer: to rely on the experience and lessons learnt over the years in the sectors the companies are targeting.

In order to complement the experience, or to compensate for a lack of experience, a number of specific actions were also mentioned. In order to find the right balance between requirements and over-specification, clean traceability is proposed. Moreover, the definition of the tests and of the test coverage is also key, together with the essential preparation work. Having a product risk analysis is also useful for performing only meaningful tests. When possible, standardised tests, which have already been validated by the regulator, should be employed. Additionally, to ensure that meaningful tests are performed, only standard tests, detailed in a specific European subset and performed in qualified laboratories, can be performed. Finally, adding realistic data to the laboratory tests, recorded under real life conditions, also contributes to the quality of the laboratory tests.

6.2.3.2 Which tests cannot be shifted into the laboratory

There are many challenges and boundaries when shifting from on-site testing to laboratory testing. Even though the challenges can be met, there are boundaries that cannot be overcome and therefore some tests cannot be shifted to the laboratory.

First, the tests are split into seven different types:

• Non-reproducible tests: Tests with no possibility to measure, simulate or reproduce real environment data (e.g. tests with real mobility effects, …)

• Specific performance tests: Some specific performance tests have a significant meaning if performed on-site only.

• Installation tests: Installation test can only be performed once the installation is on-site.


• Field acceptance tests: Some tests, which have already been performed in the laboratory, have to be repeated on-site.

• Area specific tests: These kinds of tests apply to the space and nuclear sectors and are related, for example, to the interaction of rocket and launch base, the interaction of cold & warm reactor, etc.

• Specific on-site tests: Tests required by authorities, standards and/or customers (e.g. in the avionics and medical areas: “no one will accept an aircraft which is not flight-tested”).

• Human behaviour tests: Tests including human behaviours are difficult to perform in the laboratory properly. This is usually the case in the automotive industry.

Second, these seven types of tests are arranged into two categories of reasons for laboratory testing: functional and requested. These reasons are linked to the main boundaries for laboratory testing, namely creating a simulation environment and gaining trust in tests, see Table 6.1.


Reasons for laboratory testing | Boundary: Creating a simulation environment (technical issues) | Boundary: Gaining trust in laboratory tests
Functional | Non-reproducible tests, Specific performance tests, Field acceptance tests, Area specific tests, Human behaviour tests | Specific performance tests, Installation tests, Area specific tests, Specific on-site tests
Requested | Specific performance tests, Field acceptance tests, Area specific tests, Specific on-site tests | Specific performance tests, Field acceptance tests, Area specific tests, Specific on-site tests

Table 6.1 – Reasons and boundaries for laboratory testing.

6.2.3.3 How to gain trust in tests executed in the laboratory?

A number of different activities performed by different industries have been collected with the aim of gaining trust in laboratory testing. All of these help to demonstrate that the proposed laboratory tests are reliable and significant in comparison to the on-site tests.

Quantitative KPIs for each test are employed and, when possible, those KPIs are agreed upon with the customer. Customers and/or authorities should also participate in certain tests. External audits resulting in certification are also performed. Moreover, test tools which are qualified against applicable codes and standards are employed. The laboratory test is validated by performing the same test on-site and comparing the results. In addition, systematic and transparent execution of tests and documentation including traceability is considered. Finally, reproducibility is a key factor of the laboratory tests.
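As a minimal illustration of such quantitative KPIs (the chosen indicators and the data layout are assumptions made for this example, not a reported practice of any specific company):

```python
# Minimal sketch of two quantitative test KPIs: pass rate and requirement coverage.
# The data layout and the chosen indicators are illustrative assumptions.

def test_kpis(results, requirements):
    """results: dict test ID -> {"passed": bool, "requirements": set of requirement IDs}
    requirements: set of all requirement IDs in scope."""
    executed = len(results)
    passed = sum(1 for r in results.values() if r["passed"])
    covered = set().union(*(r["requirements"] for r in results.values())) if results else set()
    return {
        "pass_rate": passed / executed if executed else 0.0,
        "requirement_coverage": len(covered & requirements) / len(requirements) if requirements else 0.0,
    }


kpis = test_kpis(
    results={
        "TC-1": {"passed": True, "requirements": {"REQ-A"}},
        "TC-2": {"passed": False, "requirements": {"REQ-B"}},
    },
    requirements={"REQ-A", "REQ-B", "REQ-C"},
)
print(kpis)  # pass_rate = 0.5, requirement_coverage ≈ 0.67
```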

6.2.4 Impact of shifting tests from on-site to laboratory

This subsection describes the impact that laboratory testing has had on company-internal processes and on certification or approval in the different sectors analysed.


6.2.4.1 Impacts on company processes

Shifting on-site tests to laboratory tests can affect the company processes. In this context, the experience of the companies that replied to the questionnaire has been collected. The most important topics addressed reflect the benefits of laboratory testing.

One of the most important points is that internal processes need to reflect the importance of testing, and the processes have to be built around laboratory testing, including criteria, people, etc. Moreover, the benefits of including laboratory testing in the internal processes imply that investments in upgraded test equipment should be made.

Whether the complexity of the internal processes is reduced or increased by the introduction of laboratory testing differs significantly across the responses. Some companies mentioned that technical choices are simplified, while others, especially in the automotive sector, report that laboratory testing increases the complexity of the processes and requires additional competencies of the staff. Nevertheless, there is agreement on the reduction of logistics, management and communication processes due to laboratory testing.

6.2.4.2 Impacts on certification or approval

Nowadays, laboratory testing is part of the process for certification or approval. Usually it precedes the next step of the certification or approval process, as in the case of the aviation sector, where laboratory testing is a prerequisite for flight-testing permissions.

Even though most approval processes are based on laboratory tests today, the trend might change. In the case of the automotive sector, the trend goes rather in the opposite direction, shifting from laboratory tests to on-site tests for certification (e.g. as a consequence of the diesel emission affair).

It also happens that customers and regulators assess the whole development process (planning, requirements, basic design, detailed design, manufacturing, and all testing levels) and that certification is planned together with the authorities (ARP4754 – Aerospace Recommended Practice), as in the case of the nuclear and space sectors.

6.2.5 Explanation of the approval process & need for harmonisation by sector

There are two general comments that apply to most of the sectors studied. Currently, standardised processes for approval are established in most areas, which pave the way towards putting the product onto the market. In cases where European standards do not exist, there is a need for harmonisation in all areas.


A specific description of the processes for each area studied is given in Table 6.2. The main conclusions are that aviation is an area with a large number of approved standards and regulations and that it is a safety-critical area; therefore, the principles in the aviation area can be a useful input for the railway sector. Moreover, part of the communication domain (GSM-R) is already included in the railway sector.

Area: Wireless Communication
• Already worldwide standardized by 3GPP, IEEE, ETSI.
• The Third Generation Partnership Project (3GPP™) was established in 1998 to develop specifications for advanced mobile communications. It comprises:
  o seven regional Standards Development Organizations (SDOs) including ETSI
  o market associations
  o several hundred companies
• The original scope of 3GPP was to produce globally applicable reports and specifications for a third generation mobile system based on evolved Global System for Mobile communication (GSM) core networks and the radio access technologies that they support.

Area: Defence
• Each country and each customer have different requirements, so it is not easy to harmonise and standardise the approval process, although the system engineering management is quite similar.

Area: Nuclear
• At European or international level, there are quite a lot of regulation bodies dealing with regulation and control, e.g.:
  o IAEA: International Atomic Energy Agency
  o WENRA: Western European Nuclear Regulators Association
  o EURATOM: European Atomic Energy Community
• Standards and guidelines are provided in IEEE and IEC.
• Significant differences among countries.
• Country-specific regulations, such as safety aspects and environmental conditions.

Area: Aviation
• Aircraft need world-wide flight permission.
• To reduce certification effort for safety-critical systems, equal standards are necessary and do exist:
  o The certification process is specified by EASA (European Aviation Safety Agency, EU) and FAA (Federal Aviation Administration, US).
  o Other countries have bilateral agreements referring to these processes.
• In the EU area, identical regulations apply to all countries (for example, EU Regulation No. 748/2012 for development and production, EU Regulation No. 1321/2014 for maintenance), which are integrated into the country-specific legal frameworks of the EU member states.
• For drones:
  o There is a need for harmonisation.
  o Currently there is no harmonisation in the drone industry; each country has its own legislation.
  o EASA and FAA are now working on harmonisation to give guidelines for construction and operation.

Area: Space
• There are differences between national projects.
• In the projects contracted by the European Space Agency (ESA) the approval process is mainly harmonised.
• European harmonisation would simplify the administrative and contractual work.

Area: Multimodal traffic management
• Less standardised area.
• Traffic Management Systems (TMS) are different in every country and have country-specific solutions.
• Standardisation and harmonisation initiatives are useful.

Area: Medical
• No answer given to that question.

Area: System engineering and validation
• Not enough data given to that question.

Table 6.2 – Statement of the standardisation level of the approval process per sector studied

6.3 Main conclusions of the benchmarking analysis

The main conclusions obtained from the analysis that will help towards the objective of zero on-site testing are the following:

• The questionnaire is mainly related to safety, not to security.


• The questionnaire shows that next to the railway sector, other safety critical industries exist.

• These industries and companies have to deal with the same issues as the railway sector does.

• Replies from many companies and departments have been received.
• Although attempted, only very limited answers from additional sectors like chemical, pharma and medical were given; due to the reasons already mentioned in section 6.1.4, no further answers could be gathered.
• Almost all companies of all areas perform both types of test: laboratory and on-site.
• Taking functional tests into account (subsystem, integration and system tests), all subsystem tests are performed in labs and a vast majority of system tests are performed on-site.

• For obvious reasons there are tests, which cannot be shifted into laboratory (e.g. customer acceptance, installation, simulation of human behaviour).

• No industry has shown the capacity and capability to perform all relevant tests in laboratory. On-site tests are still required.

• Automotive is showing that some tests currently performed in laboratory will be shifted back to on-site in the future (Diesel emission affair).

• Performance tests take place in the laboratory as well as on-site, but only up to a certain degree in labs, as they have to be completed on-site due to certain boundaries.

• The more the companies know about the on-site tests, the more data they can provide as an input for laboratory environment in order to gain trust in the laboratory test and shift tests from on-site to laboratory as much as possible.

6.4 Recommendations

This section includes a list of recommendations to be applied in the test process and the test architecture for railway systems, derived from the experiences provided by other industries, see Table 6.3.

Recommendation no. 1: Tests shall be carried out in a safe manner with regard to human, material and environment.

Recommendation no. 2: Different levels of test should be defined, such as subsystem tests, integration tests and system validation tests; moreover, non-functional tests should also be included. Laboratory tests should start in the early stages of the product lifecycle, when failures and features can be identified very early and therefore costs and time can be saved. Thus, these tests should allow testing a number of scenarios within a short time, leading to shorter overall testing time. Laboratory tests should be planned in such a way that logistics, management and communication processes are reduced compared to on-site testing. When required, customers and regulators should be included in the assessment of the whole development process (planning, requirements, basic design, detailed design, manufacturing, and all testing levels), including certification and laboratory testing activities.

Recommendation no. 3: Acceptance tests can be considered as additional tests that are requested by the customer to be done on-site and not to be moved to the laboratory. Evidence gathered through laboratory testing may, however, contribute to acceptance tests.

Recommendation no. 4: Laboratory tests should include worst-case scenarios and stress-test scenarios. Repeatability of laboratory tests should be guaranteed. For that, the right tools should be found among those available; for example, test tools which are qualified against applicable codes and standards should be used. One of the key aspects for this selection is flexibility and the ease of adapting the tool to many needs.

Recommendation no. 5: Laboratory tests should be easy to correct and/or modify, allowing e.g. regression tests. Furthermore, they should be flexible, with the aim of allowing an easy addition of specific operational modes required by the customer.

Recommendation no. 6: Standardised testing interfaces should be created with the aim of allowing the use of the same tests and test setup independent of the supplier and end-user. Moreover, this allows external impartial laboratories to take part in the process.

Recommendation no. 7: Laboratory tests should enable a "change in mindset" with the aim of gaining trust in laboratory tests and their outcomes. Experience about the real environment conditions should be gained first, before transferring the environment to the laboratory. In the end, the validation of the test site should be done by comparing laboratory data with on-site data, to ensure that the environment reproduced in the laboratory matches the on-site environment and that the behaviour of the system under test (SUT) in the laboratory matches the behaviour on-site. Realistic data, recorded under real life conditions, should be added to the laboratory tests to increase the quality of the tests. External audits of the laboratory and the laboratory tests, resulting in certifications, should be performed.

Recommendation no. 8: Laboratory tests should also demonstrate that their results can contribute to the acceptance of safety (and security). For that, the test plans/procedures should include the demonstration of the coverage of as many hazards as possible, showing, if needed, complete safety and security tests. The definition of the tests and of the test coverage, together with the essential preparation work, should be considered for the laboratory testing. In order to find the right balance between requirements and over-specification, clean traceability should be applied. When testing a product in the laboratory, a Product Risk Analysis should be included. Laboratory tests should include systematic and transparent execution and documentation including traceability.

Recommendation no. 9: When possible, standard tests, which have already been accepted by governmental bodies, should be employed. One of the easiest ways to ensure this is to perform the tests in an impartial and qualified laboratory.

Recommendation no. 10: Quantitative KPIs for each test, when possible agreed upon with the customer, should be used.

Recommendation no. 11: The importance of testing and of testing processes should be reflected in the internal processes. This includes laboratories, people, etc.

Recommendation no. 12: Investments in upgraded test equipment should be made for laboratory testing.

Recommendation no. 13: Laboratory tests should be part of the process for certification or approval.

Table 6.3 – Recommendations for the laboratory testing from the answers received from outside the railway sector.


7 Harmonisation and authorisation activities

In this chapter, the different approval processes for railway systems in different countries are considered; testing in the laboratory or on-site is one of the important steps to fulfil the specific approval requirements for CCS systems. Moreover, the interest in existing working groups and the interest in a standardised European approval process for the overall CCS system, which may be established in the future, have been evaluated.

The approval statement is a crucial phase in a project because it gives the authorisation for placing into service. The process relates to various parts/subsystems of the railway system according to each project. Approval requirements come from different sources such as Technical Specifications for Interoperability (TSI), National Rules, and customer and supplier specifications. This leads to rather complex and differing approval processes, and as evidence for approval, requirement-based test activities are central.

It has to be recognised that these harmonisation and authorisation activities are not in the scope of the X2Rail-1 project and that the information gathered will be shared with those dealing with this topic, e.g. ERA. However, the following sections of this report summarise the answers to five questions on harmonisation activities and the approval process, as they have been included in the questionnaire in order to gain an understanding of the current situation in the railway sector.

In this context, any other TDs which might influence the approval processes, e.g. TD2.7 Formal Methods, will also be taken into account. Once these research activities reach that level, a more detailed investigation is needed, since the harmonisation and authorisation activities require closer cooperation with the relevant authorities.

7.1 Existing harmonisation activities

First of all, it was important to know whether there are already existing working groups regarding testing and which are the most important ones. The results can be seen in Figure 7.1.


Figure 7.1 – Testing Working Group participation

All groups with fewer than two specified answers are displayed as "others" and are listed below.

Others:

• EMC working groups for ERTMS subsystems
• ESA for positioning applied to railway applications
• NVIOT / Network Vendors Interoperability Testing
• RAN working group
• TEN 3rd Call & "ETCS2 on GPRS experimentation" projects
• UIC, participating to GPRS testing, Austria
• CEF, participating to Network test cases definition
• Infrabel CFL ADIF Holland
• Preparation of Project "ETIP"
• Subset 036 [2]
• ERA working groups under the 4th RP for vehicle authorisation and TS approval

It appears that a large number of different working groups regarding testing exist at the moment or did exist in the past. At least five main groups can be identified: EUG, UNISIG IOP, groups related to TEN-T, the validation & test sub-group of the ERA ERTMS stakeholders' platform, and the group related to Subset-076 [3]. EUG and the UNISIG IOP working group appear to be the most important ones. Eight stakeholders have no participation at all. According to the questionnaire results, there is a real interest in working groups, but testing working group activities seem to be fragmented. Some companies, in particular some suppliers, participate in different working groups, while at the same time others indicated that they have no participation.

Concerning the question on the further need to harmonise test strategies, most of the replies confirm that there is a need for further harmonisation of test strategies in the future, see Figure 7.2. A few repliers also provided a reason or an area where they see a specific need, see Figure 7.3.

Figure 7.2 – Need for harmonisation of test strategies

(Figure 7.2 data: Yes 75 %, No 12 %, No answer 13 %)


Figure 7.3 – Possible targets of harmonized test strategies

27 % of the repliers who provided a reason (suppliers and system integrators) think that interoperability is in need of harmonisation. The other answers are mostly connected to effectiveness and a common management.

7.2 Different approval processes in different countries

The approval processes for CCS systems in different countries were to be explained in an additional question and table in order to compare them. Although ERTMS is already harmonised, it is known that there are still some differences in the application of the ERTMS-related processes in different member states; therefore, the way of dealing with the ERTMS application has been included in the questionnaire. This topic will be taken into account in the future TD 2.6 activities, especially in X2Rail-3 and X2Rail-5.

As testing in laboratory or on-site is one of the important steps to fulfil the specific approval requirements for CCS systems, approval processes must be considered when discussing testing topics.

In fact, only 10 companies out of 24 answered this question (42 %). A few companies gave two answers, which results in 12 answers to analyse. The result is well balanced, because five answers come from infrastructure managers and five from suppliers. These answers cover seven different European countries (one answer per country, plus two answers for Denmark). Two answers concern CENELEC, one a telecom product, and the rest other TSIs (almost empty).

Over all evaluated approval processes, the number of process steps varies from 2 to 8. In general, little information on these steps was provided.

(Figure 7.3 data: Yes, common authorisation 18 %; Yes, common process 9 %; Yes, more effective, cost and time saving 28 %; Yes, reduce level of equipment 18 %; Yes, interoperability 27 %)


Consequently, there is a need to request additional information and details on each national process to compare them into more detail in the field of CCS systems.

However, the NSA responsibilities appear to be similar in every country: process verification and approval in line with the CSM.

The responses show that the project leader (infrastructure manager or supplier) has the responsibility to provide evidence by undertaking tests or presenting test results. RUs have not been asked.

The estimated time frame for the whole approval process is generally not precisely known, even if at least several months are required. However, there are time-frame indications, such as from 6 to 26 months, which is in the range of 30 % of the project time.

There are indications that some test labs exist or will be available soon. These are predominantly CENELEC test laboratories and integrity test laboratories for suppliers.

From the answers, it appears that the approval processes are different for every railway network. Some of the processes are standard for systems such as ERTMS, but there are differences in the application of the processes in different member states. Some processes rely entirely on the supplier process to issue relevant documentation.

The process is driven by national rules (except for ERTMS). The processes have been in place for many years and some countries are currently taking a step towards simplification, transferring responsibilities and saving time and money in accepting a system, e.g. the German New Approval Process (NTZ).

System engineering management seems to be quite similar (despite the 2 to 8 steps depicted). Validation processes must (and do) include the CENELEC standards. The difference is how (and by whom) the process is supported. Throughout Europe, the different NSAs have the responsibility to verify the process and the approval in line with the CSM, while IMs or suppliers have the responsibility to provide evidence by performing tests or presenting test results. The approval process can take from several months to a few years, according to country-specific processes and project time frames.

It has to be noted that there are already a few test laboratories in Europe, but it seems that they do not all exist for the same purpose.

7.3 Challenges of European approval processes

There is a large interest in a standardised European approval process: 70 % of the replies answered this particular question and all of them saw a need and an opportunity in the standardisation of the approval process for CCS systems, comparable to ERTMS, which has already been standardised across Europe.


Several reasons for the interest in a further European approval process are visualised in Figure 7.4.

Figure 7.4 – Need for standardised European approval process? Why?

A few respondents answered that there is a need for standardisation of the OBU approval process, but they also considered that standardisation of trackside equipment was neither needed nor possible, since there are large differences between system configurations. A standardisation of CCS systems and products will influence the related tests which are needed for approval. Other possible advantages concern project aspects such as time and cost reduction or increased effectiveness.

The different advantages are listed and evaluated in Figure 7.5.

[Pie chart data: OBU integration: 12%; OBU integration, without trackside equipment: 17%; time and cost reduction and/or increase of competition: 21%; increase of effectiveness in the approval process: 21%; no answer: 29%]


Figure 7.5 – Possible advantages of European approval process

The main expected advantages of a European approval process are time and competition. Figure 7.6 shows what the infrastructure managers see as the main advantages of a European approval process, whereas Figure 7.7 shows the supplier perspective.

Figure 7.6 – Advantages of European approval process - IM views

[Bar chart: "What will be the main advantages of a European approval process?"; categories: time, competition, uniformity, OBU, ease, money, acceptance, reduce training, safety; y-axis: number of replies]

[Bar chart: "What will be the main advantages of a European approval process? (Infrastructure Managers)"; categories: OBU, uniformity, no answer; y-axis: number of replies]


From the IM perspective, the expected advantages are limited to the OBU approval process and uniformity. If systems or products, as well as the approval processes, are made uniform, the test effort and the number of tests related to project-specific concerns (e.g. OBUs in different countries and from different suppliers) will be reduced as well.

Figure 7.7 – Advantages of European approval process - suppliers views

From the supplier perspective, the main advantage of a European approval process is time saving; the second most frequently shared advantage is competition. These reasons differ from the infrastructure managers' views.

7.4 Main conclusion of harmonisation and authorisation analysis

ETCS, as a standardised system aiming at running trains throughout Europe, could be the vector to support harmonisation of the railway sector. The difficulty will be to propose a common approach that takes national reluctance and local railway standards into consideration.

A large majority of participants from the railway sector (62%) consider that there is a need to harmonise test strategies. Interoperability is identified as a key driver for the harmonisation of test strategies.

In the same way, a large majority are in favour of a standardised European approval process, in particular for the OBU. There is less support for such activities on trackside equipment, due to large differences between the system configurations in each country interfacing with the legacy systems. For infrastructure managers, the main advantage of a European approval process is the standardisation of the OBU approval process.

[Bar chart: "What will be the main advantages of a European approval process? (Suppliers)"; categories: time, competition, acceptance, ease, uniformity, safety, money, no answer; y-axis: number of replies]


It has to be recognised that this area is not in the scope of the X2Rail-1 project, and the information gathered will be shared with those dealing with it, e.g. ERA.


8 Cooperation with VITE

Besides the Shift2Rail member project X2Rail-1, some open-call projects are working on IP2 topics in parallel. For WP6 (work package 6), a project called VITE (Virtualisation of the Testing Environment) was set up. General information about the project can be found on the Shift2Rail website (https://shift2rail.org/project/vite/). The project partners of VITE are:

• CENTRO DE ESTUDIOS Y EXPERIMENTACION DE OBRAS PUBLICAS – CEDEX (Spain)

• MULTITEL (Belgium)
• RETE FERROVIARIA ITALIANA – RFI (Italy)
• ADMINISTRADOR DE INFRAESTRUCTURAS FERROVIARIAS – ADIF (Spain)
• RENFE-FABRICACIÓN Y MANTENIMIENTO, SOCIEDAD ANÓNIMA (Spain)
• ASOCIACION DE ACCION FERROVIARIA CETREN (Spain)
• RINA SERVICES SPA (Italy)
• BELGORAIL (Belgium)
• OLTIS GROUP AS (Czech Republic)
• UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA (Italy)

The X2Rail-1 WP6 highly appreciates the participation of the IMs from Italy (RFI) and Spain (ADIF) in the VITE project. This will help to bring more experience and knowledge into Shift2Rail.

The questionnaire has also been sent to VITE in order to investigate their testing activities. The VITE answers do not modify the WP6 view of shifting tests from site to lab; their responses to the questionnaire show complete compatibility with the current WP6 proposals and activities.

VITE has highlighted three aspects in their response.

(a) Lab tests are an appropriate option to prepare for on-site tests, i.e. to eliminate general failures.

(b) Improving the quality of service for ETCS Data Only Radio (EDOR) tests. Special attention is given to the IM responses within VITE: they have already performed a significant number of tests in the laboratory and plan to have even more tests simulated and shifted into the lab. This will help to monitor the system in service. Based on the monitoring results, additional operational scenarios can be defined easily (see the sketch after this list).

(c) The validation process can be improved by lab tests to allow the testing of any possible combination.
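
Relating to aspect (b), the following minimal sketch illustrates how additional operational scenarios could be derived from in-service monitoring results; the event names, data structure and threshold are purely hypothetical and are not taken from the VITE response.

from collections import Counter

def scenarios_from_monitoring(events, min_occurrences=2):
    """Derive candidate laboratory scenarios from in-service monitoring events.

    Any event type observed at least `min_occurrences` times becomes a candidate
    scenario to be replayed in the lab. Names and threshold are illustrative only.
    """
    counts = Counter(events)
    return [f"replay '{event}' in the lab" for event, n in counts.items() if n >= min_occurrences]

# Illustrative monitoring log of an EDOR connection (hypothetical event names)
log = ["handover", "connection_loss", "handover", "cell_reselection", "handover"]
print(scenarios_from_monitoring(log))   # -> ["replay 'handover' in the lab"]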



9 Conclusion

The key objective of zero on-site testing for CCS systems is to perform functional and non-functional tests (component tests, integration tests and system tests) in the laboratory instead of on-site, in order to save time and costs without reducing safety. As the focus is on CCS systems, RUs have not been taken into account.

In the railway industry, most component tests are performed in the laboratory, because the easier handling of these tests saves time and costs compared to on-site testing. Integration tests are done in a laboratory environment as well as on-site, depending on the possibility to simulate the real environment and worst-case scenarios in laboratories; for this, real environmental data have to be recorded first. The complexity of the system is an important factor when assessing whether to shift tests from on-site to a laboratory environment. System tests, and especially performance tests, are mainly done in the real environment in order to gain trust in the results achieved. It is important to evaluate the interaction of the product or service with the real environment, including human behaviour, which is generally quite difficult to simulate. The system boundaries, especially the (external) interfaces and the hardware, as well as the GSM-R and network communication, have to be considered.

There is a trend to shift as many tests as possible to laboratories, even though many IMs do not have their own laboratories available; they mainly rent suppliers' laboratories or laboratories of impartial institutes. Sometimes customers force tests to be done in the real environment instead of in the laboratory in order to gain trust in the results. Reducing time and costs through laboratory testing is only possible if tooling is established and the degree of automation is high with regard to test case generation, test execution and evaluation of the results. Moreover, it seems to be challenging to clearly define exit criteria for tests. Mixed teams of different experts, together with tool harmonisation, are beneficial when shifting tests to the laboratory. Overall, a complete elimination of on-site tests, as envisaged by zero on-site testing, will not be possible in the near future due to technical restrictions of the laboratory environment.
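
As a minimal, purely illustrative sketch of the kind of tooling and automation referred to above, the following covers automated test case generation, test execution against a simulated SUT, and automated evaluation against an exit criterion; all names and numbers are hypothetical and do not describe any WP6 or supplier tool.

from dataclasses import dataclass

@dataclass
class Scenario:
    """One operational scenario to be replayed in the laboratory (hypothetical structure)."""
    name: str
    inputs: list          # stimuli, e.g. derived from recorded on-site data
    limit: float          # exit criterion: upper bound on the observed response

def generate_scenarios(recorded):
    """Derive a nominal and a worst-case scenario from recorded environmental data."""
    worst = max(recorded)
    return [
        Scenario("nominal", recorded, limit=worst * 1.1),
        Scenario("worst-case", [worst] * len(recorded), limit=worst * 1.5),
    ]

def execute(scenario, sut):
    """Run the scenario against a (simulated) system under test and collect its responses."""
    return [sut(x) for x in scenario.inputs]

def evaluate(scenario, responses):
    """Automated evaluation of the responses against the scenario's exit criterion."""
    return max(responses) <= scenario.limit

recorded_on_site_data = [0.8, 1.2, 0.9, 1.4]      # placeholder for recorded on-site data
simulated_sut = lambda x: x * 1.05                # placeholder model of the system under test
for sc in generate_scenarios(recorded_on_site_data):
    verdict = evaluate(sc, execute(sc, simulated_sut))
    print(f"{sc.name}: {'PASS' if verdict else 'FAIL'}")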

Moreover, a benchmarking with safety-critical industries outside the railway sector has been carried out, mainly with the avionics, space and automotive sectors. Most of them state that building up the laboratory test environment has been shown to be expensive and time-consuming at first, but very useful and helpful in reducing project time and costs in later steps.

Almost all companies in all areas perform both laboratory tests and on-site tests. No industry has shown the capacity and capability to perform all relevant tests in the laboratory; on-site tests are still required. The more the companies know about the on-site tests, the more data they can provide as input for the laboratory environment, in order to gain trust in the laboratory tests and shift as many tests as possible from on-site to the laboratory.


The automotive sector is showing that some tests currently performed in the laboratory will be shifted back to on-site in the future.

It is recommended to start laboratory tests at an early stage of the product lifecycle, so that failures and features can be identified very early and, therefore, costs and time can be saved. These tests should allow a large number of scenarios to be tested within a short time, leading to a shorter overall testing time. Laboratory tests should be planned in such a way that logistics, management and communication processes are reduced compared to on-site testing.

Where required, customers and regulators should be included in the assessment of the whole development process (planning, requirements, basic design, detailed design, manufacturing and all testing levels), including certification and laboratory testing activities. This can be done by standardising tests and subsequently performing the standardised tests. Laboratory tests are part of the process for certification or approval, e.g. in the medical or aviation sector.

Validation of the test site should be done by comparing laboratory data with on-site data, to ensure that the environment reproduced in the laboratory matches the on-site environment and that the behaviour of the system under test (SUT) in the laboratory matches its behaviour on-site. When testing a product in the laboratory, a product risk analysis is quite useful. Laboratory tests should include systematic and transparent execution and documentation, including traceability.
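
A minimal sketch of such a lab/on-site comparison is given below, assuming that the same quantity has been measured in both environments; the acceptance threshold and the example values are hypothetical placeholders, not project data.

import statistics

def environments_match(lab_values, site_values, rel_tol=0.05):
    """Check whether the laboratory reproduces the on-site behaviour of the SUT.

    Returns True if the mean laboratory value deviates from the mean on-site value
    by less than rel_tol (here 5%). A real validation would apply project-specific
    acceptance criteria instead of this placeholder threshold.
    """
    lab_mean = statistics.mean(lab_values)
    site_mean = statistics.mean(site_values)
    return abs(lab_mean - site_mean) <= rel_tol * abs(site_mean)

# Illustrative example: round-trip delays in milliseconds measured in the lab and on-site
lab_delays = [412, 398, 405, 420]
site_delays = [430, 415, 408, 425]
print("laboratory matches on-site behaviour:", environments_match(lab_delays, site_delays))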

Aviation is an area with a large number of approved standards and regulations, and it is a safety-critical area. Therefore, the principles applied in aviation can be a useful input for the railway sector and have to be analysed in more detail in the future.

The recommendations resulting from the answers of the benchmarking industries will be further investigated in order to evaluate whether some of them are applicable to the railway sector as well.

For railway applications, a first attempt at unifying the testing of ETCS systems has been made by the ETCS Interoperability (IOP) working group of UNISIG, which develops standards for testing IOP in a laboratory environment.

A large majority of participants from the railway sector consider that there is a need to harmonise test strategies, and they favour a standardised European approval process, in particular for the OBU. There is less support for such activities on the wayside, due to large differences between the systems in each country interfacing with the legacy systems.


10 References

[1] Subset 026: System Requirements Specification
[2] Subset 036: FFFIS for Eurobalise
[3] Subset 076: Scope of the test specifications, test sequences and test cases
[4] Subset 085: Test Specification for Eurobalise FFFIS
[5] Subset 092: ERTMS EuroRadio Conformance Requirements and test cases safety layer
[6] Subset 093: GSM-R interfaces
[7] Subset 110: UNISIG Interoperability Test – Guidelines
[8] Subset 111: Interoperability Test Environment Definition
[9] Subset 112: UNISIG Basics for Interoperability Test Scenario Specifications
[10] X2R-WP02-A-DBA-007-01, Version 4, 2017-11-16: Shift2Rail integrated glossary v4


11 Glossary

Definitions for most of the terms used in this document can be found in the Shift2Rail integrated glossary [10]. The following table lists those terms which are missing from the current version of the integrated glossary; they will be added in the next version of the integrated glossary.

Term: Assessment
Definition: Process of analysis to determine whether the Design Authority and the Validator have achieved a product that meets the specified requirements and to form a judgement as to whether the product is fit for its intended purpose. [EN 50128, 2001]

Term: IOP tests
Definition: Interoperability testing according to ERA Subset 110.


12 Appendices

The next two sections consist of the questionnaires provided by WP6 and distributed among the WP6 members (section 12.1) and among the benchmarking industries (section 12.2).

12.1 Questionnaire Part A – status quo analyses in railway sector


12.2 Questionnaire Part B – benchmarking with safety-critical industries
