

Deliverable D1.2

ARMOUR Experimentation approach and plans

Version Version 1.0

Lead Partner UNPARALLEL

Date 14/02/2017

Project Name ARMOUR – Large-Scale Experiments of IoT Security Trust


Call Identifier H2020-ICT-2015

Topic ICT-12-2015 Integrating experiments and facilities in FIRE+

Project Reference 688237

Type of Action RIA – Research and Innovation Action

Start date of the project February 1st, 2016

Duration 24 Months

Dissemination Level: PU – Public (X); CO – Confidential, restricted under conditions set out in Model Grant Agreement; CI – Classified, information as referred to in Commission Decision 2001/844/EC

Abstract: In this document the security experiments to be executed in the context of ARMOUR are described, extending the descriptions in Deliverable D1.1 and complementing the information provided in Deliverable D2.1. The level of detail of the experiment descriptions is increased, and information is provided on how to implement each experiment in small testing environments and on testbeds capable of supporting large-scale experiments.

This document also proposes structures to report the results of the experiments while having in mind the outputs of the labelling and certification processes described in Deliverable D4.1.

Disclaimer This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688237, but this document only reflects the consortium’s view. The European Commission is not responsible for any use that may be made of the information it contains.


Revision History

Revision Date Description Organisation

0.1 27/06/2016 Creation of the Table of Contents UNPARALLEL

0.2 06/12/2016 Introduction UNPARALLEL

0.3 12/12/2016 Added D2.2 concepts to Introduction UNPARALLEL

0.4 15/12/2016 Added ARMOUR Experimentation Approach UNPARALLEL

0.5 20/12/2016 Contribution to ARMOUR Experimentation Approach EGM, SMA, SYN, ODINS

0.6 18/01/2017 Added description of EXP1, EXP3, EXP5, EXP6, EXP7 EGM, SMA, SYN, ODINS, INRIA

0.7 23/01/2017 Added description of EXP2 UNPARALLEL

0.8 06/02/2017 Added description of EXP4. Minor revisions of experiment descriptions INRIA, UNPARALLEL

0.9 10/02/2017 Updates to the description of all experiments. Added section 5, Executive Summary and Conclusion EGM, INRIA, ODINS, SMA, SYN, UNPARALLEL

1.0 14/02/2017 Proof-reading and minor corrections UPMC


Table of Contents

1 Executive summary

2 Introduction

2.1 Purpose

2.2 Experiment Description Methodology

3 ARMOUR Experimentation Approach

3.1 Introduction

3.2 Experiments Background

3.3 Experimentation

4 Experiments Description

4.1 EXP 1: Bootstrapping and group sharing procedures

4.2 EXP 2: Sensor node code hashing

4.3 EXP 3: Secured bootstrapping/join for the IoT

4.4 EXP 4: Secured OS / Over the air updates

4.5 EXP 5: Trust aware wireless sensors networks routing

4.6 EXP 6: Secure IoT Service Discovery

4.7 EXP 7: Secure IoT platforms

5 Results Report

5.1 Background analysis

5.2 Report Structures

6 Conclusion


1 Executive summary

This document starts with the definition of the ARMOUR approach for the future work on the description and implementation of the security experiments. A study of the characteristics and vulnerabilities of each experiment is performed to identify common points between them, allowing the identification of common technologies and synergy opportunities. These findings can be exploited to produce more focused experiments and maximise the contribution of the experiments to the development and validation of the ARMOUR security framework.

Detailed descriptions of the experiments are provided, updating and extending the information in Deliverable D1.1 and complementing it with the specification of the different test patterns that each experiment will implement. The experimentation infrastructures where each experiment will run are identified, both for the execution of initial, limited tests and for tests targeting large-scale situations. The description of each experiment also identifies the data to be collected and the metrics used to validate and evaluate each test.

This document also proposes structures to organise and report the results from the execution of the experiments, inspired by IEEE 829 – Standard for Software and System Test Documentation. These structures were designed to facilitate the presentation of the outputs from the Certification and Labelling process.


2 Introduction

2.1 Purpose

The execution of Security Experiments plays a central role in ARMOUR, being essential for the definition and validation of the ARMOUR experimentation suite and benchmarking methodology for large-scale IoT. ARMOUR experiments were specified in such a way that each experiment addresses the security of different processes in IoT networks and services, or has different purposes and requirements for the security procedures. Moreover, the security procedures target different technology levels, varying from communication between low-power end-point devices to server-to-server communications. These characteristics allow the ARMOUR experimentation suite and benchmarking methodology to be validated against a large set of security solutions.

Nevertheless, when considering the analysis of different security solutions, one must be extra careful to ensure that all solutions are analysed within the same scope. To make results from different security experiments comparable to some extent, ARMOUR experiments should be defined following a common methodology. Such a methodology must ensure that the experiments are reproducible. This implies that the testing scenarios must be well identified and described, including the specification of the parameters that will be tested and the parameters that will be used to assess the performance of the experiment.

The goal of this document is to describe the seven security experiments that will be implemented by the ARMOUR consortium to evaluate the ARMOUR experimentation framework. The description of each experiment comprises:

• The definition of the experiment's main objectives, identifying the purpose and context of the experiment and its expected impact;

• The characterisation of the testing scenario, including specifications such as the network topology, the entities that have a role in the testing scenario, the communication stages and protocols used, etc.;

• The identification of the security algorithms or mechanisms to be tested in the experiment, as well as the parameters that will be subject to different configurations during the experiments;

• Specifications of how to implement the experiment on a large-scale testbed, such as IoT-Lab;

• The identification of the data that will be collected during the execution of the experiment, as well as the measurement methods used and the algorithms, if any, used to process that data;

• The identification of the metrics to assess the results of the experiment (success or failure) and the metrics to determine the validity of the obtained results in a real-world situation.

The specification of the experiments must be defined and implemented in a way that ensures the experiments are reproducible. At the same time, the experiments must also be defined so that security threats can be executed with some degree of randomness, where appropriate. This will allow the experiments to explore the execution of security attacks in uncontrolled ways, potentially unforeseen by the experimenter. Such a feature is necessary to ensure that


the experiments simulate real-world security attacks, where the attacks are not controlled by the security manager.

This document is the output of Task 1.2 – "ARMOUR Experimentation approach and plans" from Work Package 1 – "Large-Scale IoT Security and Trust Experiments". As depicted in Figure 1, Task 1.2 takes as input: the experiment requirements and vulnerability patterns to be tested, from Task 1.1 – "Experiments definition and requirements"; the description of and integration guidelines for large-scale testbeds, from Task 3.1 – "Testbeds detailed analysis and integration guidelines"; the test patterns and test models corresponding to each experiment, defined in Task 2.1; and the methodology presented in Task 2.2 for describing the experiments, used to provide clear and complete descriptions of the tests to be executed. At the same time, this document contributes to Task 1.3 – "Experiments' setup and execution" with the experimentation plan to be implemented, and to Task 3.2 – "Integration with selected target testbeds" by defining the configuration needed to implement the experiments on the selected testbeds.

Figure 1 – Interaction between Task 1.2 and the other tasks

2.2 Experiment Description Methodology

In Deliverable D2.2 – "Test generation strategies for large-scale IoT security testing", a set of concepts and best practices for test definition is presented. These concepts should be instantiated, when possible and applicable, by each experiment to make the transition from test pattern to test definition smoother. A summary of the concepts is revisited in this deliverable for the reader's convenience; a detailed description can be found in Section 2.2 of D2.2.

According to the methodology defined in D2.2, the definition of tests is based on the concepts of:


• System Under Test (SUT), corresponding to the part of the system that will be tested;

• Tester, referring to the actor (human or system) that interacts with the SUT to execute the tests.

In the IoT deployment scenarios, three types of nodes can be identified:

• Server Node (SN), corresponding to a node providing advanced services;

• Client Node (CN), generally a resource-constrained device such as a sensor, actuator or application;

• Middle Node (MN), corresponding to a node that sits in the communication path between client and server nodes.

These nodes can assume the role of Requestor or Responder, depending on whether they start the connection with a request or respond to one. It is assumed that SNs play the role of responders, CNs are requestors, and MNs can play both roles.

These concepts can be combined to represent several testing configurations, in which the types of nodes considered as the SUT change and several combinations of different node types are considered. Some possible testing configurations are shown in Figure 2, presenting different SUT compositions, different ways to exchange messages between the test system and the SUT, and different test control and interaction mechanisms available to the tester.
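The node-type and role assumptions stated above (SNs respond, CNs request, MNs do both) can be written down as a small lookup table. A hypothetical Python sketch; the names are illustrative, not part of the D2.2 methodology:

```python
from enum import Enum

class NodeType(Enum):
    SN = "Server Node"
    CN = "Client Node"
    MN = "Middle Node"

class Role(Enum):
    REQUESTOR = "requestor"
    RESPONDER = "responder"

# Role assumptions from the text: SNs respond, CNs request, MNs do both.
ROLES = {
    NodeType.SN: {Role.RESPONDER},
    NodeType.CN: {Role.REQUESTOR},
    NodeType.MN: {Role.REQUESTOR, Role.RESPONDER},
}

def can_initiate(node: NodeType) -> bool:
    """A node may open a connection only if it can act as a requestor."""
    return Role.REQUESTOR in ROLES[node]

print(can_initiate(NodeType.CN), can_initiate(NodeType.SN))  # → True False
```

A test configuration then amounts to choosing which node types form the SUT and checking that the message directions implied by their roles are consistent.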

Figure 2 – Some possible test configurations (Config IDs: SST_01, SST_02, SST_MR_01, SST_MR_02, CST_01, CST_02)


3 ARMOUR Experimentation Approach

3.1 Introduction

The development and validation of the ARMOUR Security Tool Suite, which tests and evaluates security in Internet of Things scenarios, will be supported by the execution of several security experiments defined and executed by ARMOUR consortium members. However, to get the most out of each experiment and of the set of experiments as a whole, a common experimentation approach must be defined. This approach, the ARMOUR Experimentation Approach, should align the scope of all experiments to promote the coverage of different security threats while reducing the overlap between experiments. Moreover, the experiments must share common execution and testing approaches to enable the definition of common benchmarking methodologies that allow results to be compared.

The overall aspects of the ARMOUR Experimentation Approach are presented and discussed in the following sections, using as a starting point the experiment definitions described in previous deliverables.

3.2 Experiments Background

As identified in Deliverable D1.1 – "ARMOUR Experiments and Requirements", ARMOUR proposes the setup and execution of seven experiments that address different dimensions of the IoT value chain. The ARMOUR experiments exploit security threats that range from device authentication to the establishment of secure communication channels and the setup of security mechanisms over services and platforms, as depicted in Figure 3. Deliverable D1.1 also identified the security vulnerabilities that each experiment can evaluate in the ARMOUR context, identifying a total of 18 different vulnerabilities. These vulnerabilities cover a large range of security-related aspects, from the discovery of and access to security keys stored on devices and infrastructure equipment, up to the security of communication protocols and the access to and usage of services.


Figure 3 – Position of ARMOUR Experiments

As such, ARMOUR proposes the realisation of 7 security experiments, across which a total of 18 different vulnerabilities were identified as potential security issues. As shown in Table 1, some security vulnerabilities can be addressed by multiple experiments while others are expected to be addressed by only one. Each experiment is based on its specific scenario and on the technologies used to support that scenario. Even though several experiments may address the same security vulnerability, the actual implementation of the test to evaluate the vulnerability will differ from experiment to experiment. Therefore, the implementations by different experiments of tests evaluating a given vulnerability should be considered different testing activities. Following this reasoning, where the evaluation of a security vulnerability by each experiment is considered a separate testing activity, we can conclude that 47 testing activities will be implemented in ARMOUR.

Table 1 – Number of ARMOUR experiments that evaluate each security vulnerability

Id Title Nr Exp

V1 Discovery of Long-Term Service-Layer Keys Stored in M2M Devices or M2M Gateways 3

V2 Deletion of Long-Term Service-Layer Keys stored in M2M Devices or M2M Gateways 2

V3 Replacement of Long-Term Service-Layer Keys stored in M2M Devices or M2M Gateways 2

V4 Discovery of Long-Term Service-Layer Keys stored in M2M Infrastructure 1

V5 Deletion of Long-Term Service-Layer Keys stored in M2M Infrastructure equipment 1

V6 Discovery of sensitive Data in M2M Devices or M2M Gateways 2

V7 General Eavesdropping on M2M Service-Layer Messaging between Entities 1

V8 Alteration of M2M Service-Layer Messaging between Entities 6

V9 Replay of M2M Service-Layer Messaging between Entities 6

V10 Unauthorized or corrupted Applications or Software in M2M Devices/Gateways 5

V12 M2M Security Context Awareness 1

V13 Eavesdropping/Man in the Middle Attack 5

V14 Transfer of keys via independent security element 3

V16 Injection 1

V17 Session Management and Broken Authentication 2

V18 Security Misconfiguration 2

V19 Insecure Cryptographic Storage 3

V20 Invalid Input Data 1

Total of Security Vulnerabilities to be tested 47
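The total in Table 1 is simply the sum of the Nr Exp column, since each (experiment, vulnerability) pair counts as a separate testing activity. A quick check in Python:

```python
# Nr Exp column of Table 1, keyed by vulnerability id.
nr_exp = {
    "V1": 3, "V2": 2, "V3": 2, "V4": 1, "V5": 1, "V6": 2, "V7": 1,
    "V8": 6, "V9": 6, "V10": 5, "V12": 1, "V13": 5, "V14": 3,
    "V16": 1, "V17": 2, "V18": 2, "V19": 3, "V20": 1,
}
# Each experiment/vulnerability combination is one testing activity.
total = sum(nr_exp.values())
print(total)  # → 47
```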

3.3 Experimentation

The proper definition and implementation of such a large number of testing activities (47) may become a challenge in itself, compounded by the short timeframe of the ARMOUR project. Such a course of action could result in high-level descriptions of testing activities and correspondingly incomplete implementations. Moreover, investing a large amount of resources in the definition and implementation of testing activities could leave insufficient resources to properly develop important tasks of ARMOUR, such as


the development of the ARMOUR testing framework and the definition of the Certification and Labelling methodology.

The identification of these risks, together with the advice of the reviewers, motivated the ARMOUR consortium to revise the experiments in order to reduce the number of testing activities to be defined and implemented, while trying to maintain the overall scope of the ARMOUR Experimentation Approach. It was therefore decided to keep all the experiments identified in Deliverable D1.1, but with some modifications that reduce the total number of testing activities. This approach allows ARMOUR tests to cover the large range of purposes and testing contexts defined by the experiments' scenarios.

The methodology used to reduce the amount of testing activities is composed of three strategies:

• Limit targeted vulnerabilities: the descriptions of the experiments are subject to some changes, focusing each experiment on its most critical and unique security vulnerabilities;

• Reduce the technologies used: the experiments make an effort to use the same technologies, when possible and applicable, to reduce the effort of implementing the testing activities;

• Identify synergies between experiments: complementarities among experiments are identified to define synergies between them, taking advantage of overlaps to create scenarios that provide extra functionality.

3.3.1 Selection of focused Security Threats

ARMOUR experimenters define the scenarios behind the experiments based on their own purposes and internal agendas, aligning the experiment objectives with them. Moreover, most of the experimenters are SMEs, so the experiments defined in ARMOUR are connected to some extent to products under development. This characteristic allows the ARMOUR experimentation suite to be designed and executed to support multi-purpose experiments, providing a good level of assurance about the capability of the ARMOUR approach and framework to support experiments with different purposes.

The specific objectives of each experiment enable the identification of the security threats that are most critical for its scenario. Therefore, even though multiple experiments may identify their susceptibility to the same security vulnerability, its relevance may change from experiment to experiment. This differentiation in the criticality of the security vulnerabilities associated with an experiment provides a mechanism to increase the focus of each experiment by selecting the most critical ones.
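This selection step can be expressed as a simple filter over per-experiment criticality scores. The scores below are hypothetical, purely to illustrate the mechanism; ARMOUR's actual criticality assessment is qualitative and made per scenario:

```python
def select_focus(vulns: dict, threshold: float) -> list:
    """Keep only vulnerabilities whose criticality meets the threshold,
    most critical first."""
    return sorted((v for v, c in vulns.items() if c >= threshold),
                  key=lambda v: -vulns[v])

# Hypothetical criticality scores (0..1) for one experiment, illustration only.
exp_criticality = {"V1": 0.9, "V8": 0.7, "V9": 0.4, "V13": 0.8}
print(select_focus(exp_criticality, 0.6))  # → ['V1', 'V13', 'V8']
```

Applying such a cut per experiment is exactly what shrinks the 47 testing activities to a focused subset without changing the set of experiments.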

3.3.2 Common Technologies

The implementation of the tests that evaluate the susceptibility of a scenario to a specific security vulnerability involves the implementation and configuration of different technologies, such as communication protocols, encryption algorithms, authentication mechanisms, etc. These technologies are potentially implemented in several devices with different capabilities that may run different operating systems. In this highly


heterogeneous scenario, the implementation of security tests can become very complex due to the quantity of technologies that need to be adapted and configured to run on different devices, which increases the effort needed to implement the tests.

To reduce the implementation effort, the ARMOUR consortium decided to align the technologies used in the experiments wherever possible. Some technologies are specific to a scenario and essential for the experiment, and cannot be replaced; others are not critical for the scenario and can be exchanged for technologies used in other experiments. Following this reasoning, Table 2 identifies the technologies used by each experiment.

Table 2 – Common technologies used by ARMOUR experiments

Exp 1 Exp 2 Exp 3 Exp 4 Exp 5 Exp 6 Exp 7

COAP X X X X X

DTLS X X X

(ABE) crypto X X X

PKI X X

GW X X X X X X

COSE X

RPL X X

Scalability X X
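Alignment opportunities of the kind Table 2 captures can also be found mechanically, by intersecting per-experiment technology sets and ranking the pairs by overlap. The assignments below are illustrative only and do not reproduce Table 2:

```python
from itertools import combinations

# Illustrative technology sets; NOT the actual Table 2 assignments.
tech = {
    "Exp2": {"GW"},
    "Exp5": {"RPL", "CoAP", "DTLS", "GW"},
    "Exp6": {"CoAP", "DTLS", "GW"},
}

# Rank experiment pairs by the number of technologies they share.
overlap = {(a, b): tech[a] & tech[b] for a, b in combinations(sorted(tech), 2)}
best = max(overlap, key=lambda pair: len(overlap[pair]))
print(best, sorted(overlap[best]))  # → ('Exp5', 'Exp6') ['CoAP', 'DTLS', 'GW']
```

A large shared set between two experiments is precisely the signal exploited in the next section: the bigger the overlap, the cheaper it is to merge their implementations or share libraries.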

3.3.3 Synergies Between Experiments

Some synergies between experiments were identified, where the aggregation of different experiments makes it possible to increase the functionality provided by each one. These synergies have the potential to create new functionality and test new vulnerabilities with little additional effort, since they are built upon already described and implemented experiments.


The following synergies were identified and considered interesting for ARMOUR.

3.3.3.1 EXP1/7: End-to-End Scenario

End-to-end security testing allows us to ensure that the entire workflow between the components of a system performs correctly with respect to the identified security threats (for more details refer to D2.1.2). Exp1 – Secure Bootstrapping Procedure and Exp7 – Secure IoT Platform complement each other in describing an end-to-end scenario, making it possible to create a large-scale scenario representing real IoT component interactions.

More concretely, Exp1 aims at securing sensor communication by using encrypted channels and encrypting messages, while Exp7 aims to test the secure storage and retrieval of sensor data from an IoT platform. Combining the two experiments for end-to-end testing will allow us to test the vulnerabilities that can occur in the workflow as a whole, rather than in isolation.

Figure 4 gives an overview of the end-to-end scenario configuration in FIT IoT-Lab.

Figure 4 – Security end-to-end testing scenario

The figure above shows the components hosted in IoT-Lab: the M3 nodes and the Upper Tester (controlling the execution).

The IoT platform, implementing the oneM2M standard (part of Exp7), and the Credential Manager (CM), part of Exp1, are hosted on a cloud virtual machine offered by Fit Cloud. They communicate with IoT-Lab through CoAP+DTLS.


Finally, the Test Execution Tool, TITAN, is also hosted in a Fit Cloud VM and communicates with the IoT-Lab Upper Tester (UT) through MQTT messages, thus controlling the execution of the test cases. For more details refer to D2.1.2.

Complementary to the previous figure, which focuses on the infrastructure configuration, Figure 5 describes the message exchange between the entities that are part of the scenario. The group key is retrieved using symmetric cryptography, allowing a secure connection to be established. Once the connection is secured, communication between the elements can take place.

Figure 5 – Exp1/Exp7 full message sequences

3.3.3.2 EXP5/6: Mitigating cross-layer attacks under a common testing framework

Synelixis is involved in two ARMOUR experiments, namely Experiment 5 – Trust aware wireless sensors networks routing and Experiment 6 – Secure IoT Service Discovery.

The former aims at evaluating the performance of distributed trust-aware WSN RPL-based solutions in real large-scale deployments, in the presence of malicious nodes, lossy links or otherwise adverse conditions, while the latter deals with the execution of a set of experiments proving the robustness and efficiency of secure service discovery achieved by an innovative group-based solution combining CoAP over DTLS.

As described in the early deliverables of ARMOUR, in particular D1.1 and D2.1, these two experiments have the following diverse characteristics and requirements:

1. Experiment 5 deals with vulnerabilities and security attacks at the networking (RPL) layer (layer 3), while Experiment 6 is concerned with issues related to the transport (DTLS) and application (CoAP) layers (layers 4 and 5).

2. Experiment 5 requires multi-hop routing, while this is not a mandatory condition in Experiment 6.

3. Experiment 5 would be implemented on TinyOS, while the Experiment 6 algorithms would be developed in ContikiOS.



4. Experiments 5 and 6 address different vulnerabilities, according to oneM2M standards.

Taking into consideration the comments from the reviewers during the M6 review, as well as the decision of the ARMOUR consortium to provide a common and (to the extent possible) complete Security Toolkit, it was decided that combining Experiments 5 and 6 would offer several benefits to the project execution and outcomes on a technical level, as described below:

1. Implementing both experiments in ContikiOS will be beneficial for the project, since the ContikiOS project has nowadays gained a larger community and better support than TinyOS, which was considered the state-of-the-art operating system a few years ago.

2. Merging these two experiments on a technical level makes it possible to test the mitigation of a larger set of vulnerabilities, as described in D1.1 and D2.1 and discussed in detail in the next section of this deliverable.

3. Working in the same environment saves resources and time, since the TTCN-3/TITAN development and deployment that would otherwise be required per experiment is minimised.

4. Due to the complementarity of these two experiments with respect to vulnerabilities on different layers, creating a common environment allows cross-layer attack mitigation techniques to be tested and validated.

5. Last, but not least, creating a common set of libraries allows other experiments to save resources and time during experimentation, testing and validation. In this respect, all experiments that require the RPL/DTLS/CoAP stack can use the same set of libraries, which only need to be extended according to the specific needs of each experiment. This is also beneficial with regard to minimising the effort and time spent porting to the different versions of hardware devices available for testing and validation on the FIT IoT-Lab infrastructure.

It is highlighted that the merging of Experiments 5 and 6 concerns solely technical issues; from an administrative viewpoint these experiments must still be seen as totally separate implementations.


4 Experiments Description

4.1 EXP 1: Bootstrapping and group sharing procedures

4.1.1 Objectives

The task of integrating a new IoT device for the first time into the Internet infrastructure or an existing IoT system is called the bootstrapping phase. In terms of security, some steps must be accomplished to make this integration secure, such as access to the security domain of the IoT system, the secure (and, where required, confidential) provision of information by the sensor to the infrastructure, and a secure mechanism to access the information of the new device.

From the point of view of the experiment description, our contribution to the bootstrapping phase comprises the establishment of a secure channel using DTLS, with the corresponding exchange of credentials, as well as the distribution, by a Credential Manager, of a group key or token for group sharing purposes to the new IoT device, allowing it to decrypt the information transmitted to a specific group of devices.

4.1.2 Scenario Description

From the complete bootstrapping scenario, in this experiment we focus on the authentication phase. The following figure presents the interaction between the different entities of the experiment.

[Sequence diagram: the Device sends GroupKeyRequest(credentials) to the Gateway, which forwards it to the Attribute Authority; after RequestProcessing(credentials), a GroupKeyResponse(groupKey) is returned along the reverse path. All exchanges run over CoAP+DTLS.]

Figure 6 -­ Flow of messages for Experiment 1

According to it, three entities are relevant for the scenario:

• Device. Sensors, generally known as smart objects or nodes (which include sensors, actuators, etc.), are usually devices with constrained capabilities in terms of processing power, memory and communication bandwidth. Such devices typically expose resources or services to be accessed by users, which may be devices acting on behalf of a human (H2M communication) or other devices directly (M2M communication).


• Gateway. To accomplish communication with the Internet, a Gateway, or Border Router in 6LoWPAN terminology, is used. These devices can also be constrained; however, when less constrained devices are used, management of the LoWPAN can be accomplished at a higher level in terms of authentication and authorization of the devices.

• Credential Manager. This entity interacts with gateways to issue and manage the private keys associated with them.

Network Entity Name: Device

Main function: Ask for the group key/token

Operating System: Contiki

Information Consumed: The device requests permission from the Credential Manager to obtain the group key/token

Information Produced: Group key/token to cipher and decipher published data

Communicates with: Credential Manager (CM)

Network Entity Name: Gateway

Main function: Forward packets between devices and CM

Operating System: Contiki

Information Consumed: Packets sent by the devices and the CM

Information Produced: Packet forwarding

Communicates with: Credential Manager (CM), Devices


Network Entity Name: Credential Manager (CM)

Main function: Deliver the group key/token

Operating System: Linux

Information Consumed: Attributes of the device asking for the group key/token (plus action and resource in the case of a token request)

Information Produced: Group key/token for the device to cipher and decipher published data

Communicates with: Devices, Gateway

Broadly, the scenario begins with the device issuing a request for a group key or token to the Credential Manager (CM). The request includes a set of attributes (plus an action and resource in the case of a token), which will later be used by the CM to generate the group key. When the gateway receives the request sent by the device, it forwards it to the CM. The CM, after validating the request using the device's certificate or shared key, generates the requested group key or token, which reaches the device again through the forwarding gateway.

All the aforementioned messages require a secure channel providing authentication and confidentiality of the message payloads, i.e. the attributes used for the group key generation process and the actions and resources used for the token generation process. This secure channel is provided using DTLS, which, after a handshake, guarantees a confidential channel between both ends. The other protocol used for the exchange of messages is CoAP, which is transmitted over DTLS. It is worth noting that, although the sequence diagram represents the response as a single message, it will actually require the exchange of a series of CoAP messages because of the size of the group key and token.
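As a rough illustration of why several CoAP messages are needed, the sketch below splits an oversized response into CoAP Block2-style chunks and reassembles them. This is a minimal stand-alone sketch, not the project's Erbium/tinyDTLS implementation; all names and the 64-byte block size are illustrative assumptions.

```python
# Illustrative sketch: a large group key/token response split into
# CoAP block-wise style chunks, since a constrained link cannot carry
# it in a single datagram. Mimics RFC 7959 at a very high level only.

def split_blocks(payload: bytes, block_size: int = 64):
    """Split a payload into (block_number, more_flag, chunk) tuples."""
    blocks = []
    for offset in range(0, len(payload), block_size):
        chunk = payload[offset:offset + block_size]
        more = (offset + block_size) < len(payload)   # more blocks follow?
        blocks.append((offset // block_size, more, chunk))
    return blocks

def reassemble(blocks):
    """Concatenate chunks in block-number order; the last block carries
    more_flag = False, signalling the end of the transfer."""
    return b"".join(chunk for _, _, chunk in sorted(blocks))

group_key_response = b"K" * 200            # hypothetical oversized response
blocks = split_blocks(group_key_response)
assert reassemble(blocks) == group_key_response
assert blocks[-1][1] is False              # last block: no more data
```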

4.1.3 Application of Experimentation Approach

4.1.3.1 Definition of the experiment scope/focus

The experiment will focus on testing the establishment of the DTLS secure channel and the group key or token request. Figure 7 shows the detailed flow that we are going to test: the first part corresponds to the DTLS communication and the second part to the group key or token request, protected by the DTLS secure channel established before. It is worth highlighting that the example illustrated in the following figure corresponds to a specific DTLS exchange involving pre-shared keys.


Figure 7 -­ Experiment focus

4.1.3.2 Identification of the threats to be tested in the experiment

Table 3 – Threats addressed by EXP1

• Mutual authentication: a security association is established between the devices and software provisioning servers, which provides mutual authentication. Related threats: V1, V2, V3, V4, V5, V10, V17.

• Replay protection: the protocol includes functionality to detect whether all or part of a message is an unauthorised repeat of an earlier message or part of a message. Related threats: V9.

• Dictionary attack protection: ensure appropriately strong standard algorithms and strong keys are used, and key management is in place. Related threats: V19.

• DoS attack protection: the protocol includes functionality to detect or avoid a DoS attack. Related threats: V15.

• Integrity verification: the integrity of software images received must be verified by devices. Related threats: V8, V10.

• Confidentiality: a security association is established between the devices and software provisioning servers, which provides confidentiality. Related threats: V6, V7, V8.

• Proven resistance to man-in-the-middle attacks: the security association between communicating entities uses protocols which are proven to resist man-in-the-middle attacks. Related threats: V8, V13.


4.1.4 Test Patterns Design

Test pattern ID: TP_ID1 (A)

Stage: Bootstrapping

Protocol: DTLS

Property tested: Resistance to unauthorized access, modification or deletion of keys. Authentication of the client.

Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to distribute the group key in a secure way.
• Attacker. Entity which wants to illegally obtain a group key, using a non-valid PSK to perform the DTLS exchange.

High Level Model for Testing


Steps:

1. The attacker and the Credential Manager (CM) start the DTLS exchange process with a stolen identity.

2. The CM verifies message 6 and tries to obtain the associated PSK.

If the CM obtains the associated PSK, the test fails. Otherwise, the test is successful.

References. Vulnerability: V1, V2, V3, V4, V5 (resistance to unauthorized access, modification or deletion of keys). Security Objective for Labelling & Certification: Authentication.
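The server-side check that TP_ID1 (A) exercises can be sketched as follows. This is a hedged illustration, not the actual tinyDTLS code: the PSK lookup plus an HMAC-based proof of possession stand in for the real DTLS-PSK key derivation, and all names (`psk_store`, `verify_client`) are assumptions.

```python
# Illustrative sketch of the CM-side verdict for TP_ID1 (A): the CM
# looks up the PSK for the identity presented in message 6 and rejects
# the exchange when the identity maps to no provisioned key, or when
# the client cannot prove possession of the matching PSK.
import hmac, hashlib, secrets

psk_store = {b"device-42": secrets.token_bytes(16)}   # provisioned devices

def verify_client(psk_identity: bytes, proof: bytes, transcript: bytes):
    """Return True only if the identity is known AND the client proves
    possession of the matching PSK via an HMAC over the transcript."""
    psk = psk_store.get(psk_identity)
    if psk is None:                    # stolen/unknown identity: reject
        return False
    expected = hmac.new(psk, transcript, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

transcript = b"handshake-messages-1-to-5"
# Attacker with a stolen identity string but no PSK cannot forge the proof:
fake_proof = hmac.new(b"guess", transcript, hashlib.sha256).digest()
assert verify_client(b"device-42", fake_proof, transcript) is False
# Legitimate device succeeds:
good = hmac.new(psk_store[b"device-42"], transcript, hashlib.sha256).digest()
assert verify_client(b"device-42", good, transcript) is True
```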

Test pattern ID: TP_ID1 (B)

Stage: Bootstrapping

Protocol: DTLS

Property tested: Resistance to unauthorized access, modification or deletion of keys. Authentication of the Credential Manager.


Test diagram

Test description

Entities:

• Attacker. Server impersonating the real Credential Manager (CM).
• Smart Object. Entity which wants to establish a secure channel by means of DTLS.

High Level Model for Testing

Steps:

1. The smart object and the attacker start the bootstrapping.


2. The smart object sends message 6.
3. The attacker tries to obtain the PSK from the PSK ID.

If the attacker is able to obtain the PSK, the test fails. Otherwise, the test is successful.

References. Vulnerability: V1, V2, V3, V4, V5 (resistance to unauthorized access, modification or deletion of keys). Security Objective for Labelling & Certification: Authentication.

Test pattern ID: TP_ID5 (A)

Stage: Bootstrapping

Protocol: DTLS

Property tested: Resistance to replay of requests. DTLS negotiation.

Test diagram


Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to deliver the group key in a secure way.
• Smart Object. Entity which wants to establish a secure channel by means of DTLS.
• Attacker. Entity which wants to illegally establish a secure channel with DTLS, using a replayed message.

High Level Model for Testing

Steps:

1. The smart object and the Credential Manager (CM) start the DTLS exchange.
2. The attacker listens to the communication.
3. If the attacker is not able to obtain a message, the test is successful.
4. The attacker starts the DTLS exchange with the CM and uses the obtained message.

The CM verifies the random numbers in message 10. If the random numbers correspond to the current session, the test fails. Otherwise, the test is successful.

References. Vulnerability: V9 (replay attack). Security Objective for Labelling & Certification: Resistance to replay of requests.

Test pattern ID: TP_ID5 (B)


Stage: Group sharing

Protocol: DTLS

Property tested: Resistance to replay of requests in the group key request.

Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of distributing the group key through a secure DTLS channel.
• Smart Object. Entity which wants to obtain the group key through a secure DTLS channel.
• Attacker. Entity which wants to illegally obtain the group key using a replayed message.

High Level Model for Testing


Steps:

1. The smart object sends the group key request to the Credential Manager (CM).
2. The attacker listens to the communication.
3. If the attacker is not able to obtain a message, the test is successful.
4. The attacker sends the obtained message to the CM.

The CM verifies the sequence number included in the message. If the sequence number corresponds to the session, the test fails. Otherwise, the test is successful.

References. Vulnerability: V9 (replay attack). Security Objective for Labelling & Certification: Resistance to replay of requests.
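The sequence-number verification that decides this test can be sketched with a DTLS-style anti-replay window, in the spirit of RFC 6347, Section 4.1.2.6. A minimal illustration, not the project implementation:

```python
# Illustrative anti-replay check for TP_ID5 (B): a record whose sequence
# number was already accepted (or falls behind the window) is discarded,
# so a replayed group key request never reaches the application.

class ReplayWindow:
    def __init__(self, size: int = 64):
        self.size = size        # window width, as in DTLS implementations
        self.highest = -1       # highest sequence number accepted so far
        self.seen = set()       # sequence numbers already accepted

    def accept(self, seq: int) -> bool:
        """Return True for fresh records, False for replays or records
        that fall outside (behind) the sliding window."""
        if seq > self.highest:
            self.highest = seq
        elif seq <= self.highest - self.size or seq in self.seen:
            return False        # replayed or too old: reject
        self.seen.add(seq)
        return True

win = ReplayWindow()
assert win.accept(1)        # first group key request: accepted
assert not win.accept(1)    # attacker replays the same record: rejected
assert win.accept(2)        # next legitimate record: accepted
```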

Test pattern ID: TP_ID14

Stage: Bootstrapping

Protocol: DTLS

Property tested: Resistance to dictionary attacks.


Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to deliver the group key in a secure way.
• Smart Object. Entity which wants to establish a secure channel by means of DTLS and obtain the group key.
• Attacker (Sniffer). Entity which wants to obtain the PSK (and therefore the group key) by performing a dictionary attack.

High Level Model for Testing


Steps:

1. The smart object and the Credential Manager (CM) start the communication.

2. The attacker sniffs the communication and sees message 12, which contains the group key.

3. The attacker performs a dictionary attack in order to decipher message 12 (breaking the key K) and obtain the group key.

If the attacker is able to obtain the group key, the test fails. Otherwise, the test is successful.

References. Vulnerability: V19 (insecure encryption and storage of information). Security Objective for Labelling & Certification: Resistance to dictionary attacks.
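The verdict logic of TP_ID14 can be sketched as follows: a key drawn from a dictionary is recovered by exhaustive guessing, while a random 128-bit key is not. HMAC stands in here for the real DTLS record protection, and all names are illustrative assumptions.

```python
# Illustrative sketch of TP_ID14: an eavesdropper who captured the
# protected message 12 tries dictionary words as the PSK. A weak,
# dictionary-derived key falls immediately; a random key does not.
import hmac, hashlib, secrets

dictionary = [b"password", b"123456", b"letmein", b"iot"]

def protect(psk: bytes, msg: bytes) -> bytes:
    """Toy stand-in for record protection: an HMAC tag keyed by the PSK."""
    return hmac.new(psk, msg, hashlib.sha256).digest()

def dictionary_attack(msg: bytes, tag: bytes):
    """Return the PSK if some dictionary word reproduces the tag."""
    for guess in dictionary:
        if hmac.compare_digest(protect(guess, msg), tag):
            return guess
    return None

msg = b"GroupKeyResponse"
weak_tag = protect(b"letmein", msg)                 # key taken from a wordlist
strong_tag = protect(secrets.token_bytes(16), msg)  # random 128-bit key
assert dictionary_attack(msg, weak_tag) == b"letmein"   # test would fail
assert dictionary_attack(msg, strong_tag) is None       # test passes
```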

Test pattern ID: TP_ID16

Stage: Bootstrapping and group sharing

Protocol: DTLS

Property tested: Resistance to DoS attacks.


Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to deliver the group key in a secure way.

• Smart Object. Entity which wants to establish a secure channel by means of DTLS and obtain the group key.

• Attacker. Entity which accesses the CM with the intention of collapsing it.

High Level Model for Testing


Steps:

1. The attackers start the DTLS exchange at the same time as the normal smart objects.

2. All the attackers send message 1 at the same time to the Credential Manager (CM).

A smart object sends message 1 to the CM. If the smart object does not receive message 2 before the test timeout expires, the test fails. If the smart object receives message 2 within the timeout, the test is successful.

References. Vulnerability: V15 (buffer overflows). Security Objective for Labelling & Certification: Resistance to DoS attacks.
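A standard countermeasure behind this test is the stateless cookie exchange that DTLS (RFC 6347) performs before committing any state to a ClientHello. A minimal sketch of that mechanism, with illustrative names and HMAC-SHA256 assumed as the cookie function:

```python
# Illustrative sketch of the DTLS stateless-cookie defence against the
# flooding scenario of TP_ID16: the server stores no state for a
# ClientHello until the client echoes an HMAC cookie bound to its
# address, so spoofed message-1 floods cost the server almost nothing.
import hmac, hashlib, secrets

server_secret = secrets.token_bytes(16)   # rotated periodically in practice

def make_cookie(client_addr: str) -> bytes:
    """Cookie sent in HelloVerifyRequest: HMAC over the client address."""
    return hmac.new(server_secret, client_addr.encode(), hashlib.sha256).digest()

def verify_cookie(client_addr: str, cookie: bytes) -> bool:
    """Accept the retried ClientHello only if the cookie matches."""
    return hmac.compare_digest(make_cookie(client_addr), cookie)

# Legitimate smart object: retries its ClientHello with the cookie it got.
cookie = make_cookie("2001:db8::42")
assert verify_cookie("2001:db8::42", cookie)
# An attacker flooding from spoofed addresses never sees the cookie reply,
# so any cookie it fabricates fails verification:
assert not verify_cookie("2001:db8::bad", b"\x00" * 32)
```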

Test pattern ID: TP_ID4 (A)

Stage: Bootstrapping

Protocol: DTLS

Property tested: Resistance to alteration of requests. Integrity. DTLS negotiation.


Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to deliver the group key in a secure way.
• Smart Object. Entity which wants to establish a secure channel by means of DTLS and obtain the group key.
• Attacker (Sniffer). Entity which modifies the messages of the DTLS exchange.

High Level Model for Testing


Steps:

1. The smart object and the Credential Manager (CM) start the DTLS exchange.

2. The attacker sniffs the communication and modifies a message of the DTLS exchange (except messages 7 and 9).

3. Both parties send messages 8 and 10 to verify the HASH, which covers all the previous messages 1 to 6.

If the verification of the HASH is incorrect, the test is successful. Otherwise, the test fails.

References. Vulnerability: V8 (resistance to alteration of requests). Security Objective for Labelling & Certification: Integrity.
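The HASH verification in step 3 can be sketched as a hash over the handshake transcript, as carried in the Finished messages (8 and 10): any in-flight modification of messages 1 to 6 changes the digest. This is illustrative only; the message names are placeholders, not the project's actual handshake encoding.

```python
# Illustrative sketch of the TP_ID4 (A) verdict: both ends hash the
# handshake transcript (messages 1-6) and compare the result via the
# Finished messages; a tampered message makes the digests diverge.
import hashlib

def transcript_hash(messages):
    """Digest over the concatenated handshake messages, in order."""
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.digest()

sent = [b"ClientHello", b"ServerHello", b"ServerHelloDone",
        b"ClientKeyExchange", b"ChangeCipherSpec", b"Finished-material"]
received = list(sent)
received[1] = b"ServerHello-tampered"   # attacker alters one message

# The digests differ, the handshake aborts, and the test pattern is
# therefore marked successful:
assert transcript_hash(sent) != transcript_hash(received)
assert transcript_hash(sent) == transcript_hash(list(sent))  # untampered: equal
```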

Test pattern ID: TP_ID4 (B)

Stage: Group sharing

Protocol: DTLS

Property tested: Resistance to alteration of requests. Integrity. Group key request.


Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to deliver the group key in a secure way.
• Smart Object. Entity which wants to establish a secure channel by means of DTLS and obtain the group key.
• Attacker (Sniffer). Entity which modifies the messages of the group key request.

High Level Model for Testing


Steps:

1. The smart object requests the group key by sending message 11, and the Credential Manager (CM) answers with message 12.

2. The attacker sniffs the communication and modifies a field of the message (other than type or version); it can thus modify the sequence number, epoch, length, data or MAC.

Both parties decipher the message and check the MAC for integrity. If the MAC is incorrect, the test is successful. Otherwise, the test fails.

References. Vulnerability: V8 (resistance to alteration of requests). Security Objective for Labelling & Certification: Integrity.

Test pattern ID: TP_ID2

Stage: Bootstrapping and group sharing

Protocol: DTLS

Property tested: Resistance to eavesdropping. Confidentiality. All.


Test diagram

Test description

Entities:

1. Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of DTLS protocol in order to deliver the group key in a secure way.

2. Smart Object. Entity which wants to establish a secure channel by means of DTLS and obtain the group key.

3. Sniffer. Entity which is overhearing the communication channel.

High Level Model for Testing


Steps:

1. The smart object and the Credential Manager (CM) start the DTLS exchange.

2. The attacker (sniffer) sniffs the communication.

If the sniffer is able to understand the messages between the CM and the smart object, the test fails. Otherwise, the test is successful.

References. Vulnerability: V13 (eavesdropping and man-in-the-middle attack). Security Objective for Labelling & Certification: Integrity.

Test pattern ID: TP_ID15

Stage: Bootstrapping and group sharing

Protocol: DTLS

Property tested: Resistance to man-in-the-middle attacks. All.


Test diagram

Test description

Entities:

• Credential Manager (CM). Entity in charge of establishing a secure channel with the smart object by means of the DTLS protocol in order to deliver the group key in a secure way.

• Smart Object. Entity which wants to establish a secure channel by means of DTLS and obtain the group key.

• MITM (Sniffer). Attacker performing a man-in-the-middle attack between the Credential Manager and the smart object.

High Level Model for Testing


Steps:

1. The smart object and the Credential Manager (CM) start the DTLS exchange.

2. The attacker sniffs the communication and modifies a message (any message, in any way) of the DTLS exchange, or substitutes a message.

3. The Credential Manager receives and verifies message 6 and tries to obtain the associated PSK.

4. If the CM does not obtain the associated PSK, the test is successful.

5. Both parties receive messages 8 and 10 to verify the HASH, which is ciphered with the key k.

If they cannot decipher the HASH, the test is successful. If the HASH is not valid, the test is successful. Otherwise, the test fails.

NOTE: Remember that this communication is encrypted using CoAPS, so it is very difficult to perform this type of attack.

References. Vulnerability: V13 (MITM). Security Objective for Labelling & Certification: Resistance to man-in-the-middle attacks.


4.1.5 In-house Implementation

4.1.5.1 Infrastructure characterisation

The scenario developed is shown in the next figure:

Figure 8 -­ In-­house scenario overview

For the client, we use an OdinS device with an MSP430F5438A Rev H microcontroller, 256 KB of flash memory and 16 KB of RAM. This client runs on ContikiOS 2.7. As the DTLS and CoAP protocols are required, we use the tinyDTLS and Erbium libraries.

A border router is required so that the client inside ContikiOS can communicate with the CM outside ContikiOS.

The complete scenario will be developed inside the ARMOUR VM, running 32-bit Ubuntu with 2 GB of RAM.

4.1.5.2 Scenario setup

• Bridge

1. Connect the device and change privileges:

sudo chmod 777 /dev/ttyUSB0

2. Go to bridge source code, compile and upload it to the device

cd ./Odins/contiki/odins-­tools/uip6-­bridge

make uip6-­bridge-­tap.upload0

3. Execute bridge

make connect

make bridge (in another terminal)

4. Verify it is working

ping6 aaaa::1

• Credential Manager

1. Go to the source code folder:


cd ./Odins/attribute-­authority/pskCoap

2. Execute the version with DTLS

./script

• Smart object

1. Connect the device and change privileges:

sudo chmod 777 /dev/ttyUSB1

2. Go to the smart object source code, compile and upload it to the device

cd ./Odins/contiki/user-­apps/psk-­client

make control-­manager.upload1

3. Debug

mspdebug -­d /dev/ttyUSB1 debug

4.1.6 Large-Scale Testbed Implementation

4.1.6.1 Infrastructure characterisation

For EXP1, clients and border routers can be hosted on constrained nodes. The Credential Manager, in turn, must be hosted on a common PC able to run a Linux operating system and Java code. Clients must be able to connect to the Credential Manager to get keys and credentials for secure operation. Specifically, considering the FIT IoT-LAB environment, Figure 9 shows an overview of the required hardware devices.

Figure 9 -­ Large scale scenario overview

In this way, the client and the border router will be hosted in the FIT IoT-LAB on M3 devices, shown in Figure 10. These devices are based on an STM32 (ARM Cortex-M3) microcontroller; they offer 32-bit processing, an ATMEL 2.4 GHz radio interface and a set of sensors. The OS embedded in the M3 nodes is ContikiOS.


Figure 10 -­ M3 node overview

The M3 node sends data to and receives data from the Credential Manager, which is located in a remote VM; this communication is made possible by the border router. They communicate using CoAP over DTLS, with the tinyDTLS library for DTLS and Erbium for CoAP. The M3 devices must be reserved on the IoT Lab before the test campaign, and the CM must be accessible via SSH.

4.1.6.2 Scenario setup

For the setup of the large-scale scenario of EXP1, attackers (in the same way as legitimate nodes) could be implemented on nodes with constrained resources (acting as legitimate clients), or they could be deployed on more powerful devices in order to simulate malicious behaviour towards the Credential Manager. Consequently, different nodes could be updated with legitimate or malicious firmware to simulate such behaviour.

To this end, FIT IoT-LAB allows the deployment of a large number of nodes that can be used as legitimate or malicious nodes. The expected results from such a configuration relate to the detection of illegitimate or bad behaviours, so they can be used as an input for measuring vulnerabilities for subsequent labelling and certification.

4.1.7 Data Collection

For each test, we have to collect the result (whether the test was successful or failed). However, other data is also important to determine the level of security: we have to collect the time the experiment took and its initial parameters, such as the length of the keys, the cryptographic suite used, and the specific versions of the protocols involved.

In some test patterns, such as DoS attack protection, we also have to collect information about the number of attackers and the frequency of the requests.
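As an illustration of the data to be collected, a single test execution could be recorded as a JSON document like this. The field names and values are assumptions for the sketch, not a project-defined schema.

```python
# Illustrative record structure for one test execution: verdict, timing
# and the initial parameters mentioned above (key length, ciphersuite,
# protocol versions, and DoS-specific counters where relevant).
import json, time

def make_record(tp_id, verdict, started, finished, **params):
    return {
        "test_pattern": tp_id,
        "verdict": verdict,                    # "pass" / "fail"
        "duration_s": round(finished - started, 3),
        "parameters": params,                  # key length, ciphersuite, ...
    }

t0 = time.time()
record = make_record("TP_ID16", "pass", t0, t0 + 1.5,
                     key_length=128, ciphersuite="TLS_PSK_WITH_AES_128_CCM_8",
                     dtls_version="1.2", attackers=50, request_rate_hz=10)
assert record["parameters"]["attackers"] == 50
print(json.dumps(record, indent=2))
```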

4.1.8 Experiment Validation

Given the experimentation scenario described and the corresponding in-house and large-scale implementations, no situation is foreseen that could compromise the validity of the experiment results.

4.1.9 Experiment Evaluation

In this section we present the flow for the evaluation of each test.


4.1.9.1 TP_ID1 (A)

In this test pattern (TP), an attacker starts a DTLS exchange with a non-valid PSK, so the test is passed if the CM detects that such a PSK is not valid. Otherwise, the test fails.

4.1.9.2 TP_ID1 (B)

In this test pattern (TP), the smart object starts a DTLS exchange with the Credential Manager (CM). The attacker tries to obtain the PSK from message 6. If it succeeds, the test fails. Otherwise, the attacker sends message 10 to the smart object. If the latter is able to decrypt it, there is a security flaw and the test fails. If the smart object is not able to decrypt it, the test is passed.

4.1.9.3 TP_ID5 (A)

In this test pattern (TP), the smart object starts a DTLS exchange with the Credential Manager (CM). The attacker eavesdrops on the communication and, if it is able to obtain the content of a message, the TP fails. Otherwise, the attacker starts a new DTLS exchange with the CM using the previously obtained message. The CM verifies the random numbers included in such a request in order to match it to a current session. If those numbers match the session, the test fails; otherwise, it is passed.


4.1.9.4 TP_ID5 (B)

In this test pattern (TP), the smart object sends the group key request to the Credential Manager (CM). The attacker eavesdrops on the communication and, if it is able to obtain the content of a message, the TP fails. Otherwise, the attacker sends the previously obtained message to the CM. The CM verifies the sequence number included in such a request in order to match it to a current session. If it matches the session, the test fails; otherwise, it is passed.

4.1.9.5 TP_ID14

This TP corresponds to a dictionary attack: an attacker eavesdrops on the communication in order to capture some messages and tries to decipher them by means of a dictionary attack, that is, by trying words from a dictionary until the key is broken. If it obtains the key, it will be able to decipher all the messages of the communication. For this reason, if the attacker is able to decipher the messages by obtaining the key, the test fails; otherwise, it is passed.

4.1.9.6 TP_ID16

This TP corresponds to a DoS attack. In it, a number of attackers start a DTLS exchange with the Credential Manager (CM) at the same time as the smart object. If the smart object receives an answer from the CM before a timeout, the test is passed. Otherwise, that is, if it does not receive the answer or receives it after the timer expires, the test fails.

4.1.9.7 TP_ID4 (A)

In this TP, the attacker captures the DTLS packets sent by the smart object to the Credential Manager. It modifies a message and sends it to the CM, pretending to be the smart object. If, despite such modification, the DTLS exchange between the smart object and the Credential Manager is established, the test fails. Otherwise, if the manipulated message is detected, the test is passed.


4.1.9.8 TP_ID4 (B)

In this TP, the attacker captures the DTLS packets sent by the smart object to the Credential Manager. It modifies a message and sends it to the CM, pretending to be the smart object. If the integrity check of the message passes, the test fails. Otherwise, the TP is passed.

4.1.9.9 TP_ID2

In this TP, the attacker eavesdrops on the DTLS packets exchanged between the smart object and the Credential Manager. If it is able to access their content, the test fails. Otherwise, the TP is passed.


4.1.9.10 TP_ID15

In this TP, the attacker captures the DTLS packets sent by the smart object to the Credential Manager and modifies them, even injecting new ones. The CM receives them and tries to obtain the associated PSK. If the PSK is valid, the TP fails. Otherwise, both the smart object and the CM verify the HASH, which is encrypted with the key. If they cannot decrypt it, or the HASH is not valid, the test is passed. If either check succeeds, the test fails.

4.2 EXP 2: Sensor node code hashing

4.2.1 Objectives The main motivation of Unparallel Innovation for the ARMOUR experiment originates from a range of products currently under development. These products consist of highly optimised sensing boards, using analogue and digital processing, equipped with a microcontroller that gives the product behavioural autonomy. Products are designed to be cheap and have low power consumption, resulting in the use of devices with scarce resources (very limited RAM, FLASH storage, low computation power, and low-capacity batteries) and highly optimised hardware and software. Such factors


resulted in the development of hardware-specific applications without the use of any generic operating system.

UNPARALLEL is interested in developing a remote programming framework that provides a security mechanism to protect devices from being programmed with malicious code, and to protect software images containing the implementation of proprietary algorithms. Such a remote programming framework must also support the programming of heterogeneous devices, since different hardware or sensors with different application purposes will, most likely, require different software versions.

As such, UNPARALLEL will use ARMOUR to test techniques and technologies and identify the ones that ensure the required security level for the remote programming framework without compromising the performance and energy consumption of devices.

4.2.2 Scenario Description The infrastructure for distribution of software updates is mainly composed of two entities:

Network Entity Name: Device
• Main function: requests a software update; verifies the authenticity of software update images; installs software updates
• Operating System: no generic operating system; usage of application-specific software
• Information Consumed: new software announcements; software update images
• Information Produced: software update request
• Communicates with: Software Provisioning Server

Network Entity Name: Software Provisioning Server
• Main function: storage and distribution of software updates
• Operating System: generic operating system (e.g. Linux)
• Information Consumed: software update request
• Information Produced: new-software-available announcement; software update image to be installed
• Communicates with: Device

Devices connect to Software Provisioning Servers using Bluetooth technology. Bluetooth allows the creation of direct links between two entities in a master-slave approach, where Devices connect as slaves to the nearest Software Provisioning Server, which acts as master. Therefore, the software update network assumes an extended star topology, as represented in Figure 11, where each Device is connected to one and only one Software Provisioning Server, and the servers are connected to remote servers where developers publish software images to be programmed onto devices.

Figure 11 – Software update network topology

Bluetooth technology implements confidentiality, authentication and key derivation mechanisms. Communication packets are encrypted using the E0 stream cipher, based on a shared cryptographic secret generated during the pairing phase. However, the security of Bluetooth technology is under constant evaluation, which has resulted in the discovery of vulnerabilities and weaknesses of some mechanisms under specific attack conditions. Namely, some researchers identified that the E0 cipher is susceptible to specific attacks, which make it less robust than it should theoretically be [1][2]. In a similar way, the Bluetooth pairing mechanism, where the cryptographic keys to be used for the encryption of communication messages are agreed, is also vulnerable to some specific attacks where

1 Lu, Yi; Willi Meier; Serge Vaudenay (2005). "Advances in Cryptology – CRYPTO 2005". Crypto 2005. Lecture Notes in Computer Science. Santa Barbara, California, USA. 3621: 97–117. doi:10.1007/11535218_7. ISBN 978-3-540-28114-6. Available at: http://lasecwww.epfl.ch/pub/lasec/doc/YV04a.pdf
2 Fluhrer, Scott. "Improved key recovery of level 1 of the Bluetooth Encryption". Cisco Systems, Inc. Available at: http://eprint.iacr.org/2002/068.ps



the keys generated and used in this phase may be discovered. The vulnerability of this stage depends on the version of the standard and on the method used for the pairing.

Therefore, even though Bluetooth provides encryption of communications, there are cases where this security measure can be broken, given enough time and effort. As such, other security mechanisms should be used alongside those provided by Bluetooth to increase the security of the software update mechanism.

UNPARALLEL designed a messaging protocol to be used alongside hashing and encryption algorithms to minimise the potential vulnerabilities presented by Bluetooth. This messaging protocol aims to provide mutual authentication between Devices and Software Provisioning Servers, to guarantee the authenticity of software updates. The protocol was designed with the goal of maintaining a small energy consumption footprint, by using as few messages as possible and by reducing the usage of encryption algorithms. The designed messaging protocol is represented in Figure 12.

Figure 12 – EXP2 message sequence

When a Software Provisioning Server receives a new software image, it announces it to all devices nearby. If a device identifies that it is a target for the software update, it sends a request (3) for that update, in which it identifies the version of software that it is currently using. This information is crucial for the whole software update process, since the hash of the software currently in use is a fundamental key for the hashing and encryption functions. When the provisioning server receives a software request, it starts the authentication process by sending a challenge (4) composed of a random number “X” and the result of hashing this number with the software hash. The device responds by generating a new random number “Y”, followed by a hash resulting from the combination of the software hash with “X” and “Y”.

After the successful computation and comparison of the hashes, the mutual authentication is complete and the transfer of the requested software image can start. The software update image is encrypted to provide protection against attacks where attackers have access to the messages used to transfer software update images. The encryption algorithm uses as encryption key a combination of the hash code of the currently installed software and the result of a mathematical function that uses both “X” and “Y”. Such a mechanism requires attackers to know the hash associated with the software currently installed, to have collected the random numbers generated by the device and the provisioning


server during the mutual authentication phase, and to know the mathematical function used to combine those random numbers.
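The challenge/response exchange described above can be sketched as follows. This is a minimal illustrative sketch only: SHA-256 stands in for whichever hash function is under test, and plain hash concatenation stands in for the unspecified mathematical function that combines “X” and “Y”; the actual encodings and key-derivation formula used by UNPARALLEL are not reproduced here.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash helper; SHA-256 stands in for the hash function under test."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

# Shared secret: hash of the software image currently installed on the device.
software_hash = h(b"firmware-image-v1.2")

# (4) Server challenge: random X plus hash(software_hash, X).
x = os.urandom(16)
challenge = (x, h(software_hash, x))

# Device verifies the challenge (the server must know the installed software),
# then answers with a random Y plus hash(software_hash, X, Y).
assert challenge[1] == h(software_hash, challenge[0])
y = os.urandom(16)
response = (y, h(software_hash, x, y))

# Server-side verification completes the mutual authentication.
assert response[1] == h(software_hash, x, response[0])

# Transfer key: a combination of the software hash and a function of X and Y
# (the real combining function is not specified in this deliverable).
transfer_key = h(software_hash, h(x, y))
```

An attacker who never learned the installed software's hash cannot answer either challenge, which is the property the experiment stresses.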

4.2.3 Application of Experimentation Approach 4.2.3.1 Definition of the experiment scope/focus The experiment will not take into consideration the usage of Bluetooth communication; therefore, the ARMOUR testing framework will not be used to study Bluetooth security vulnerabilities. The experiment will assume as initial conditions that either the attacker had access to the pairing codes and was able to successfully create a Bluetooth link with a legitimate entity, or that the attacker was somehow able to discover the encryption keys used to encrypt the Bluetooth session.

EXP2 is intended to test the remote software upgrade mechanism in an environment composed of heterogeneous devices, both in hardware and in application purpose, thus running different software and receiving different software updates. This scenario allows testing the collision resistance of the hashing algorithms, since software provisioning servers will store different software images and different versions of the software, each with a different hash code.

The large number of different versions of software images allows identifying occurrences of hash code repetitions across different versions of software. Such situations can be exploited for the installation of unauthorized software, either on purpose or by mistake. Moreover, they can potentially increase the probability of success of attacks where messages captured during a previous authentication process are used in authentication attempts. Also, given the large number of different software images stored in the software provisioning server, brute-force attacks can be used in attempts to discover a usable hash code that would allow the attacker to successfully authenticate against the provisioning server.

4.2.3.2 Identification of the threats to be tested in the experiment In deliverable D1.1 it was identified that EXP2 would address 8 vulnerabilities. However, vulnerability “V14 - Transfer of keys via independent security element” was dropped since, in the context of EXP2, this vulnerability refers to the Bluetooth module and to the pairing keys stored in it. Since Bluetooth technology will not be used in ARMOUR, and it is assumed in the initial conditions that the attacker has successfully bypassed the Bluetooth pairing mechanism, the study of this vulnerability is outside the experiment scope.

Table 4 – Threats addressed by EXP2

Countermeasure | Description | Related threats

• Mutual authentication: Attackers or corrupted entities try to exploit legitimate entities, e.g. gain access to legitimate software or install corrupted software on devices. Protection can be assured by establishing a security association between the devices and software provisioning servers, which provides mutual authentication. Related threats: V1, V10

• Replay protection: The attacker tries to successfully authenticate against legitimate entities by replaying messages copied from previous authentication processes. Protection can be provided by using a protocol that provides functionalities (description continues below). Related threats: V9


4.2.4 Test Patterns Design The execution of all test patterns will involve testing different encryption and hashing algorithms, to determine the security limitations and the energy efficiency of each algorithm and implementation. It is also planned to test both the presence and the absence of encryption functions during the transfer of the software update, to measure both the energy consumption caused by those algorithms and their impact on the overall security provided.

Several hashing algorithms are intended to be tested in the experiments. The initial set includes 3 cryptographic hash functions, but more may be added during the setup of the experiment. These hashing algorithms are: SHA-2 [3], Blake [4], and Skein-256 [5]. Aside from these functions, the performance and reliability of checksum functions, such as CRC-32 [6] and Fletcher-32 [7], will also be tested for the computation of the hash associated with software images.
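To make the contrast between the candidate functions concrete, the snippet below computes a cryptographic hash (SHA-256) and the two checksums over the same image. It is a sketch only: Python's standard library provides SHA-256 and CRC-32, while Fletcher-32 is hand-written here in one common variant (little-endian 16-bit words, modulo 65535); other Fletcher variants exist.

```python
import hashlib
import zlib

def fletcher32(data: bytes) -> int:
    """Fletcher-32 over little-endian 16-bit words, zero-padded to even length."""
    if len(data) % 2:
        data += b"\x00"
    s1 = s2 = 0
    for i in range(0, len(data), 2):
        s1 = (s1 + int.from_bytes(data[i:i + 2], "little")) % 65535
        s2 = (s2 + s1) % 65535
    return (s2 << 16) | s1

image = b"example firmware image contents"
print("SHA-256:    ", hashlib.sha256(image).hexdigest())
print("CRC-32:     ", hex(zlib.crc32(image)))
print("Fletcher-32:", hex(fletcher32(image)))
```

The checksum functions are far cheaper to compute on an 8/16-bit AVR, but offer no collision resistance against a deliberate attacker, which is precisely the security-versus-energy trade-off the experiment measures.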

Regarding the encryption functions, an initial set of 2 algorithms will be tested: XXTEA [8] and AES [9].
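For reference, XXTEA (the “corrected block TEA” of Wheeler and Needham) is small enough to sketch in full; the Python below mirrors the published algorithm, operating on a block of at least two 32-bit words with a 128-bit (4-word) key. This is an illustrative sketch, not the C library the experiment will actually use.

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep arithmetic in 32 bits, as in the C reference

def xxtea_encrypt(v: list, key: list) -> list:
    """Encrypt a block of n >= 2 uint32 words in place with a 4-word key."""
    n = len(v)
    rounds = 6 + 52 // n
    total = 0
    z = v[-1]
    for _ in range(rounds):
        total = (total + DELTA) & MASK
        e = (total >> 2) & 3
        for p in range(n):
            y = v[(p + 1) % n]
            mx = ((((z >> 5) ^ (y << 2)) + ((y >> 3) ^ (z << 4))) ^
                  ((total ^ y) + (key[(p & 3) ^ e] ^ z))) & MASK
            v[p] = (v[p] + mx) & MASK
            z = v[p]
    return v

def xxtea_decrypt(v: list, key: list) -> list:
    """Reverse the encryption rounds."""
    n = len(v)
    rounds = 6 + 52 // n
    total = (rounds * DELTA) & MASK
    y = v[0]
    for _ in range(rounds):
        e = (total >> 2) & 3
        for p in range(n - 1, -1, -1):
            z = v[(p - 1) % n]
            mx = ((((z >> 5) ^ (y << 2)) + ((y >> 3) ^ (z << 4))) ^
                  ((total ^ y) + (key[(p & 3) ^ e] ^ z))) & MASK
            v[p] = (v[p] - mx) & MASK
            y = v[p]
        total = (total - DELTA) & MASK
    return v

plaintext = [0x01234567, 0x89ABCDEF, 0x02468ACE, 0x13579BDF]
key = [0x11111111, 0x22222222, 0x33333333, 0x44444444]
ciphertext = xxtea_encrypt(list(plaintext), key)
assert xxtea_decrypt(list(ciphertext), key) == plaintext  # round trip
```

XXTEA's appeal for this scenario is its tiny code and RAM footprint; note that the cited cryptanalysis [8] shows it is weaker than AES, which is part of what the experiment weighs against energy cost.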

Test Pattern ID TP_ID1

Stage Entities Authentication

3 https://web.archive.org/web/20130526224224/http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf
4 https://131002.net/blake/#fi
5 http://www.skein-hash.info/sites/default/files/skein1.3.pdf
6 Peterson, W. W.; Brown, D. T. (January 1961). "Cyclic Codes for Error Detection". Proceedings of the IRE. 49 (1): 228–235. doi:10.1109/JRPROC.1961.287814
7 Fletcher, J. G. (January 1982). "An Arithmetic Checksum for Serial Transmissions". IEEE Transactions on Communications. COM-30 (1): 247–252. doi:10.1109/tcom.1982.1095369.
8 Elias Yarrkov (2010), "Cryptanalysis of XXTEA", Cryptology ePrint Archive, Report 2010/254. Available at: http://eprint.iacr.org/2010/254.pdf
9 FIPS PUB 197, Advanced Encryption Standard (AES), National Institute of Standards and Technology, U.S. Department of Commerce, November 2001. http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf

to detect if all or part of a message is an unauthorised repeat of an earlier message or part of a message (completion of the Replay protection row of Table 4).

• Integrity verification: The integrity of software images received must be verified by devices to detect messaging alterations. Related threats: V8, V10

• Confidentiality: A security association is established between the devices and software provisioning servers, which provides software image protection from eavesdropping and alteration attempts. Related threats: V7, V8, V19

• Proven resistance to man-in-the-middle attacks: Attackers use messaging relays and/or alterations to impersonate legitimate entities. The security association between communicating entities uses protocols that provide mechanisms to resist man-in-the-middle attacks. Related threats: V8, V13, V9


Test diagram

Test Description

Entities:

• Malicious Device. A third party has somehow gained access to a hash code corresponding to a legitimate software image and tries to gain access to a legitimate software update

• Software Provisioning Server. Server storing software versions to update legitimate devices

Steps:

1. The malicious device discovers the hash code of a legitimate software image;

2. The malicious device starts an authentication attempt by sending a Software Access Request message to the Provisioning Server, using the software version ID and generating the corresponding hash;

3. The Provisioning Server sends an authentication challenge;

4. The malicious device generates a random number and computes the reply to the challenge;

5. The Provisioning Server accepts or rejects the access request.

In this test, several hashing and encryption algorithms are tested.

Steps 2, 3, 4, and 5 are continually executed until the malicious device successfully authenticates with the Provisioning Server and can read the software image.

The test is considered successful if the authentication of the malicious device is rejected, or if the malicious device is unable to read the software image using the discovered hash code.

References Vulnerability: V1 (Discovery of Security Keys) Security Objective for Labelling & Certification: Authentication


Test Pattern ID TP_ID3, TP_ID8, TP_ID13

Stage Software update distribution

Test diagram

Test Description

Entities:

• Legitimate Sensor. Device authenticates with the Software Provisioning Server and receives a software update

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Entity. Entity able to eavesdrop the communication between a Software Provisioning Server and a legitimate sensor and wants to have access to a software image

Steps:

1. The legitimate sensor sends a software request to the software provisioning server and proceeds with the authentication process;

2. Both the software provisioning server and the malicious entity receive the software request and the authentication information;

3. The malicious entity tries to read information from the request and authentication messaging;

4. The software provisioning server encrypts the requested software image and sends it to the legitimate sensor;

5. Both the legitimate sensor and the malicious entity receive the encrypted software image;

6. The malicious entity tries to decrypt the software, using the information previously acquired.

These steps will be executed multiple times to test the performance and energy consumption on the sensor for different encryption algorithms.

The test is considered successful if the malicious entity was not able to decrypt the software, and fails otherwise.


References Vulnerability: V7 and V19 (Software eavesdropping and Detection of insecure encryption) Security Objective for Labelling & Certification: Confidentiality

Test Pattern ID TP_ID4

Stage Entities Authentication

Test diagram

Test Description

Entities:

• Legitimate Sensor. Device attempting to authenticate to have access to software updates

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Entity. Entity able to eavesdrop the communication between a Software Provisioning Server and a legitimate sensor and wants to have access to a software image

Steps:

1. The legitimate sensor sends a software request to a software provisioning server;

2. The malicious device intercepts the request from the legitimate sensor and changes the Software ID;

3. The malicious device sends the altered message to a software provisioning server to start the authentication process;

4. The software provisioning server starts the authentication process with the legitimate sensor;

5. The legitimate sensor receives and installs a software update different from the one it requested, and stops working correctly.

The test is considered successful if the alteration of the message was detected or the wrong software was not installed. Otherwise, the test will fail.

References Vulnerability: V8, V13 (Alteration of Messaging between Entities and Messaging Eavesdropping). Security Objective for Labelling & Certification: Integrity

Test Pattern ID TP_ID5

Stage Entities Authentication

Test diagram

Test Description

Entities:

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Device. Entity that tries to replay messages sniffed during the authentication phase of a legitimate device to successfully authenticate itself against a Software Provisioning Server

• Legitimate Sensor. Device attempting to have access to software updates

Steps:

1. The legitimate sensor performs the messaging exchange to authenticate against a software provisioning server, while the malicious entity sniffs all the messages sent by the legitimate sensor;

2. After the authentication of the legitimate sensor, the malicious device uses the sniffed messages to start the authentication process (with the same or a different software provisioning server) and to reply to the authentication challenge.

The test is considered successful if the authentication of the malicious device is rejected; otherwise it fails.


References Vulnerability: V9 (Replay of Messaging between Entities) Security Objective for Labelling & Certification: Replay Protection

Test Pattern ID

TP_ID6

Stage Authentication and software distribution

Test diagram

Test Description

Entities:

• Malicious Provisioning Server. A malicious entity that impersonates a legitimate provisioning server and tries to mislead legitimate sensors into installing malicious software

• Legitimate Sensor. Device attempting to have access to software updates

Steps:

1. The Malicious Provisioning Server announces the existence of new software to lure Legitimate Sensors into initiating the software update process;

2. The Malicious Provisioning Server tries to mutually authenticate with the legitimate sensor through the challenge exchange;

3. The Malicious Provisioning Server accepts the authentication and sends a malicious software image to the Legitimate Sensor.

The test is considered successful if the authentication of the malicious device is rejected or if the legitimate sensor detects that the software image is not legitimate, and fails otherwise.

References Vulnerability: V10 (Unauthorized or corrupted software in devices/servers) Security Objective for Labelling & Certification: Integrity, Authentication


Test Pattern ID TP_ID16

Stage New software announcement and authentication

Test diagram

Test Description

Entities:

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Device. Acts as man in the middle between a Software Provisioning server and a Legitimate Sensor

• Legitimate Sensor. Device attempting to have access to software updates

Steps:

1. The Malicious Device announces the existence of a new software version to lure Legitimate Sensors into requesting software updates;

2. A legitimate sensor sends a software access request to the Malicious Device, which uses this message to start an authentication process with a legitimate Software Provisioning Server;

3. The Malicious Device uses the challenge messaging sent by the Software Provisioning Server to authenticate against the Legitimate Sensor;

4. After a successful authentication, the Malicious Device sends a malicious software image to the Legitimate Sensor.

The test is considered successful if the authentication of the malicious device is rejected or if the legitimate sensor detects that the software image is not legitimate, and fails otherwise.

References Vulnerability: V13, V8, V9 (Man in the Middle Attack, Messaging Alteration and Replay). Security Objective for Labelling & Certification: Resistance to Man in the Middle attacks

4.2.5 In-house Implementation 4.2.5.1 Infrastructure characterisation In-house testing will be composed of two entities: one Device implemented using an UNPARALLEL prototype, represented in Figure 13, and a PC implementing the role of Software Provisioning Server. Devices will be powered by 8/16-bit AVR processors with up to 128 KB of storage capacity and up to 8 KB of SRAM. No operating system will be deployed on the Device; it will run a hardware-specific application. C libraries will be used to implement the hashing and encryption algorithms:

• Encryption Libraries:
o XXTEA: https://github.com/xxtea/xxtea-c
o AES-128: https://github.com/kokke/tiny-AES128-C

• Hashing Libraries:
o Blake-32: https://github.com/BLAKE2/BLAKE2
o SHA-2: http://www.ti.com/tool/sha-256
o Skein-256: https://github.com/wernerd/Skein3Fish

• Non-Cryptographic Libraries:
o CRC32: https://barrgroup.com/Embedded-Systems/How-To/CRC-Calculation-C-Code
o Fletcher-32: https://github.com/mathias/perpend/blob/master/src/perpend/fletcher-32.c

These libraries are not definitive and may be replaced by more efficient ones that may be found later.

Figure 13 – Example of a prototype developed by UNPARALLEL used as Device

The Device will be powered by an external power supply capable of logging the power consumption of the Device over time.

4.2.5.2 Scenario setup The setup scenario is simple: the Device connects directly to the PC implementing the Software Provisioning Server. Malicious Entities will also be deployed as applications on the PC that is running the Software Provisioning Server. Applications playing the role of Malicious Entities can read, alter or inject messages in the connection between the


Software Provisioning Server and the Device, to allow the implementation of the test patterns described in 4.2.4.

4.2.6 Large-Scale Testbed Implementation 4.2.6.1 Infrastructure characterisation In a large-scale testbed, EXP2 requires resource-constrained sensor devices for the implementation of the Device entities, and equipment capable of running the Linux operating system to perform the role of Software Provisioning Server. The resource-constrained hardware used to implement Devices will need to be connected to energy measurement devices, to support the measurement of the energy efficiency of the different security mechanisms.

As in the case of the in-house implementation, no generic operating system will be deployed on Devices. Instead, a hardware-specific application will be deployed that uses the libraries identified in the in-house characterisation.

4.2.6.2 Scenario setup For the setup of the large-scale scenario, Malicious Entities should be implemented on hardware with more computation capabilities than Devices, preferably with the same capabilities as Software Provisioning Servers, i.e. capable of running the Linux operating system.

As identified in the Scenario Description (4.2.2), the network assumes an extended star topology, where a small number of Software Provisioning Servers, with a replicated database, provide software updates to a large number of Devices. Devices will be divided into several groups, where each group will receive different software updates. To allow the creation of setups with some degree of randomness, namely in the number and types of attacks, devices capable of implementing Malicious Entities must be deployed in every connection between Devices and Software Provisioning Servers. The behaviour of Malicious Entities will depend on the result of the random configuration: each can either be in a dormant state, simply relaying messages between Devices and Software Provisioning Servers, or take a more active behaviour by executing specific attack patterns. This setup allows easily changing the number of attackers and the types of attack without the need to change the network topology.

Each Software Provisioning Server will generate several software update images, some of which are exclusive to each group of devices. The separation of code per group allows testing different security mechanisms (hash and/or encryption functions) in each group.

4.2.7 Data Collection The main objective of the experiment is to identify the vulnerabilities of the software update protocol while determining the security algorithms that provide the best energy efficiency for this specific scenario. As such, the main indicators that need to be collected are:

• Energy consumption of Devices over time;
• Information about the version of the installed software;
• Information about the number of software images read by Malicious Entities.

Apart from these objective-oriented parameters, other parameters related to network statistics are also important indicators that give insight into the overall security of the scenario. Examples of these indicators are:


• Number of legitimate messages sent;
• Number of legitimate messages sniffed;
• Log of Malicious Entities' actions/attacks;
• Number of software image hash collisions on the Software Provisioning Server.

Such indicators will be processed to determine more complex information, such as:

• Number of successful legitimate software updates per device group;
• Number of successful malicious software updates per device group;
• Number of malicious software update attempts per device group;
• Number of malicious authentication attempts;
• Number of successful malicious authentications.
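One way to derive these aggregate indicators is to fold the raw event logs into per-group counters. The sketch below is purely illustrative: the record fields and event labels are invented here and are not taken from the ARMOUR data model.

```python
from collections import Counter

# Hypothetical raw log records: (device_group, actor, event).
log = [
    ("group-A", "legitimate", "update-installed"),
    ("group-A", "malicious", "auth-attempt"),
    ("group-A", "malicious", "auth-success"),
    ("group-B", "malicious", "auth-attempt"),
    ("group-B", "legitimate", "update-installed"),
]

# Count every (group, event) pair produced by malicious entities.
malicious = Counter((group, event) for group, actor, event in log
                    if actor == "malicious")

auth_attempts = sum(n for (g, e), n in malicious.items() if e == "auth-attempt")
auth_successes = sum(n for (g, e), n in malicious.items() if e == "auth-success")
print("malicious authentication attempts:", auth_attempts)
print("successful malicious authentications:", auth_successes)
```

Keeping the counters keyed by device group also yields the per-group indicators directly, since each group runs different security algorithms.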

4.2.8 Experiment Validation The introduction of random factors in the configuration of the experiment execution may produce experiment deployments that do not study all the defined security aspects. As such, some conditions must be verified to provide assurance about the accuracy of the experiment results. These conditions are:

• When different security algorithms are tested in different device groups, it must be ensured that all test patterns are applied to each device group;

• To properly test the limitations of the functions used to generate the software hash codes, the generated software update images must:

o Vary in size, from small to large (a few to several KB);
o In some updates, introduce code changes without changing the code size.
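The two conditions above can be exercised with a small generator that produces image variants and checks that every variant hashes to a distinct code; the sketch below uses invented sizes and random data as stand-ins for real firmware images.

```python
import hashlib
import os
import zlib

def variants(base: bytes):
    """Generate update images matching the validation conditions above:
    a larger image, a smaller image, and a same-size content change."""
    yield base + os.urandom(256)      # larger image
    yield base[: len(base) // 2]      # smaller image
    mutated = bytearray(base)
    mutated[0] ^= 0xFF                # same size, one byte changed
    yield bytes(mutated)

base = os.urandom(4096)               # stand-in for a real firmware image
digests = {hashlib.sha256(v).hexdigest() for v in variants(base)}
crcs = {zlib.crc32(v) for v in variants(base)}

# Every variant must map to a distinct hash code; a repeated code would be
# exactly the kind of collision the experiment is designed to surface.
print("distinct SHA-256 digests:", len(digests))
print("distinct CRC-32 values:  ", len(crcs))
```

The same-size mutation is the interesting case for weak checksum functions, where a small deliberate change can be crafted to preserve the checksum value.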

4.2.9 Experiment Evaluation The test patterns described are mainly used to test two types of situation: a malicious device gains access to and can read software images, and a device is deceived into installing a malicious software update.

Test patterns TP_ID1, TP_ID3, TP_ID5, TP_ID8 and TP_ID13 represent attacks where malicious entities try different techniques and methodologies to gain access to decrypted software images. These tests are evaluated as:

• Fail, when the malicious entity succeeds in getting access to a decrypted software image;

• Pass, when the malicious entity receives the software image but is unable to decrypt it, or never receives a message with the software update.

Test patterns TP_ID4, TP_ID6 and TP_ID16 correspond to attacks where legitimate devices are deceived into installing corrupted or malicious software. The results of these tests are classified as:

• Fail, when the device installs the malicious software;
• Pass, when the device executes validation procedures over the software update message and identifies that the software is not legitimate, or when the device receives the software update message but rejects it without executing the validation algorithms (encryption and integrity check).


4.3 EXP 3: Secured bootstrapping/join for the IoT

4.3.1 Objectives The objective of Experiment 3 is to prototype and test the secure bootstrapping protocol in 6TiSCH networks. This is referred to as the join protocol.

The IETF 6TiSCH protocol stack uses the IPv6 protocol on top of the IEEE 802.15.4 Time-Slotted Channel Hopping (TSCH) mode to provide deterministic properties on wireless networks. It targets 99.999% end-to-end reliability, which is achieved by using robust channel hopping mechanisms that cope with multipath fading. The IETF 6TiSCH working group standardizes mechanisms that are missing when IPv6 is applied on top of TSCH. In particular, the 6TiSCH standardization group works on distributed scheduling for TSCH that dynamically adapts the link bandwidth between peers.

In the scope of the 6TiSCH working group, a security design team has been formed in order to define the overall security architecture and work on securing the join process in 6TiSCH networks. The ARMOUR project plays a key role as it allows that design team to assess the “secure” nature of the solution during the standardization phase. This close collaboration between ARMOUR and the IETF 6TiSCH security design team is a unique opportunity, and contributes to the impact of the ARMOUR project.

The goal of the join protocol being defined in IETF 6TiSCH is to securely admit a new device into the network. The device authenticates to the network using a pre-configured credential and, as the outcome of the join protocol, expects to be configured with link-layer keys that will allow it to communicate with its peers.

Experiment 3 will therefore test the security and scalability of the join protocol, as defined by IETF 6TiSCH.

4.3.2 Scenario Description

The entities participating in the 6TiSCH join protocol are10:

• JN: Joining Node - the device attempting to join a particular 6TiSCH network.

• JCE: Join Coordination Entity - central entity responsible for authentication and authorization of joining nodes.

• JA: Join Assistant - the device within radio range of the JN that generates Enhanced Beacons (EBs) and facilitates end-to-end communications between the JN and the JCE.

Network Entity Name: Joining Node (JN)
Main function:
• Attempts to join the network
Operating System:
• OpenWSN firmware stack
Information Consumed:
• Join response from JCE
• Link-layer keys
Information Produced:
• Join request destined to JCE
Communicates with:
• Join Assistant
• JCE (through the Join Assistant)

10 Minimal Security Framework for 6TiSCH. draft-ietf-6tisch-minimal-security-00.

Network Entity Name: Join Coordinating Entity (JCE)
Main function:
• Arbitrates the joining process
• Provides link-layer keys to nodes authorized to join the network
Operating System:
• Linux/Windows
• OpenWSN software stack
Information Consumed:
• Join requests from JNs
Information Produced:
• Join responses destined to JNs
Communicates with:
• Join Assistant
• Joining Node (through the Join Assistant)

Network Entity Name: Join Assistant (JA)
Main function:
• Facilitates the communication between JN and JCE by relaying join requests and join responses
• Emits Enhanced Beacons for JNs to synchronize to the network
Operating System:
• OpenWSN firmware stack
Information Consumed:
• None
Information Produced:
• Enhanced Beacons
Communicates with:
• Joining Node (JN)
• Join Coordinating Entity (JCE)

We describe here the steps taken by a Joining Node (JN) in a 6TiSCH network (cf. Figure 14). When a previously unknown device seeks admission to a 6TiSCH network, the following exchanges occur:

1. The JN listens for an Enhanced Beacon (EB) frame. This frame provides network synchronization information, and tells the device when it can send a frame to the node sending the beacons, which plays the role of Join Assistant (JA) for the JN, and when it can expect to receive a frame.

2. The JN configures its link-local IPv6 address and advertises it to the JA.

3. The JN sends packets to the JA in order to securely identify itself to the network. These packets are directed to the Join Coordination Entity (JCE).

4. The JN receives one or more packets from the JCE (via the JA) that set up one or more link-layer keys used to authenticate subsequent transmissions to peers.

From the joining node's perspective, minimal joining is a local phenomenon - the JN only interacts with the JA, and need not know how far it is from the DAG root, or how to route to the JCE. Only after establishing one or more link-layer keys does it need to know about the particulars of a 6TiSCH network.
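The four steps above imply a simple state machine on the JN side. The following is a minimal sketch; the state and event names are our own, not taken from the 6TiSCH drafts or the OpenWSN code:

```python
# JN-side join state machine implied by steps 1-4: hear an EB,
# form a link-local address, send the join request via the JA,
# then install the link-layer keys received from the JCE.
from enum import Enum, auto

class JoinState(Enum):
    LISTENING = auto()       # step 1: waiting for an Enhanced Beacon
    SYNCHRONIZED = auto()    # EB heard, joining schedule known
    ADDRESSED = auto()       # step 2: link-local IPv6 address formed
    REQUEST_SENT = auto()    # step 3: join request relayed via the JA
    JOINED = auto()          # step 4: link-layer keys installed

TRANSITIONS = {
    (JoinState.LISTENING, "eb_received"): JoinState.SYNCHRONIZED,
    (JoinState.SYNCHRONIZED, "address_formed"): JoinState.ADDRESSED,
    (JoinState.ADDRESSED, "request_sent"): JoinState.REQUEST_SENT,
    (JoinState.REQUEST_SENT, "keys_received"): JoinState.JOINED,
}

def step(state: JoinState, event: str) -> JoinState:
    """Advance the JN state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Consistent with the text, only a node that reaches JOINED needs to learn the particulars of the 6TiSCH network.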

Figure 14 -­ Message sequence for 6TiSCH join protocol

The details of each step in Figure 14 are described in the following:

4.3.2.1 Step 1 – Enhanced Beacon

The JN hears an EB from the JA and synchronizes itself to the joining schedule using the cells contained in the EB. At this point the JN may proceed to step 2, or continue to listen for additional EBs. If more than one EB is heard, the JN may use a metric based on the DAG rank and received signal level of the EB, or other factors, to decide which JA to use for the security handshake in step 3.

4.3.2.2 Step 2 – Neighbour Discovery

At this point, the JN forms its link-local IPv6 address based on its EUI-64, and may further follow the IPv6 Neighbour Discovery (ND) optimised for 6LoWPANs.

4.3.2.3 Step 3 – Security Handshake

The security handshake between the JN and the JCE optionally uses Ephemeral Diffie-Hellman over COSE (EDHOC) to establish the shared secret used to encrypt the join request and join response.

The security handshake step is optional in case pre-­shared keys (PSKs) are used, while it is required for asymmetric keys. In case the handshake step is omitted, the shared secret used for protection of the join request and join response in the next step is the PSK.

4.3.2.4 Step 4 – Join Request

The join request is sent as a CoAP request from the JN to the JA. The JA, designated by the JN as a CoAP proxy, forwards the request to the JCE. The join request is authenticated/encrypted end-to-end between the JN and the JCE using the AES-CCM-16-64-128 algorithm from COSE and a key derived from the shared secret of step 3.

4.3.2.5 Step 5 – Join Response

The join response is sent from the JCE to the JN through the JA, which serves as a CoAP proxy. The packet containing the join response travels on the path from the JCE to the JA using pre-established routes in the network. The JA delivers it to the JN using the slot information from the EB. The JA does not keep any state to relay the message; it uses information sent in the clear within the join response to decide where to forward it.

The join response is authenticated/encrypted using the AES-CCM-16-64-128 algorithm from COSE and a key derived from the shared secret of step 3.

The join response contains one or more (per-peer) link-layer keys that the JN will use for subsequent communication. It optionally also contains an IEEE 802.15.4 short address assigned to the JN by the JCE.

4.3.3 Application of Experimentation Approach

4.3.3.1 Definition of the experiment scope/focus

In the scope of Experiment 3, we will focus on the execution of the join protocol where the JN and the JCE are configured with pre-shared keys. This represents the most efficient variant of the join protocol, as the security handshake step is not necessary. We depict in Figure 15 an example exchange of the join protocol when PSKs are used.


Figure 15: Example of a join protocol exchange with PSKs.

As can be seen from Figure 15, the join protocol then consists of a single round-trip exchange: a join request and a join response, embedded into CoAP payloads and encoded with CBOR/COSE.
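The structure of this protected round trip can be sketched as follows: a PSK shared by the JN and the JCE, a nonce derived from the request's sequence number, and a ciphertext with an appended 8-byte MIC. Note that hashlib/hmac stand in for the AES-CCM-16-64-128 COSE algorithm only to keep the sketch dependency-free; this is not the real cipher, and the nonce construction is simplified.

```python
# Structural sketch of the end-to-end protection of the join
# request/response (PSK variant). Mirrors the shape of
# AES-CCM-16-64-128: 128-bit pre-shared key, 13-byte nonce,
# ciphertext followed by an 8-byte MIC.
import hashlib
import hmac

PSK = b"\x00" * 16           # pre-shared key configured on JN and JCE
TAG_LEN = 8                  # 64-bit MIC, as in AES-CCM-16-64-128

def _keystream(nonce: bytes, n: int) -> bytes:
    """Toy keystream standing in for AES-CCM encryption."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(PSK + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def protect(seq: int, payload: bytes) -> bytes:
    """JN side: encrypt the payload and append the MIC."""
    nonce = seq.to_bytes(13, "big")
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(nonce, len(payload))))
    mic = hmac.new(PSK, nonce + ct, hashlib.sha256).digest()[:TAG_LEN]
    return ct + mic

def verify(seq: int, message: bytes):
    """JCE side: returns the payload, or None if the MIC check fails."""
    nonce = seq.to_bytes(13, "big")
    ct, mic = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(PSK, nonce + ct, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(mic, expected):
        return None          # altered or forged request: drop it
    return bytes(a ^ b for a, b in zip(ct, _keystream(nonce, len(ct))))
```

Any bit flipped by a man in the middle makes `verify` fail the MIC check, which is exactly the behaviour exercised by the alteration test pattern below.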

4.3.3.2 Identification of the threats to be tested in the experiment

Table 5 lists the threats that will be tested as part of Experiment 3. With respect to D1.1, we decided to omit the testing of V14, which relates to hardware security elements, because the hardware platforms we plan to use for the deployment of Experiment 3 do not contain security elements.

Table 5 – Threats addressed by EXP3

Countermeasure: Integrity verification
Description: Alteration of join requests and responses by an attacker is a plausible threat, as the response contains key values used in the rest of the network. An attacker may therefore attempt to alter the fields sent in clear within the join request. Since the fields are authenticated by COSE, it should not be feasible to do so: the alteration results in a failure of the Message Integrity Code (MIC) check.
Related threats: V8

Countermeasure: Replay protection
Description: The threat of an attacker recording a conversation between a JN and the JCE and replaying it at a later time to attempt to gain access to the network is well present. The join protocol uses sequence numbers to protect against such a case. The sequence numbers are initialized to zero at both the JCE and the JN, and every attempt to join by a JN must carry a monotonically increasing sequence number in the request.
Related threats: V9

Countermeasure: Confidentiality
Description: Passive eavesdropping of the communication between the JN and the JCE is a likely threat, given that the exchange happens on the radio medium. During the protocol execution, both requests and responses are encrypted by COSE, so that a passive eavesdropper should not benefit from the recorded exchange. However, some fields must be sent in clear for practical reasons.
Related threats: V13

Countermeasure: Mutual Authentication
Description: Finally, impersonation of a JN by the attacker in an attempt to join the network is a likely threat. Only the devices with a correct pre-shared key should be authorized to join the network. During the execution of the join protocol, the JCE checks the MIC and whether it was generated under the correct key. Similarly, interception of the join response containing link-layer keys should reveal no information to the attacker, because it is encrypted under a key known only to the JN.
Related threats: V1

4.3.4 Test Patterns Design

In the following, we provide a description of how each of the threats listed above will be tested in our scenario.

4.3.4.1 TP_ID4 - Resistance to alteration of requests

Test Pattern ID: TP_ID4
Stage: Bootstrapping
Test diagram

Test Description

The entities involved:

• Joining Node (JN). Node that attempts to join by issuing join requests.

• Join Assistant (JA). Man-in-the-middle attacker and radio neighbour of the JN. Alters join requests.

• Join Coordinating Entity (JCE). Arbitrates the join process and issues join responses to legitimate nodes.

The steps taken:

1. The JN issues a join request to the JA and calculates the Message Integrity Code (MIC) using the AES-CCM-16-64-128 algorithm from COSE.

2. The JA alters random bits in the request and forwards the request to the JCE.

3. The JCE verifies the MIC of the request.

The test is considered successful if the JCE responds with a 4.01 Unauthorized error (i.e. the MIC check failed), and unsuccessful if the JCE responds with a join response.

References
Vulnerability: V8 (Messaging alteration)
Security Objective for Labelling & Certification: Integrity

4.3.4.2 TP_ID5 - Resistance to replay of requests

Test Pattern ID: TP_ID5
Stage: Bootstrapping
Test diagram

Test Description

The entities involved:

• Joining Node (JN). Node that attempts to join by issuing join requests.

• Join Assistant (JA). Man-in-the-middle attacker and radio neighbour of the JN. Replays join requests.

• Join Coordinating Entity (JCE). Arbitrates the join process and issues join responses to legitimate nodes.

The steps taken:

1. The JN issues a join request to the JA and includes in the request a monotonically increasing sequence number.

2. The JA makes a copy of the request and forwards it unmodified to the JCE.

3. The JCE verifies the request and responds with a join response.

4. The JA forwards the join response to the JN.

5. The JN decrypts the response.

6. The JA sends the recorded request to the JCE.

The test is considered successful if the JCE silently drops the request due to the obsolete sequence number, and unsuccessful if the JCE responds with a join response.

References
Vulnerability: V9 (Replay)
Security Objective for Labelling & Certification: Resistance to replay attacks
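The JCE-side behaviour this test exercises can be sketched as follows. This is a minimal illustration; the in-memory registry and the identifier format are our assumptions, not the state machine of the actual draft:

```python
# JCE-side replay check: for each joining node the registry stores the
# highest sequence number accepted so far; a request whose number is
# not strictly greater is treated as a replay and silently dropped.
class JoinRegistry:
    def __init__(self):
        self.last_seq = {}            # JN identifier -> last accepted seq

    def accept(self, jn_id: str, seq: int) -> bool:
        if seq <= self.last_seq.get(jn_id, -1):
            return False              # obsolete sequence number: drop
        self.last_seq[jn_id] = seq
        return True
```

A recorded request carries an already-seen sequence number, so replaying it returns False and no join response is produced.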

4.3.4.3 TP_ID8 - Resistance to eavesdropping and man in the middle

Test Pattern ID: TP_ID8
Stage: Bootstrapping
Test diagram

Test Description

The entities involved:

• Joining Node (JN). Node that attempts to join by issuing join requests.

• Join Assistant (JA). Man-in-the-middle attacker and radio neighbour of the JN. Performs payload inspection of join responses.

• Join Coordinating Entity (JCE). Arbitrates the join process and issues join responses to legitimate nodes.

The steps taken:

1. The JN issues a join request to the JA.

2. The JA forwards the request unmodified to the JCE.

3. The JCE verifies the request and responds to the JA with a join response.

4. The JA performs payload inspection of the join response and attempts to parse it.

The test is considered successful if the JA does not succeed in parsing the join response, and unsuccessful in case it obtains the link-layer key(s) from the response.

References
Vulnerability: V13 (Eavesdropping, man in the middle)
Security Objective for Labelling & Certification: Confidentiality

4.3.4.4 TP_ID11 - Detection of flaws in the authentication and in the session management

Test Pattern ID: TP_ID11
Stage: Bootstrapping
Test diagram

Test Description

The entities involved:

• Attacker. A malicious entity that tries to join with an incorrect PSK.

• Join Assistant (JA). Radio neighbour of the JN and of the attacker. Forwards packets to and from the JCE.

• Join Coordinating Entity (JCE). Arbitrates the join process and issues join responses to legitimate nodes.

The steps taken:

1. The attacker issues a join request generated under a random key and a valid node identifier.

2. The JA forwards the request to the JCE.

3. The JCE verifies the MIC of the request.

The test is considered successful if the JCE responds with a 4.01 Unauthorized error, and unsuccessful if it responds with a join response.

References
Vulnerability: V1 (Discovery of long-term keys)
Security Objective for Labelling & Certification: Authentication

4.3.5 In-house Implementation

4.3.5.1 Infrastructure characterisation

As a first step, we will implement the join protocol for the latest OpenWSN stack (version 1.9.0). We will first use the OpenMote-CC2538 hardware platform as the in-house prototype, before porting the implementation to the IoT-Lab hardware platforms. This phase of testing will consist of only two constrained nodes and a PC, each implementing one role of the join protocol: JN, JA and JCE.

The implementation will use the existing cn-cbor library for CBOR, and we will implement the protection of CoAP messages using COSE11 ourselves. The implementation will be released as open source, allowing our experiment to be reproduced.

4.3.5.2 Scenario setup

Communication-level threats will be experimented upon by using the JA node as the man in the middle, which can also eavesdrop on the communication and alter messages. We will perform the tests by having the JN attempt to join the network by communicating with the JCE through a JA that is either malicious or benevolent. Two OpenMote-CC2538 nodes will therefore be used as the JN and the JA, together with a central PC that will act as the JCE. In each case, the JA and the JN are direct radio neighbours, while the JCE can be multiple hops away or reside in the cloud.

4.3.6 Large-Scale Testbed Implementation

4.3.6.1 Infrastructure characterisation

As a second step, we plan to port the in-house implementation to a large-scale testbed such as IoT-Lab in order to test the scalability of the join protocol. The same libraries and code used for the in-house implementation will therefore run on the large-scale testbed. Since the join protocol implementation is expected to be hardware independent, we expect the port to a different hardware platform to introduce limited overhead. In the case of IoT-Lab, the OpenWSN low-level primitives are already ported to the M3 Open Node platform, requiring minimal effort in the transition from the in-house implementation to the large-scale testbed. For a different testbed, if the hardware platform in use is not supported by OpenWSN, it would additionally be necessary to implement the low-level hardware primitives (board support package).

11 Object Security of CoAP (OSCOAP). draft-ietf-core-object-security-00.

4.3.6.2 Scenario setup

Since the vulnerability to threats will be experimented upon during in-house testing, large-scale experimentation will mainly consist of a performance evaluation of the solution in terms of scalability. In that context, we will measure how long it takes for a network of N nodes to form and which components contribute to the overall process duration. For instance, it is important to separately identify the contribution of the RPL DODAG formation process and the duration due to the execution of the join protocol exchange.

The setup will consist of N nodes that are on and listening on a given channel, with the DAG root being turned on last. Then, the overall duration of network formation is the time it takes for the last node to have joined.

In order to test how the protocol performs in the presence of malicious nodes, we will add one node with an invalid PSK and have it attempt to join the network. Then, we will explore the effect of such a node on the overall duration of the join process.

4.3.7 Data Collection

The main data we are interested in during large-scale testing is the duration of network formation with N nodes. Apart from the number of nodes in the network, we will vary parameters such as the length of a slotframe in 6TiSCH and the network topology, as well as the number of exchanges in the protocol, in order to see how the solution from Figure 15 compares with other proposals.

To collect data, every node in OpenWSN periodically prints some statistics on its serial port, such as the schedule it uses, its DAG rank, its synchronization status, the status of its queue, etc. To these statistics we will add a field corresponding to the join process, whose value will be set to the timestamp of the instant when the node was admitted into the network.

These statistics are encapsulated into HDLC protocol frames and parsed on the PC side by a Python program. We will extend the Python side of OpenWSN in order to parse the data we are interested in and generate meaningful network-wide statistics for the join process.
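De-framing such serial output on the PC side can be sketched as below, assuming RFC 1662-style byte stuffing (frames delimited by the 0x7E flag, with 0x7D as the escape byte followed by the original byte XOR 0x20); the actual OpenWSN parser additionally validates a CRC, which is omitted here.

```python
# De-framing of one HDLC-framed statistics record from the serial port.
# Assumes RFC 1662-style byte stuffing; CRC validation is omitted.
HDLC_FLAG = 0x7E   # frame delimiter
HDLC_ESC = 0x7D    # escape byte
HDLC_MASK = 0x20   # XOR mask applied to an escaped byte

def hdlc_unframe(frame: bytes) -> bytes:
    """Strip the flags and undo the byte stuffing of one frame."""
    if not (frame and frame[0] == HDLC_FLAG and frame[-1] == HDLC_FLAG):
        raise ValueError("not a complete HDLC frame")
    out = bytearray()
    escaped = False
    for byte in frame[1:-1]:
        if escaped:
            out.append(byte ^ HDLC_MASK)
            escaped = False
        elif byte == HDLC_ESC:
            escaped = True
        else:
            out.append(byte)
    return bytes(out)
```

For example, a payload byte equal to the 0x7E flag arrives on the wire as the pair 0x7D 0x5E and is restored by the XOR step.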

4.3.8 Experiment Validation

To determine whether the experiment is valid, a simple approach based on the PSKs will be used. The JCE keeps track of every node that is authorized to join the network. All nodes that are preconfigured with the correct PSK should, at the end of the experiment, be in the log of joined nodes. On the contrary, the node that acts as a malicious entity should not have joined the network, nor be in possession of the link-layer keys that are distributed as part of the join process.
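This log-based rule reduces to two set checks over the JCE's record of joined nodes (the node identifiers below are made up for illustration):

```python
# Validation sketch: every node configured with the correct PSK must
# appear in the JCE's log of joined nodes; the malicious node must not.
configured_with_psk = {"node-01", "node-02", "node-03"}
malicious_nodes = {"node-66"}                     # invalid PSK
jce_joined_log = {"node-01", "node-02", "node-03"}

experiment_valid = (configured_with_psk <= jce_joined_log
                    and not (malicious_nodes & jce_joined_log))
```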

4.3.9 Experiment Evaluation

In terms of the duration of the join process, preliminary simulation runs without any malicious entities estimate the duration with 30 nodes to be on the order of ten minutes, with 11 slots in a slotframe. Therefore, it should not be surprising if the overall process takes on the order of an hour in case a longer slotframe is used and the network is composed of up to 100 nodes.


When it comes to security, the experiment is considered secure if only the nodes with the correct PSK successfully join the network. Malicious entities should be rejected by the JCE.

4.4 EXP 4: Secured OS / Over the air updates

4.4.1 Objectives

The main objective of the experiment is to prototype, test and provide secured OTA (over-the-air) updates for RIOT. Over-the-air programming refers to various methods of distributing new software and configuration settings (e.g. credentials) to devices.

RIOT powers the Internet of Things like Linux powers the Internet. RIOT is a free, open source operating system. RIOT implements all relevant open standards supporting an Internet of Things that is connected, secure, durable, and privacy-­friendly. Since consumers expect their devices to stay up to date with the latest features and performance improvements, firmware OTA is now a standard required feature for connected devices.

There is a crucial need to get OTA functionalities into RIOT in order to sustain a high level of security:

• separation of the update code from the main running code;
• download of the binary with a high-level protocol over an encrypted channel;
• signed images;
• a roll-back mechanism if the new image is not functional.
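The combination of signed images and roll-back can be illustrated by the bootloader's image-selection rule: boot the newest image whose signature verifies, otherwise fall back to an older valid one. In the sketch below an HMAC stands in for the real signature scheme (an actual deployment would use an asymmetric signature), and all names are hypothetical:

```python
# Illustrative image-selection rule for a dual-slot bootloader:
# the newest image whose signature verifies is booted, older valid
# images serve as fall-backs. HMAC-SHA256 stands in for the real
# (asymmetric) firmware signature scheme.
import hashlib
import hmac

SIGNING_KEY = b"vendor-signing-key"   # hypothetical placeholder key

def sign(image: bytes) -> bytes:
    """Stand-in for the vendor's image signature."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def select_image(slots):
    """slots: iterable of (version, image, signature) tuples."""
    for version, image, sig in sorted(slots, reverse=True):
        if hmac.compare_digest(sign(image), sig):
            return version            # newest valid image wins
    return None                       # no bootable image at all
```

A corrupted or unsigned newest image is simply skipped, so the device keeps booting the previous valid firmware.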

ARMOUR will be used to test the implementation of the above functionalities in RIOT, allowing us to verify that the proper security levels are provided through the testing of critical and relevant scenarios.

4.4.2 Scenario Description

The entities involved in the OTA scenarios are the following:

Network Entity Name: Device
Main function:
• Requests information about available updates
• Requests one software update
• Verifies the authenticity of the received software update
• Installs the software update
• Restarts; its bootloader then boots only the latest image that passes the integrity check (skipping others)
Operating System:
• RIOT with support for OTA updates
Information Consumed:
• Availability of software updates
• Software updates
Information Produced:
• Request for availability of software updates
• Request for software updates
Communicates with:
• Software Distribution Entity (transparently through the border router)

Network Entity Name: Border Router
Main function:
• Bridges (interconnects) the network of wireless IoT devices (with IPv6 in the adapted form of 6LoWPAN) and the global wired IPv6 Internet
Operating System:
• Dual component: RIOT with border router support, connected through a serial port to software running on a generic operating system (e.g. Linux)
Information Consumed:
• IPv6 packets (IPv6 / 6LoWPAN)
Information Produced:
• IPv6 packets (IPv6 / 6LoWPAN)
Communicates with:
• Device, Software Distribution Entity (transparently)

Network Entity Name: Software Distribution Entity
Main function:
• Storage, maintenance and distribution of software updates
Operating System:
• Generic operating system (e.g. Linux)
Information Consumed:
• Availability of software updates from the Software Generation/Authority Entity
• Upstream software updates (from the Software Generation/Authority Entity)
Information Produced:
• Software updates
Communicates with:
• Device (transparently through the Border Router)
• Software Generation/Authority Entity

Network Entity Name: Software Generation/Authority Entity
Main function:
• Generation and signing of software updates
Operating System:
• Generic operating system (e.g. Linux)
Information Consumed:
• Software update requests
• “Manual” generation of software
Information Produced:
• Software updates with authentication
Communicates with:
• Software Distribution Entity

The generic OTA scenario is the following (functional firmware scenario):

1. The end-device node queries the Software Distribution Entity for potential updates.

2. The Software Distribution Entity's reply indicates the availability of a new update.

3. The node retrieves the firmware update via its radio interface, over UDP and a high-level encrypted channel.

4. The node stores the new firmware and checks the signature of this firmware.

5. The signature checks right.

6. The node boots the new firmware and verifies its functionality.

7. The new firmware's functionality is confirmed to be correct.

8. Expected result: the node has successfully updated its firmware.

In this exchange sequence, the communication between the end-device node and the Software Distribution Entity circulates transparently through one border router (a constrained device) and the associated bridging/tunnelling software (on Linux, ethos). The scenario is represented in Figure 16.


Figure 16 -­ OTA scenario for RIOT

4.4.3 Application of Experimentation Approach

4.4.3.1 Definition of the experiment scope/focus

The main vulnerability addressed is the fact that when an IoT software vulnerability is detected, it persists until a patch is provided and the software is updated on the IoT device. The OTA mechanism enables vulnerabilities to be patched on the fly, by dynamically modifying the firmware running on the IoT devices. Only signed/authorized code is installed on the IoT devices. However, when introducing OTA, care should be taken not to introduce additional vulnerabilities.

Figure 17 -­ Architecture proposal. Example of a double internal flash and relocation. The idea here is to relocate on-­device to avoid having multiple images.

[Figure 16 message sequence: Device ↔ Border Router (M3 node plus ethos on Linux, TCP+UART on the serial link) ↔ Software Distribution Entity, over CoAP/DTLS: query software updates, software update(s) availability, request software update, transmit software update.]


4.4.3.2 Identification of the threats to be tested in the experiment

Table 6 – Threats addressed by EXP4

Countermeasure: Alteration of messages
Description: A mechanism is established between the communicating devices which provides resistance to alteration of requests.
Related threats: V8

Countermeasure: Replay protection
Description: The protocol includes functionality to detect whether all or part of a message is an unauthorised replay of an earlier message.
Related threats: V9

Countermeasure: Unauthorized software
Description: Appropriate mechanisms are ensured to mitigate unauthorized software attacks.
Related threats: V10

Countermeasure: Context awareness
Description: The protocol includes functionality to identify context awareness and act accordingly.
Related threats: V12

Countermeasure: Resistance to man-in-the-middle attacks
Description: The security association between communicating entities uses protocols which are proven to resist man-in-the-middle attacks.
Related threats: V13

4.4.4 Test Patterns Design

Test Pattern ID: TP_ID3
Stage: Software update distribution
Test Diagram: [Device ↔ Border Router (M3 node plus ethos on Linux) ↔ Software Distribution Entity over CoAP/DTLS, with a sniffing attacker capturing the exchange]

Test Description

The test addresses how to check that a third-party entity cannot read the content of the software updates exchanged between an entity and the software provisioning entity, i.e. whether the communication is properly encrypted.

OTA for RIOT operates as follows for the final software distribution step:

• The device connects to the software distribution server to check for potential updates.

• The device downloads the most appropriate update from the software distribution server.

An attacker would capture some packets in this exchange and attempt to collect confidential information from them.


References Vulnerability: V7 (General Eavesdropping)

Security Objective for Labelling & Certification: Confidentiality

Test Pattern ID: TP_ID4

Stage: Software update distribution

Test Diagram: [as the TP_ID3 diagram, but the Software Distribution Entity transmits a corrupted/invalid software update to the device]

Test Description

Test procedure:

1. An entity sends a request about a credential, like a token or a key, to the credential manager.

2. An attacker intercepts the request and replaces the information used to generate the credential.

3. The attacker forwards the request to the credential manager. The credential manager sends the credential (with wrong information) to the entity.

This test pattern is applied by considering that the “private element” is the “firmware/software update”.

Two varieties of attacks can be envisioned:

• A fake software generation entity (i.e. an attacker impersonating the top server in the 3-­tier distribution architecture)

• A compromised distribution server attempting to distribute unauthorized firmware update.

The focus is on the second case. Thus, in the test pattern, the distribution server plays the role of the attacker and the modified private element is the firmware update. There are again two sub-variations, described below:

Malicious Firmware Scenario:

1. The tested node receives a malicious firmware via its radio interface, over UDP and a high-level encrypted channel.

2. The node stores the firmware and checks the signature of this firmware.

3. The signature checks wrong.

4. Expected result: the node does not install the received firmware.

Corrupt Firmware Scenario:

1. The tested node receives a new firmware via its radio interface, over UDP and a high-level encrypted channel.

2. The node stores the new firmware and checks the signature of this firmware.

3. The signature checks right.

4. The node boots the new firmware and verifies its functionality.

5. The new firmware's functionality cannot be confirmed.

6. Expected result: the node rolls back to the old firmware.

References Vulnerabilities: Alteration of messaging between devices (V8)

Security Objective for Labelling & Certification: Integrity
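The two scenarios above, together with the functional firmware scenario of Section 4.4.2, collapse into one boot-slot decision rule. A minimal sketch (the function and parameter names are ours, not the RIOT API):

```python
# Boot-slot decision implied by the three OTA scenarios:
# - invalid signature  -> keep the current image (never installed)
# - valid but broken   -> roll back to the current image
# - valid + functional -> the new image becomes the boot image
def next_boot_slot(current: str, candidate: str,
                   signature_ok: bool, functional_ok: bool) -> str:
    if not signature_ok:
        return current       # Malicious Firmware Scenario
    if not functional_ok:
        return current       # Corrupt Firmware Scenario (roll-back)
    return candidate         # Functional Firmware Scenario
```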


Test Pattern ID: TP_ID5

Stage: Software update distribution

Test Diagram: [as the TP_ID3 diagram, with a sniffing attacker that records a copy of the software update exchange and later replays the update availability, request and transmission messages towards the device]

Test Description

In the same way as TP_ID3 is a specialization of TP_ID2 from software distribution, the TP_ID5 test pattern can be applied with:

• the private element requested being the “firmware update”.

The attacker attempts to replay the “firmware update” (and force, for instance, the installation of an obsolete firmware).

References Vulnerabilities: Replay of messaging between devices (V9)

Security Objective for Labelling & Certification: Replay Protection

[Test diagram: Device <-> Border Router (M3; ethos, Linux; TCP+UART) <-> Software Distribution Entity, communicating over COAP/DTLS. Normal sequence: query software updates, software update(s) availability, request software update, transmit software update. Attack sequence: a sniffing attacker first gets a copy of a software update, then answers the device's query with a replayed software update(s) availability, receives the request software update, and transmits the replayed software update.]


Test Pattern ID TP_ID6

Stage Software update distribution

Test Diagram

Test Description

Test procedure

1. Consider that an entity has downloaded unauthorized software from a given URL using a determined command.

2. The entity runs the downloaded unauthorized software.

3. The software may reveal sensitive data, such as cryptographic material of the entity.

The OTA mechanism over RIOT will implement: separation of the update code from the main running code, binary download with a high-level protocol (COAP) over an encrypted channel (e.g., 6TiSCH), signed images, and a roll-back mechanism if the new image is not functional. Thus, in this test pattern the experiment will focus on verifying whether an update with an invalid signature can be installed on the node.

References

Vulnerabilities: Unauthorized or corrupted applications or software in sensors or gateways (V10)

Security Objective for Labelling & Certification: Authentication and Integrity

[Test diagram: Device <-> Border Router (M3; ethos, Linux; TCP+UART) <-> Software Distribution Entity, communicating over COAP/DTLS. Sequence: query software updates, software update(s) availability, request software update, transmit corrupted/invalid software update.]


Test Pattern ID TP_ID10

Stage Software update distribution

Test Diagram

Test Description

Test procedure

1. Identify all protocol exchanges of the Software Distribution Entity used to get inputs from the external world, including the kind of data potentially exchanged through the interfaces.

2. For each of the identified interfaces, create a protocol exchange template likely to be interpreted by the Software Distribution Entity.

3. Use each of the input elements created in step 2 as input to the Software Distribution Entity and, for each of them, check that the Software Distribution Entity does not give a positive response.

An attacker could play the role of end-devices and could potentially disrupt the behaviour of the server through wireless communication (e.g. a COAP server).

A similar set of tests can be launched replacing the Software Distribution Entity with the Device.
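The three-step procedure can be sketched as a small test loop. This is a hedged illustration under assumptions: `send_to_sde` is a hypothetical stand-in for the real CoAP exchange with the Software Distribution Entity, and the byte templates are invented examples of crafted input.

```python
# Sketch of steps 1-3: feed crafted protocol messages to the Software
# Distribution Entity and check that none of them receives a positive
# response. `send_to_sde` is a hypothetical transport function.

def run_injection_tests(templates, send_to_sde):
    """Return the templates that were (wrongly) accepted."""
    accepted = []
    for msg in templates:
        response = send_to_sde(msg)          # step 3: feed crafted input
        if response.get("status") == "ok":   # positive response = test failure
            accepted.append(msg)
    return accepted

# Step 2: illustrative crafted exchange templates for one interface.
templates = [b"\x00" * 64, b"GET /../../key", b"update?len=-1"]

# A well-behaved entity rejects everything, so no failures are recorded.
failures = run_injection_tests(templates, lambda m: {"status": "reject"})
```

The test passes when the returned list of accepted templates is empty.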

References Vulnerabilities: Injection (V16)

Security Objective for Labelling & Certification: Confidentiality, Integrity, Authentication

[Test diagram: Attacker -> Software Distribution Entity: attacks through wireless.]


Test Pattern ID

TP_ID11

Stage Software update distribution

Test Diagram

Test Description

Test procedure template

1. An entity requests authorization from the credential manager to access a resource.

2. The credential manager sends the authorization to the entity.

3. An attacker intercepts the authorization and can access the resource.

A device sends two types of authenticated requests to the software distribution entity: to query for potential updates, and to effectively get an update. An attacker should not be able to intercept the authorization and access the associated resources.

References Vulnerabilities: Session management and broken authentication (V17)

Security Objective for Labelling & Certification: Authentication

[Test diagram: Device <-> Border Router (M3; ethos, Linux; TCP+UART) <-> Software Distribution Entity, communicating over COAP/DTLS: query software updates, software update(s) availability, request software update, transmit software update. A sniffing attacker then issues its own request software update over COAP/DTLS; the outcome is marked "?".]


Test Pattern ID TP_ID12

Stage Software update distribution

Test Diagram

Test Description

Test procedure:

1. A device requests authorization from the Software Distribution Entity to access a software update.

2. From the server side: the Software Distribution Entity always authorizes any entity to access the resource.

Sub-pattern 2.a should be checked: authentication should reject unauthorized clients.
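The misconfiguration in step 2 against the correct behaviour of sub-pattern 2.a can be contrasted in a short sketch. The whitelist, device identifier and token below are illustrative assumptions, not the entity's real credential scheme.

```python
# Sketch of the TP_ID12 misconfiguration check: a Software Distribution
# Entity that authorizes every requester fails sub-pattern 2.a.
# The whitelist and token names are illustrative assumptions.

AUTHORIZED = {"device-42": "token-abc"}

def authorize(device_id, token):
    """Correct behaviour: grant access only to known device/token pairs."""
    return AUTHORIZED.get(device_id) == token

def misconfigured_authorize(device_id, token):
    """The vulnerable behaviour from step 2: always authorize."""
    return True

# Sub-pattern 2.a: an unauthorized client must be rejected.
ok_server = authorize("attacker", "guess")                  # rejected
bad_server = misconfigured_authorize("attacker", "guess")   # wrongly accepted
```

The test fails the platform whenever the second behaviour is observed.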

References Vulnerabilities: Security misconfiguration (V18)

Security Objective for Labelling & Certification: Confidentiality, DoS, Authentication

[Test diagram: Attacker -> Software Distribution Entity: query software update -> REJECT; request software update -> REJECT.]


Test Pattern ID TP_ID14

Stage Software update distribution

Test Diagram

Test Description

Both the IoT devices and the software distribution entity will have wireless communication (as it is their means of software update exchange): they should be resilient to “invalid data” in the form of invalid data packets.

Test procedure:

1. Identify the protocol exchanges of the Software Distribution Entity used to get inputs from the external world that are meant to access functionality or change privileges.

2. For each of the protocol exchanges, create protocol exchange messages that include invalid data in order to access unintended functionality.

3. Use each of the input elements created in step 2 as input to the Software Distribution Entity.

References Vulnerabilities: Invalid Input Data (V20)

Security Objective for Labelling & Certification: Confidentiality, DoS, Authentication

[Test diagram: Attacker -> Software Distribution Entity: attacks through wireless.]


Test Pattern ID TP_ID17

Stage Software update distribution

Test Diagram

Test Description

Test procedure template

1. A very large number of attackers send requests to the credential manager.

2. The credential manager collapses.

3. A client tries to communicate with the credential manager but does not receive a response.

This is related to the customization of TP_ID13 for EXP IV, with the difference that here the test checks whether the system is resilient to a large volume of data (requests) rather than the content of data (invalid data in TP_ID13).
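The quantitative side of this test, the legitimate-request loss rate under flooding, can be sketched with a toy simulation. The fixed-capacity server model and all parameter values below are crude illustrative assumptions, not the credential manager's real limits.

```python
# Sketch of the TP_ID17 metric: flood a server with attacker requests
# and measure the fraction of legitimate requests left unanswered.
# The capacity model is an illustrative assumption.

import random

def simulate_dos(n_attack, n_legit, capacity, seed=0):
    """Return the legitimate-request loss rate under a request flood."""
    rng = random.Random(seed)
    requests = ["attack"] * n_attack + ["legit"] * n_legit
    rng.shuffle(requests)                 # requests arrive interleaved
    served = requests[:capacity]          # server collapses past capacity
    lost = n_legit - served.count("legit")
    return lost / n_legit

# Loss rate for 10,000 attack requests competing with 100 legitimate ones.
loss_rate = simulate_dos(n_attack=10_000, n_legit=100, capacity=1_000)
```

Plotting this rate against the number of DoS requests gives the kind of resilience curve the evaluation section calls for.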

References Vulnerabilities: Buffer Overflow (V15)

Security Objective for Labelling & Certification: Confidentiality, DoS, Authentication

4.4.5 In-house Implementation

4.4.5.1 Infrastructure characterisation

The implementation can proceed in several steps to fully meet the scenarios described previously.

It is based on a RIOT module which enables firmware swapping on stm32f103re-based boards. In theory it can also easily be ported to any Cortex-M3.

Its functionality is to provide a way to boot different firmware images flashed in the internal ROM. Each image can be compiled to embed metadata that is used to verify the internal images. Verification is done using SHA-256 over the whole firmware, which is checked before swapping between firmware images.

The internals are quite simple:

[Test diagram for TP_ID17: Device <-> Border Router (M3; ethos, Linux; TCP+UART) <-> Software Distribution Entity, communicating over COAP/DTLS: query software updates, software update(s) availability, request software update, transmit software update. An attacker performs DoS attacks through wireless.]


• The bootloader is compiled to fit into the first 16K of the ROM;

• Any firmware can be compiled using the proper compile flags to indicate the slots;

• An image compiled with these flags will be linked to fit in the indicated slot, and the metadata is generated automatically using the provided metadata generator, which hashes the firmware and then adds the metadata to the first 256 bytes;

• The bootloader plus the firmware with metadata can be merged into one hex file that can be flashed into the ROM of an stm32f103re CPU.
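The metadata scheme described above can be sketched as hash-then-prepend. The 256-byte header size and the SHA-256 digest come from the text; the exact field layout inside the header is an assumption for illustration only.

```python
# Sketch of the metadata scheme: hash the firmware with SHA-256 and
# prepend a fixed-size metadata header, which the bootloader checks
# before swapping images. The header layout is an assumption.

import hashlib

HEADER_SIZE = 256  # metadata occupies the first 256 bytes (from the text)

def add_metadata(firmware: bytes) -> bytes:
    """Prepend a header containing the SHA-256 digest of the firmware."""
    digest = hashlib.sha256(firmware).digest()      # 32 bytes
    header = digest.ljust(HEADER_SIZE, b"\x00")     # pad header to 256 bytes
    return header + firmware

def verify_image(image: bytes) -> bool:
    """Bootloader-side check: recompute the hash and compare."""
    header, firmware = image[:HEADER_SIZE], image[HEADER_SIZE:]
    return header[:32] == hashlib.sha256(firmware).digest()

image = add_metadata(b"firmware payload")
tampered = image[:-1] + b"\xff"   # corrupt the last firmware byte
```

A tampered image fails the check, so the bootloader would refuse to swap to it.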

On top of this firmware swap mechanism, a full OTA software distribution solution is built with: separation of the update code from the main running code, binary download with a high-level protocol over an encrypted channel, signed images, and a roll-back mechanism if the new image is not functional.

4.4.5.2 Scenario setup

It consists of 1) flashing all the nodes of the test with the proper RIOT firmware (device, gateway) and then 2) setting up the nodes:

• Software compilation (initial and update software) and signing
• Software update configuration, and starting the software distribution server
• Border router configuration and start-up
• Flashing the IoT device with the initial software and restarting it

Attacks can proceed in several ways in the scenarios:

• Malicious entities will impersonate devices (resource-constrained devices) in some tests: accessing the software updates (attempting to break confidentiality), performing denial of service, attempting to get software resources without being properly authenticated, and attacking the Software Distribution Server interface (crafting messages to crash it or gain unauthorized access).

• Malicious entities will impersonate the Software Distribution Entity to attempt to deceive legitimate devices: corrupted software distribution, replay attacks of software distribution, etc. These attacks will be performed on more powerful devices.

4.4.6 Large-Scale Testbed Implementation

4.4.6.1 Infrastructure characterisation

The infrastructure for the large-scale testbed remains the same as described in the previous section. The only difference is the number of nodes comprising the testing environment.

4.4.6.2 Scenario setup

The scenario setup is the same as previously, except that some intermediate entities might be present in the actual test (e.g. serial port redirection), but they would still be functionally transparent.

4.4.7 Data Collection

The whole set-up of the experiments, followed by a chronological log of the events, is expected to be collected.


4.4.8 Experiment Validation

Functional firmware, properly signed by an authorized entity, is successfully received, verified, installed and booted on an IoT device in the IoT-LAB testbed running RIOT. If the received firmware is either malicious or non-functional, the IoT device continues to run the previous firmware. This is validated by test checks.

Most of the experiment validation is directly derived from the test implementation, which describes how the malicious node(s) will proceed to attack. The attacks should fail, which is clearly identified by an explicit action (rejecting connections, updates, etc.) or a lack of action (not granting access, etc.) in all cases except one: the test on denial of service. The denial of service is more amenable to quantitative evaluation (e.g. on the impact of the DoS on legitimate requests, such as the legitimate request loss rate with respect to the number of DoS requests performed).

The general condition for having trustable results is that the scenario was properly executed: all the nodes executed all the steps; in particular, it should be checked that packets were not lost: e.g. not responding to an unauthorized request because the request was lost does not constitute a proper test. Ensuring proper execution of all steps is done by proper logging, followed by proper log analysis.

4.4.9 Experiment Evaluation

The RIOT OTA process is secure if all generated tests executed on the platform under test have a pass result. Thus, the collected data is the pass/fail result of the tests executed.

4.5 EXP 5: Trust-aware wireless sensor network routing

4.5.1 Objectives

As described in Deliverable D1.1, the main objective of this experiment is the performance evaluation of several distributed trust-aware WSN routing solutions in real large-scale deployments. The purpose is to validate the trust-aware solutions in the presence of malicious nodes, lossy links or otherwise adverse conditions. The trust solutions under test must, on one hand, be able to tolerate several types of attacks and, on the other hand, achieve optimal routing and application goals based on high-end theoretical frameworks.

For the purposes of the ARMOUR experimentation approach and plan, several network topologies and configurations will be tested and validated, including different numbers of sensors comprising the WSN, different types of malicious nodes (black-hole attacks, grey-hole attacks, replay attacks, modification attacks, etc.), as well as different percentages of malicious nodes per network and experiment.

For the purpose of experiment validation, a large number of metrics and values are collected and processed, leading to a clear verification and labelling process, in line with the overall approach described in WP4 Deliverables and especially D4.1 and D4.2.


4.5.2 Scenario Description

This scenario focuses on the integration and demonstration of the trust-aware routing protocol in a security-related context. In particular, we consider a large-scale WSN deployment within FIT IoT-LAB in the presence of different types of malicious nodes (grey or black nodes that drop part or all of their incoming traffic, replay attackers that re-send previous messages, and integrity attackers that alter message data) and evaluate the ability to re-route the data traffic around them. In this respect, the ability of the routing protocol to adapt to threats and attacks, taking into consideration the use case context, will be validated for different network set-ups and categorized according to the ARMOUR trust labelling scheme.

The entities utilized in this experiment are:

Network Entity Name: Device

Main function:
• Sensor nodes comprising the WSN under test, whose number and compiled code differ from one test to another in order to test trust-aware algorithms under different configurations and network deployments.
• In general, the benevolent devices reply to COAP messages, participate in WSN routing, update and calculate routing metrics, and select the parent node for forwarding.

Operating System: ContikiOS

Information Consumed:
• Information on sensing data
• Routing metric values

Information Produced:
• Routing path towards Border Router
• Request for messages

Communicates with: Devices, Border Router, External application (client)

Network Entity Name: Border Router

Main function:
• The border router bridges (interconnects) the network consisting of wireless devices and the external applications through the Internet. The border router also sets up the WSN tree.

Operating System:
• Dual component: ContikiOS with border router support, connected through a serial port to software running on a generic operating system (e.g. Linux)

Information Consumed:
• COAP messages over IPv6

Information Produced:
• COAP messages over IPv6

Communicates with: Device and external application

Network Entity Name: External application (service client)

Main function:
• A service developed for the purposes of this experiment that generates traffic outside the FIT IoT-LAB sensor network and communicates with the Border Router of the WSN under test.

Operating System: Linux/Java

Information Consumed:
• COAP messages over IPv6

Information Produced:
• COAP messages over IPv6

Communicates with: Device and external application

Moreover, for the test execution and verification, the following entities are needed:

1. a sniffer that overhears the messages exchanged among the sensor nodes within a given area of interest in the WSN,

2. a software module that collects the data captured by the sniffer and processes them accordingly to check the validation of the test procedure and the level of security and trust, related to the ARMOUR labelling scheme.

It is highlighted that the developed service that generates traffic, as well as the software module that collects and processes data from the sniffer, are hosted in a cloud Virtual Machine (VM) provided by FIT Cloud, while the sensor nodes and the sniffer node are hosted within IoT-LAB. As described in section 3.3.3.2, we have tried to create common network topologies and testing configurations that allow us to run both experiment 5 and experiment 6, in an effort to create synergies between the experiments and manage the effort during the project lifetime.

First, the service that generates traffic requests the network topology and services provided by the Border Router. After receiving this information, the service (implemented in TTCN-3/TITAN) requests (either through COAP or simple ICMP messages) a response from a sensor node. A special condition that must be met in this case is that one of the sensor nodes on the uplink direction is malicious (the type of malicious node depends on the testing purpose and can be a black-hole, grey-hole, replay or modification attacker). After one or more tries (depending on several factors, such as the number of parent nodes, the percentage of malicious nodes, the exact type of attack, the configuration of the trust-aware protocol, etc.), the benevolent node must identify the malicious behaviour and modify the routing path to bypass this node.

Figure 18 -­ Trust-­aware routing scenario and demonstration.

4.5.3 Application of Experimentation Approach

4.5.3.1 Definition of the experiment scope/focus

The main focus of the experiment, as for the ARMOUR project as a whole, is to create a security toolkit that can provide certification labelling services in as automated a manner as possible, making it simple for service developers to utilize.

It is also worth mentioning that experiment 5 focuses on security (and in particular trust) issues related to Layer 3 (routing protocols), complementing experiment 6 in a manner that contributes towards the development of the aforementioned ARMOUR security toolkit.

The definition of the experiment scope focuses on validating different Objective Functions, as formally defined in IETF RFCs, in the presence of different types of attackers, and on defining a set of labelling categories.

The Objective Functions under test will include: Hop Count (HC), Expected Transmissions (ETX), Remaining Energy (RE), and Trust Routing (TR), as well as their combinations. Furthermore, the types of attacks to be mitigated include: black-hole, grey-hole, integrity and replay attacks. Finally, several configuration parameters with respect to the network set-up will be randomly selected. Consequently, by modifying the aforementioned parameters,


many scenarios will be created and validated, leading to the proper characterization of the algorithms under test, connected to the labelling scheme defined and followed in ARMOUR.

4.5.3.2 Identification of the threats to be tested in the experiment

Table 7 – Threats addressed by EXP5

4.5.4 Test Patterns Design

With respect to the considered vulnerabilities for the IoT platforms under test, the experiment will focus on the following test patterns, for which more details can also be found in D2.1 and D2.2:

• TP_ID4 - Resistance to alteration of messages
• TP_ID5 - Resistance to replay of messages
• TP_ID6 - Run unauthorized software
• TP_ID7 - Identifying security needs depending on the M2M operational context awareness
• TP_ID8 - Resistance to eavesdropping and man-in-the-middle attacks

Test pattern ID TP_ID4

Stage Under normal network operation

Protocol RPL, ICMP, COAP

Property tested Resistance to alteration of messages.

Countermeasure | Description | Related threats
Alteration of messages | A mechanism is established between the communicating devices which provides resistance to alteration attempts | V8
Replay protection | The protocol includes functionality to detect if all or part of a message is an unauthorised replay of an earlier message | V9
Unauthorized software | Appropriate mechanisms are ensured to mitigate unauthorized software attacks | V10
Context awareness | The protocol includes functionality to identify context awareness and act accordingly | V12
Resistance to man-in-the-middle attacks | The security association between communicating entities uses protocols which are proven to resist man-in-the-middle attacks | V13


Test diagram

Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that modifies the data sent by the Server node before forwarding them to the Client node.
• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-­Trusted) for Server node to follow towards the RPL tree root.

1. Client node sends Request 1 to Server node.
2. Server node replies to Request 1 with Message 1 using the Malicious node as the next hop.
3. Malicious node modifies Message 1 and forwards it to the Client node.
4. Server node overhears the channel, detects Message 1 forgery by the Malicious node, and adjusts its routing table.
5. Client node sends Request 2 to Server node.
6. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.

The test is considered successful if, after the first request and reply to Client node, Server node detects the Malicious node and adjusts its routing table to exclude it from its routing path.
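The detection-and-exclusion mechanism in steps 4 and 6 can be sketched as a trust-table update. The trust values and the zero-out rule below are illustrative assumptions, not the protocol's actual trust metric.

```python
# Sketch of TP_ID4 steps 4-6: the Server node overhears its next hop's
# forwarded copy of Message 1, detects the alteration, and demotes that
# neighbour. Trust values are illustrative assumptions.

def overhear_and_update(routing_table, next_hop, sent, overheard):
    """Demote next_hop if its forwarded copy of the message was altered."""
    if overheard != sent:                 # step 4: forgery detected
        routing_table[next_hop] = 0.0     # distrust the malicious node
    # step 6: pick the most-trusted remaining neighbour as next hop
    return max(routing_table, key=routing_table.get)

table = {"malicious": 0.9, "intermediate": 0.8}
new_hop = overhear_and_update(table, "malicious",
                              sent=b"Message 1", overheard=b"Message 1'")
```

After one altered forwarding, the Intermediate node becomes the next hop, which is exactly the pass condition of the test.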


References Vulnerability: V8 (Resistance to alteration of messages) Security Objective for Labelling & Certification: Integrity

Test pattern ID TP_ID5

Stage Under normal network operation

Protocol RPL, ICMP, COAP

Property tested Resistance to replay of messages

Test diagram

Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node and sends them to the Client node to perform a replay attack.
• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-­Trusted) for Server node to follow towards the RPL tree root.


1. Client node sends Request 1 to Server node.
2. Server node replies to Request 1 with Message 1 using the Malicious node as the next hop.
3. After some time, Malicious node retransmits Message 1 to Client node.
4. Server node overhears the channel, detects Message 1 retransmission by the Malicious node, and adjusts its routing table accordingly.
5. Client node sends Request 2 to Server node.
6. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.

The test is considered successful if Server node adjusts its routing table so that after the retransmission of an older message to Client node it detects the Malicious node and excludes it from its routing path.
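The replay detection in steps 3 and 4 can be sketched with a cache of already-forwarded messages. The cache-based rule is an illustrative assumption; the real protocol may use sequence numbers or timestamps instead.

```python
# Sketch of TP_ID5 steps 3-4: the Server node remembers which messages
# its next hop has already forwarded; overhearing the same message a
# second time is flagged as a replay. The cache is an assumption.

def detect_replay(forwarded_cache, overheard):
    """A message forwarded more than once is treated as a replay."""
    if overheard in forwarded_cache:
        return True
    forwarded_cache.add(overheard)
    return False

cache = set()
first = detect_replay(cache, b"Message 1")    # legitimate first forwarding
replay = detect_replay(cache, b"Message 1")   # step 3: retransmission
```

Once the replay is flagged, the routing-table adjustment proceeds as in TP_ID4, excluding the malicious next hop.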

References Vulnerability: V9 (Resistance to replay of messages) Security Objective for Labelling & Certification: Replay protection

Test pattern ID TP_ID6

Stage Under normal network operation

Protocol RPL, ICMP, COAP

Property tested Run unauthorized software

Test diagram

Test description

Entities:

• Server node -­ Device that receives and replies to requests.


• Malicious node -­ Device that receives messages sent by the Server node to the Client node and discards them instead of forwarding them (black-­hole, grey-­hole attack).

• Intermediate node -­ Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.

• Client node -­ Device that requests data from the Server node.

Steps:

Note: The Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-­Trusted) for Server node to follow towards the RPL tree root.

1. Client node sends Request 1 to Server node.
2. Server node replies to Request 1 with Message 1 using the Malicious node as the next hop.
3. Malicious node receives Message 1 and discards it.
4. Server node overhears the channel, detects Message 1 forwarding failure and adjusts its routing table.
5. Client node sends Request 2 to Server node.
6. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.

The test is considered successful if the failed forwarding of a message to the Client node by the Malicious node results in its exclusion from the Server node's routing path towards the Client node.
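The forwarding-failure detection in step 4 can be sketched as an overhearing check: if no forwarded copy of the message is heard from the next hop, that hop is treated as a black/grey hole. The set-based model and the exclusion rule are illustrative assumptions.

```python
# Sketch of TP_ID6 step 4: after handing Message 1 to its next hop, the
# Server node overhears the channel; if no forwarded copy is heard, the
# hop is excluded. Trust values are illustrative assumptions.

def check_forwarding(routing_table, next_hop, overheard_forwards):
    """Exclude next_hop if it silently dropped the message."""
    if next_hop not in overheard_forwards:   # step 4: forwarding failure
        routing_table.pop(next_hop, None)    # exclude the black hole
    # choose the most-trusted remaining neighbour as next hop
    return max(routing_table, key=routing_table.get)

table = {"malicious": 0.9, "intermediate": 0.8}
new_hop = check_forwarding(table, "malicious", overheard_forwards=set())
```

The dropped message leads to the Intermediate node carrying Message 2, matching steps 5 and 6.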

References Vulnerability: V10 (Run unauthorized software) Security Objective for Labelling & Certification: Unauthorized access

Test pattern ID TP_ID7

Stage Under normal network operation

Protocol RPL, ICMP, COAP

Property tested

Identifying security needs depending on the M2M operational context awareness

Test diagram


Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that receives messages sent by the Server node to the Client node and discards them instead of forwarding them (black-hole, grey-hole attack).
• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: Contexts A and B are two different setups where the metrics with which the RPL tree is constructed differ. In both setups, the Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-­Trusted) for Server node to follow towards the RPL tree root.

1. Context A is set up.
2. Client node sends Request 1 to Server node.
3. Server node sends Message 1 to the Malicious node.


4. Malicious node discards Message 1.
5. Server node overhears the channel, detects Message 1 discarding by the Malicious node, and adjusts its routing table.
6. Client node sends Request 2 to Server node.
7. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.
8. Context B is set up.
9. Client node sends Request 1 to Server node.
10. Server node sends Message 1 to the Malicious node.
11. Malicious node discards Message 1.
12. Server node overhears the channel but ignores Message 1 discarding by the Malicious node.
13. Client node sends Request 2 to Server node.
14. Server node sends Message 2 to the Malicious node.
15. Malicious node discards Message 2.
16. Server node overhears the channel, detects Message 2 discarding by the Malicious node, and adjusts its routing table.
17. Client node sends Request 3 to Server node.
18. Server node replies to Request 3 with Message 3 using the Intermediate node as next hop.

The test is considered successful if Server node adjusts its routing table, to exclude a Malicious node from the path to the root, depending on the Context in which it operates, i.e., within Context A the Malicious node is immediately detected and excluded from future Message transmissions, and within Context B, there is a tolerance for Malicious node’s behaviour.
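The context-dependent behaviour can be sketched as per-context drop tolerances: Context A excludes a neighbour after a single drop, while Context B tolerates one drop and excludes on the second. The threshold values are illustrative assumptions, not values defined by the protocol.

```python
# Sketch of TP_ID7: the number of tolerated drops before a neighbour is
# excluded depends on the operational context. Thresholds are
# illustrative assumptions.

TOLERANCE = {"A": 1, "B": 2}   # drops allowed before exclusion

def record_drop(drop_counts, context, node):
    """Count a dropped message; return True once node must be excluded."""
    drop_counts[node] = drop_counts.get(node, 0) + 1
    return drop_counts[node] >= TOLERANCE[context]

drops = {}
excluded_a = record_drop(drops, "A", "malicious")   # step 5: excluded at once

drops = {}
tolerated = record_drop(drops, "B", "malicious")    # step 12: drop ignored
excluded_b = record_drop(drops, "B", "malicious")   # step 16: now excluded
```

This reproduces the pass condition: immediate exclusion in Context A, tolerance followed by exclusion in Context B.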

References Vulnerability: V12 (Context aware security) Security Objective for Labelling & Certification: Level of security based on context

Test pattern ID TP_ID8

Stage Under normal network operation

Protocol RPL, ICMP, COAP

Property tested Resistance to eavesdropping and man-in-the-middle attacks

Test diagram


Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that has acquired valid credentials and sends stored-data requests to Server node.

Steps:

Note: The Malicious node had in some way acquired valid credentials to authenticate itself as a valid node and is within transmission range of Server node.

1. Server node encrypts and stores data.
2. Malicious node requests stored data from Server node.
3. Server node initiates the authentication handshake.
4. Malicious node replies with valid authentication credentials.
5. Server node validates the authentication credentials and responds with the stored data.
6. Malicious node receives the stored data and attempts to decrypt them.

The test is considered successful if the Malicious node fails to decrypt and read the data even though the authentication handshake was successful.

References
Vulnerability: V13 (Mitigating eavesdropping and MITM attack)


Security Objective for Labelling & Certification: Confidentiality

4.5.5 In-house Implementation

4.5.5.1 Infrastructure characterisation

Experiment 5 consists of 1) sensor nodes from the IoT Lab (including benevolent and malicious nodes, as well as a node acting as sniffer) running ContikiOS, and 2) software modules and services running on a Virtual Machine on FIT Cloud under Ubuntu 16.04 LTS (a traffic generator for injecting messages into the WSN through the Border Router, and a software implementation of the engine that processes the results of the experiment).

With respect to the sensor nodes, the devices (M3 open nodes) comprising the network are based on an STM32 (ARM Cortex-M3) 32-bit microcontroller with an Atmel 2.4 GHz radio interface. The OS embedded in the M3 nodes is ContikiOS 2.7.

Figure 19 -­ M3 node overview

The VM hosting the software modules and services for the experiment includes a service for creating the wireless sensor network, in the sense of compiling the appropriate version of the source code for the nodes comprising the network, setting up the tree topology, generating traffic in the wireless sensor network, sniffing message exchanges within the network, and collecting/processing the respective data. For these services, the TITAN test tool, the TTCN-3 execution tool, is used, communicating through the MQTT protocol with the System Under Test (SUT) within the IoT-Lab. The test tool (TITAN) is accessible via SSH in order to launch automated test scripts.

4.5.5.2 Scenario setup

The testing scenarios have been analytically described in the previous paragraphs, along with their execution steps. In general, the software module (TITAN) requests the list of nodes and services within the network from the Border Router. Upon receiving the Border Router's response, the service module sends a message to one of the sensor nodes deployed within the wireless sensor network. Then, the sniffer collects messages and data (the scenario specificities depend on each experiment configuration) and sends the data to another software module (TITAN) that processes (and stores) it, leading to the verification (or not) of the test, which can be evaluated as PASS/FAIL.
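As an illustration, the final PASS/FAIL step of this pipeline might look as follows; the event records and field names are assumptions made for the sketch, since the actual processing is performed by the TITAN/TTCN-3 tooling.

```python
# Illustrative verdict step (not the TITAN implementation): decide PASS/FAIL
# from sniffer events that have already been parsed into dictionaries.
def verdict(events, expected_next_hop):
    """PASS if, after a detected drop, traffic is rerouted as expected."""
    drop_seen = False
    for ev in events:
        if ev["type"] == "drop":
            drop_seen = True
        elif ev["type"] == "forward" and drop_seen:
            return "PASS" if ev["next_hop"] == expected_next_hop else "FAIL"
    return "FAIL"  # no reroute observed after the drop

# Hypothetical event trace for the black-hole scenario.
events = [
    {"type": "forward", "next_hop": "malicious"},
    {"type": "drop"},
    {"type": "forward", "next_hop": "intermediate"},
]
assert verdict(events, "intermediate") == "PASS"
```
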

In this experiment, a percentage of the nodes constituting the WSN under test will be programmed as misbehaving nodes that must be detected by benevolent nodes and by-passed thanks to the trust-aware routing module, by properly applying the countermeasures for each type of attack.

Prior to the test execution, the following steps must be performed in order to properly set up the network where the testing will take place:

• Configuration and pre-­installation

1. Configure FIT-­IOT

https://www.iot-lab.info/tutorials/configure-your-ssh-access/

https://www.iot-lab.info/tutorials/contiki-compilation/ (steps 1 and 2)

2. Copy the files of the experiment to your local machine

cd ~/

git clone http://83.235.169.221/gitlab/pkarkazis/armour.git

3. Access the folder of the contiki border router and compile it for the M3 devices of the FIT IoT-Lab

cd /armour/trust_exp/contiki/examples/ipv6/rpl-border-router

make TARGET=iotlab-m3

cp border-router.iotlab-m3 ../../../bins/border-router.iotlab-m3

4. Access the folder of the coap server and compile it for the M3 devices of the FIT IoT-Lab

cd /armour/trust_exp/contiki/examples/iotlab/07-er-coap-server

make TARGET=iotlab-m3

cp er-coap-server.iotlab-m3 ../../../bins/er-coap-server.iotlab-m3

5. Access the folder of the coap server and compile it, as a malicious node, for the M3 devices of the FIT IoT-Lab

cd /armour/trust_exp/contiki/examples/iotlab/07-er-coap-server

make TARGET=iotlab-m3 WITH_BLACK_HOLE=1

cp er-coap-server.iotlab-m3 ../../../bins/black_hole_attaker.iotlab-m3

• Execution

1. Access FIT IoT-Lab (https://www.iot-lab.info) to execute the experiment


- Click on "New Experiment".

- Select a name and a duration (at least 20 minutes).

- Select the following nodes (m3:at86rf231) from the Grenoble site: 206, 210, 214, 222, 226, 246, 250, then click "Next".

- Set the profile for node m3-214.grenoble.iot-lab.info to sniffer, then "Add Association", "Submit" and "Yes".

2. Click on the running experiment to check its node identifiers (m3-210, m3-214, ...)

3. Establish port forwarding between the border router and your local machine

ssh <login>@grenoble.iot-lab.info -N -L 2000:m3-206:20000 &

4. Establish the tunnel over the forwarded local port (in this case, 2000). Leave this terminal open; it is necessary for the tunnel.

cd ~/armour/trust_exp/contiki/tools

sudo ./tunslip6 aaaa::1/64 -L -a localhost -p 2000

5. Open another terminal and install the firmware in the devices.

cd ~/armour/trust_exp/contiki/bins

auth-cli -u user

node-cli --update er-coap-server.iotlab-m3 -l grenoble,m3,210+214+222+246+250

node-cli --update er-coap-server.iotlab-m3 -l grenoble,m3,226

node-cli -l grenoble,m3,226 --stop

node-cli --update border-router.iotlab-m3 -l grenoble,m3,210

4.5.6 Large-Scale Testbed Implementation

4.5.6.1 Infrastructure characterisation

The infrastructure for the large-scale experimentation remains the same as described in section 4.5.5.1. The only difference is the number of nodes comprising the testing environment.

4.5.6.2 Scenario setup

The scenario setup for large-scale experimentation remains the same as described in section 4.5.5.2; only the configuration of the tests changes (number of nodes, number of malicious nodes, etc.).


4.5.7 Data Collection

Apart from the metric that reveals whether the test execution has been successful, there are other metrics that can be collected and validated, leading to useful results with regard to the certification process and the level of security obtained in each case (test configuration).

In this experiment, the following metrics will be collected, processed and stored per test execution:

• Number of nodes comprising the network;
• Number of malicious nodes;
• Type of malicious nodes;
• Routing metrics utilized for mitigating the attack;
• Number (or percentage) of failed/unsuccessful communications;
• Time of test execution;
• Frequency of message exchange;
• Messages from different layers (ICMP, COAP);
• Successful/unsuccessful test execution.
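As an illustration, a per-execution record combining these metrics might be serialised as JSON; the field names and values below are hypothetical, not a defined ARMOUR schema.

```python
import json

# Hypothetical record for one test execution; field names are illustrative,
# chosen to mirror the metrics listed above, not a defined ARMOUR schema.
record = {
    "nodes_total": 8,
    "malicious_nodes": 1,
    "malicious_type": "black_hole",
    "routing_metric": "trust-aware",
    "failed_communications_pct": 3.2,
    "execution_time_s": 1200,
    "message_frequency_hz": 0.5,
    "layers_observed": ["ICMP", "COAP"],
    "verdict": "PASS",
}
print(json.dumps(record))
```
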

4.5.8 Experiment Validation

This experiment consists of several test patterns, each of which has a slightly different validation process, as described above. In general, the validation process includes the following steps: a WSN is set up with an automated process (bash script); the software module (TITAN) requests the list of nodes and services within the network from the Border Router; the external application requests data (CoAP) from one or more nodes (depending on the specific scenario); the sniffer collects messages and data and sends them to another software module (TITAN) that processes (and stores) the data. These metrics are then processed, leading either to the acceptance or the failure of the experiment validation. Finally, as already discussed, the test runs several times with different configurations (number of benevolent nodes, number and position of malicious nodes, types of attacks, etc.).

4.5.9 Experiment Evaluation

An IoT Platform is secure if all generated tests executed on the platform under test have a pass result (values beyond predefined thresholds). Thus, the important collected data is the pass/fail result of the tests executed on the platform.

In some cases, the evaluation of the experiment cannot be characterized simply as pass or fail, but must be expressed as a gradation. As an example, one can evaluate the result by the percentage of dropped packets; in such cases, thresholds must be put in place, along with a characterization of the experiment result (e.g. fail, success, excellent).
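A graded verdict of this kind can be sketched as a simple threshold function; the threshold values below are placeholders that would come from the labelling scheme, not defined ARMOUR values.

```python
# Sketch of a graded verdict based on the percentage of dropped packets.
# Thresholds are placeholders, to be supplied by the labelling scheme.
def grade(dropped_pct, fail_above=20.0, excellent_below=1.0):
    if dropped_pct > fail_above:
        return "fail"
    if dropped_pct < excellent_below:
        return "excellent"
    return "success"

assert grade(0.5) == "excellent"
assert grade(5.0) == "success"
assert grade(35.0) == "fail"
```
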

4.6 EXP 6: Secure IoT Service Discovery

4.6.1 Objectives

The main objective of this experiment is to define the process for executing a set of experiments that certify the robustness and efficiency of secure service discovery, achieved by innovative solutions combining CoAP with the DTLS protocol. The validation and certification process will cover several threats as well as countermeasures to be applied, and will validate their efficiency in mitigating these vulnerabilities in large-scale experimentation infrastructures.

From the point of view of the experiment description, specially developed software modules (TTCN-3, TITAN) are needed to deal with the specificities of the experiment execution and, most importantly, with the collection of data useful for characterizing the test execution result (PASS/FAIL).

4.6.2 Scenario Description

The scenario will include registration, key exchange over a secure channel, and attempts to break the security measures of a lightweight software library that provides a datagram server with DTLS support for use on constrained IoT devices, as depicted in Figure 20.

Figure 20 -­ Secure Service Discovery scenario and demonstration.

The wireless sensor network will consist of several nodes connected to the Border Router. In such an environment, we will demonstrate the performance and validate the efficiency of the security solutions imposed by the ARMOUR security toolkit, by testing different encryption and integrity methods and libraries installed on nodes and gateways. Thus, the scenario will validate service and resource discovery based on a CoAP server enhanced with the DTLS protocol. Moreover, as discussed in section 3.3.3.2, the utilization of encryption protocols will be included as a metric in the trust-aware routing scenario, integrating both experiments and therefore addressing more vulnerabilities.

The entities utilized in this experiment are:


Network Entity Name: Device
Main function: Sensor nodes comprising the WSN under test, using keys to set up secure communication channels.
Operating System: ContikiOS
Information Consumed: Information on sensing data; secure DTLS channel establishment.
Information Produced: Information on sensing data; secure DTLS channel establishment.
Communicates with: Border Router, external application (client), devices.

Network Entity Name: Border Router
Main function: The Border Router bridges (interconnects) the network of wireless devices and the external applications through the Internet. The Border Router also sets up the WSN tree.
Operating System: Dual component: ContikiOS with border-router support, connected through a serial port to software running on a generic operating system (e.g. Linux).
Information Consumed: COAP messages over DTLS.
Information Produced: COAP messages over DTLS.
Communicates with: Device and external application.

Network Entity Name: External application (service client)
Main function: A service developed for the purposes of this experiment that generates traffic outside the FIT IoT-Lab sensor network and communicates with the Border Router of the WSN under test.
Operating System: Linux/Java
Information Consumed: COAP messages over IPv6.
Information Produced: COAP messages over IPv6.
Communicates with: Device and external application.

Moreover, for the test execution and verification, the following entities are needed:

1. a sniffer that overhears the messages exchanged among the sensor nodes within a given area of interest in the WSN;

2. a software module that collects the data captured by the sniffer and processes it accordingly, to check the validity of the test procedure and the level of security with respect to the ARMOUR labelling scheme.

It is highlighted that the developed traffic-generating service, as well as the software module that collects and processes data from the sniffer, are hosted in a cloud Virtual Machine (VM) provided by FIT Cloud, while the sensor nodes and the sniffer node are hosted within the IoT Lab. As described in section 3.3.3.2, we have tried to create common network topologies and testing configurations that allow us to run both experiment 5 and experiment 6, in an effort to create synergies between the experiments and manage the effort during the project lifetime.

The main scenario includes attempts by nodes acting maliciously either to overhear the encrypted messages and recover the plaintext, to impersonate an authenticated node to gain access, to destroy the network routing tree, or to steal information (keys) stored in the benevolent nodes.

4.6.3 Application of Experimentation Approach

4.6.3.1 Definition of the experiment scope/focus

The main focus of this experiment is to validate the security hardening gained through the establishment of a DTLS secure channel with COAP.

As is clear from the standardized protocols this experiment deals with, it focuses on encryption techniques at Layers 4 and 5 applied in constrained devices, with the purpose of hardening the level of security, with implications for confidentiality, integrity and availability. It is highlighted that experiment 6 is complementary to experiment 5 with respect to the development of the ARMOUR security toolkit.

4.6.3.2 Identification of the threats to be tested in the experiment

Table 8 – Threats addressed by EXP6


4.6.4 Test Patterns Design

With respect to the considered vulnerabilities for the IoT platforms under test, the experiment will focus on the following test patterns, for which more details can be found in D2.1 and D2.2:

• TP_ID1 - Resistance to unauthorized access, modification or deletion of keys
• TP_ID4 - Resistance to alteration of requests
• TP_ID5 - Resistance to replay of requests
• TP_ID6 - Run unauthorized software
• TP_ID8 - Resistance to eavesdropping and man in the middle
• TP_ID13 - Detection of insecure encryption and storage of information

Test pattern ID TP_ID1

Stage Under normal network operation

Protocol DTLS, COAP

Property tested Resistance to unauthorized access, modification or deletion of keys

Test diagram

Countermeasure | Description | Related threats
Mutual authentication | A security association is established between servers and clients, providing mutual authentication and strong encryption | V1-V5
Alteration of messages | A mechanism is established between the communicating devices which provides resistance to alteration of requests | V8
Replay protection | The protocol includes functionality to detect whether all or part of a message is an unauthorised repeat of an earlier message or part of a message | V9
Integrity verification | The integrity of software images received must be verified by devices | V10
Proven resistance to man-in-the-middle attacks | The security association between communicating entities uses protocols which are proven to resist man-in-the-middle attacks | V13


Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that issues key deletion/replacement requests to Server node.

Steps:

Note: The Malicious node is within transmission range of the Server node.

1. Malicious node sends a key deletion/replacement request to Server node.
2. Server node receives the request and initiates the Authentication handshake procedure.
3. Server node requests authentication credentials from the Malicious node.
4. Malicious node replies with generated credentials.
5. Server node validates the authentication credentials.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

References

Vulnerability: V1-V5 (Resistance to unauthorized access, modification or deletion of keys)
Security Objective for Labelling & Certification: Confidentiality, Integrity, Availability


Test pattern ID TP_ID4

Stage Under normal network operation

Protocol DTLS, COAP

Property tested Resistance to alteration of requests

Test diagram

Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node is overhearing the channel and is within transmission range of Server and Client node.

1. Malicious node starts overhearing the channel.
2. Client node sends Request 1 to Server node.
3. Server node replies to Request 1 with Message 1.
4. Malicious node overhears Message 1 and modifies it.
5. Malicious node forwards the modified Message 1 to the Client node.
6. Client node initiates the authentication procedure.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

References
Vulnerability: V8 (Resistance to alteration of requests)
Security Objective for Labelling & Certification: Confidentiality, Integrity, Availability
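The integrity mechanism exercised by this test pattern can be illustrated with a message authentication code. DTLS uses its own record-layer protection, so the snippet below is only a sketch of the principle, using HMAC-SHA256 and a placeholder pre-shared key.

```python
import hashlib
import hmac

# Illustrative only: DTLS protects record integrity at its own layer; here
# the principle is shown with HMAC-SHA256 over a CoAP-like payload.
PSK = b"pre-shared-key"  # placeholder key, not a real credential

def protect(payload):
    """Attach an authentication tag to the payload."""
    return payload, hmac.new(PSK, payload, hashlib.sha256).digest()

def verify(payload, tag):
    """Constant-time check that the payload was not altered in transit."""
    expected = hmac.new(PSK, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg, tag = protect(b"temperature=21")
assert verify(msg, tag)                    # unmodified message accepted
assert not verify(b"temperature=99", tag)  # altered message rejected
```
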

Test pattern ID TP_ID5

Stage Under normal network operation

Protocol DTLS, COAP

Property tested Resistance to replay of messages

Test diagram

Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node is overhearing the channel and is within transmission range of Server and Client node.

1. Malicious node starts overhearing the channel.
2. Client node sends Request 1 to Server node.
3. Server node replies to Request 1 with Message 1.
4. Malicious node overhears Message 1 and retransmits it to Client node.
5. Client node initiates the authentication procedure.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

References
Vulnerability: V9 (Resistance to replay of messages)
Security Objective for Labelling & Certification: Replay attack
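The anti-replay behaviour exercised here can be sketched with a sequence-number check in the spirit of the DTLS sliding-window mechanism (RFC 6347); this simplified model keeps a set of seen sequence numbers rather than the fixed-size bitmap the protocol specifies.

```python
# Simplified sketch of DTLS-style anti-replay protection using record
# sequence numbers; the real mechanism (RFC 6347) uses a bitmap window.
class ReplayWindow:
    def __init__(self, size=64):
        self.size = size     # records older than highest - size are rejected
        self.highest = -1
        self.seen = set()

    def accept(self, seq):
        """Return True for fresh records, False for replays or stale ones."""
        if seq <= self.highest - self.size or seq in self.seen:
            return False
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        return True

w = ReplayWindow()
assert w.accept(1)      # fresh record accepted
assert w.accept(2)
assert not w.accept(1)  # retransmitted record rejected as a replay
```
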

Test pattern ID TP_ID6

Stage Under normal network operation

Protocol DTLS, COAP

Property tested Run unauthorized software

Test diagram

Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that issues software update requests to Server node.


Steps:

Note: The Malicious node is within transmission range of Server node.

1. Malicious node sends a software update request to Server node.
2. Server node receives the request and initiates the Authentication handshake procedure.
3. Server node requests authentication credentials from the Malicious node.
4. Malicious node replies with generated credentials.
5. Server node processes the authentication credentials.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

References
Vulnerability: V10 (Run unauthorized software)
Security Objective for Labelling & Certification: Unauthorized access

Test pattern ID TP_ID8

Stage Under normal network operation

Protocol DTLS, COAP

Property tested Resistance to eavesdropping and man-in-the-middle attack

Test diagram


Test description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node and attempts to decrypt them.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node is overhearing the channel and is within transmission range of Server and Client node.

1. Malicious node starts overhearing the channel.
2. Client node sends Request 1 to Server node.
3. Server node replies to Request 1 with Message 1.
4. Malicious node overhears Message 1.
5. Malicious node attempts to decrypt Message 1.

The test is considered successful if the Malicious node cannot read the contents of Message 1.

References
Vulnerability: V13 (Mitigating eavesdropping and MITM attack)
Security Objective for Labelling & Certification: Confidentiality
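The confidentiality property behind this test can be illustrated with a toy keystream cipher. This is emphatically not DTLS encryption, only a minimal sketch showing that an eavesdropper holding the wrong pre-shared key recovers garbage.

```python
import hashlib

# Toy illustration (NOT DTLS): a keystream derived from the pre-shared key
# by hashing it. An eavesdropper with a different key gets only garbage.
def xor_stream(key, data):
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

ciphertext = xor_stream(b"correct-psk", b"temperature=21")

# The legitimate receiver recovers the plaintext; the eavesdropper does not.
assert xor_stream(b"correct-psk", ciphertext) == b"temperature=21"
assert xor_stream(b"guessed-psk", ciphertext) != b"temperature=21"
```
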

4.6.5 In-house Implementation

4.6.5.1 Infrastructure characterisation

This experiment consists of 1) sensor nodes (including benevolent and malicious nodes, as well as a node acting as sniffer) running ContikiOS, and 2) software modules and services running on a Virtual Machine on FIT Cloud under Ubuntu 16.04 LTS (a traffic generator for injecting messages into the WSN through the Border Router, and a software implementation of the engine that processes the results of the experiment).

With respect to the sensor nodes, the devices (M3 open nodes) comprising the network are based on an STM32 (ARM Cortex-M3) 32-bit microcontroller with an Atmel 2.4 GHz radio interface. The OS embedded in the M3 nodes is ContikiOS 2.7.


Figure 21 -­ M3 node overview

The VM hosting the software modules and services for the experiment includes a service for creating the wireless sensor network, in the sense of compiling the appropriate version of the source code for the nodes comprising the network, generating traffic in the wireless sensor network, sniffing message exchanges within the network, and collecting/processing the respective data. For these services, the TITAN test tool, the TTCN-3 execution tool, is used, communicating through the MQTT protocol with the System Under Test (SUT) within the IoT-Lab. The test tool (TITAN) is accessible via SSH in order to launch automated test scripts.

4.6.5.2 Scenario setup

The testing scenarios include the verification of encryption algorithms based on the DTLS and COAP implementation. Based on this principle, the setup consists of several nodes that communicate securely through the DTLS protocol, while malicious nodes try to exploit the vulnerabilities described above to gain access to the nodes or to disrupt the communication between benevolent nodes.

As also described for experiment 5, a sniffer collects messages and data in a predefined area of the network (the scenario specificities depend on each experiment configuration) and sends the data to another software module (TITAN) that processes (and stores) it, leading to the verification (or not) of the test, which can be evaluated as PASS/FAIL.

Prior to the test execution, the following steps must be performed in order to properly set up the network where the testing will take place:

• Configuration and pre-­installation

1. Configure FIT-­IOT

https://www.iot-lab.info/tutorials/configure-your-ssh-access/

https://www.iot-lab.info/tutorials/contiki-compilation/ (steps 1 and 2)

2. Copy the files of the experiment to your local machine

cd ~/

git clone http://83.235.169.221/gitlab/pkarkazis/armour.git


3. Access the folder of the contiki border router and compile it for the M3 devices of the FIT IoT-Lab

cd armour/sdis_exp/contiki/examples/sdis/rpl-border-router

make TARGET=iotlab-m3

cp border-router.iotlab-m3 ../../../bins/border-router.iotlab-m3

4. Access the folder of the secure coap client and compile it for the M3 devices of the FIT IoT-Lab

cd armour/sdis_exp/contiki/examples/sdis/psk-client

make TARGET=iotlab-m3

cp psk-client.iotlab-m3 ../../../bins/psk-client.iotlab-m3

5. Access the folder of the secure coap client and compile it, as a malicious node, for the M3 devices of the FIT IoT-Lab

cd armour/sdis_exp/contiki/examples/sdis/psk-client

make TARGET=iotlab-m3 WITH_RAMDOM_KEY=1

cp psk-client.iotlab-m3 ../../../bins/malicious-psk-client.iotlab-m3

6. Access the folder of the secure coap server and compile it for the M3 devices of the FIT IoT-Lab

cd armour/sdis_exp/contiki/examples/sdis/psk-server

make TARGET=iotlab-m3

cp psk-server.iotlab-m3 ../../../bins/psk-server.iotlab-m3

4.6.6 Large-Scale Testbed Implementation

4.6.6.1 Infrastructure characterisation

The infrastructure for the large-scale experimentation remains the same as described in section 4.6.5.1. The only difference is the number of nodes comprising the testing environment.

4.6.6.2 Scenario setup

The scenario setup for large-scale experimentation remains the same as described in section 4.6.5.2; only the configuration of the tests changes (number of nodes, number of malicious nodes, different lengths of encryption keys, different encryption algorithms, etc.).

4.6.7 Data Collection

In each test, we have to collect the result (whether the test has been satisfactory or not). However, other data is also important for deciding the level of security. We have to collect information about the time the experiment has taken, its initial parameters (length of keys, cryptographic suite used, specific versions of the protocols involved, etc.) and the vulnerabilities relevant for an IoT Platform (likelihood and impact).


In some test patterns, such as DoS attack protection, we also have to collect information about the number of attackers and the frequency of the requests.

4.6.8 Experiment Validation

This experiment consists of several test patterns, each of which has a slightly different validation process, as described above. In general, the validation process includes the following steps: a WSN is set up with an automated process (bash script); the software module (TITAN) requests the list of nodes and services within the network from the Border Router; the external application tries to establish a DTLS connection with one or more nodes (depending on the specific scenario) in order to exchange sensing data; the sniffer collects messages and data and sends them to another software module (TITAN) that processes (and stores) the data. The metrics are then processed, leading either to the acceptance or the failure of the experiment validation. Finally, as already discussed, the test runs several times with different configurations (number of nodes, different encryption algorithms, key lengths, etc.).

4.6.9 Experiment Evaluation

An IoT Platform is secure if all generated tests executed on the platform under test have a pass result. Thus, the important collected data is the pass/fail result of the tests executed on the platform. The most frequently used validation process in this experiment will be the establishment (or not) of secure DTLS channels by unauthorized devices that have either overheard the key or broken the encryption.

4.7 EXP 7: Secure IoT platforms

4.7.1 Objectives

Experiment 7 addresses security issues on IoT Platforms implementing a given standard, for instance oneM2M. It has two objectives: (i) an approach for testing security threats in an IoT Platform, and (ii) compliance testing of security requirements issued from the standard security specifications.

4.7.2 Scenario Description

With the fast growth of IoT, many IoT platforms are emerging. oneM2M is increasingly adopted around the world (238 participating partners and members), with a will to standardise IoT and make it interoperable. This is one of the main reasons we chose oneM2M as the reference standard for our IoT Platforms. In Exp7, our IoT platform will be referred to as the "oneM2M implementation". OM2M provides a horizontal M2M service platform for developing services independently of the underlying network, with the aim of facilitating the deployment of vertical applications and heterogeneous devices.


Figure 22 -­ EXP7 Scenario description

The preceding figure shows the general architecture and components of the IoT setup that will be used as the basis for the security experiment. The IoT setup is composed of two main components: (i) the sensors, and (ii) the IoT platform.

Network Entity Name Application Entity (AE)

Main function The sensor can be a producer that sends data to an IoT platform. The sensor can be a consumer that receives data from the IoT Platform with respect to its authorisation rights.

Operating System Any OS

Information Consumed The sensor should have permissions to publish data on the IoT Platform. If necessary, an authorization procedure is established between the sensors and/or the IoT Platform.

Information Produced The entity can send requests to the IoT Platform with respect to the oneM2M standard.

Communicates with • Gateway (IoT Platform) • (optional) Credential Manager


The devices (sensors) chosen for EXP7 are hosted in the IoT Lab. The oneM2M IoT platform is hosted in a Virtual Machine (VM) provided by Fit Cloud, as the IoT Lab nodes (M3 and A8) have resources too constrained to host it. The devices, which are data producers or consumers, communicate with the IoT platform through the CoAP+DTLS protocol. The payloads of the messages exchanged in the process are standardized by the oneM2M standard. A device can send its latest data (for example a temperature value) to the IoT platform, and it can ask for some data (for example the temperature value of another device). The security test scenario must set up a precise order on what is going to be tested first (data production/consumption).
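The production/consumption flow above can be pictured with an in-memory stand-in for the platform; the class and method names below are illustrative only and do not correspond to oneM2M resource primitives (a real deployment exchanges oneM2M request primitives over CoAP+DTLS):

```python
# In-memory stand-in for the oneM2M platform in EXP7; class and method names
# are illustrative only, not oneM2M resource primitives. A real deployment
# exchanges oneM2M request primitives over CoAP+DTLS.

class IoTPlatform:
    def __init__(self):
        self._store = {}   # resource path -> latest content instance
        self._acl = {}     # resource path -> set of AE-IDs allowed to read

    def publish(self, ae_id, path, value, readers=()):
        """A producer sensor sends its latest data to the platform."""
        self._store[path] = value
        self._acl[path] = set(readers) | {ae_id}

    def retrieve(self, ae_id, path):
        """A consumer asks for data; access is checked against the ACL."""
        if ae_id not in self._acl.get(path, set()):
            raise PermissionError(f"{ae_id} is not authorised for {path}")
        return self._store[path]

platform = IoTPlatform()
platform.publish("AE-temp-1", "/cse/temp1/la", 21.5, readers={"AE-display"})
print(platform.retrieve("AE-display", "/cse/temp1/la"))   # 21.5
```

The access check in `retrieve` mirrors the platform's role of serving consumers "with respect to its authorisation rights".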

4.7.3 Application of Experimentation Approach 4.7.3.1 Definition of the experiment scope/focus EXP7 focuses on testing IoT platforms implementing the oneM2M standards. While FIWARE was considered as an alternative at the beginning, it appears that FIWARE, through its evolution at ETSI as ISG CIM, is focusing on the pure data layer with reduced IoT positioning. oneM2M is the most relevant in the ARMOUR IoT context. Currently, two open-source IoT Platforms are developed as compliant with oneM2M, and we consider both as platforms under test: Mobius (http://iotmobius.com/) and OM2M (http://www.eclipse.org/om2m/).

Mobius is an IoT Platform developed by KETI. The Mobius Platform is a system that allows network communication to run smoothly between things, making the communication of devices and applications easier. The Mobius Platform establishes an ecosystem where anyone can make and use IoT services in an open development environment.

The second platform is OM2M. The Eclipse OM2M project, initiated by LAAS-CNRS, is an open-source implementation of the oneM2M and SmartM2M standards. It provides a horizontal M2M service platform for developing services independently of the underlying network, with the aim to facilitate the deployment of vertical applications and heterogeneous devices.

Network Entity Name IoT Platform

Main function The IoT Platform stores data received from sensors and allows access to authorized application entities (e.g. consumer sensors)

Operating System N/A

Information Consumed The IoT Platform has defined access control policies for the sensors with which it interacts.

Information Produced The entity can receive requests from the AE with respect to the oneM2M standard.

Communicates with • Sensor (through gateway)

However, none of the platforms under test currently implements all the security functions related to the considered vulnerabilities. Thus, in addition to those vulnerabilities, the experiment will focus on security functions such as access control policies, security association establishment, etc., as defined by the oneM2M security documents.

4.7.3.2 Identification of the threats to be tested in the experiment Experiment 7 in D1.1 identified a set of threats that could be of high importance from a security point of view. With respect to their development status, we reduce the experiment scope to two vulnerabilities that are not dependent on the platform evolution. We specifically consider injection and replay of requests, as listed in the table below.

Table 9 – Threat addressed by EXP7

4.7.4 Test Patterns Design With respect to the vulnerabilities considered for the IoT platforms under test, the experiment will focus on the following test patterns, for which more details can be found in D2.1:

• TP_ID5 – Resistance to replay of requests
• TP_ID10 – Resistance to Injection Attacks

Test Pattern ID TP_ID5

Stage Request message

Entities Platforms (Servers), Application Entities (Sensors)

Protocol SAEF -­ (D)TLS

Property tested Resistance to replay of any request from non-­authorised entities.

Countermeasure Description Related threats

Replay protection The IoT Platform should implement a secure protocol based on (D)TLS, called Security Association Establishment Framework (SAEF), to protect the communication between the two entities from replay attacks

V9

Alteration of messages A mechanism is established in the IoT Platform which provides resistance to alteration of requests using SQL Injection

V16


Test diagram

Test description

The communication is between a Smart object and oneM2M implementation (IUT – IoT Platform under test).

The oneM2M standard has developed a secure protocol based on (D)TLS, called Security Association Establishment Framework (SAEF), to protect the communication between the two entities from replay attacks. The test procedure will focus on verifying whether such a feature is correctly implemented in the IoT platform, in our case Mobius or OM2M.

References Vulnerability: V9 (Resistance to replay of messages) Security Objective for Labelling & Certification: Resistance to Replay attacks
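The replay-resistance property that TP_ID5 targets can be illustrated with the classic sliding-window check that DTLS-style channels apply to record sequence numbers. This is a simplified sketch of the principle only, not the SAEF itself:

```python
# Simplified sketch of a DTLS-style anti-replay check: each record carries a
# sequence number, and the receiver keeps a sliding window of numbers already
# seen. This only illustrates the principle; the real mechanism is the SAEF
# specified by oneM2M on top of (D)TLS.

class ReplayGuard:
    def __init__(self, window=64):
        self.window = window
        self.highest = -1        # highest sequence number accepted so far
        self.seen = set()        # numbers accepted inside the window

    def accept(self, seq):
        """Return True for a fresh record, False for a replayed/stale one."""
        if seq <= self.highest - self.window:
            return False                      # older than the window: reject
        if seq in self.seen:
            return False                      # exact replay: reject
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # forget numbers that fell out of the window
            self.seen = {s for s in self.seen if s > self.highest - self.window}
        return True

guard = ReplayGuard()
guard.accept(1)                  # fresh request: accepted
print(guard.accept(1))           # False: the replayed request is refused
```

A TP_ID5 execution passes when the IUT behaves like the guard above, i.e. a captured request resent verbatim is refused.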

Test Pattern ID TP_ID10

Stage Request message

Entities Platforms (Servers), Application Entities (Sensors)

Protocol N/A

Property tested Resistance to SQL injection.


Test diagram

Test Description

The communication is between an attacker and oneM2M CSE implementation.

This is dependent on the oneM2M implementation, not on the standard itself. The experiment considers SQL Injection a relevant threat for the IoT Platforms under test.

References Vulnerability: V16 (Injection) Security Objective for Labelling & Certification: Confidentiality, Integrity, Authorization, Authentication
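As an illustration of the property TP_ID10 checks, the sketch below shows how a parameterised query keeps an injected payload inert; sqlite3 is only a stand-in backend, and the table name and payload are hypothetical:

```python
# Illustration of the property that TP_ID10 checks: a request whose content
# embeds SQL must be stored as plain data, not executed. sqlite3 is only a
# stand-in backend; the table name and payload below are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE resources (name TEXT, content TEXT)")

def create_resource(name, content):
    # Parameterised query: attacker-controlled values never enter the SQL text.
    db.execute("INSERT INTO resources VALUES (?, ?)", (name, content))

payload = "x'); DROP TABLE resources; --"
create_resource("ae1", payload)       # the payload is stored literally

rows = db.execute("SELECT name, content FROM resources").fetchall()
print(rows)   # [('ae1', "x'); DROP TABLE resources; --")]
```

A test for this pattern fails if the attacker-supplied string alters the datastore (here, if the `resources` table were dropped).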

4.7.5 In-house Implementation Within the scope of the ARMOUR project, EXP7 created an in-house implementation of the testbed for oneM2M. Using the Large-Scale testing framework, security test cases are created to cover security requirements extracted from the oneM2M security specification (for more details refer to D1.1, Section 8.3). Based on the formalization of the requirements in the form of test purposes, the generated test cases are contributed to the oneM2M standardisation group. For the tests to be accepted in oneM2M, they must go through a long procedure. Every test must be formally written in the oneM2M TPLan format and linked to a requirement from the oneM2M standard. Then, the oneM2M TST Working Group must accept the test. Because the tests produced in EXP7 are security tests, they must also be accepted by the SEC Working Group. The TTCN-3 tests, too, are validated by the oneM2M Task Force 001.

The tests are produced using Smartesting CertifyIt with a UML model as input. The model contains all the information needed to produce TPLan and TTCN-3 tests at the same time, using the corresponding HTML and TTCN-3 publishers. The TPLan test in Figure 23 shows all the information needed for a specific test: which requirement it refers to, the system configuration, the conditions needed to run the test, and the actual test messages.

TP Id TP/oneM2M/CSE/SEC/ACP/BV/002
Test objective Check that the IUT accepts the creation of a <accessControlPolicy> resource with selfPrivileges attribute having multiple access control rules
Reference TS-0001 9.6.2-2 & TS-0001 10.2.21
Config Id CF01

PICS Selection PICS_CSE
Initial conditions with the IUT being in the "initial state" and the IUT having registered the AE and the AE having privileges to perform CREATE operation on the resource TARGET_RESOURCE_ADDRESS

Expected behaviour Test events Direction when the IUT receives a valid CREATE request containing To set to TARGET_RESOURCE_ADDRESS and From set to AE_ID and primitiveContent containing <accessControlPolicy> resource containing selfPrivileges attribute containing accessControlRule attribute containing ACCESS_CONTROL_RULE_1 and ACCESS_CONTROL_RULE_2

IUT ← AE

then the IUT sends a Response message containing Response Status Code set to 2001 (CREATED)

IUT → AE

Figure 23 -­ OneM2M TPlan for Access Control Policy

The TTCN-3 messages are generated from the same model as the TPLan tests. They contain calls to already prepared TTCN-3 functions used for the preconditions (as described in the corresponding Test Purpose) and to prepare the main message to be sent. The rest of the test consists of sending the message and waiting for the corresponding answer to be evaluated. We illustrate the Test Pattern for this scenario in the table below.

TP ID TP/oneM2M/CSE/SEC/ACP/BV/002

Entities IoT Platforms (Servers, Gateways), Application Entities (Sensors)


Property tested The IUT accepts the creation of a <accessControlPolicy> resource with selfPrivileges attribute having multiple access control rules

Reference TS-­0001 9.6.2-­2 & TS-­0001 10.2.21

Test Description

with the IUT being in the "initial state" and the IUT having registered the AE and the AE having privileges to perform CREATE operation on the resource TARGET_RESOURCE_ADDRESS

when the IUT receives a valid CREATE request containing To set to TARGET_RESOURCE_ADDRESS and From set to AE_ID and primitiveContent containing <accessControlPolicy> resource containing selfPrivileges attribute containing accessControlRule attribute containing ACCESS_CONTROL_RULE_1 and ACCESS_CONTROL_RULE_2

then the IUT sends a Response message containing Response Status Code set to 2001 (CREATED)

4.7.5.1 Infrastructure characterisation The Experiment 7 in-house infrastructure is based on a local network gathering TITAN (the test execution tool) and the oneM2M implementations (IoT platforms implementing the oneM2M standard). We illustrate it in Figure 24.

Figure 24 In-­house infrastructure for EXP 7


The oneM2M implementation is installed on a local server and on an in-house node, for instance a sensor. Both implementations are IoT Platforms under test, which ensures the genericity of the in-house testbed, i.e. that the test cases can be executed on any oneM2M implementation. The test system TITAN is connected to the oneM2M implementation through two different connections. The interface being tested uses either the HTTP, CoAP or MQTT protocol. The Upper Tester interface is used to control the implementation, if necessary.

4.7.5.2 Scenario setup The testing scenario is executed as follows: TITAN sends a request primitive to the IoT Platform Under Test (IUT), simulating the possible device requests. The IUT receives the request and processes it. The IUT then transmits the result back to TITAN, where it can be evaluated as pass/fail/inconclusive as described in 4.7.9.
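The loop above can be sketched as follows; `send_primitive` and the status codes are illustrative placeholders for the TITAN/IUT exchange, not part of the actual tooling:

```python
# Hedged sketch of the in-house test loop: a TITAN-side driver sends one
# request primitive to the IUT and maps the outcome onto the three verdicts
# of Section 4.7.9. send_primitive and the status codes are placeholders.

def evaluate(send_primitive, request, expected_status, timeout=2.0):
    try:
        response_status = send_primitive(request, timeout=timeout)
    except TimeoutError:
        return "inconclusive"    # the request was never answered by the IUT
    return "pass" if response_status == expected_status else "fail"

# Stubbed IUT that always answers 2001 (CREATED), as in the ACP test purpose:
stub_iut = lambda request, timeout: 2001
print(evaluate(stub_iut, {"op": "CREATE"}, expected_status=2001))   # pass
```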

4.7.6 Large-Scale Testbed Implementation 4.7.6.1 Infrastructure characterisation EXP7 is run over two different infrastructures: IoT Lab and Fit Cloud. The IoT Lab is where all the devices (consumers/producers) are hosted. Fit Cloud hosts two Virtual Machines running Ubuntu 16.04 LTS, where we installed the IoT platform OM2M (a oneM2M implementation) on one machine and the test tool TITAN on the other.

Figure 25 -­ EXP7 Large Scale infrastructure

The TITAN test tool is a TTCN-3 execution tool. All tests are executed through this cloud-hosted tool. TITAN communicates using the MQTT protocol with an Upper Tester (UT) within the IoT-Lab. The Upper Tester we use in our experiment is a Python implementation of an MQTT client (paho-mqtt, https://pypi.python.org/pypi/paho-mqtt/1.1). The MQTT protocol is a machine-to-machine (M2M)/“Internet of Things” connectivity protocol. Designed as an extremely lightweight publish/subscribe messaging transport, it is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.
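The publish/subscribe pattern that MQTT implements can be mimicked in memory as follows; the real experiment uses the paho-mqtt client against an MQTT broker, and the topic names here are hypothetical:

```python
# In-memory mimic of the MQTT publish/subscribe pattern used between TITAN
# and the Upper Tester. The real experiment uses the paho-mqtt client against
# an MQTT broker; the topic names here are hypothetical.
from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.subs = defaultdict(list)     # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subs[topic]:
            callback(topic, payload)

broker = TinyBroker()
received = []
# The UT subscribes to a command topic and would forward payloads to the node.
broker.subscribe("armour/ut/cmd", lambda topic, payload: received.append(payload))
broker.publish("armour/ut/cmd", b"GET /temp")
print(received)   # [b'GET /temp']
```

Decoupling publishers from subscribers through topics is what keeps the UT's code footprint small on the constrained side of the link.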

The devices hosted in the IoT Lab infrastructure are highly constrained. They are named M3 open nodes and are based on an STM32 (ARM Cortex-M3) micro-controller. They have a 32-bit processor, a new ATMEL radio interface at 2.4 GHz and a set of sensors. The OS embedded within the M3 nodes is ContikiOS.

Figure 26 -­ M3 node overview

Within the IoT Lab, the communication between the UT and the M3 node is established through serial/radio connectivity.

The M3 node sends and receives data to/from the oneM2M implementation via the CoAP+DTLS protocol and uses the oneM2M standard for the data structure.

The required configuration on the Fit Cloud machines hosting the test tool TITAN and the oneM2M implementation is basically a public IP address, to be able to communicate with them as they are cloud-hosted. The IoT platform should be running and accessible on its cloud machine. The test tool (TITAN) must be accessible by SSH to launch automated test scripts. The devices (M3 nodes) must be reserved on the IoT Lab before the test campaign.

In addition, the infrastructure for the large scale remains, for most of the components, the same as described above. The difference is the increased number of devices tested. We consider large-scale testing with approximately 50 to 200 nodes tested at once.

4.7.6.2 Scenario setup The testing scenario is executed as follows: TITAN sends a request primitive to the UT. The UT receives the request and transmits it to the device. The device executes the request and answers back to the UT with the results. The UT then transmits the result back to TITAN, where it can be evaluated as pass/fail/inconclusive, as described in 4.7.9. For large-scale testing, the tests will be executed for each node. The execution can be done simultaneously, which could eventually target DoS attacks on an IoT Platform.
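The per-node, possibly simultaneous execution can be sketched as a thread pool mapping the same test over every reserved node; `run_test` is a placeholder for the full TITAN/UT/device round trip:

```python
# Sketch of the large-scale run: the same test executed against every
# reserved M3 node, here in parallel via a thread pool. run_test is a
# placeholder for the full TITAN -> UT -> device round trip.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def run_test(node_id):
    # Placeholder: a real run would drive node_id through the UT and
    # return "pass", "fail" or "inconclusive".
    return "pass"

nodes = [f"m3-{i}" for i in range(50)]        # 50-200 nodes in the experiment
with ThreadPoolExecutor(max_workers=16) as pool:
    verdicts = Counter(pool.map(run_test, nodes))
print(verdicts)   # Counter({'pass': 50})
```

Raising the pool size until all nodes fire at once is what turns the same harness into the simultaneous-load (DoS-like) variant mentioned above.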

4.7.7 Data Collection Experiment 7 deals with several elements that will allow drawing conclusions about the security evaluation of an IoT Platform:


• Vulnerabilities relevant for an IoT Platform
• Security Test Patterns describing the procedures on how to test the vulnerabilities
• MBT model for automated test generation
• Security Abstract Test Suite
• Security Test Suite concretized in TTCN-3

More specifically, it identifies a set of vulnerabilities relevant for an IoT Platform and a set of security test requirements based on the oneM2M specification documents (as described in detail in D1.1).

In addition, it concretizes a set of test patterns that will allow testing an IoT Platform for the vulnerabilities the patterns cover.

Using the ARMOUR approach, an MBT model for vulnerability and security-compliance testing based on the oneM2M standard was created, which can be used for any IoT Platform implementing the oneM2M standard.

4.7.8 Experiment Validation The MBT Model refers to the oneM2M standard specifications. Nevertheless, threats to validity may exist with respect to our comprehension of some elements. The model has been revised by oneM2M security experts and we believe that this will not harm the validity of the experiment.

In addition, as described in D1.1 and D2.1, we have identified a meaningful set of vulnerabilities and defined the corresponding test patterns for an IoT Platform, based on our experience in security testing and our oneM2M expertise. This might limit the number of chosen vulnerabilities for an IoT Platform implementing another standard, but it certainly does not invalidate the choices made nor the experiment's objectives.

A last threat to validity is that the generated tests, concretized in TTCN-3, could not all be executed on the IoT Platforms under test, i.e. Mobius and OM2M. Part of the generated tests have been executed on both platforms; however, at the time of writing, neither platform implements all the security solutions from the oneM2M standard. We think that this does not lower the validity of the experiment, and contacts have been established with the platform developers to obtain forthcoming versions with the security functions integrated.

4.7.9 Experiment Evaluation The executed tests can have three possible results:

• pass: if the test assertions did not reveal inconsistencies between the expected and the returned result

• fail: if the test assertions revealed an inconsistency between the expected and the returned result

• inconclusive: if, for instance due to external events (e.g., a timeout), the request has not been sent/received by the IUT.

An IoT Platform is secure if all generated tests executed on the platform under test have a pass result. Thus, the collected data is the pass/fail/inconclusive result of the tests executed on the platform. In addition, the level of security of the platform will be further defined by the labelling process proposed in ARMOUR with respect to the tackled security labelling objectives: Confidentiality, Integrity, Authentication, Authorization. More details about the labelling process can be found in D4.1.


5 Results Report In this section a structure to report the results of ARMOUR experiments is presented. These reports should be able to present different types of information about the experiments. This information may be related to the identification of the experiment, allowing the description of aspects like the purpose, date and location of the experiment; or be related to the data collected during the execution of the experiment. Results reports should not only describe the raw data collected during the execution of the experiment, but also provide some conclusions or evaluations produced by analysing the data. Therefore, this report aims to provide a global overview of an experiment, identifying:

• The purpose and configuration of the experiment;
• Identification of the tests being executed;
• Data collected during execution;
• Evaluation/conclusions about the execution of the tests.

5.1 Background analysis

The IEEE defined a Standard for Software and System Test Documentation, IEEE 829-2008¹². This standard specifies a set of documents to represent and describe different stages of the life-cycle of software and system testing. The standard is generic, to cover all types of testing, and allows the documents to be tailored to each situation. Thus, the proposed structures can be either partially implemented or extended to meet the representation requirements. The set of 10 documents proposed by the standard is:

• Master Test Plan (MTP): Document that defines a global test planning and test management document to a project or multiple plans.

• Level Test Plan (LTP): Defines the scope, approach, resource and schedule of the testing activities, defining the features to be tested and the tasks to be performed.

• Level Test Design (LTD): Provides details about the execution of each test, identifying the expected test results and defines the pass/fail criteria.

• Level Test Case (LTC): Specifies the datasets used as input for each test.
• Level Test Procedure (LTPr): Provides details on how each test will run, identifying the preconditions and the steps to be executed.

• Level Test Log (LTL): Provides the chronological record of relevant details about the execution of tests. This document includes references to the case and procedure specifications; identifies the date/time of the experiment and the tester; and describes the test environment. This document specifies a section dedicated to recording activities and events, identifying the time/duration of each experiment execution, the information collected during the execution and the result (pass/fail) of each executed test.

• Anomaly Report (AR): Used to document any event that occurs during the testing process that requires further investigation.

12 “IEEE Standard for Software and System Test Documentation,” in IEEE Std 829-2008, pp. 1-150, July 18 2008. DOI: 10.1109/IEEESTD.2008.4578383


• Level Interim Test Status Report (LITSR): Summarizes the interim results of the designated testing activities and optionally provides evaluations and recommendations based on the results.

• Level Test Report (LTR): Is used to summarize the results of the designated testing activities and to provide evaluations and recommendations based on the results after test execution.

• Master Test Report (MTR): Summarizes the results of the levels of the designated testing activities and provides evaluations based on these results.

These documents refer to either the “master” or the “level”. While documents with the term “master” refer to information that covers the testing of the overall features/aspects of an object, documents with the term “level” refer to specific properties being tested (e.g. security, stress, performance, maintenance, etc.).

Based on an analysis of the types of documents specified in the IEEE 829 standard, one possible approach for the application of this standard to the ARMOUR case consists in assigning the “level” documents to the testing of each of the security properties identified in Deliverable D4.1 – “Definition of the large-scale IoT Security & Trust benchmarking methodology”, while “master” documents may be used to summarise and aggregate information related to the tests of the applicable security properties. Thus, the ARMOUR Results Report will be based on three types of document:

• Security Property Test Log – Based on the Level Test Log, it aims to aggregate all data collected during the execution of a test pattern implemented and executed to test a specific security property. This report also identifies the pass/fail result of the test pattern execution.

• Security Property Test Report – Based on the Level Test Report, it collects the results of all Security Property Test Logs produced over the same scenario configuration to assess a specific security property, and reports the corresponding benchmark.

• Master Test Report – Summarizes all the benchmarks for all the security properties tested in the context of one experiment, giving a general overview of the security aspects of the tested scenario.
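One possible encoding of the three report structures, and of the ID-based references that link them, is sketched below; the field selection is abridged and the class names are illustrative, not mandated by the deliverable:

```python
# One possible encoding of the three report structures and of the ID-based
# references between them; field selection is abridged and the class names
# are illustrative, not mandated by the deliverable.
from dataclasses import dataclass, field

@dataclass
class SecurityPropertyTestLog:
    report_id: str
    security_property: str
    test_result: str                                   # "pass" / "fail"

@dataclass
class SecurityPropertyTestReport:
    report_id: str
    security_property: str
    report_logs: list = field(default_factory=list)    # Test Log report IDs
    benchmark: float = 0.0

@dataclass
class MasterTestReport:
    report_id: str
    benchmark_reports: list = field(default_factory=list)  # per-property report IDs
    security_label: str = ""

log = SecurityPropertyTestLog("LOG-001", "Resistance to replay attacks", "pass")
report = SecurityPropertyTestReport("RPT-001", log.security_property, [log.report_id])
master = MasterTestReport("MTR-001", [report.report_id])
print(master.benchmark_reports)   # ['RPT-001']
```

Keeping only report IDs in the aggregating documents mirrors the navigation-by-reference between reports described in this section.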

5.2 Report Structures

5.2.1 Security Property Test Log 5.2.1.1 Test Identification This section is used to provide general information that allows identifying the purpose of the tests. It is composed of several fields:

• Report ID – Unique identifier of the log report to support its reference;

• Date (Testing period) – Identification of the starting date of the test and its duration;

• Location – Indication of the location or infrastructure where the test was performed;

• Author – Identification of the individual or the entity that executed the test;

• Security Property – Identification of the security property meant to be benchmarked with the results of the test (Authentication, Resistance to replay attacks, Resistance to dictionary attacks, Resistance to DoS attacks, Integrity, Confidentiality, Resistance to MITM attacks, Authorization, Availability, Fault tolerance, Anonymization);

• General Description – Text field intended to provide additional information about the test execution, e.g. identification of the Experiment that contextualises the test execution, or additional information about specific conditions of the scenario.

5.2.1.2 Test Log This section identifies the test pattern executed, presents the data collected during its execution and identifies the test result. It is composed of:

• Test Description – Provides a reference to the test pattern description (Test Model, TPLan, TTCN-3, description of the test);

• Test purpose summary – Brief description of the purpose of the test pattern;

• Configuration profiles – Reference to the configurations required/performed to execute the test pattern;

• Input Data – Description of the datasets used as input for the execution of the test pattern;

• Test Data – List of references to the datasets produced during the test execution;

• Test Result – Identification of the result of the test pattern execution (Pass/Fail).

5.2.2 Security Property Report 5.2.2.1 Test Identification This section is used to provide general information that allows identifying the purpose of the tests. It is composed of several fields:

• Report ID – Unique identifier of the report to support its reference;

• Date – Identification of the starting date of the test;

• Security Property – Identification of the security property meant to be benchmarked with the results of the test (Authentication, Resistance to replay attacks, Resistance to dictionary attacks, Resistance to DoS attacks, Integrity, Confidentiality, Resistance to MITM attacks, Authorization, Availability, Fault tolerance, Anonymization);

• General Description – Text field intended to provide additional information about the test execution, e.g. identification of the Experiment that contextualises the test execution, or additional information about specific conditions of the scenario.

5.2.2.2 Test Report This section presents the results of all the test patterns executed in the context of the security property assessed in this report.

• Report Logs – List of references to the Test Logs reporting the results of the test patterns used to benchmark that security property;

• Security Property Benchmark – Result of the benchmarking of the security property.

5.2.3 Master Test Report 5.2.3.1 Test Identification This section is used to provide general information that allows identifying the purpose of the tests. It is composed of several fields:

• Report ID – Unique identifier of the report to support its reference;

• Date – Identification of the starting date of the test;

• Security Property – Identification of the security property meant to be benchmarked with the results of the test (Authentication, Resistance to replay attacks, Resistance to dictionary attacks, Resistance to DoS attacks, Integrity, Confidentiality, Resistance to MITM attacks, Authorization, Availability, Fault tolerance, Anonymization);

• General Description – Text field intended to provide additional information about the test execution, e.g. identification of the Experiment that contextualises the test execution, or additional information about specific conditions of the scenario.

5.2.3.2 Certification/Labelling This section is used to aggregate the benchmarking of all the security properties associated to an experiment.

• Benchmark reports – List of references to the reports containing the benchmarks for all the security properties related to the experiment associated with this report;

• Security Label – Result of the Certification/Labelling process. As defined in Deliverable D4.1, the labelling is calculated based on the security properties, using the association identified in Table 10. At the same time, the context must also be considered in the labelling process. This is realised by computing another parameter, Risk, calculated as Likelihood*Impact*Ptest, where the Likelihood and Impact values are determined using Table 11 and Ptest corresponds to the marks obtained in the assessment of each security property (Table 10).

Table 10 -­ Association between security properties and marks based on metrics


AuthN c/s:
0 – mutual and strong
1 – strong server, weak or without authN client
2 – strong client, weak or without authN server
3 – weak/without authN

Resistance to Replay attacks:
0 – Protected
1 – Non-protected, but a valid message cannot be obtained
2 – Non-protected, and a valid message can be obtained with difficulty/weak protection
3 – Non-protected, it can be obtained easily

Resistance to Dictionary attacks:
0 – Non-applicable
1 – Strong key
2 – Weak key

Integrity:
0 – Total
1 – Partial
2 – None

Resistance to DoS attacks:
0 – Minimum state
1 – Big state

Confidentiality:
0 – Total with secure encryption
1 – Partial with secure encryption
2 – Total with insecure encryption
3 – Partial with insecure encryption
4 – None

Resistance to MITM attacks:
0 – Detectable
1 – Non-detectable

Table 11 – Marks for likelihood and impact parameters

Likelihood:
0 – Null benefit
1 – Medium benefit
2 – High benefit

Impact:
0 – Little damage and recoverable
1 – Limited damage (scope, monetary losses, sensitive data...)
2 – High damage
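As a worked example of the Risk computation described in Section 5.2.3.2, under illustrative mark values picked from the ranges of Tables 10 and 11:

```python
# Worked example of the Risk parameter from Section 5.2.3.2, computed as
# Likelihood * Impact * Ptest; the concrete marks chosen below are
# illustrative, taken from the value ranges of Tables 10 and 11.
def risk(likelihood, impact, ptest):
    return likelihood * impact * ptest

# High attacker benefit (2), limited damage (1), and a "Resistance to
# Replay attacks" mark of 2 (valid message obtainable with difficulty):
print(risk(likelihood=2, impact=1, ptest=2))   # 4
```

Note that a mark of 0 on any factor (e.g. a fully protected property, Ptest = 0) drives the Risk to 0, which matches the intent of the marking scales where 0 is the best score.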


6 Conclusion This document, in the context of defining the ARMOUR experimentation approach, presented an analysis and comparison of the 7 different experiments described in Deliverable D1.1. From these analyses, it was concluded that many experiments could be implemented over common technologies, making it possible to reduce the overall effort required to implement the experiments without compromising their contribution to the ARMOUR security framework. Moreover, two opportunities for synergies among experiments were identified, namely between experiments 1 and 7 and between experiments 5 and 6, which add new dynamics and characteristics to the ARMOUR experiments without a significant increase in the effort needed.

The 7 experiments were described by updating and increasing the level of detail of the descriptions provided in previous deliverables, namely in the detail of the test patterns to be executed to test the vulnerabilities identified in each experiment scenario. Moreover, this deliverable identified and described the infrastructures needed for running the experiments, either in a limited, development-oriented (in-house) testbed environment or in a stress-based (large-scale) one. Finally, the consortium identified the data to be collected by each experiment during the test execution and how to determine their success and validity.

This deliverable also proposed three report structures to present the results of the experiments, inspired by the IEEE 829-2008 standard (Standard for Software and System Test Documentation). These three structures were named Security Property Test Log, Security Property Test Report, and Master Test Report. The first is intended to present all the data collected during the execution of a test pattern and classify its success. The second aggregates the results of the several test patterns executed in the context of the same security property to assess its mark. The last collects the marks of all the security properties associated with an experiment to support the labelling process of the experiment. The navigation between reports is made through references using the unique ID of each report.