
Cooperative Control of Ground and Aerial Vehicles

ALEJANDRO MARZINOTTO

Master's Degree Project
Stockholm, Sweden

August 22, 2012

XR-EE-RT 2012:018
Version: 1.0

January 2012 to August 2012

Cooperative Control of Ground and Aerial Vehicles
Diploma Thesis

Alejandro Marzinotto

Automatic Control Laboratory
School of Electrical Engineering, KTH Royal Institute of Technology, Sweden

Supervisors: Jose Araujo & Meng Guo, KTH Stockholm

Examiner: Dr. D. V. Dimarogonas, KTH Stockholm

Stockholm, August 22, 2012

[email protected]

Abstract

Recent developments in the theoretical field of multi-agent systems and cooperative control have driven a growing interest towards the areas of autonomous transportation, surveillance and other applications. This thesis is an attempt to close the gap between the fields of theoretical and experimental control systems.

To close this gap it is essential to create a reliable software and hardware infrastructure that can be used to test the applicability and performance of controllers developed in theory under artificial constraints. In this thesis we present a feasible implementation of an experimental testbed, covering both its hardware and software components.

To build this testbed and show its operability, multi-agent scenarios such as platooning and surveillance were implemented using scale models of Scania trucks and quadrotors. We study each problem starting with the theoretical analysis of the vehicle dynamics, then performing simulations, and finally carrying out experiments in the testbed.

The result of this thesis is a reliable and versatile testbed that can be used to perform demonstrations of multi-agent robotic systems. The core of the program is developed in LabVIEW, the wireless communication is done using NesC and TinyOS, the simulations are developed in Simulink, and the Visualization Engine is written in C++.

The path taken to build this testbed has proven successful, allowing us to control multiple vehicles simultaneously under the intuitive LabVIEW programming environment. This document serves as a guide both for those who wish to carry out experiments using the infrastructure developed here and for those who wish to improve upon the existing work.

Acknowledgements

Above all I want to thank my family, and especially my mother Laura, for encouraging me to study science.

I want to give special thanks to my examiner Dimos Dimarogonas for giving me the opportunity to carry out this project, and to my supervisors Meng Guo and Jose Araujo for helping me with the technical and theoretical aspects.

Lastly, I want to thank Axel Klingenstein, Sara Khosravi, Patrik Almström and everyone who was directly or indirectly involved with this project.

Alejandro Marzinotto
August 22, 2012

Contents

Acknowledgements
Contents
List of Figures
List of Tables
Programming Code
Acronyms
Notation
    Ground Vehicle Variable Definition
    Aerial Vehicle Variable Definition

1 Introduction
    1.1 Real World Applications
        1.1.1 Platooning
        1.1.2 Surveillance
        1.1.3 Exploration
        1.1.4 Transportation
        1.1.5 Rescue
    1.2 Tasks
        1.2.1 Platooning
        1.2.2 Surveillance
    1.3 Thesis Outline
        1.3.1 Chapter 1: Introduction
        1.3.2 Chapter 2: Hardware
        1.3.3 Chapter 3: Positioning Systems
        1.3.4 Chapter 4: Testbed
        1.3.5 Chapter 5: Ground Vehicles
        1.3.6 Chapter 6: Aerial Vehicles
        1.3.7 Chapter 7: LabVIEW Implementation
        1.3.8 Chapter 8: Simulations
        1.3.9 Chapter 9: Experimental Results
        1.3.10 Chapter 10: Conclusions and Future Work
        1.3.11 Appendices

2 Hardware
    2.1 Vehicles
        2.1.1 Tamiya Truck
        2.1.2 Tamiya Car
        2.1.3 DIYdrones ArduCopter
    2.2 Electronics, Sensors and Actuators
        2.2.1 ArduPilot Mega CPU Board
        2.2.2 ArduPilot Mega IMU Board
        2.2.3 Tmote Sky
        2.2.4 Triple Axis Magnetometer
        2.2.5 Sonar
        2.2.6 Short Range IR Sensor
        2.2.7 Long Range IR Sensor
        2.2.8 Pololu Micro Serial Servo Controller
        2.2.9 Futaba Servo

3 Positioning Systems
    3.1 Ubisense
        3.1.1 Overview
        3.1.2 Sensors
        3.1.3 Software
        3.1.4 Data Retrieval
        3.1.5 Advantages
        3.1.6 Disadvantages
    3.2 Qualisys
        3.2.1 Overview
        3.2.2 Features
        3.2.3 Data Retrieval
        3.2.4 Advantages
        3.2.5 Disadvantages
    3.3 Extended Kalman Filter
        3.3.1 Theory
        3.3.2 Implementation
        3.3.3 Results

4 Testbed
    4.1 Testbed Structure
        4.1.1 Positioning Systems
        4.1.2 Closed Loop Control
        4.1.3 Onboard Sensors
        4.1.4 Onboard Actuators
    4.2 Layered Controller Architecture
        4.2.1 Motion Planning
        4.2.2 Coordination
        4.2.3 Mission Planning
    4.3 Implementation

5 Ground Vehicles
    5.1 Ground Vehicle Variable Definition
    5.2 Mathematical Model
    5.3 Theoretical Controller Design
        5.3.1 Layer 1: Motion Planning
        5.3.2 Layer 2: Coordination
        5.3.3 Layer 3: Mission Planning
    5.4 Implementation Details
        5.4.1 Variable Calculation
        5.4.2 Three Dimensional Model

6 Aerial Vehicles
    6.1 Aerial Vehicle Variable Definition
    6.2 Mathematical Model
    6.3 Theoretical Controller Design
        6.3.1 Layer 1: Motion Planning
        6.3.2 Layer 2: Coordination
        6.3.3 Layer 3: Mission Planning
    6.4 Implementation Details
        6.4.1 Variable Calculation
        6.4.2 Three Dimensional Model

7 LabVIEW Implementation
    7.1 Overview
        7.1.1 Ubisense
        7.1.2 Qualisys
    7.2 Initialization
        7.2.1 Open Serial Forwarder Connections
        7.2.2 Start Ubisense Positioning System
    7.3 Main Loop
        7.3.1 Truck Control Algorithms
        7.3.2 Quadrotor Control Algorithms
        7.3.3 Sending Actuator Commands
        7.3.4 Receiving Sensor Data
        7.3.5 Start Qualisys Track Manager
        7.3.6 Mission Planner
    7.4 Finalization
        7.4.1 Close Serial Forwarder Connections
    7.5 Data Sharing
    7.6 Global Variables

8 Simulations
    8.1 Ground Vehicle Simulations
        8.1.1 Platooning
        8.1.2 Formation I (Double Column)
        8.1.3 Formation II (Triangle)
        8.1.4 Row Formation
        8.1.5 Defensive Formation
    8.2 Aerial Vehicle Simulations
        8.2.1 Single Flight 1 Quadrotor
        8.2.2 Circular Motion 1 Quadrotor
        8.2.3 Circular Motion 2 Quadrotors
        8.2.4 Circular Motion 3 Quadrotors
        8.2.5 Circular Motion 4 Quadrotors
        8.2.6 Elliptical Motion 4 Quadrotors
        8.2.7 Circular Motion 4 Quadrotors [2 radii]
        8.2.8 Circular Motion 4 Quadrotors [sub-rotation]
        8.2.9 Circular Motion 4 Quadrotors [variable radius]
        8.2.10 Circular Motion 4 Quadrotors [2 altitudes]
        8.2.11 Circular Motion 4 Quadrotors [variable altitude]
        8.2.12 Circular Motion 4 Quadrotors [vertical]
        8.2.13 Circular Motion 4 Quadrotors [horizontal & vertical]
        8.2.14 Circular Motion 4 Quadrotors [triple rotation]
    8.3 Cooperative Ground & Aerial Vehicle Simulations
        8.3.1 Simple Surveillance 1 Quadrotor
        8.3.2 Circular Surveillance 1 Quadrotor
        8.3.3 Circular Surveillance 2 Quadrotors
        8.3.4 Circular Surveillance 3 Quadrotors
        8.3.5 Circular Surveillance 4 Quadrotors [2 altitudes, 2 radii]
        8.3.6 Circular Surveillance 4 Quadrotors [front & back]
        8.3.7 Circular Surveillance 4 Quadrotors [vertical]
        8.3.8 Circular Surveillance 4 Quadrotors [multiple platoons]
    8.4 Control Signals
        8.4.1 Platoon Leader
        8.4.2 Platoon First Follower
        8.4.3 Quadrotor
    8.5 3D Visualization Engine
        8.5.1 Overview
        8.5.2 Code Breakdown

9 Experimental Results
    9.1 Hardware and Experimental Performance
        9.1.1 Battery Charge Level
        9.1.2 Truck Speed Control
        9.1.3 Quadrotor Throttle Control
        9.1.4 Number of Controllable Vehicles

10 Conclusions and Future Work
    10.1 Conclusions
    10.2 Future Work

Appendix A
    MATLAB Scripts
    C/C++ Codes

Appendix B
    Actuator Interface Board
    IR Sensor Interface Board

Appendix C
    Photos Ground Vehicles
    Photos Aerial Vehicles
    Photos Ground and Aerial Vehicles

References

List of Figures

1.1 Platoon formation
1.2 Surveillance robot
1.3 Point cloud
1.4 Transportation robot
1.5 Rescue robot
1.6 3D model of platoon formation
1.7 3D model of removing vehicle from platoon
1.8 3D model of inserting vehicle to platoon
1.9 3D model of quadrotor tracking reference
1.10 3D model of quadrotor following platoon
1.11 3D model of two quadrotors circling platoon

2.1 Tamiya Scania truck
2.2 Tamiya Honda car
2.3 DIYdrones quadrotor
2.4 ArduPilot Mega CPU board
2.5 ArduPilot Mega IMU board
2.6 Tmote Sky
2.7 Tmote Sky parts
2.8 Triple axis magnetometer
2.9 Sonar
2.10 Short range IR sensor
2.11 Long range IR sensor
2.12 Pololu micro serial servo controller
2.13 Futaba servo

3.1 Ubisense logo
3.2 LabVIEW .NET invoke node
3.3 Qualisys logo
3.4 LabVIEW QLC VI
3.5 LabVIEW Q Command VI
3.6 LabVIEW Q6D Euler VI
3.7 EKF detailed diagram
3.8 EKF simplified diagram
3.9 EKF Simulink block diagram
3.10 EKF Simulink performance evaluator
3.11 EKF performance evaluation results
3.12 EKF performance time comparison

4.1 Feedback loop diagram
4.2 Testbed Ubisense connection diagram
4.3 Testbed Qualisys connection diagram
4.4 Layered control diagram

5.1 Ground vehicle graphical variable representation
5.2 Ground vehicle speed controller
5.3 Ground vehicle steering controller
5.4 Ground vehicle safety controller
5.5 FIFO queue graphical representation
5.6 Distance to WP graphical representation
5.7 Vehicle to WP displacement graphical representation
5.8 Ground vehicle 3D model with electronics

6.1 Aerial vehicle graphical variable representation (yaw)
6.2 Aerial vehicle graphical variable representation (pitch, roll)
6.3 Aerial vehicle throttle controller
6.4 Aerial vehicle pitch and roll controller
6.5 Aerial vehicle yaw controller
6.6 Aerial vehicle safety controller
6.7 Quadrotor heading graphical representation
6.8 Distance to WP graphical representation
6.9 Aerial vehicle 3D model with electronics

7.1 Ubisense main.vi
7.2 Ubisense main.vi
7.3 OpenSF_basic.vi (Common)
7.4 OpenSF_double.vi (Common)
7.5 Ubisense_Structure_Start.vi (Ubisense Only)
7.6 Truck position data retrieval (Ubisense Only)
7.7 Truck 6 DOF data retrieval (Qualisys Only)
7.8 Displacement.vi (Common)
7.9 Truck control signals (leader) (Common)
7.10 Truck control signals (followers) (Common)
7.11 Truck control signals update (Common)
7.12 Quadrotor position data retrieval (Ubisense Only)
7.13 Quadrotor 6 DOF data retrieval (Qualisys Only)
7.14 Quadrotor control signals (Common)
7.15 Quadrotor control signals update (Common)
7.16 Truck actuator loop (Common)
7.17 Quadrotor actuator loop (Common)
7.18 WriteSF_basic_global.vi (Common)
7.19 IR_sensor_complex_global.vi (Common)
7.20 ReadSF_complex_global.vi (part 1) (Common)
7.21 ReadSF_complex_global.vi (part 2) (Common)
7.22 QTM.vi (Qualisys Only)
7.23 Coordinator.vi (Common)
7.24 CloseSF_basic.vi (Common)
7.25 CloseSF_double.vi (Common)
7.26 Semaphore labels (Common)
7.27 Semaphore critical section (Common)
7.28 Global variables (Common)

8.1 Platooning simulation
8.2 Formation I (double column) simulation
8.3 Formation II (triangle) simulation
8.4 Row formation simulation
8.5 Defensive formation simulation
8.6 Single flight simulation
8.7 Simple rotation of one aerial vehicle
8.8 Circular motion 1 quadrotor simulation
8.9 Circular motion 2 quadrotors simulation
8.10 Circular motion 3 quadrotors simulation
8.11 Circular motion 4 quadrotors simulation
8.12 Elliptical motion 4 quadrotors simulation
8.13 Circular motion 4 quadrotors [2 radii] simulation
8.14 Circular motion 4 quadrotors [sub-rotation] simulation
8.15 Circular motion 4 quadrotors [variable radius] simulation
8.16 Circular motion 4 quadrotors [2 altitudes] simulation
8.17 Circular motion 4 quadrotors [variable altitude] simulation
8.18 Circular motion 4 quadrotors [vertical] simulation
8.19 Circular motion 4 quadrotors [horizontal & vertical] simulation
8.20 Circular motion 4 quadrotors [triple rotation] simulation
8.21 Simple surveillance 1 quadrotor simulation
8.22 Circular surveillance 1 quadrotor simulation
8.23 Circular surveillance 2 quadrotors simulation
8.24 Circular surveillance 3 quadrotors simulation
8.25 Circular surveillance 4 quadrotors [2 altitudes, 2 radii] simulation
8.26 Circular surveillance 4 quadrotors [front & back] simulation
8.27 Circular surveillance 4 quadrotors [vertical] simulation
8.28 Circular surveillance 4 quadrotors [multiple platoons] simulation
8.29 Platoon leader control signals time plot
8.30 Platoon second vehicle control signals time plot
8.31 Quadrotor control signals time plot (pitch & roll)
8.32 Quadrotor control signals time plot (yaw & throttle)

1 Actuator interface board
2 IR sensors interface board
3 Photo Arduino-Mote custom serial connection board
4 Photo IR sensor & actuator interface boards
5 Photo truck connections
6 Photo actuator & sensor motes
7 Photo Pololu
8 Photo Ubisense tag
9 Photo collection of vehicles

List of Tables

1 Ground vehicles standard variables
2 Aerial vehicles standard variables

2.1 Tamiya truck specifications
2.2 Tamiya truck features
2.3 Tamiya truck dimensions
2.4 Tamiya car specifications
2.5 Tamiya car features
2.6 Tamiya car dimensions
2.7 DIYdrones quadrotor features
2.8 DIYdrones quadrotor flight times
2.9 DIYdrones quadrotor dimensions and weight
2.10 ArduPilot Mega CPU board features
2.11 ArduPilot Mega IMU board features
2.12 Tmote Sky features
2.13 Triple axis magnetometer features
2.14 Sonar features
2.15 Short range IR sensor features
2.16 Long range IR sensor features
2.17 Pololu micro serial servo controller features
2.18 Futaba servo features

3.1 EKF variable meaning
3.2 EKF corrector step variable meaning

5.1 Ground vehicles standard variables
5.2 Ground vehicle diagram variable meaning

6.1 Aerial vehicles standard variables

List of Algorithms

1 Algorithm run (function), Ubisense
2 Algorithm tag_update (event), Ubisense
3 Algorithm get_position (function), Ubisense
4 Algorithm main (function), Visualization Engine

Programming Code

7.1 Lookup table calibration data (MATLAB script)
8.1 Video driver (C++ code)
8.2 Video dimensions (C++ code)
8.3 Scene manager (C++ code)
8.4 Open ground vehicles text files (C++ code)
8.5 Open aerial vehicles text files (C++ code)
8.6 Define global variables (C++ code)
8.7 Load ground vehicles 3D models (C++ code)
8.8 Load aerial vehicles 3D models (C++ code)
8.9 Load testbed 3D model (C++ code)
8.10 Set camera position, orientation and navigation mode (C++ code)
8.11 Read ground vehicle positions from .txt files (C++ code)
8.12 Read aerial vehicle positions from .txt files (C++ code)
8.13 Update vehicle position variables (C++ code)
8.14 Draw scene and vehicles (C++ code)
8.15 Drop rendering device and close .txt files (C++ code)
1 Extended Kalman filter (MATLAB script)
2 Automatic IR lookup table generator (MATLAB script)
3 Truck control signals plotting (MATLAB script)
4 Quadrotor control signals plotting (MATLAB script)
5 Vehicles and trajectories 3D plotting (MATLAB script)
6 Simulink signal extraction with subsampling (MATLAB script)
7 ArduCopter sensor data sending loop (C code)
8 Visualization Engine (C++ code)


    Acronyms

    VI Virtual Instrument

    WSN Wireless Sensor Network

    CPU Central Processing Unit

    IMU Inertial Measurement Unit

    SLAM Simultaneous Localization and Mapping

    UAV Unmanned Aerial Vehicle

    EKF Extended Kalman Filter

    KF Kalman Filter

    DLL Dynamic Link Library

    PWM Pulse Width Modulation

    QTM Qualisys Track Manager

    QLC Qualisys LabVIEW Client

    RT Real Time

    RTLS Real Time Localization System

    DOF Degrees Of Freedom

    RF Radio Frequency

    RC Radio Control

    IR Infrared

    IDE Integrated Development Environment

    API Application Programming Interface

    WP Waypoint

    SP Surveillance Point

    FIFO First In First Out

    SF Serial Forwarder

    TCP/IP Transmission Control Protocol / Internet Protocol

    PCB Printed Circuit Board

    GUI Graphical User Interface

Notation

In this thesis there are two different notations: one for the ground vehicles and one for the aerial vehicles. Since both use the same symbols, we must keep in mind the context in which a symbol appears in order to interpret it accordingly.

    Ground Vehicle Variable Definition

Symbol     Meaning                                      Theoretical Range
x, y, z    Cartesian coordinate system positions        (−∞, ∞), (−∞, ∞), {0}
ẋ, ẏ, ż    Cartesian coordinate system velocities       (−∞, ∞), (−∞, ∞), {0}
ẍ, ÿ, z̈    Cartesian coordinate system accelerations    (−∞, ∞), (−∞, ∞), {0}
t          Time                                         [0, ∞)
θ          Vehicle Orientation                          [0, 2π)
φ          Vehicle Steering                             (−π/2, π/2)
ψ          Vehicle to WP Displacement                   (−π, π]
d          Distance to WP                               [0, ∞)

Table 1: standard variables used for the ground vehicles.


    Aerial Vehicle Variable Definition

Symbol     Meaning                                      Theoretical Range
x, y, z    Cartesian coordinate system positions        (−∞, ∞), (−∞, ∞), [0, ∞)
ẋ, ẏ, ż    Cartesian coordinate system velocities       (−∞, ∞), (−∞, ∞), (−∞, ∞)
ẍ, ÿ, z̈    Cartesian coordinate system accelerations    (−∞, ∞), (−∞, ∞), (−∞, ∞)
t          Time                                         [0, ∞)
φ          Vehicle Roll angle                           (−π/2, π/2)
θ          Vehicle Pitch angle                          (−π/2, π/2)
ψ          Vehicle Yaw angle                            (−π, π]
d          Distance to WP                               [0, ∞)
g          Gravity                                      9.81 m/s²
m          Vehicle Mass                                 1.5 kg

Table 2: standard variables used for the aerial vehicles.

Chapter 1
Introduction

The usage of autonomous robots has increased drastically over the past few years in areas such as surveillance, transportation, exploration and rescue. Most applications require sharing information between the robots, either by direct communication between them or through a central station that takes care of processing all the data.

Many applications require the usage of multiple robots to achieve a common goal; such systems are normally referred to as multi-agent systems. Agents are autonomous entities that exhibit behaviors and perform certain actions depending on their internal state, their perception of the environment and the messages received from other agents.

The most straightforward implementation of an agent is the reflex agent, where perceptions are translated into actions using logical conditional rules. More complex agents, not treated in this thesis, called learning agents, are able to modify the way they interact with the environment based on rewards and other indirect measurements of the performance of their actions.
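As an illustration of the rule-based mapping just described, the following minimal C++ sketch implements a reflex agent for a hypothetical follower vehicle; the sensor fields, thresholds and actions are invented for the example and are not part of the thesis implementation.

    #include <iostream>

    // Perception: everything the agent senses at one instant (hypothetical fields).
    struct Perception {
        double frontDistance;  // distance in metres to the nearest obstacle ahead
        bool   leaderVisible;  // whether the vehicle being followed is detected
    };

    enum class Action { CruiseForward, Brake, SearchForLeader };

    // The reflex agent is nothing more than a fixed, priority-ordered
    // set of condition -> action rules; it keeps no internal model.
    Action reflexAgent(const Perception& p) {
        if (p.frontDistance < 0.5) return Action::Brake;            // safety first
        if (!p.leaderVisible)      return Action::SearchForLeader;  // regain contact
        return Action::CruiseForward;                               // default rule
    }

    int main() {
        Perception p{0.3, true};
        std::cout << static_cast<int>(reflexAgent(p)) << '\n';  // prints 1 (Brake)
    }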

In multi-agent systems, communication is essential to achieve coordination between robots. Wireless transmission is normally used as the standard due to its reliability and suitability for different environments. Those advantages, coupled with the recent development of ultra-low power consumption wireless devices, have increased the interest in this type of communication.

The most common protocol for wireless communications is IEEE 802.11, which is widely used for the Internet and allows multiple devices to be connected seamlessly. However, since we require our devices to be low-powered, as they will run mainly on batteries, it is convenient to use a protocol specifically designed for low-power consumption devices, such as the IEEE 802.15.4 standard for WSNs.

When dealing with motion control of autonomous vehicles it is advantageous to have the position of each robot. In centralized scenarios such as the ones treated in this thesis, this is achieved using a local positioning system which gathers the data in a central computer; the computer takes care of processing it and sending appropriate actuation commands to each robot according to the controllers' output.
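A rough sketch of this centralized loop in C++, with stubbed-out helpers (readPositioningSystem, runController and sendOverRadio are placeholder names invented here, not the thesis' actual interfaces):

    #include <cstddef>

    struct Pose    { double x, y, heading; };
    struct Command { double speed, steering; };

    Pose    readPositioningSystem(std::size_t id)           { return {0.0, 0.0, 0.0}; } // stub
    Command runController(std::size_t id, const Pose& p)    { return {0.2, 0.0}; }      // stub
    void    sendOverRadio(std::size_t id, const Command& u) { /* e.g. via a WSN mote */ }

    int main() {
        const std::size_t numRobots = 3;
        for (int step = 0; step < 1000; ++step) {           // fixed-rate control loop
            for (std::size_t id = 0; id < numRobots; ++id) {
                Pose p    = readPositioningSystem(id);      // measure
                Command u = runController(id, p);           // compute control law
                sendOverRadio(id, u);                       // actuate over the radio
            }
        }
    }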

Decentralized scenarios, not treated in this thesis, occur when no central positioning system is available. In these cases the robots must use their sensors to gather data and share it between vehicles, just like it is done between the nodes of a WSN, so that each robot can calculate the most appropriate actuation commands based on the available information about neighboring agents.

When multiple types of robots are present, the design of the WSN involves the creation of hierarchical structures where the purpose and importance of each type of vehicle is reflected. In this thesis we only use aerial and ground vehicles; however, the problem formulation and hierarchical design remain the same in scenarios where more than two types of robots are involved.

In this chapter we present five real world applications where the ideas developed in this thesis can be applied. We describe the specific goals of this project in terms of the algorithms and simulations to be implemented. Lastly, we outline the contents of each chapter in order to give the reader an idea of the overall scope of this thesis.

1.1 Real World Applications

1.1.1 Platooning

Platooning consists of grouping vehicles into platoons, allowing them to travel closely and yet safely. Grouping vehicles this way saves a considerable amount of space and thus decreases traffic congestion; it is estimated that the efficiency in vehicles per lane per hour will double even under non-optimal platoon configurations.

Platooning also reduces the drag forces created by the air. This reduction translates into lower fuel consumption, which in turn yields less pollution. Drag reduction reaches its optimal point when each vehicle is placed at a distance of 2.5 m from the preceding member of the platoon; at this distance the reduction in drag is approximately 50 %, which yields a 25 % reduction in fuel consumption. Driving with 2.5 m between cars at high speed is not safe for human drivers; however, it is possible with automated navigation systems.
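As a sanity check on these two figures, note that they are mutually consistent if aerodynamic drag is assumed to account for roughly half of the fuel consumption at highway speed (an assumption made here for illustration, not a figure from the thesis): a relative drag reduction r then cuts fuel consumption by approximately

    \[ \frac{\Delta F}{F} \approx \alpha\, r, \qquad \alpha \approx 0.5,\; r \approx 0.5 \;\Rightarrow\; \frac{\Delta F}{F} \approx 0.25, \]

where \alpha denotes the share of fuel consumption attributable to drag.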

There are two main controllers to be implemented in platooning: distance control and alignment control. The first takes care of holding the distance between the vehicles in the convoy, whereas the second takes care of keeping them aligned in a row. This is achieved by controlling the speed and the steering of each vehicle, like human drivers would do in regular driving conditions.

Each vehicle implements an agent that takes care of the following tasks: communicate its own data to a subset of the other cars in the convoy, receive the data being transmitted from other agents, and perform actuation commands based on the output of the steering and speed control algorithms.
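To make the division of labour concrete, here is a hedged C++ sketch of the two control laws introduced above, written as simple PD rules with illustrative gains; the thesis' actual controllers are derived later (Chapter 5) and need not match this form.

    #include <iostream>

    struct Gains { double kp, kd; };

    // Distance control: regulate the gap to the preceding platoon member.
    double speedCommand(double gap, double gapRate, double desiredGap, Gains g) {
        double error = gap - desiredGap;       // positive -> falling behind
        return g.kp * error + g.kd * gapRate;  // PD law on the spacing error
    }

    // Alignment control: steer back onto the predecessor's track.
    double steeringCommand(double lateralOffset, double headingError, Gains g) {
        return -(g.kp * lateralOffset + g.kd * headingError);
    }

    int main() {
        Gains g{1.0, 0.4};
        std::cout << speedCommand(3.0, -0.1, 2.5, g) << ' '
                  << steeringCommand(0.2, 0.05, g) << '\n';
    }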

The leader of the platoon does not follow any vehicle; for this reason, specific driving directions have to be specified upfront. In reality this is achieved by using a human driver to lead the convoy; however, in this thesis a set of coordinate WPs and navigation algorithms is used to guide the leader of the platoon in place of the human driver.

Vehicles are allowed to enter and exit the platoon dynamically; this is achieved through wireless communication between agents. These communication channels allow them to cooperate and transmit information such as sensor readings, intention, destination, position, speed, size, etc.

    Figure 1.1: standard platoon formation with a truck as the leader.

    1.1.2 Surveillance

Surveillance consists of the joint effort of several robots to patrol a certain area or to track moving objects. These robots are able to achieve different formations to maximize the patrolled area or other desired parameters. The robots are able to communicate with each other and transmit relevant data such as alarms or video to a central station. Both aerial and ground vehicles can be used for surveillance tasks depending on the situation.

Many different commercial surveillance robots can be found online. Most of them are built to be very resistant to physical damage and to be usable in extreme weather conditions. Some are designed to accomplish certain tasks such as going up or down stairs, whereas others have built-in weapons to respond automatically in case of an alarm.


These robots can be used in a variety of scenarios ranging from surveillance in museums and banks to war zones. Replacing humans with autonomous robots for dangerous and repetitive tasks is one of the main goals in this field. Since surveillance vehicles can be equipped with a variety of sensors, it is to be expected that these robots will soon outperform humans in surveillance tasks as well.

This is yet another clear example of the usage of a WSN, especially in situations where the patrolled area is too large to maintain a direct communication link between each robot and a central station. In situations like these, the agents function like nodes in a sensor network, retransmitting incoming data from other agents so that it reaches the central station or gateway.

Surveillance can also be performed between agents, meaning that the object under supervision does not necessarily belong to the environment. Sometimes, cooperative control can be achieved through aerial vehicles guiding ground vehicles or vice versa. In these situations there can be a unidirectional or multidirectional flow of information between the different types of robots.

    Figure 1.2: surveillance robot developed at Darmstadt University.


1.1.3 Exploration

Exploration consists of several robots working together to create the map of a certain region. These robots are able to distribute their efforts in such a way that the map can be completed in the shortest time or with the highest precision. The robots are not only able to build the map, but also to position themselves inside it. They can also use this information and different path planning algorithms to traverse the environment.

These robots are often equipped with cameras and proximity sensors. In most cases computer vision algorithms are used to extract features from each video frame, and SLAM techniques are used to combine these features and produce a coherent map representation along with the agent's current estimated position.

In other cases the position of the vehicle is estimated using the vehicle dynamics and its internal sensors (e.g., encoders, accelerometers, gyros, IR sensors). Using a relative coordinate system centered on the robot's initial position, the rest of the map is built based on the estimated robot state and its sensor readings.
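A minimal dead-reckoning sketch of that idea in C++, assuming a forward speed v from wheel encoders, a yaw rate w from a gyro, and a simple Euler integration step (all illustrative choices, not the thesis' estimator):

    #include <cmath>

    // Pose in a relative frame centred on the robot's initial position.
    struct Pose { double x = 0.0, y = 0.0, theta = 0.0; };

    void integrate(Pose& p, double v, double w, double dt) {
        p.x     += v * std::cos(p.theta) * dt;  // project forward motion on x
        p.y     += v * std::sin(p.theta) * dt;  // ... and on y
        p.theta += w * dt;                      // heading from the gyro yaw rate
    }

    int main() {
        Pose p;
        for (int k = 0; k < 100; ++k) integrate(p, 0.5, 0.1, 0.01);  // 1 s of motion
    }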

The map representation can be metric or topological. In the first case, precise coordinates are updated using probabilistic methods; this type of map is often very accurate but not interpretable by machines. In the second case, places and locations are stored as nodes in a graph, paths between those places are indicated by arcs between nodes, and no information regarding the exact position of places is stored.
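For instance, the topological representation reduces to a plain graph structure; a C++ sketch (with invented labels) might be as simple as:

    #include <string>
    #include <vector>

    // A place is a node; arcs list the places reachable from it.
    // No metric coordinates are stored anywhere.
    struct Place {
        std::string label;            // e.g. "corridor", "charging station"
        std::vector<int> neighbours;  // indices of adjacent places
    };

    using TopologicalMap = std::vector<Place>;

    int main() {
        TopologicalMap map{ {"room A", {1}}, {"corridor", {0, 2}}, {"room B", {1}} };
    }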

Metric maps can be two- or three-dimensional according to the requirements of the specific scenario. Three-dimensional map representations are currently being developed in a field called point clouds.

    Figure 1.3: 3D point cloud that resembles a human body in standing position.


1.1.4 Transportation

Transportation consists of the cooperation between several vehicles to move different objects from one place to another; these loads can be truck trailers in a harbor, medical equipment in a hospital, tools inside a workshop, etc. Both aerial and ground vehicles are suitable for transportation; however, depending on the situation it will be more appropriate to use a certain type of vehicle.

Ground transportation is closely related to platooning because the purpose of both is to move things from one place to another in an efficient and automated way. For that reason, the explanations presented in the section on platooning are also applicable here. In most cases, the robots used for automated transportation are equipped with grippers especially designed to handle objects of a certain size, weight and shape.

    The process of transportation involves several sub-processes:

1. Recognizing the object to be transported: generally achieved using computer vision algorithms. Sometimes we are presented with a challenging environment where it is difficult to identify the object that we wish to transport.

2. Positioning the object and the vehicle: also possible using cameras and proximity sensors embedded in the robot, or another external positioning mechanism.

3. Picking up the object: a combination of path planning and control theory is used to create a valid trajectory and successfully grip the object.

4. Moving to the destination: path planning and control algorithms are used in this phase to guide the robot to its destination.

5. Releasing the load at the appropriate location: this phase is the reverse of picking up the object, therefore the same explanation applies.

    Figure 1.4: Intel robot named HERB equipped with 2 grippers for transportation.


1.1.5 Rescue

In emergency situations such as fires and earthquakes, it is a common problem to have to risk human lives to rescue possible victims. For this reason, autonomous robots have stepped in, attempting to replace humans in dangerous environments. In most missions time is a crucial factor, and several robots must be coordinated optimally to identify and rescue the victims depending on their particular situation.

One of the key factors in completing these missions successfully is the ability of each robot to make appropriate decisions in RT, based on its own sensory input and the incoming transmissions from the other robots. A clear example of such situations is when certain victims require higher priority than others, or when a specific robot is unable to perform a task without the aid of another one.

Generally, search and rescue missions are divided into three phases:

1. Exploration: mapping the space, locating and recognizing the victims. Each victim's particular situation is evaluated and prioritized accordingly.

2. Rescue: transporting the victims out of the place of the accident, or otherwise bringing them assistance.

3. Escape: exiting the place of the accident to ensure the safety of the robot itself.

Generally, the development of rescue robots is a multidisciplinary field because it involves not only control theory, but also computer vision algorithms, which are responsible for providing the agent with crucial information about the surrounding environment.

    Figure 1.5: autonomous rescue robot retrieving a dummy.


1.2 Tasks

In this section we describe the specific goals of this thesis in terms of the algorithms and simulations to be implemented. The relevance of these tasks is grounded in the five real life applications mentioned in this chapter.

    1.2.1 Platooning

First we approach the problem of creating a controller capable of forming a platoon of an arbitrary number of vehicles. This involves implementing two things: a speed and steering controller for each vehicle, and a framework where it is possible to share information between them. A platoon vehicle can be a truck or a car, and they are treated interchangeably in the following scenarios.

    Figure 1.6: 3D representation of the controller used to hold the platoon formation.

Secondly, we propose an implementation of a controller capable of removing any vehicle from the platoon except for the leader, rearranging the remaining vehicles in the same platoon. This involves the platooning controller of the previous task plus a heuristic algorithm to safely remove a vehicle (truck or car) without causing any disruption to the platoon.


    Figure 1.7: 3D representation of the controller used to remove a vehicle from the platoon.

Lastly, we propose an implementation of a controller capable of inserting a vehicle (truck or car) into an existing platoon; this is done by opening a space between two arbitrary vehicles to let the third one in. This involves the platooning controller plus a heuristic algorithm responsible for allowing a vehicle to join without causing any disruption to the platoon.

    Figure 1.8: 3D representation of the controller used to insert a vehicle to the platoon.

1.2.2 Surveillance

First we propose an implementation where an aerial vehicle tracks certain discrete reference WPs. We then generalize the concept of discrete WPs into time-varying reference points, so that we are able to generate different trajectories and shapes without requiring further study of path planning control.
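A sketch of how such a time-varying reference can be generated, here for a circle; offsetting the phase by 2πk/N also spaces N quadrotors evenly around the same circle, which is the pattern used in the multi-quadrotor scenarios below. All parameter names are illustrative.

    #include <cmath>

    const double kPi = 3.14159265358979323846;

    struct Ref { double x, y, z; };

    // Reference point for the k-th of N vehicles at time t, moving on a
    // horizontal circle of the given radius at angular speed omega.
    Ref circularReference(double t, int k, int N, double cx, double cy,
                          double radius, double omega, double altitude) {
        double phase = omega * t + 2.0 * kPi * k / N;  // per-vehicle phase offset
        return { cx + radius * std::cos(phase),
                 cy + radius * std::sin(phase),
                 altitude };
    }

    int main() {
        Ref r = circularReference(1.0, 0, 2, 0.0, 0.0, 1.5, 0.5, 1.0);
        (void)r;
    }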

Figure 1.9: 3D representation of the controller used to perform takeoff, flight and landing of the quadrotor.

Secondly, we propose an implementation where an aerial vehicle hovers above the leader of a platoon and follows it along its trajectory. We also explore the opposite situation, where the aerial vehicle guides the platoon along its trajectory.

Figure 1.10: 3D representation of the controllers used to hold a platoon formation and perform aerial tracking of the platoon leader using one quadrotor.


Lastly, we propose an implementation where two aerial vehicles perform surveillance simultaneously on a convoy. We also explore scenarios where more than two aerial vehicles are involved, without a deep study of path planning algorithms.

Figure 1.11: 3D representation of the controllers used to hold a platoon formation and perform circular aerial tracking of the platoon leader using two quadrotors.

1.3 Thesis Outline

1.3.1 Chapter 1: Introduction

This chapter contains the thesis introduction, the real life applications of the control algorithms developed, the specific tasks to be implemented in this project and a short overview of the contents of each chapter.

1.3.2 Chapter 2: Hardware

This chapter contains the description of the hardware used in this thesis. This includes not only the characteristics of the aerial and ground vehicles, but also the specifications of the electronics, sensors and actuators.

1.3.3 Chapter 3: Positioning Systems

This chapter contains detailed descriptions of the Ubisense and Qualisys positioning systems, as well as the usage, advantages and disadvantages of each system. Lastly, it contains the theory and implementation of the EKF used to improve the performance of the Ubisense.


1.3.4 Chapter 4: Testbed

This chapter contains two sections. In the first we present the overall structure of the testbed and explain how the hardware and software work together in order to provide a closed loop control system. In the second we explain in detail the layered controller implemented in this thesis to give a consistent structure to the software component of the testbed.

1.3.5 Chapter 5: Ground Vehicles

This chapter contains the mathematical analysis of the ground vehicle dynamics and the conventions used to define the system's variables. We also explain how the concept of layered control is applied to the ground vehicles by describing the scope of each layer in this specific scenario.

1.3.6 Chapter 6: Aerial Vehicles

This chapter contains the mathematical analysis of the aerial vehicle dynamics and the conventions used to define the system's variables. We also explain how the concept of layered control is applied to the aerial vehicles by describing the scope of each layer in this specific scenario.

1.3.7 Chapter 7: LabVIEW Implementation

This chapter contains the structure of the LabVIEW program; it describes the functionality of each VI and how they work together. In case the reader is interested, two separate files called LabVIEW_VIs_Ubisense.pdf and LabVIEW_VIs_Qualisys.pdf are provided with detailed information about the hierarchy of each VI and the dependencies between them.

1.3.8 Chapter 8: Simulations

This chapter contains the simulations used to develop and test the controllers. For each simulated scenario two figures are presented, showing the perspective and top views of the vehicles involved and their trajectories. Lastly, we introduce the 3D Visualization Engine created in this thesis to reproduce simulations and recorded experiments. A brief analysis of the code of the Visualization Engine is given so that it becomes possible to adapt it in the future.

1.3.9 Chapter 9: Experimental Results

This chapter contains a brief analysis of the experimental results of this thesis, the practical implications relative to the hardware used, such as limitations on the number of vehicles that can be controlled, and remarks on the performance expected versus the performance observed. We also explore the scalability of the implementation in light of the hardware and software limitations, as well as the communication issues observed.

1.3.10 Chapter 10: Conclusions and Future Work

This chapter contains a summary of the goals achieved in this project, the contributions of this thesis and final remarks regarding the aspects that can be improved. Lastly, we present possible future work to be done in this area to expand and improve the testbed.

1.3.11 Appendices

This section contains three parts: Appendix A, where the MATLAB scripts and C/C++ code used throughout the thesis are presented; Appendix B, where the schematics and PCBs of the sensor and actuator boards are presented; and Appendix C, where photos of the trucks, quadrotors and other electronic devices that shape the testbed are presented.

Chapter 2
Hardware

In this chapter we describe the hardware used in this thesis, covering the important aspects of the robots, sensors, actuators and motes. The purpose of this chapter is to give a general idea of the practical limitations inherent to the implementation, and to provide the grounds for those who wish to build a similar testbed.

2.1 Vehicles

2.1.1 Tamiya Truck

    Figure 2.1: Tamiya Scania truck with the transportation trailer.



    Summary

The Tamiya truck is a realistic scale model of the Scania V8. The speed of the vehicle is controlled using the model's gearbox and a commercial speed controller. The steering of the vehicle is controlled using a servo with 180° of rotation. The three available shifts of the gearbox can be selected using a different servo, or a certain gear can be selected manually.

    Specifications

    Motor Type Brushed 540

    Engine Size 540

    Number of Gears 3

    Tires Radial Tires

    Damper Type Spring

    Table 2.1: specifications of the Tamiya truck.

    Features

Top Speed 30 km/h – 35 km/h

    Drive Type Differential 2 Wheel Drive

    Table 2.2: features of the Tamiya truck.

    Dimensions

    Scale 1/14

    Length 452 mm

    Width 187 mm

    Wheelbase 293 mm

    Table 2.3: dimensions of the Tamiya truck.


    2.1.2 Tamiya Car

    Figure 2.2: Tamiya Honda car.

    Summary

The Tamiya car is a scale model of the Honda CR-Z. The speed of the vehicle is controlled using a commercial speed controller; this model has only one gear. The steering of this vehicle is controlled using a servo with 180° of rotation.

    Specifications

    Motor Type Brushed 540

    Engine Size 540

    Number of Gears 1

    Tires Narrow Radial Tires

    Damper Type Wishbone Suspension

    Body Polycarbonate

    Table 2.4: specifications of the Tamiya car.

    Features

Top Speed 30 km/h – 35 km/h

    Drive Type Differential 2 Wheel Drive

    Table 2.5: features of the Tamiya car.


    Dimensions

    Scale 1/10

    Length 408 mm

    Width 174 mm

    Wheelbase 243 mm

    Table 2.6: dimensions of the Tamiya car.

    2.1.3 DIYdrones ArduCopter

    Figure 2.3: DIYdrones quadrotor.

    Summary

The ArduCopter is a multirotor UAV development platform based on a design by Jani Hirvinen. There are several models with different numbers of propellers; the most common are tri-, quad-, hexa- and octo-rotors. The code required to control the aircraft is open source and easy to modify. The ArduCopter also comes with a software application called Mission Planner that allows the user to set the controller constants and each channel's minimum and maximum PWM limits, among other things.


    Features

    Accelerometer 6 DOF IMU

    Gyroscope Gyro Stabilized Flight

    Heading Calculation Magnetometer

    Height Calculation Sonar Sensor

    Onboard Controller Stabilization Double Cascade PID Control

    Configuration GUI for Configuration of PID Parameters

Motor Controller PWM Electronic Speed Controllers (ESCs)

Frame Configuration Capability to Fly in + or × Configuration

Compatibility Any R/C Receiver or Servo Controller

    Obstacle Avoidance IR Sensors

    Battery Level Detection Onboard LEDs & Base Station Indicator

    Table 2.7: features of the DIYdrones quadrotor.

    Average Flight Times

2200 mAh LiPo Battery 9 min – 10 min with no Payload

    2650 mAh LiPo Battery 9 min with 300 g Video Camera Payload

    Table 2.8: average flight times of the DIYdrones quadrotor.

    Dimensions and Weight

    Size 60.96 cm from Motor to Motor

    Weight 1500 g

    Table 2.9: dimensions and weight of the DIYdrones quadrotor.


2.2 Electronics, Sensors and Actuators

    2.2.1 ArduPilot Mega CPU Board

    Figure 2.4: ArduPilot Mega CPU board.

    Summary

ArduPilot is a fully programmable autopilot that requires a GPS module and IMU sensors to create a functioning UAV. The autopilot handles both stabilization and navigation, eliminating the need for a separate stabilization system. It also supports a fly-by-wire mode that can stabilize an aircraft when flying manually under RC control, making it easier and safer to fly. The hardware and software are both open source. The firmware comes preloaded, but the autopilot software must be loaded onto the board by the user. It can be programmed with the Arduino IDE.
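As a flavour of what programming the board with the Arduino IDE looks like, here is a minimal, hypothetical sketch that streams an analog reading over the serial port; it is illustrative only and is not the ArduCopter firmware (the thesis' own sensor data sending loop appears in Appendix A).

    // Minimal Arduino-style sketch: stream one analog reading at ~10 Hz.
    void setup() {
        Serial.begin(57600);       // open the telemetry serial port
    }

    void loop() {
        int raw = analogRead(A0);  // hypothetical sensor wired to analog pin 0
        Serial.println(raw);       // send the raw value to the ground station
        delay(100);                // wait 100 ms between samples
    }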


    Features

Usage: Autonomous Aircraft, Quadcopters and Helicopters
Microcontroller: 16 MHz ATmega2560
Processing Power: Dual-Processor with 32 MIPS
Memory: 256 kB Flash Program Memory, 8 kB SRAM, 4 kB EEPROM
Analog Ports: 16 Spare Analog Inputs (with ADC on Each)
Digital Ports: 40 Digital Inputs/Outputs to Add Additional Sensors
Serial Ports: 4 Dedicated Serial Ports for Two-Way Telemetry
RC Channels: 8 RC Channels

    Table 2.10: features of the ArduPilot Mega CPU board.

    2.2.2 ArduPilot Mega IMU Board

    Figure 2.5: ArduPilot Mega IMU board.

    Summary

This board features a large array of sensors needed for UAV and robotics applications, including three-axis angular rotation and acceleration sensors, an absolute pressure and temperature sensor, and a 16 MB data logger chip, among other things. It is designed to fit on top (or bottom) of the ArduPilot Mega CPU board, creating a total autopilot solution when a GPS module is attached.


    Features

Analog Port: 12-bit ADC
Data Storage: Built-in 16 MB Data Logger
Interface: Built-in FTDI, Making the Board Native USB
I2C Port: Allows Sensor Arrays
User Input: 2 User-Programmable Buttons (Momentary & Slide)
Expansion Ports: 10-bit Analog Expansion Ports
Indicators: Status LEDs
Gyros: Vibration-Resistant Invensense Gyros (Triple Axis)
Accelerometer: Analog Devices ADXL330 Accelerometer
Extra Sensors: Bosch Absolute Pressure/Temperature Sensor

    Table 2.11: features of the ArduPilot Mega IMU board.

    2.2.3 Tmote Sky

    Figure 2.6: Tmote Sky wireless module.

    Summary

Tmote Sky is an ultra-low power wireless module for use in sensor networks, monitoring applications, and rapid prototyping. Tmote Sky leverages industry standards like USB and IEEE 802.15.4 to interoperate seamlessly with other devices. By using industry standards, integrating humidity, temperature and light sensors, and providing flexible interconnections with peripherals, Tmote Sky enables a wide range of mesh network applications.


Tmote Sky is a drop-in replacement for Moteiv's successful Telos design. With TinyOS support out-of-the-box, Tmote Sky leverages emerging wireless protocols and the open source software movement. Tmote Sky is part of a line of modules featuring onboard sensors to increase robustness while decreasing cost and package size.

    Features

Microcontroller: TI MSP430F1611 Microcontroller at up to 8 MHz
Storage Size: 10 kB SRAM, 48 kB Flash, 1024 kB Serial Storage
Wireless Capability: 250 kb/s, 2.4 GHz, Chipcon CC2420 (IEEE 802.15.4)
Onboard Sensors: Humidity, Temperature and Light Sensors
Consumption: Ultra-Low Current Consumption
Wakeup Time: Fast Wakeup from Sleep (< 6 µs)
Programming Interface: USB
Identification: Serial ID Chip
Expansion Capability: 16-pin Expansion Port
Board Size: 32 mm × 80 mm

Table 2.12: features of the Tmote Sky.

    Figure 2.7: Tmote Sky front view (left), and back view (right).


    2.2.4 Triple Axis Magnetometer

    Figure 2.8: triple axis magnetometer (not soldered).

    Summary

This is a 3-axis digital compass board based on Honeywell's HMC5883L. Communication with the HMC5883L is achieved through an I2C interface. The board has an I2C translator and a 3.3 V power regulator that make it compatible with both 3.3 V and 5 V applications using a solder jumper.

    Features

Interface: I2C Interface with I2C Translator for 5 V Signal Compatibility
Supply Voltage: 2.5 V – 3.3 V and 4 V – 5.5 V Supply Ranges (Jumper Selectable)
Resolution: Low Current Draw and 4.35 mG Resolution
Compatibility: ArduIMU and ArduPilotMega Shield Pin Compatible

    Table 2.13: features of the triple axis magnetometer.
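As an illustration, a minimal Arduino-style sketch for reading the raw magnetometer axes over I2C is shown below. It is a sketch under stated assumptions, not the code used in this project: it assumes the standard Wire library and the register map from the HMC5883L datasheet (device address 0x1E, mode register 0x02, data registers from 0x03 in X, Z, Y order), and should be verified against the datasheet before use.

    #include <Wire.h>

    const uint8_t HMC5883L_ADDR = 0x1E;  // 7-bit I2C address of the HMC5883L

    void setup() {
      Serial.begin(9600);
      Wire.begin();
      // Mode register (0x02): writing 0x00 selects continuous measurement mode
      Wire.beginTransmission(HMC5883L_ADDR);
      Wire.write(0x02);
      Wire.write(0x00);
      Wire.endTransmission();
    }

    void loop() {
      // Data output registers start at 0x03, ordered X, Z, Y (MSB first)
      Wire.beginTransmission(HMC5883L_ADDR);
      Wire.write(0x03);
      Wire.endTransmission();
      Wire.requestFrom(HMC5883L_ADDR, (uint8_t)6);
      if (Wire.available() >= 6) {
        uint8_t buf[6];
        for (int i = 0; i < 6; ++i) buf[i] = Wire.read();
        int16_t x = (buf[0] << 8) | buf[1];
        int16_t z = (buf[2] << 8) | buf[3];
        int16_t y = (buf[4] << 8) | buf[5];
        // Heading in the horizontal plane, valid only when the board is level
        float heading = atan2((float)y, (float)x);
        Serial.println(heading * 180.0 / PI);
      }
      delay(100);
    }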


    2.2.5 Sonar

    Figure 2.9: sonar (not soldered).

    Summary

The LV-MaxSonar-EZ2 is a good compromise between sensitivity and side object rejection. The sensor offers three standard outputs (analog voltage, serial data, and pulse width) that are available on all the MaxSonar-EZ products. The sonars of this brand also operate with very low voltage, from 2.5 V to 5 V, with less than 3 mA nominal current draw.

    Features

Gain: Continuously Variable Gain
Object Detection: Includes Zero-Range Objects
Supply Voltage: 2.5 V – 5.5 V Supply Range with 2 mA Current Draw
Refresh Rate: Up to Every 50 ms (20 Hz Rate)
Free Run Operation: Continually Measure and Output Range Information
Triggered Operation: Provides the Range Reading as Desired
Interfaces: Serial, Analog and Pulse Width
Sensor Frequency: 42 kHz
Wave Type: High Output Square Wave

    Table 2.14: features of the sonar.
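Reading the analog output is straightforward; a minimal Arduino-style sketch is shown below as an illustration only. It assumes the sensor's AN pin is wired to analog input A0 and that the sensor and the ADC share the same supply, in which case the datasheet scaling of Vcc/512 volts per inch reduces to roughly one inch per two ADC counts.

    const int SONAR_PIN = A0;  // assumed wiring: AN output to analog input A0

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int reading = analogRead(SONAR_PIN);  // 10-bit reading, 0..1023
      // With ADC reference = sensor supply, Vcc/512 volts per inch gives
      // inches = counts * 512/1023, i.e. approximately counts / 2.
      float inches = reading / 2.0;
      float centimeters = inches * 2.54;
      Serial.println(centimeters);
      delay(50);  // matches the sensor's 20 Hz refresh rate
    }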


    2.2.6 Short Range IR Sensor

    Figure 2.10: short range IR sensor.

    Summary

The GP2D12 is a short range distance measuring sensor with integrated signal processing and analog voltage output.

    Features

Supply Voltage: 4.5 V – 5.5 V
Output Type: Analog Output
Effective Range: 10 cm to 80 cm
LED Pulse Cycle Duration: 32 ms
Typical Response Time: 39 ms
Typical Start-Up Delay: 44 ms
Average Current Consumption: 33 mA
Detection Area Diameter: 6 cm at 80 cm

    Table 2.15: features of the short range IR sensor.


    2.2.7 Long Range IR Sensor

    Figure 2.11: long range IR sensor.

    Summary

The GP2Y0A is a long range distance measuring sensor with integrated signal processing and analog voltage output.

    Features

Supply Voltage: 4.5 V – 5.5 V
Output Type: Analog Output
Effective Range: 20 cm to 150 cm
Typical Response Time: 39 ms
Typical Start-Up Delay: 48 ms
Average Current Consumption: 33 mA

    Table 2.16: features of the long range IR sensor.


    2.2.8 Pololu Micro Serial Servo Controller

    Figure 2.12: Pololu micro serial servo controller.

    Summary

The Pololu micro serial servo controller is a very compact solution for controlling up to eight RC servos from a computer or microcontroller. Each servo's speed and range can be controlled independently, and multiple units can be daisy-chained on one serial line to control up to 128 servos. It possesses three status LEDs and an integrated level converter for RS-232 applications. The micro serial servo controller can control any standard RC servo, including giant 1/4-scale servos.

    Features

PCB Size: 2.31 cm × 2.31 cm
Servo Ports: 8
Resolution: 0.5 µs (≈ 0.05°)
Range: 250 µs – 2750 µs
Supply Voltage: 5 V – 16 V
Data Voltage: 0 V and 5 V
Pulse Rate: 50 Hz
Serial Baud Rate: 1200 Bd – 38400 Bd (Automatically Detected)
Current Consumption: 5 mA (Average)

    Table 2.17: features of the Pololu micro serial servo controller.
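As an illustration of the serial interface, the sketch below sends the Pololu-mode "set position, absolute" command from an Arduino-style board. It follows the six-byte command format documented in Pololu's user's guide (start byte 0x80, device type 0x01, command number, servo number, two 7-bit data bytes), but the pin assignments are hypothetical and the code should be checked against the guide before use.

    #include <SoftwareSerial.h>

    // Hypothetical wiring: controller's serial input on pin 3 (RX pin unused)
    SoftwareSerial servoSerial(2, 3);

    // Command 4 ("set position, absolute"): position is given in 0.5 us units
    // (500..5500), split into two 7-bit data bytes.
    void setServoPosition(uint8_t servo, uint16_t halfMicros) {
      servoSerial.write((uint8_t)0x80);                // start byte
      servoSerial.write((uint8_t)0x01);                // device type: servo controller
      servoSerial.write((uint8_t)0x04);                // command 4: absolute position
      servoSerial.write(servo);                        // servo number (0..7)
      servoSerial.write((uint8_t)(halfMicros >> 7));   // data byte 1: upper bits
      servoSerial.write((uint8_t)(halfMicros & 0x7F)); // data byte 2: lower 7 bits
    }

    void setup() {
      servoSerial.begin(9600);    // the controller auto-detects the baud rate
      setServoPosition(0, 3000);  // center servo 0 at 1500 us
    }

    void loop() {}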


    2.2.9 Futaba Servo

Figure 2.13: Futaba servo with 180° of rotation.

    Summary

Servo motors are an efficient, easy way to precisely position or move things. Some servos can also be modified to rotate in a full circle (instead of just 180°), which makes them useful as drive motors for robotics.

The Futaba S3305 is a heavy duty standard servo with brass gears, dual ball bearings and 9 kgf·cm of torque. It is ideal for those applications that require extra power and strength.

    Features

Dimensions: 20.0 mm × 40.0 mm × 38.1 mm
Weight: 45.6 g
Speed: 0.20 s
Torque: 9.00 kgf·cm
Ball Raced: Yes

    Table 2.18: features of the Futaba servo.

Chapter 3

Positioning Systems

In this chapter we will present an overview of the Ubisense and Qualisys positioning systems, the programs used to retrieve the data from them, and a list of advantages and disadvantages of each system. Lastly, we will cover the theory and implementation of the EKF used to improve the performance of the Ubisense.

    3.1 Ubisense

    Figure 3.1: Ubisense logo.

The Ubisense Tag Module Research Package is an out-of-the-box RTLS that can be used to track and locate assets and personnel to an accuracy of 15 cm in 3D in RT. It is an all-inclusive solution for RTLS development or an entry-level pilot system.

3.1.1 Overview

Ubisense tags transmit ultra-wideband (UWB) pulses of extremely short duration which are received by the sensors and used to determine where the tag is located, using a unique combination of Time-Difference-of-Arrival (TDoA) and Angle-of-Arrival (AoA) techniques. The use of UWB together with the unique AoA and TDoA functionality ensures both high accuracy and reliability of operation in challenging environments.



Sensors are grouped into cells, with the capability of adding more of them depending on the geometry of the area to be covered. In each cell a master sensor coordinates the activities of the other sensors and communicates with all the tags whose location is detected within the cell. By designing overlapping cells, it is possible to cover very large areas.

3.1.2 Sensors

The sensors detect ultra-wideband pulses transmitted by Ubisense tags, which are used to accurately determine tag locations. The sensors have an array of four UWB receivers, enabling them to measure both Angle-of-Arrival (AoA) and Time-Difference-of-Arrival (TDoA) of tag signals and to generate accurate 3D tracking information even when only two sensors can detect the tag. The sensors and tags also support two-way conventional RF communications, permitting dynamic changes to tag update rates and enabling interactive applications.

3.1.3 Software

The software supplied with the research package includes the distributed location processing software platform, supporting visualization, system customization, and application integration via an industry-standard API. A Data Client graphical user interface application is also supplied, which allows the user to send, receive and view data that is being sent to a tag module/accessory.

3.1.4 Data Retrieval

A central computer running proprietary Ubisense software enables all the sensors connected to the Ethernet hub. Based on the readings obtained from these sensors and optional filtering parameters, the proprietary Ubisense software calculates the (x, y, z) position of each tag in the detection range, and forwards this information to a TCP/IP connection.

Any computer connected through an Ethernet cable to the hub is able to read this information using the functions contained in a DLL. This library is written in C# and consists of three main parts: run (function), get_position (function) and tag_update (event). Below we present the pseudocode of each part of the library and a brief description of its functionality:

    run (function)

This is the program entry point; it initializes the event reception and starts a loop that takes care of processing the queue of elements. This queue is constantly being filled with new position updates; therefore it is important that the speed at which the queue is emptied is higher than the speed at which it is filled. This is achieved by tuning the sleep time of the while loop according to the number of updates per second we get from the tags; by doing this we avoid using more computational resources than needed in the loop.

The code inside the loop takes care of recognizing whether or not the tag ID was previously processed, in which case the position of the object is updated; otherwise, if the tag ID was not previously detected, a new object is created.

Algorithm 1: run function algorithm of the Ubisense.

    function run(void) {
        initialization();
        while true do
            if processing_queue.size > 0 then
                process_element();
                if tag_exists then
                    update_tag();
                else
                    create_tag();
                end
            end
            sleep(30);
        end
    }

    tag_update (event)

This event is called each time a new package containing the position of a tag is received through the TCP/IP connection. The event merely adds the information received to the processing_queue, so that we exit the event as fast as possible and new events can be triggered. To avoid data corruption, the processing_queue is locked before any changes are made to it.

The data added to the processing_queue each time a package is received from the TCP/IP connection consists of: a tag ID, the (x, y, z) position of the tag, and the estimated measurement error. The processing of this queue relies solely on the while loop explained earlier.

Algorithm 2: tag_update event algorithm of the Ubisense.

    event tag_update(tag id, double x, double y, double z) {
        lock(processing_queue) {
            processing_queue.Add(id, x, y, z);
        }
    }


    get_position (function)

This function is what we use to get the position of a desired tag; the processing_queue is also locked to avoid variable corruption when reading the values. This function is called with the tag ID we want to obtain data from as a parameter. The tag ID is a unique number which is printed on each physical tag and has the following structure: XXX XXX XXX XXX, for example: 020 000 116 037.

Algorithm 3: get_position function algorithm of the Ubisense.

    function get_position(string tag) {
        lock(processing_queue) {
            return position(tag);
        }
    }
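The structure behind these three routines is a standard producer/consumer queue. The DLL itself is written in C#, but a language-neutral C++ sketch of the same pattern may clarify how the pieces fit together; all names below are illustrative, not the actual DLL API.

    #include <chrono>
    #include <map>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct Update { std::string id; double x, y, z; };

    std::mutex queueLock;                      // guards queue and position table
    std::queue<Update> processingQueue;        // filled by the network event
    std::map<std::string, Update> positions;   // last known position per tag ID

    // Equivalent of tag_update: runs on the network thread, so it only
    // enqueues the received data and returns immediately.
    void tagUpdate(const std::string& id, double x, double y, double z) {
      std::lock_guard<std::mutex> guard(queueLock);
      processingQueue.push({id, x, y, z});
    }

    // Equivalent of get_position: reads the latest stored position of a tag,
    // keyed by the ID printed on the tag, e.g. "020 000 116 037".
    Update getPosition(const std::string& id) {
      std::lock_guard<std::mutex> guard(queueLock);
      return positions[id];
    }

    // Equivalent of run: drains the queue (updating existing tags or creating
    // new entries), then sleeps so the drain rate stays above the fill rate.
    void run() {
      while (true) {
        {
          std::lock_guard<std::mutex> guard(queueLock);
          while (!processingQueue.empty()) {
            positions[processingQueue.front().id] = processingQueue.front();
            processingQueue.pop();
          }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(30));
      }
    }

The single lock serializes the network event against the processing loop, which is why tag_update does nothing beyond enqueueing the update.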

    Invoke nodes (.NET DLL)

From LabVIEW we can call the functions contained in the DLL using a .NET constructor and invoke nodes. These .NET nodes look like this:


    Figure 3.2: LabVIEW .NET invoke node calling getX, getY, getZ.

    3.1.5 Advantages

1. Tags are small and identical; there is no need to form unique patterns.

2. Works equally well outdoors and indoors.

3. No need to recalibrate the system each time.

4. Several filtering options available: Fixed-Height, Information-Filter, etc.


    5. Easy for multiple users to connect to the system simultaneously.

6. Position can be obtained for non-visible vehicles as long as they are in the detection range.

    3.1.6 Disadvantages

1. The tags stop transmitting their position if no movement is detected (sleep mode).

    2. The tags require batteries to function.

3. Each tag has a different refresh rate.

    4. The refresh rate of the tags is very low for motion control (< 10 Hz).

5. The error associated with each measurement is high (20 cm in the x–y plane and 1 m in the z-direction).

6. The quality of the measurements gets much worse when the tag is close to walls or when the tag is not in the detection range of most sensors.

7. The more tags to be tracked simultaneously, the lower the refresh rate obtained from the system.

8. The system needs to go through a long process of recalibration if any of the sensors is to be moved from its original position or orientation.

    9. The system gives only position, not the orientation of the tag.

    3.2 Qualisys

    Figure 3.3: Qualisys logo.

Qualisys is a leading, global provider of products and services based on optical motion capture. The core technology of Qualisys products has been developed in Sweden since 1989. The experienced Qualisys staff has created a unique platform for optical motion capture, built to medical and industrial standards.


    3.2.1 Overview

Optical motion capture is widely accepted and used daily all over the world. It enables the capture of motion that would be difficult to measure in other ways.

In the medical field, researchers and clinicians use movement data to study and observe human movement performance.

In industry, engineers use position and orientation data to control machinery, improving the safety and reliability of automated processes.

Qualisys motion capture hardware and software have been designed with low latency and maximum speed in mind, without sacrificing accuracy. Qualisys offers an easy way to obtain accurate 3D and 6 DOF position in RT.

3.2.2 Features

- The core component of the Qualisys motion capture system is one or more infrared optical cameras, Oqus, emitting a beam of infrared light. Each camera can be configured independently, and they can be used for high speed video capture as well. Additionally, there is a possibility to overlay video footage with 3D positioning data.

- Small, light-weight, retro-reflective markers are placed on an object. Markers of different sizes and hardness can be used interchangeably. Three markers is the minimum to track 6 DOF rigid bodies, but more markers can be added to enhance rigid body recognition reliability.

- Cameras emit infrared light onto the markers, which reflect the light back to the camera sensor. This information is then used to calculate the position of targets with high spatial resolution. Precision and covered volume can be increased by adding cameras to the system. The refresh rate of the data can be selected in the range of 1 Hz up to 500 Hz.

- To make a 3D reconstruction of 2D data, the system needs to be calibrated. A wand is simply moved around in the capture volume for 10 s – 40 s while a stationary reference object in the volume defines the lab coordinate system. During the calibration the system performs an automatic detection of each camera's position and angle.

- Works in RT mode and capture mode. The system can be connected to analog ±10 V signals for synchronization. The data can be retrieved through a TCP/IP or OSC server using LabVIEW, MATLAB or QTM clients. The data can also be exported to TSV, C3D and MATLAB formats for post-processing and visualization.


3.2.3 Data Retrieval

There is a central computer running proprietary software that enables all the Qualisys Oqus cameras connected to the Ethernet hub. Based on the video obtained from these cameras and several user-selectable parameters (capture rate, exposure time, marker threshold, prediction error, max frame gap, etc.), the proprietary Qualisys software calculates the (x, y, z) position of each tag in range.

This data can be requested asynchronously from other computers connected to the Ethernet hub using a MATLAB script or a LabVIEW VI; in this thesis we only use the second option to retrieve the data. Qualisys comes with the QLC, a LabVIEW library that consists of the following VIs:

- QLC main VI: used for connecting to the QTM RT server and downloading data.
- Q Command: used for controlling QTM by sending commands.
- Q2D: used for fetching 2D data.
- Q3D: used for fetching 3D data.
- Q3D No Labels: used for fetching unidentified 3D data.
- Q6D: used for fetching 6 DOF data.
- Q6D Euler: used for fetching 6 DOF data with Euler angles.
- Q Analog Single: used for fetching analog data, only one sample (the latest one).
- Q Analog: used for fetching analog data.
- Q Force Single: used for fetching force data, only one sample (the latest one).
- Q Force: used for fetching force data.

In the following part we will present in detail the VIs used in this thesis; the rest of them are omitted for simplicity.

    QLC VI

The main QLC VI must always be included in the project and be given the required parameters in order for the QLC to be able to deliver data to LabVIEW. Only one instance of QLC.vi should be used per LabVIEW client. Input and output parameters are described below.


    Figure 3.4: LabVIEW QLC main VI, used to retrieve frames from the system.

    Connect Set to true to connect to QTM RT server.

Address QLC needs the IP address of the computer running QTM. If QTM is running on the same computer, use 127.0.0.1 (default) or localhost.

    Port The port used for the connection with QTM. The default value is 22222.

Frequency The frequency parameter is used to set the update frequency:

1. AllFrame: sets the update frequency to the camera frequency.

2. Frequency (n): sets the update frequency to n Hz.

3. FrequencyDivisor (n): sets the update frequency to the camera frequency divided by n.

Data This parameter tells QTM which types of data it should send. To send several different data types, use a space between each type. The default is to send All. Using all data types can result in big data frames, which can reduce performance. The best practice is to use only the components you need. Available data components: [All] [2D] [3D] [3DRes] [3DnoLabels] [3DnoLabelsRes] [Analog] [Force] [6D] [6DRes] [6DEuler] [6DEulerRes].


ControlQTM Set to true to take control of QTM. You need to take control over QTM to be able to control QTM via commands.

    Message Returns status messages from the QTM RT server.

LastEvent Returns the last event from the QTM RT server. These are all the possible events:

- None = 0
- Connected = 1
- Connection Closed = 2
- Capture Started = 3
- Capture Stopped = 4
- Fetching Finished = 5
- Calibration Started = 6
- Calibration Stopped = 7
- RT From File Started = 8
- RT From File Stopped = 9
- Waiting for Trigger = 10
- Camera Settings Changed = 11
- QTM Shutting Down = 12
- Capture Saved = 13
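In client code these codes map naturally onto an enumeration. A minimal C++ sketch is shown below; the type name is ours, but the numeric values are exactly those listed above.

    // QTM RT server event codes, as listed above (enum name is illustrative).
    enum class QtmEvent : int {
      None = 0,
      Connected = 1,
      ConnectionClosed = 2,
      CaptureStarted = 3,
      CaptureStopped = 4,
      FetchingFinished = 5,
      CalibrationStarted = 6,
      CalibrationStopped = 7,
      RtFromFileStarted = 8,
      RtFromFileStopped = 9,
      WaitingForTrigger = 10,
      CameraSettingsChanged = 11,
      QtmShuttingDown = 12,
      CaptureSaved = 13
    };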

QTMMaster Returns true if you are the QTM master, i.e. have control over QTM.

CamTimeStamp Returns a 64-bit integer containing the current camera timestamp in µs.

CamFrameNumber Returns a 32-bit integer containing the current camera frame number.

Updated Returns true if new data is read from the QTM RT server. This applies to all data types, except for Analog and Force.

    Q Command VI

It is possible to control QTM via the Q Command VI. To be able to control QTM from the LabVIEW client, you have to set the controlQTM input to true in the QLC.vi. You must also check the Allow client control checkbox in QTM; it can be found under Processing/RT outputs in the workspace options. Only one RT client can control QTM at a time. This includes all RT clients, not only LabVIEW clients.



    Figure 3.5: LabVIEW Q Command VI, used to send instructions to the QTM.

Once you have control over QTM, you can issue the following commands:

New Create a new measurement. An unsaved measurement must be saved or closed before a new measurement can be created.

    Close Close measurement.

Start Start a capture in QTM. The length of the capture is the current capture length in QTM. It is possible to stop a capture prematurely with the Stop command.

Start RT from file Simulate an RT session by playing the current QTM file.

    Stop Stop QTM capture or RT session from file.

Save Save the current measurement in QTM. The name of the QTM file is set with the pFileName input in the Q Command VI.

Send Trig Send an external trigger to QTM. This will trigger a measurement if the system is set to start on an external trigger.

Send Event Set an event in QTM. The name of the event is set with the pEventLabel input in the Q Command VI. If no name is given, the label will be set to Manual event.


    Q6D Euler VI

This VI is used to obtain the 3D position and angles of each rigid body. Several instances of this VI can be called from the LabVIEW program. In the following diagram we can see the user-specified parameters and outputs of the VI.


    Figure 3.6: LabVIEW Q6D Euler VI, used to retrieve rigid body 6 DOF data.

No The order number of the 6 DOF body to read.

    Residual Set to true to read 6 DOF residual values.

    x, y, z 3D position coordinates.

    Roll, Pitch, Yaw Euler rotation angles.

    Res 6 DOF residual values.

    3.2.4 Advantages

    1. Global reference frame can be defined and later translated or rotated.

2. Can store/import/export configuration files, calibrations, rigid bodies and interface parameters.

3. System can be integrated with HD video cameras to film the experiments in RT synchronously with the data recording of position and angles.

4. Capacity to enable bounding boxes to restrict the area of valid data points by masking out possible sources of noise.

5. Data reprocessing allows for recorded experiments to be recalculated for enhanced precision, coordinate system shift, etc.


6. Hierarchical organization of projects, data recordings, and video footage for easy playback and quick navigation between experiments.

7. A great deal of data processing is done inside the cameras, making the central QTM application lightweight in terms of processor load.

    3.2.5 Disadvantages

    1. Requires several cameras to cover a relatively small volume.

    2. Reflective and shiny materials such as aluminium can cause small disruptions.

3.3 Extended Kalman Filter

When using the Ubisense positioning system the error associated with each measurement is too large (30 cm on the x–y plane and 1 m on the z-axis); for that reason we implemented an EKF to improve the performance of the controllers. The EKF was designed and tested using Simulink and then implemented in LabVIEW through a MATLAB Script Node.

In this section we will cover the theory behind the EKF; we will show the implementations in both Simulink and LabVIEW, and analyze the improvement that the filter produces over the raw measurements. Even though this filter was designed to cope with the poor quality of the Ubisense measurements, it can also be used to improve the measurements of other positioning systems.

The theory and implementation of the EKF presented in this part was taken from Phil Goddard's webpage, Goddard Consulting, and adapted to fit the purposes of this thesis. Even though we only deal with the EKF in two dimensions, scaling the theory and implementation derived here to three dimensions is straightforward.

3.3.1 Theory

The KF is widely used in engineering for estimating unmeasured states of a process with known dynamics. The KF in general works in an iterative fashion, using predictions and measurements to estimate the most likely state at each time step. Using these state estimates as input to control algorithms generally produces much better results than using the raw measurements without filtering.

The most generic and adequate type of KF for non-linear discrete-time processes is the EKF. There also exist simpler versions of this filter for linear processes, like the KF, and more complex versions for continuous-time systems, such as the Kalman-Bucy filter. Consider the non-linear discrete-time process with input and measurement noise represented by $w_k$ and $v_k$:



    Figure 3.7: EKF detailed diagram.

    We can write the following equations in standard state-space form:

$$x_k = f(x_{k-1}, u_k, k) + w_{k-1}$$
$$y_k = h(x_k, u_k, k)$$
$$\bar{y}_k = y_k + v_k$$

Here:

- $k$: discrete point in time.
- $u_k$: vector of inputs.
- $x_k$: vector of states.
- $y_k$: vector of outputs.
- $\bar{y}_k$: vector of measured outputs.
- $w_k$: vector of state noise, with zero-mean Gaussian distribution and covariance $Q_k$.
- $v_k$: vector of output noise, with zero-mean Gaussian distribution and covariance $R_k$.
- $f(\cdot)$: non-linear function that relates the past state, the current input and the current time to the current state.
- $h(\cdot)$: non-linear function that relates the current state, the current input and the current time to the current output.

Table 3.1: the meaning of each variable in the EKF.


The EKF algorithm takes as inputs the measured outputs, the process inputs and a certain time k, and produces as output the unmeasured observable states and the actual process outputs. This is represented graphically by the following diagram:


    Figure 3.8: EKF simplified diagram.

The EKF algorithm takes place in two steps:

1. The first step consists of projecting the most recent state estimate and an estimate of the error covariance forwards in time to calculate a predicted state estimate at the current time step.

2. The second step consists of correcting the predicted state estimate calculated in the first step by incorporating the most recent process measurement to generate an updated state estimate.

Since the process is non-linear, instead of using $f(\cdot)$ and $h(\cdot)$ directly in the prediction and update equations we must use the Jacobians of $f(\cdot)$ and $h(\cdot)$. The Jacobians are calculated using the following formulas:

$$F_k = \left.\frac{\partial f}{\partial x}\right|_{(x_k, u_k, k)} \qquad H_k = \left.\frac{\partial h}{\partial x}\right|_{(x_k, u_k, k)}$$

    For the EKF the predictor step is given by the following expressions:

$$\hat{x}_k^- = f(\hat{x}_{k-1}, u_k, k)$$
$$P_k^- = F_{k-1} P_{k-1} F_{k-1}^T + Q_k$$

    And the corrector step is given by the following expressions:


$$K_k = P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k\left(\bar{y}_k - h(\hat{x}_k^-, u_k, k)\right)$$
$$P_k = (I - K_k H_k)\, P_k^-$$

Here:

- $P_k$¹: covariance estimate of the measurement error.
- $K_k$: Kalman gain.
- $\hat{x}_k$¹: current estimate of the states after the correction.
- $\hat{y}_k$: current output estimate.

Table 3.2: the meaning of each variable in the EKF corrector step.

3.3.2 Implementation

The problem involves estimating the x–y position and x–y velocities of an object based on a succession of noisy x–y measurements. To implement the EKF algorithm we need to calculate the Jacobian matrices for the state and the measurement equations. Using a first order approximation we get the following expression:

$$x(k+1) = \begin{bmatrix} x_{pos}(k+1) \\ x_{vel}(k+1) \\ y_{pos}(k+1) \\ y_{vel}(k+1) \end{bmatrix} = \begin{bmatrix} 1 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta t \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{pos}(k) \\ x_{vel}(k) \\ y_{pos}(k) \\ y_{vel}(k) \end{bmatrix} = F_k\, x(k)$$

This matrix formulation of the equations can be interpreted as follows: over a small period of time $\Delta t$, the position in both the x and y directions changes proportionally to the velocity along that axis, and the velocity remains constant until the next time step; that is, $x_{pos}(k+1) = x_{pos}(k) + \Delta t\, x_{vel}(k)$ and $x_{vel}(k+1) = x_{vel}(k)$, and likewise for y. Here we named the Jacobian matrix $F_k$.

In our particular scenario we are getting x and y directly from the system, so the measurement update equation is very simple:

$$m_k = \begin{bmatrix} m_k^1 \\ m_k^2 \end{bmatrix} = \begin{bmatrix} x_{pos} \\ y_{pos} \end{bmatrix}$$

¹ $P_k$ and $\hat{x}_k$ are stored and used in the predictor step of the next iteration.


Calculating the Jacobian for the measurement equations we get:

$$H_k = \frac{\partial m_k}{\partial x} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

    In t
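The thesis' actual filter is implemented in Simulink and in a LabVIEW MATLAB Script Node, but the two-dimensional case above is compact enough to sketch in full. The self-contained C++ example below implements the predictor and corrector steps with the constant $F_k$ and $H_k$ derived above; because both matrices are constant and linear, the EKF reduces to a plain linear KF here. The choices $Q_k = qI$ and $R_k = rI$ and all numeric values are illustrative assumptions, not tuned parameters.

    #include <cstdio>

    // Constant-velocity Kalman filter matching the F_k and H_k derived above.
    // State: [x_pos, x_vel, y_pos, y_vel]; measurement: [x_pos, y_pos].
    const int N = 4, M = 2;

    void matmul4(const double A[N][N], const double B[N][N], double C[N][N]) {
      for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
          C[i][j] = 0.0;
          for (int k = 0; k < N; ++k) C[i][j] += A[i][k] * B[k][j];
        }
    }

    struct KF {
      double x[N] = {0, 0, 0, 0};
      double P[N][N] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};

      // Predictor step: x^- = F x, P^- = F P F^T + Q, with Q = q * I.
      void predict(double dt, double q) {
        double F[N][N] = {{1,dt,0,0},{0,1,0,0},{0,0,1,dt},{0,0,0,1}};
        double xp[N];
        for (int i = 0; i < N; ++i) {
          xp[i] = 0.0;
          for (int j = 0; j < N; ++j) xp[i] += F[i][j] * x[j];
        }
        for (int i = 0; i < N; ++i) x[i] = xp[i];
        double FP[N][N], Ft[N][N];
        matmul4(F, P, FP);
        for (int i = 0; i < N; ++i)
          for (int j = 0; j < N; ++j) Ft[i][j] = F[j][i];
        matmul4(FP, Ft, P);
        for (int i = 0; i < N; ++i) P[i][i] += q;
      }

      // Corrector step with z = [x_pos, y_pos] and R = r * I. Since H picks
      // state components 0 and 2, S = H P H^T + R and P H^T can be read
      // directly out of P instead of forming full matrix products.
      void update(double zx, double zy, double r) {
        const int idx[M] = {0, 2};
        double S[M][M], PHt[N][M];
        for (int i = 0; i < M; ++i)
          for (int j = 0; j < M; ++j) S[i][j] = P[idx[i]][idx[j]];
        S[0][0] += r; S[1][1] += r;
        for (int i = 0; i < N; ++i)
          for (int j = 0; j < M; ++j) PHt[i][j] = P[i][idx[j]];
        // Invert the 2x2 innovation covariance S analytically
        double det = S[0][0]*S[1][1] - S[0][1]*S[1][0];
        double Si[M][M] = {{ S[1][1]/det, -S[0][1]/det},
                           {-S[1][0]/det,  S[0][0]/det}};
        double K[N][M];
        for (int i = 0; i < N; ++i)
          for (int j = 0; j < M; ++j)
            K[i][j] = PHt[i][0]*Si[0][j] + PHt[i][1]*Si[1][j];
        // x = x^- + K (z - H x^-)
        double innov[M] = {zx - x[0], zy - x[2]};
        for (int i = 0; i < N; ++i)
          x[i] += K[i][0]*innov[0] + K[i][1]*innov[1];
        // P = (I - K H) P^-, where (K H P)[i][j] = K[i][0] P[0][j] + K[i][1] P[2][j]
        double Pnew[N][N];
        for (int i = 0; i < N; ++i)
          for (int j = 0; j < N; ++j)
            Pnew[i][j] = P[i][j] - (K[i][0]*P[0][j] + K[i][1]*P[2][j]);
        for (int i = 0; i < N; ++i)
          for (int j = 0; j < N; ++j) P[i][j] = Pnew[i][j];
      }
    };

    int main() {
      KF kf;
      // Feed a few invented noisy position fixes at 10 Hz (dt = 0.1 s)
      double z[][2] = {{0.02, 0.01}, {0.11, 0.05}, {0.19, 0.11}};
      for (auto& m : z) {
        kf.predict(0.1, 0.01);
        kf.update(m[0], m[1], 0.04);  // r = (0.2 m)^2, the Ubisense x-y error
        std::printf("pos (%.3f, %.3f) vel (%.3f, %.3f)\n",
                    kf.x[0], kf.x[2], kf.x[1], kf.x[3]);
      }
    }

One predict/update pair per incoming Ubisense sample reproduces the iterative structure described in the theory section: the predictor propagates the estimate through $F_k$, and the corrector blends in the new measurement through the Kalman gain.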