Source: kom.aau.dk/group/09gr723/gruppe1032_CD/master.pdf
Autonomous Agricultural Robot
towards robust autonomy

Group members of IAS10-1032b

Martin Holm Pedersen
Jens Lund Jensen

Fall 2006–Spring 2007

AALBORG UNIVERSITY

DEPARTMENT OF ELECTRONIC SYSTEMS

AUTOMATION AND CONTROL


The Faculty of Engineering, Science and Medicine

Intelligent Autonomous Systems

THEME: Final Thesis

TITLE: Autonomous Agricultural Robot: towards robust autonomy

PROJECT PERIOD: IAS9–IAS10, Sep. 2006–Jun. 2007

GROUP: IAS10–1032b

GROUP MEMBERS:
Martin Holm Pedersen
Jens Lund Jensen

SUPERVISORS:
Roozbeh Izadi-Zamanabadi
Jesper A. Larsen

NUMBER OF DUPLICATES: 6

NUMBER OF PAGES IN REPORT: 128

NUMBER OF PAGES IN APPENDIX: 50

TOTAL NUMBER OF PAGES: 178

Abstract: This master thesis documents the work of group 1032b and concerns the development of a model based Fault Detection and Isolation (FDI) scheme to detect and isolate faults in a four-wheeled agricultural robot called the API (Autonomous Plant-care Instrumentation system). The thesis describes a number of different methods for deploying different FDI strategies as well as preliminary testing on a preexisting non-linear model of the robot. Furthermore, the thesis documents the efforts to make the API a more reliable and robust platform, with the design and implementation of new sensors as well as steps to make it possible for the robot to diagnose itself.

The different FDI methods were tested successfully on the non-linear model and were able to detect and isolate some of the selected faults. A new inclinometer was designed and implemented on the robot to replace the old one. Two new proximity sensors were designed and implemented.


Preface

This report is written by two students as their master thesis at Aalborg University, Department of Electronic Systems. The thesis is the final part of the specialisation in Intelligent Autonomous Systems. This report is the documentation of the work conducted in the period from September 2006 to June 2007 and is focused on fault detection and isolation and the API project.

This project is based on the Autonomous Plant-care Instrumentation system (API), which is a joint venture between Aalborg University, the Danish Institute of Agricultural Science, The Royal Veterinary and Agricultural University and four industrial companies, and is a pilot project to determine the feasibility of using autonomous platforms for field monitoring and plant care.

The goal of this report is to design and implement fault detection and isolation of steering and propulsion faults. A secondary goal is to design and implement additional hardware in order to make the API a more robust and reliable platform for future research groups.

The report is divided into five parts: Part one is a description of the project and the specification of requirements. The second part is modeling. The third part deals with designing and testing various methods of fault detection and isolation. The fourth part is the overall conclusion of the project. The last part is a collection of appendices to supplement the report.

Citations throughout the report are indicated by a number and an optional page or chapter number, such as [7, Chapter 2].

The enclosed CD contains the report in PDF and PS formats and the MATLAB and ANSI C source code.

Martin Holm Pedersen Jens Lund Jensen



Nomenclature and Abbreviations

General Nomenclature
N        Fixed global reference frame
M        Global reference frame rotated with the inclination of the robot
B        Robot reference frame which is fixed with the robot
R_X→Y    The rotation matrix from the X-frame to the Y-frame
ψx, ψy   Angles describing the rotation between N and M [rad]

Nomenclature for Mechanical Modelling
θ_M      The angle of the robot with respect to the x-axis of the M-frame [rad]
y        The absolute position in the y-direction [m]
x        The absolute position in the x-direction [m]
χ        The absolute position vector of the robot consisting of [x y θ]^T
χ̇        The velocity vector of the robot consisting of [ẋ ẏ θ̇]^T
χ̈        The acceleration vector of the robot consisting of [ẍ ÿ θ̈]^T
β        The angle of the wheels with respect to the body of the vehicle [rad]
r_w      The radius of the wheels [m]
φ̇        The angular velocities of the wheels [rad/s]
y_wi     The y-position of the i'th wheel in the B-frame [m]
x_wi     The x-position of the i'th wheel in the B-frame [m]
y_ICR    The y-position of the ICR in the B-frame [m]
x_ICR    The x-position of the ICR in the B-frame [m]
F_x      The propulsion forces provided by the wheels: [F_x1 F_x2 F_x3 F_x4] [N]
τ_r      The propulsion torque provided by the actuators [Nm]
C_f      The cornering stiffness of the tires [N/rad]
α        The slip angles of the wheels [rad]
V        The velocity of the robot [m/s]



v        The speed of the robot [m/s]
κ        Lengths describing the positions of the wheels (wrt. GC) [m]
γ        Angles describing the positions of the wheels (wrt. GC) [rad]
β_max    The maximum angle of the wheels with respect to the body of the vehicle. 2β_max defines the area between the two stoppers. [rad]
β̇_max    The maximum speed of the turning actuators [rad/sec]
F_g^M    The gravity force affecting the robot in the M-frame [N]
C_f1     The cornering stiffness stiction constant [N/rad]
C_f2     The cornering stiffness dynamic constant [N/rad]
R_a      Armature resistance of the actuators [Ω]
K_m      Motor constant which is equal to the torque constant K_t and the electrical constant K_e [Nm/A]
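As an illustration of the R_X→Y notation above, the rotation between the N- and M-frames can be written as a composition of two elementary rotations by ψx and ψy. The composition order and sign convention below are one common choice, not taken from the report, and serve only to illustrate the notation:

```latex
% Illustrative convention (composition order is an assumption):
% rotate about the x-axis by psi_x, then about the y-axis by psi_y.
R_{N \to M} = R_y(\psi_y)\, R_x(\psi_x)
=
\begin{bmatrix}
 \cos\psi_y & 0 & \sin\psi_y \\
 0 & 1 & 0 \\
 -\sin\psi_y & 0 & \cos\psi_y
\end{bmatrix}
\begin{bmatrix}
 1 & 0 & 0 \\
 0 & \cos\psi_x & -\sin\psi_x \\
 0 & \sin\psi_x & \cos\psi_x
\end{bmatrix}
```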

Abbreviations

ADI      Active Fault Isolation
API      Autonomous Plant-care Instrumentation system
CM       Centre of Mass
BFDF     Beard Fault Detection Filter
COM      COMmunication system
DGPS     Differential Global Positioning System
FDI      Fault Detection and Isolation
FSB      Front Sensor Board
FTC      Fault Tolerant Control
GC       Geometrical Centre of robot
GPS      Global Positioning System
ICR      Instantaneous Centre of Rotation
OBC      OnBoard Computer
PDF      Probability Density Function
PF       Particle Filter
PF-FDI   Particle Filter FDI
PWM      Pulse Width Modulation
SA       Structural Analysis
SF       Sensor Fusion
SO       Severity Occurrence index
SPI      Serial Peripheral Interface Bus



Contents

I Introduction 13

1 Background 15

1.1 The API Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.2 Project Focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 System description 19

2.1 External Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.2 Internal Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.3 API Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3 Additional Hardware and Software 25

3.1 Forward Proximity Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.2 Inclinometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3 Front Sensor Board . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.4 Remote Shutdown of Wheels . . . . . . . . . . . . . . . . . . . . . . . . . . 29

II Modeling 31

4 Model 33

4.1 API Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.2 Kinematic Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.3 Dynamic Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4.4 Hybrid Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

4.5 Model Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

III Fault Detection and Isolation 55

5 Introduction 57

6 Fault Analysis 61

6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67




7 Isolability analysis 71

8 Linear Model based FDI of Steering and Propulsion System 75

8.1 Hybrid State Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

8.2 Continuous State Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

8.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

9 Nonlinear Particle Filter Based FDI of Steering and Propulsion System 91

9.1 Requirements for Particle Filter-FDI . . . . . . . . . . . . . . . . . . . . . . 91

9.2 Particle Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

9.3 FDI using Particle Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

9.4 Performance Parameters with regards to Particle Filter-FDI . . . . . . . . . 96

9.5 Preliminary Test of Particle Filter-FDI Method . . . . . . . . . . . . . . . . 97

9.6 Preliminary Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

10 Active Fault Isolation Supervisor 101

10.1 Active Isolation of Steering Faults. . . . . . . . . . . . . . . . . . . . . . . . 101

10.2 Active Isolation of Propulsion Faults. . . . . . . . . . . . . . . . . . . . . . . 102

10.3 Partial Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

IV Conclusion 107

11 Accept Test of Linear FDI 109

11.1 Hybrid State Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

11.2 Continuous State Observer . . . . . . . . . . . . . . . . . . . . . . . . . . 111

12 Accept Test of Particle Filter-FDI Method 115

12.1 Conclusion on Particle Filter-FDI . . . . . . . . . . . . . . . . . . . . . . . . 115

13 Accept Test of Active Fault Isolation Supervisor 119

13.1 Test of Steering Fault Isolation. . . . . . . . . . . . . . . . . . . . . . . . . . 119

13.2 Test of Propulsion Fault Isolation. . . . . . . . . . . . . . . . . . . . . . . . . 122

13.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

14 Conclusion 123

V Appendix 129

A Additional Hardware 131

B Hardware Test 135

B.1 Inclinometer Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135




B.2 Proximity Sensor Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

C Implemented Software 139

C.1 Simulink blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

C.2 Standalone Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

C.3 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

D Linear FDI 145

D.1 Linear Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

D.2 Matlab Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

E Fault Analysis 147

F Theory of UIO 157

F.1 UIO Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

G FDI Method Test 163

G.1 Test of UIO method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

G.2 Test of Beard Fault Detection Filter method . . . . . . . . . . . . . . . . . . 166

H Active FI Supervisor 169

H.1 Active Isolation of Steering Faults. . . . . . . . . . . . . . . . . . . . . . . . 169

H.2 Active Isolation of Propulsion Faults. . . . . . . . . . . . . . . . . . . . . . . 170

I Kalman Filter 173

J Operational Requirements 175

K Pitch and Roll Compensation of GPS, Gyro and Compass 177

K.1 GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

K.2 Gyro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

K.3 Compass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178



Part I

Introduction



Chapter 1

Background

As farms grow in size, together with the size of the equipment used on them, there is a need for ways to automate processes previously performed by the farmer himself, such as checking the fields for pests. These tasks are perfectly suited for autonomous robots, as they often require numerous repetitions over a long period of time and over a large area.

The use of robots is a rather new development, as most of the existing solutions for automatic supervision are designed for standard farm equipment, such as tractors, combines and pesticide sprayers. One such solution is FIELDSTAR from AGCO [19].

In most cases a small agricultural robot would be ineffective in performing farming jobs, as these often require a large quantity of materials, either to put into the ground, such as seeds or fertilisers, or to take from the field during harvest. But when dealing with monitoring and mapping of fields or precision spraying of pesticides, a smaller robot is ideal, as it is gentler on both the crops and the ground. This is due to the lower weight compared to a tractor, causing much less soil compaction (see Fig. 1.1). The degree of soil compaction is important to consider, especially when dealing with monitoring and mapping, as this is often performed multiple times throughout the year, and soil compaction can cause a number of problems, such as reduced crop growth and denitrification [15].

Figure 1.1: Effect of wheel load and tire size on soil compaction depth [15].




1.1 The API Project

In the year 2000 a co-operation between Aalborg University, The Danish Institute of Agricultural Science, The Royal Veterinary and Agricultural University and 4 industrial companies was formed [7]. The goal was to create an Autonomous Plant-care Instrumentation system (API) and through the development gain expertise and knowledge in autonomous field operations. The main use for the API is crop and weed monitoring and precision spraying of pesticides.

The first version of the API was a somewhat simple design using small low-weight spoke-wheels mounted on a simple frame, as seen in Fig. 1.2.

Figure 1.2: The first version of the API.

In 2002 a more rugged API was designed using larger knobbed wheels and a more robust frame, which also provides compartments for the batteries and the instrumentation (see Fig. 1.3 on the facing page). The mechanical implementation was performed by The Danish Institute of Agricultural Science research centre in Bygholm.

A number of research projects have been performed with regards to instrumentation, control and fault detection and isolation. The projects are:

Autonom Robot til Markanalyse: This project deals with the specification, design and implementation of the first prototype of the API robot [3].

Navigation af Autonom Markrobot: This project [11] focuses on the development of a navigation system for the API. The electrical system of the API is also designed and implemented in this project.

Modeling and Fault-Tolerant Control of an Autonomous Wheeled Robot: The main focus of this project [4] was the modeling and control of the API. An FDI method responsible for detecting faults in the sensors, and to a lesser extent the actuators, is designed and implemented. An FTC scheme is also designed and implemented.

The project status as of September 2006 is that the mechanical part of the project, a fully actuated four-wheeled robot, is completed. A number of sensors, such as GPS, gyro and compass, are implemented. A hybrid controller, designed for path following, is implemented and tested. An FDI and FTC scheme is partly implemented.

Figure 1.3: The API robot in its current form and configuration

1.2 Project Focus

The focus of this project is to make the API a robust platform for use in future research projects focusing on the use of autonomous robots in agricultural settings, but also in research areas involving the use of larger robots in general outdoor environments.

This involves the design and implementation of an FDI scheme fully capable of detecting and isolating faults occurring in the wheel actuators and sensors, as well as the design and implementation of additional hardware and software needed for reliable operation. Completing these improvements will improve the robustness and reliability of the current system and allow future research groups to use the API without a background in control or electronics, allowing them to focus on their research.

Based on the focus of the project a number of objectives can be established.

1.2.1 Project Objectives

The primary objective is to make the API a robust platform. This is primarily obtained by implementing a full FDI scheme focusing on the faults occurring in the wheel actuators and sensors. This is achieved through the following sub-objectives:

1. Perform a fault analysis and severity assessment of all wheel, proximity sensor and inclinometer faults.

2. Design and implement a linear model based FDI method.




3. Verify the detection of critical faults.

To complement the linear methods, two additional methods for FDI are to be examined, resulting in the following objectives:

1. Design and implement a non-linear FDI method.

2. Design and implement an active FDI method.

The practical objectives involving the additional hardware and software are:

1. Design and implementation of proximity sensors on the API.

2. Design and implementation of a separate inclinometer in order to provide pitchand roll measurements.

3. Design and implementation of relays for disconnecting individual wheels.

1.2.2 Operational Requirements

A number of requirements for the operational performance of the API have been set by previous groups. The relevant requirements for use in this project are:

1. When a fault occurs, the robot is allowed to deviate from its course, for instance driving on top of the rows, for a maximum of 30 seconds.

2. The robot must be able to continue operation with at least one sensor/actuator fault.

3. Under both normal and faulty operation, the robot must not be potentially dangerous to its environment.

The complete list of requirements can be seen in App. J on page 175.



Chapter 2

System description

The following chapter is an overview of the different components that the API is equipped with. The components are placed on the outside of the robot as well as inside the two compartments at the front and the back of the robot.

2.1 External Components

The components on the outside can be seen in Fig. 2.1 on the following page, with the front and rear compartments shown in Fig. 2.2 on page 21 and Fig. 2.3 on page 22.

For communication the API is equipped with three different antennas:

• A GPS antenna used for communication between the GPS receiver and the availableGPS satellites.

• An omni-directional radio modem antenna for communication between the GPS receiver and the GPS base station to facilitate the use of DGPS.

• Combined WLAN device and antenna with USB interface for communication between the OBC and the API base station.

Sensors placed on the outside are:

• Compass: Measures the heading of the API relative to magnetic north.

• Doppler radar: Measures the forward speed of the API.

• Inclinometer: Measures the pitch and roll of the API.

• Proximity sensors: Measure the distance to the nearest obstacle using the time of flight of emitted sound waves.

2.2 Internal Components

The front and rear compartments contain a number of common components:




Figure 2.1: Components attached to the outer structure of the robot.




• LH28: Embedded micro controller sampling all propulsion and steering sensors as well as controlling the steering and propulsion actuators. Communicates with the OBC via a CAN bus.

• H-bridge: Driver stage responsible for outputting PWM signals to the steering actuators.

• Interface Electronics: Interfaces the sensors to the LH28 controller as well as providing the interface from the LH28 to the steering and propulsion actuators.

In addition to the common components the front compartment contains a Gyro, GPS receiver and Radio Modem. The front compartment also houses the Inclinometer and Proximity Sensor unit. The DC/DC converter, providing the voltage for the 5Vcc and 12Vcc power bus, is placed below the Inclinometer and Proximity Sensor unit.

The back compartment contains the battery charger and the OBC, which provides the overall control of the API robot.

Figure 2.2: Front compartment of the API robot.

2.3 API Subsystems

Based on the components listed in the previous sections a number of subsystems can be defined with regards to functionality. The subsystems are: Wheel, Gyro, Doppler Radar, OBC, COM, Compass, GPS, Proximity Sensors and Inclinometer. Each subsystem is divided into smaller modules as seen in Fig. 2.4, where the boxes with rounded corners symbolize hardware modules. The square boxes are software modules.




Figure 2.3: Back compartment of the API robot.




Figure 2.4: API subsystems.

The functionality of the different subsystem are listed below:

Wheels The four wheels give the API the possibility of moving in any direction due to the individual steering on all four wheels. The wheel system is based on four steering actuators and four propulsion actuators. The wheels are controlled by a LH28 microcomputer. The available sensors are steering angle and wheel speed.

Gyro The gyro subsystem provides a measurement of the vertical angular velocity ofthe API. As the gyro measurement is dependent on the temperature, a temperaturesensor is available to correct the measurements.

Doppler radar The Doppler radar measures the API's speed using microwaves. The measurement is calculated using the Doppler shift of the transmitted microwaves. The Doppler radar measurements are collected by the LH28 responsible for wheel 1.

OBC The on-board computer is the overall control system. The OBC is also responsible for collecting measurements from the gyro, compass, GPS, proximity sensors and inclinometer. The OBC is a PC104 stack with a 133 MHz CPU. The OS on the OBC is Linux. To facilitate manual control of the API, a monitor, keyboard and joystick are also implemented.

COM The communication subsystem provides the communication between the OBC and the four LH28s as well as the communication between the API and the base station. The communication between the OBC and the wheel nodes is based on the CAN bus and consists of the steering and propulsion references and the sensor measurements provided by the wheels. The link connecting the API to the base station is standard WLAN. The communication between the base station and the API consists of the waypoints describing a path, and also functions as a monitoring system when the API is operating autonomously.

Compass The compass provides the heading of the API with regards to magnetic north. An internal inclinometer is present to compensate for tilt offset on the heading, but this compensation is disabled in the compass because the internal inclinometer is noisy and overly sensitive.

GPS The GPS system provides the position of the API. The standard GPS precision is increased by using DGPS. The DGPS precision is obtained using a base station with a fixed position. The base station can then calculate the error on the GPS measurements and transmit the results to the API using the radio modem.

Proximity Sensors The proximity sensors provide a distance measurement of objects in front of the API. This distance can be used for obstacle avoidance and as an emergency system in case of a possible collision.

Inclinometer The inclinometer is placed close to the geometric center of the API andmeasures the pitch and roll of the robot.



Chapter 3

Additional Hardware and Software

When the API robot was handed over from the last group it was clear that some modifications were necessary on the physical system in order to make it more robust and more usable for the current and future projects. The main problems were:

• The implemented WLAN solution was not very robust or very usable as it didn’thave the desired range and was prone to failure when used.

• The interface electronics boxes, which are a very important link between the LH28s and the different sensors and actuators related to each wheel, were unreliable and impractical in use.

• The only inclinometer on the API robot, housed in the compass, is fluid-based and sensitive to vibrations to the point where the measurements are nearly unusable.

• The API robot did not have any proximity sensors to detect obstacles in its path.

To try and solve these problems several initiatives are taken.

• A new USB WLAN device is implemented and the old subsystem removed from therobot.

• New interface electronics boxes are designed and implemented on the robot.

• A new accelerometer-based inclinometer, which is less sensitive to vibrations, is designed and implemented on the robot.

• Two ultrasonic range finders are designed and implemented on the robot.

This chapter describes the design and implementation of the additional hardware andsoftware needed to improve the existing base on the API robot.

3.1 Forward Proximity Sensors

The API is in its current state essentially blind. There are no sensors dealing with the environment. This means that when encountering an obstacle, the robot just continues its operation. This can result in serious damage to both the API and the obstacle, as the API weighs approximately 225 kg and has been measured to reach speeds of at least 2.2 m/s.




The requirements for the current application are:

• Wide detection volume (>15° cone).

• Range of at least 2 m.

The range demand is based on a measured speed of the robot of 2.2 m/s, which gives a stopping distance measured to be about Ddyn = 1.2 m. The total distance travelled from detection of an obstacle until the API robot comes to a complete stop is:

Dstop = Dsample +Dprocess +Ddyn (3.1)

The API robot proximity sensors are sampled once every 60 ms, yielding a sample distance at 2.2 m/s of:

Dsample = 2.2 m/s · 0.060 s = 0.132 m (3.2)

Dprocess is assumed to be negligible so the estimated total stopping distance is:

Dstop = 1.20 m + 0.132 m = 1.33 m (3.3)

There are several alternatives in range finding sensors. Most of them use sound or laser to sense distances, but each has a very different detection area and volume. While laser sensors generally have long range and accuracy, none provide the wide sensing volume of the ultrasonic sensor. In light of this, and based on the assumption that the API will drive in a straight line most of the time, it is decided to outfit the API with two forward facing proximity sensors. This allows the API to sense any object directly in front of it and take appropriate countermeasures.

The chosen sensors are two SensComp 6500 Ranging Modules[17] and two SensComp 7000 sonar transducers[18]. The sensors are rated at a maximum detection distance of 10 meters. The ranging modules are connected to a micro controller, which performs the actual range sampling. The result is transmitted to the OBC via a serial RS232 connection. To save components and implementation effort, the spare computation power and unused ports on the micro controller are used to sample the new inclinometer described in Sec. 3.2. The ultrasonic transducers are placed on the robot as shown in Fig. 3.1 on the next page.

3.1.1 Conclusion

The two ultrasonic proximity sensors have been mounted on the robot and interfaced to the OBC. The sensors were tested in Sec. B.2 on page 136 and found to live up to the demands previously stated in this section, as they could detect obstacles in the robot's path when driving straight and at the desired range.

3.2 Inclinometer

The API is in its current state assumed to always be level. This is not a viable assumption when driving in fields, as the pitch and roll of the API affect both GPS and compass readings and introduce disturbances to the estimation of χM. The reason the robot is assumed level is noise in the inclinometer built into the compass. Previous


Figure 3.1: The placement of the two Proximity Sensors

attempts to remove the noise have been unsuccessful, and a new inclinometer is proposed. The selected inclinometer is the SCA100T[20]. The sensor uses MEMS technology instead of the fluid inclinometer in the compass, which makes it more resistant to the external noise originating from driving in the rugged terrain the API is expected to operate in. The SCA100T is mounted under the GPS antenna and is interfaced using an SPI connection to the micro controller on the forward proximity sensor board. The inclination measurements can then be used to correct the GPS position, the compass heading and the gyro measurements. App. K on page 177 shows the tilt correction. It is assumed that the API, in its current status as a research project, will only move on surfaces that can be considered level. As a consequence, the tilt correction will not be implemented, only described.

3.2.1 Conclusion

The inclinometer has been designed, implemented and tested successfully in App. B.1 on page 135. A function and vibration test was conducted; the inclinometer works as intended and its vibration sensitivity is much lower than that of the original inclinometer.

3.3 Front Sensor Board

The basis of this hardware board is a Microchip PIC16F877[14], responsible for sampling the sensors. The board and the connected sensors will be referred to as the Front Sensor Board (FSB). Even though only two proximity sensors are implemented, the FSB has space for and can use up to eight, as seen in Fig. 3.2 on the next page. This allows future groups to easily expand the area covered by the proximity sensors. The eight sensors could be arranged with two on each side to enable the API to move around obstacles.


Figure 3.2: The diagram of the Front Sensor Board.

Figure 3.3: The implemented version of the Inclinometer and Proximity Sensor board


Figure 3.4: Schematic of the Emergency Stop circuit.

3.4 Remote Shutdown of Wheels

As an additional safety measure and as a method for FDI and FTC, a shutdown box capable of disabling the power to the individual wheels is implemented on the API. This enables the control system to power off one or more wheels in case of emergency or as a method of fully isolating steering and propulsion faults. The current version of the API is not capable of turning off actuators, which in situations with faults occurring on the wheels can result in limited control of the API and in some cases can cause actual damage to the robot. The requirements for the shutdown boxes are:

1. It must not interfere with or disable the emergency stop buttons, mounted on therobot.

2. It shall be able to provide power to the wheels, even when the OBC is shut down.

3. It shall be able to turn the power to the wheels on or off.

4. It must be able to turn the wheels on, using the voltage provided by the parallel port on the OBC.

The emergency stop system on the API was implemented by a previous group as 4 emergency stop switches and 4 relays in series, so if one of the emergency stops is triggered, all relays switch off. A schematic can be seen in Fig. 3.4.

This means that relays for each wheel are already implemented and that the remote shutdown can be implemented between the emergency switches and the relays for the wheels. The proposed method is to use a transistor placed before the relay to enable the power to the relay. The transistor is pulled high by the OBC, triggering the relay and


Figure 3.5: Schematic of the remote shutdown relays

Figure 3.6: Picture of the remote shutdown boxes

providing power to the wheel actuators. As this requires the OBC to actively turn on the wheel motors, a switch is added, allowing the wheels to be either permanently on or controlled by the OBC. As the relays are placed in both the front and back compartments, two shutdown boxes are implemented, each capable of controlling the two wheels in that compartment. The schematic of one of the shutdown boxes can be seen in Fig. 3.5.

The final shutdown box can be seen in Fig. 3.6.

3.4.1 Implementation Status

The remote shutdown boxes have been implemented and are capable of turning the wheels on or off. Furthermore, the functionality of the emergency switches is not bypassed. This means that the functionality described above has been successfully implemented.


Part II

Modeling


Chapter 4

Model

This chapter contains the kinematic and dynamic models of the API robot. The kinematic model is a mathematical model which maps the orientation and angular velocity of the wheels to the movement of the robot. The dynamic model maps the propulsion torques and wheel orientations to the acceleration of the robot. The two models are the result of the work of several previous groups and are presented here in their latest form with few modifications. The third section describes a hybrid model based on the dynamic model.

4.1 API Geometry

As mentioned in the System Description, the API is a four wheeled robot with a steering actuator and a propulsion actuator on each wheel. The dimensions of the API are shown in Fig. 4.1.

Figure 4.1: Dimensions of the API robot[4].

In order to describe the position and orientation of the API, a posture vector is defined as χ = [x, y, θ]T, where x and y are the position of the API oriented with angle θ in the inertial reference frame N. To simplify the relations between the API and the surface it is placed on, as well as the relations between the different parts of the API, a number of frames are defined:

N Frame This frame is the inertial reference frame and is defined as the terrain the robot moves on, as seen from above. The frame is similar to the layout of a map.


M Frame This frame is a non-inertial reference frame and describes the same terrain as the N frame, but with the additional information of the pitch and roll of the robot, ψx and ψy. This is the frame that describes the actual movement on the field.

B Frame This frame describes the orientation of the robot in M. The frame is fixed in the geometric center (GC) of the robot and is aligned with the forward direction of the API along the x-axis of the frame. This means that this frame is M rotated by the orientation of the robot, θM.

The sensors available on the API measure in different frames. In order to use a sensor measurement from another frame, the measurement must be rotated by one or more of the rotation matrices shown below:

RN→M = [  cos(ψx)                           0                         sin(ψx)
         −sin(ψx)cos(ψx)sin(ψy)/|z′|        cos(ψy)/|z′|              cos²(ψx)sin(ψy)/|z′|
         −sin(ψx)cos(ψy)/|z′|              −cos(ψx)sin(ψy)/|z′|       cos(ψx)cos(ψy)/|z′|  ]   (4.1)

where

|z′| = √( cos²(ψx) + cos²(ψy) − cos²(ψx)cos²(ψy) )

RB→M = [  cos(θM)   sin(θM)   0
         −sin(θM)   cos(θM)   0
          0          0        1 ]   (4.2)

Figure 4.2 on the facing page shows the definition of the M and B frames together with the definition of the wheel angles βi. The wheels are fixed in the B frame with (xw,i, yw,i) as the position of the ith wheel. To the right in Fig. 4.2 the position of each wheel with regard to the GC is shown as the intersection of the line κi with angle γi. The definition of the posture vector is also shown.

The values for κi and γi can be seen in Table 4.1.

Parameter | κ1      | κ2      | κ3      | κ4
Value     | 0.707 m | 0.707 m | 0.707 m | 0.707 m

Parameter | γ1  | γ2   | γ3   | γ4
Value     | 45° | 135° | 225° | 315°

Parameter | m        | I
Value     | 211.5 kg | 83.5 kg·m²

Table 4.1: The parameters of the robot.

The geometric center is selected as the basis for all simulation and modeling. Another possibility is to use the center of mass (CM), but as the behaviour of the robot is more intuitive when using the GC, the GC is chosen.


Figure 4.2: To the left: Definitions of the B and M frames with the steering angles ofthe wheels (purple) and the posture vector (red) shown in the M frames. To the right:Position of the geometric center (GC) given by κi and γi[4].

4.2 Kinematic Model

The purpose of the kinematic model is to map the steering angles of the wheels and the angular velocities of the propulsion actuators (βi and φ̇i) to the velocities of the robot. The velocities are given by the vector χ̇N.

Figure 4.3: Inputs and outputs of the Kinematic model

The following assumptions are made in the kinematic model:

Assumption 1 All wheel orientations are nominally perpendicular to lines through the Instantaneous Center of Rotation (ICR).

Assumption 2 It is assumed that the movement of the API is pure rolling with no slip.

The API robot is modeled using the B and M frames. The posture vector is χM. When turning, the API robot turns around the ICR in the M-frame.

In order to calculate the ICR, the individual wheel angles βi + π/2 are used instead of βi, as the intersecting lines have to be perpendicular to the wheels.

The derivation of the kinematic model is separated into three parts:

1. Calculating the ICR from wheel orientations βi.

2. Projecting the wheel velocities to an angular velocity around the ICR.

3. Calculating the translatory movement from the angular velocity and the vector to the ICR.


Figure 4.4: Geometric definitions for API robot when turning around the ICR.

The equations used for calculating the ICR can be derived by looking at Fig. 4.4 and Fig. 4.5 on the next page. Equation (4.3) describes a straight line with a slope of tan(βi + π/2) on which the point (xi, yi) is placed.

yi = ai·xi + bi
yi = tan(βi + π/2)·xi + bi
yi = −cot(βi)·xi + bi   (4.3)

A known point on the line is the position of the wheel (xw,i, yw,i). This point is substituted into Eq. (4.3), which is then solved for bi:

yw,i = −cot(βi)·xw,i + bi  ⇒  bi = yw,i + xw,i·cot(βi)

The calculated bi is then substituted into Eq. (4.3) to yield the final description of the line,that goes through the ICR and the center of wheel i:

yi = (xw,i − xi) cot(βi) + yw,i (4.4)

The ICR can be described as the intersection between two of these lines. The result of this


Figure 4.5: The line from the center of a wheel to the ICR.

operation can be seen in Eq. (4.5) and Eq. (4.6).

yi = yBICR ∧ xi = xBICR ⇒

xBICR = ( xBw,i·cot(βi) − xBw,j·cot(βj) + yBw,i − yBw,j ) / ( cot(βi) − cot(βj) )   (4.5)

yBICR = ( cot(βi)·yBw,j − cot(βj)·yBw,i + cot(βi)cot(βj)·(xBw,j − xBw,i) ) / ( cot(βi) − cot(βj) )   (4.6)
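The intersection of Eqs. (4.5)-(4.6) can be sketched directly (a minimal sketch; the function name is ours, not the thesis code):

```python
import math

def icr(xw_i, yw_i, beta_i, xw_j, yw_j, beta_j):
    """ICR as the intersection of two wheel-normal lines, Eqs. (4.5)-(4.6).

    Coordinates and steering angles are in the B frame. When the two
    lines are parallel (straight-line driving) the denominator is zero
    and the model switches to its straight-line mode instead.
    """
    ci, cj = 1.0 / math.tan(beta_i), 1.0 / math.tan(beta_j)  # cot(beta)
    x_icr = (xw_i * ci - xw_j * cj + yw_i - yw_j) / (ci - cj)
    y_icr = (ci * yw_j - cj * yw_i + ci * cj * (xw_j - xw_i)) / (ci - cj)
    return x_icr, y_icr
```

For example, wheels at (0.5, 0.5) and (0.5, −0.5) steered so that cot(β) is −5 and −3 respectively have wheel-normal lines meeting at the ICR (0, −2).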

Since the above equations only have a solution when the lines intersect, the model cannot be used when the robot is moving in a straight line. The model therefore effectively becomes a hybrid model with two states: one for turning and one for driving in a straight line. The following two equations are used to model the behavior of the API when driving in a straight line:

ẋB = (1/4) Σ_{i=1}^{4} φ̇i·cos(βi)   (4.7)

ẏB = (1/4) Σ_{i=1}^{4} φ̇i·sin(βi)   (4.8)
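The straight-line mode can be sketched as the mean of the wheel velocity vectors. Note one assumption: the printed equations use φ̇i directly, so the wheel radius rw (Fig. 4.1) is inserted here to convert the angular rate to a rim speed; the helper name is ours.

```python
import math

def straight_line_velocity(beta, phi_dot, rw=0.23):
    """Body-frame velocity in straight-line mode, Eqs. (4.7)-(4.8):
    the mean of the four wheel velocity vectors. rw converts the wheel
    angular rate phi_dot [rad/s] to a rim speed [m/s] (our assumption).
    """
    xd = sum(rw * p * math.cos(b) for b, p in zip(beta, phi_dot)) / 4.0
    yd = sum(rw * p * math.sin(b) for b, p in zip(beta, phi_dot)) / 4.0
    return xd, yd
```

With all wheels straight (βi = 0) and spinning at 10 rad/s, the robot moves forward at 2.3 m/s, matching the measured top speed of about 2.2 m/s for comparable wheel rates.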

When the ICR has been determined, the angular velocity around it (θ̇M) can be found from the velocity of a wheel (Vwi) and the distance from the wheel to the ICR (ri). This is illustrated in Fig. 4.6 on the following page and shown in Eq. (4.9).

θ̇M = Vwi / ri  ⇔  θ̇M = (ri × Vwi) / (riT·ri)   (4.9)

This means that θ̇M can be written as in Eq. (4.10).


Figure 4.6: The angular velocity around the ICR.

ri = [ xBw,i − xBICR ,  yBw,i − yBICR ,  0 ]T ,   Vwi = rw·φ̇i·[ cos(βi) ,  sin(βi) ,  0 ]T

θ̇M = (ri × Vwi) / (riT·ri)

    = [ 0 ,  0 ,  rw·φ̇i·( (xBw,i − xBICR)·sin(βi) − (yBw,i − yBICR)·cos(βi) ) / ( (xBw,i − xBICR)² + (yBw,i − yBICR)² ) ]T   (4.10)

This rotational velocity around the ICR equals the rotational velocity around the GC. The translational velocity of the robot is the tangential velocity, which can be calculated as shown in Eq. (4.11).

[ ẋB ,  ẏB ,  0 ]T = rICR × θ̇M = [ xBICR ,  yBICR ,  0 ]T × [ 0 ,  0 ,  θ̇M ]T = [ yBICR ,  −xBICR ,  0 ]T·θ̇M   (4.11)

The derivative of the posture vector (χ̇B) can then be written as shown in Eq. (4.12), where Eq. (4.10) is substituted into Eq. (4.11). The position of the ICR (xBICR and yBICR) is derived in Eq. (4.5) and Eq. (4.6).

χ̇B = [ yBICR ,  −xBICR ,  1 ]T · rw·φ̇i·( (xBw,i − xBICR)·sin(βi) − (yBw,i − yBICR)·cos(βi) ) / ( (xBw,i − xBICR)² + (yBw,i − yBICR)² )   (4.12)

Now the velocities have been determined in the B-frame. In order to describe the motion in the N-frame, χ̇B is first rotated into the M-frame by RB→M and then rotated into N by RM→N as in Eq. (4.13).

χ̇N = RM→N·RB→M·χ̇B   (4.13)

With this, the complete kinematic model can be assembled using Eqs. (4.5) and (4.6) for the ICR, Eq. (4.12) for the angular and translational velocities, and Eq. (4.13) for the rotation into inertial coordinates. The result is shown in Eq. (4.14), which is the total kinematic model of the robot in inertial coordinates.

χ̇N = RM→N·RB→M·χ̇B

    = RM→N·RB→M·[ yBICR ,  −xBICR ,  1 ]T · rw·φ̇i·( (xBw,i − xBICR)·sin(βi) − (yBw,i − yBICR)·cos(βi) ) / ( (xBw,i − xBICR)² + (yBw,i − yBICR)² )   (4.14)

where Eq. (4.5) and Eq. (4.6) yield:

xBICR = ( xBw,i·cot(βi) − xBw,j·cot(βj) + yBw,i − yBw,j ) / ( cot(βi) − cot(βj) )

yBICR = ( cot(βi)·yBw,j − cot(βj)·yBw,i + cot(βi)cot(βj)·(xBw,j − xBw,i) ) / ( cot(βi) − cot(βj) )
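One evaluation of the kinematic chain, ICR (Eqs. (4.5)-(4.6)), angular rate (Eq. (4.10)), tangential velocity (Eq. (4.11)) and the rotation of Eq. (4.2), can be sketched as below. Two assumptions are ours: level ground is assumed (so RM→N = I), and the ICR is taken from the first two wheels, a convention the thesis leaves open.

```python
import math

RW = 0.23  # wheel radius [m], from Fig. 4.1

def kinematic_step(wheels, theta_M, rw=RW):
    """One evaluation of the kinematic model, sketching Eq. (4.14) for
    level ground (R_M->N = I). `wheels` is a list of (xw, yw, beta,
    phi_dot) tuples in the B frame; the first two wheels define the ICR
    and the first wheel the angular rate.
    """
    (xi, yi, bi, pdi), (xj, yj, bj, _) = wheels[0], wheels[1]
    ci, cj = 1.0 / math.tan(bi), 1.0 / math.tan(bj)  # cot(beta)
    x_icr = (xi * ci - xj * cj + yi - yj) / (ci - cj)
    y_icr = (ci * yj - cj * yi + ci * cj * (xj - xi)) / (ci - cj)
    # Eq. (4.10): angular rate around the ICR, seen from wheel i
    num = (xi - x_icr) * math.sin(bi) - (yi - y_icr) * math.cos(bi)
    theta_dot = rw * pdi * num / ((xi - x_icr) ** 2 + (yi - y_icr) ** 2)
    # Eq. (4.11): tangential velocity in the B frame
    xB_dot, yB_dot = y_icr * theta_dot, -x_icr * theta_dot
    # Eq. (4.2) as printed: rotate into the M frame
    c, s = math.cos(theta_M), math.sin(theta_M)
    return (c * xB_dot + s * yB_dot, -s * xB_dot + c * yB_dot, theta_dot)
```

A useful sanity check is that the returned speed always equals the distance from the GC to the ICR times the magnitude of the returned angular rate, as the tangential-velocity relation requires.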

4.3 Dynamic Model

The dynamic model sought is a three degree of freedom model incorporating the two following properties:

• Individual use of all four steering actuators. This enables the model to function even if the wheels are not placed at the steering angles required by the kinematic model. It requires modeling of the friction forces on the wheels.

• Individual propulsion torques. This enables the model to function with different propulsion forces from each wheel.

The setup of the model is seen in Fig. 4.7 on the next page. The forces Fxi are the propulsion forces and Vi is the velocity of wheel i. The friction force of the wheel, Fyi, depends on the slip angle of the wheel, αi.

In order to simplify the model the following assumption is made: the robot is fixed with regard to the suspension. This eliminates the roll and pitch of the robot due to acceleration.

The model has two types of inputs:

• The steering angle of the four wheels βi.

• The propulsion forces of the four wheels Fxi.

and the output is the translational and rotational acceleration of the robot in the M-frame:

χ̈M = [ ẍM ,  ÿM ,  θ̈M ]T   (4.15)



Figure 4.7: The general setup for the dynamic robot model[4].

4.3.1 Modeling of Sideways Friction in the Wheels

The sideways friction force is defined in [16] as a linear relation between the friction force and the slip angle:

Fyi = −Cf · αi (4.16)

where:

• Fyi is the sideways friction force of the i’th wheel [N].

• Cf is the cornering stiffness of the tire [N/rad].

• αi is the slip angle of the i’th wheel [rad].

This equation is found to introduce oscillations in the model and is therefore expanded with viscous friction. The expanded equation is shown in Eq. (4.17).

Fyi = −(Cf1 + Cf2 · VM) · αi (4.17)

where:

• Cf1 is the cornering stiffness constant that accounts for the stiction in the system [N/rad].

• Cf2 is the cornering stiffness constant that accounts for the coulomb and viscous friction in the system [N·s/(rad·m)].

• VM is the speed of the robot [m/s].

The slip angle αi is defined in the range [−π/2, π/2] and is calculated as the angle from Fxi to Vi in quadrants 1 and 4 and from Vi to Fxi in quadrants 2 and 3. This can be seen in Fig. 4.8 on the facing page.


Figure 4.8: Illustration of the slip angle at different steering angles. The green Fyi shows the definition of the friction force and the cyan Fyi,res shows where the definition and the resulting force differ[4].

In order to determine the slip angle it is necessary to find the true velocity of the wheel (Vi), which consists of two components: one from the velocity of the GC (VM) and one from the rotation of the robot around the GC (Vti), as seen in Fig. 4.9 on the next page.

Vti is a function of the angular velocity and the distance from the GC to the centre of the wheel:

Vti = θ̇M · κi   (4.18)

Separating Vti into its components results in the following equations:

Vtxi = θ̇M · κi · sin(γi + θM)   (4.19)

Vtyi = θ̇M · κi · cos(γi + θM)   (4.20)

Using Fig. 4.10 on the following page it can be seen that ẋM − Vtxi and ẏM + Vtyi form the adjacent and opposite sides of a right triangle with Vi as the hypotenuse.

The slip angle can then be calculated as shown in Eq. (4.21).

αi = tan⁻¹( ( ẏM + θ̇M·κi·cos(γi + θM) ) / ( ẋM − θ̇M·κi·sin(γi + θM) ) ) − βi − θM   (4.21)

The definition of the sideways friction force does not take into account that the wheel can rotate more than 180° past the velocity direction, resulting in the friction force pointing in the wrong direction in some cases, as seen in Fig. 4.8. Cases 2 and 5 should give the same friction force. To correct this behavior the slip angle is redefined as seen in Eq. (4.22):


Figure 4.9: Wheel 1 and the centre of the robot with relevant velocities and angles shown for finding α1[4].

αi = { π − α*i     for π/2 ≤ α*i < 3π/2
       α*i − 2π    for 3π/2 ≤ α*i
       α*i         for α*i < π/2 }   (4.22)

Where α*i is the αi from Eq. (4.21).
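Eqs. (4.21)-(4.22) and the friction law Eq. (4.17) can be sketched together. One detail is our reading rather than the text's: α* is first normalised into [0, 2π) so the three piecewise branches cover every case; the helper names are ours.

```python
import math

def slip_angle(x_dot, y_dot, theta_dot, theta, kappa_i, gamma_i, beta_i):
    """Slip angle of wheel i, Eq. (4.21), folded into [-pi/2, pi/2] by
    Eq. (4.22). Normalising alpha* into [0, 2*pi) before the fold is our
    reading of the piecewise definition.
    """
    a = math.atan2(y_dot + theta_dot * kappa_i * math.cos(gamma_i + theta),
                   x_dot - theta_dot * kappa_i * math.sin(gamma_i + theta))
    a = (a - beta_i - theta) % (2.0 * math.pi)
    if math.pi / 2 <= a < 3 * math.pi / 2:  # wheel points "backwards"
        return math.pi - a
    if a >= 3 * math.pi / 2:
        return a - 2.0 * math.pi
    return a

def side_friction(alpha_i, v, cf1, cf2):
    """Sideways friction force of Eq. (4.17)."""
    return -(cf1 + cf2 * v) * alpha_i
```

With this fold, a wheel aligned with the velocity and the same wheel rotated a further 180° both give a zero slip angle, matching the requirement that cases 2 and 5 of Fig. 4.8 produce the same friction force.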

4.3.2 Translational Motion

To describe the translational motion of the robot, the accelerations in the M-frame, ẍM and ÿM, must be derived. The acceleration is divided into four different contributions:

• Sideways friction forces: Fyi

Figure 4.10: The right-angled triangle formed by xM − Vtxi and yM + Vtyi[4].


• Propulsion forces: Fxi

• Centripetal force: Fc

• Gravity force: Fg

The sideways friction force was defined in the previous section, and the propulsion force is defined as Fxi = τr,i/rw, where τr,i is the torque of the propulsion actuator.

These two forces are then projected onto the M-frame as seen in Fig. 4.11

Figure 4.11: The sideways friction and propulsion forces and their projection onto the M-frame[4].

This leaves the centripetal force and the gravity force. The contribution from the centripetal force can be determined using a scenario where all wheels are perpendicular to the ICR and all propulsion actuators are powered down. The acceleration of the robot is then only a result of the centripetal force:

ar = r·(θ̇M)²  ⇔   (4.23)
ar = VM·θ̇M   (4.24)

where:
r is the radius of the circle the robot is driving on [m].
ar is the centripetal acceleration of the robot [m/s²].
VM is the velocity of the robot [m/s].

To separate the vector into its two components, arx and ary, the relationship between the VM and ar vectors is derived. This is shown in Fig. 4.12 on the next page. The resulting equations are shown in Eq. (4.25).

arx = ẏM · θ̇M
ary = ẋM · θ̇M   (4.25)

The final contribution is the acceleration due to gravity:


Figure 4.12: The robot with centripetal acceleration shown[4].

The gravity force (FMg) can be calculated as shown in Eq. (4.26).

FMg = [ FMgx ,  FMgy ,  FMgz ]T = m·[ aMgx ,  aMgy ,  aMgz ]T = RN→M·[ 0 ,  0 ,  m·g ]T   (4.26)

The complete equation for the translational motion is obtained by summing the forces from each of the four wheels, dividing by the mass of the API robot, and adding the centripetal and gravity accelerations, as shown in Eq. (4.27).

χ̈Mtran = [ ẍM ,  ÿM ,  θ̈M ]T

        = [ (1/m)·Σ_{i=1}^{4}( cos(βi + θM)·Fxi − sin(βi + θM)·Fyi ) − ẏM·θ̇M − aMgx ,
            (1/m)·Σ_{i=1}^{4}( sin(βi + θM)·Fxi + cos(βi + θM)·Fyi ) + ẋM·θ̇M − aMgy ,
            0 ]T   (4.27)

4.3.3 Rotational Motion

The rotational motion of the robot is caused by two different forces: the friction of the wheels and the force of the propulsion actuators. These two forces result in a rotation around the GC and must therefore be projected onto a line perpendicular to the line originating in the GC and crossing through the center of the wheel. This line is given by κi and γi. As seen in Fig. 4.13 on the facing page, this perpendicular line is also the line along which the velocity Vti points. The projection of the forces is performed using the angle βi − γi.

The angular acceleration around the GC can then be found by multiplying the projected forces with the arm (κi), summing over the four wheels and dividing by the moment of inertia, as shown in Eq. (4.28).

χ̈Mrot = [ ẍM ,  ÿM ,  θ̈M ]T = [ 0 ,  0 ,  (1/I)·Σ_{i=1}^{4}( sin(βi − γi)·κi·Fxi + cos(βi − γi)·κi·Fyi ) ]T   (4.28)


Figure 4.13: Wheel 1 and the GC of the robot with relevant vectors and angles for calculation of the rotational motion[4].

4.3.4 Integration of the Dynamic Model

The complete dynamic model is integrated with the actuator and sensor models. The main equations for the robot are shown here:

Robot:

χ̈M = χ̈Mrot + χ̈Mtran =   (4.29)

[ ẍM ,  ÿM ,  θ̈M ]T = [ (1/m)·Σ_{i=1}^{4}( cos(βi + θM)·Fxi − sin(βi + θM)·Fyi ) − ẏM·θ̇M − aMgx ,
                         (1/m)·Σ_{i=1}^{4}( sin(βi + θM)·Fxi + cos(βi + θM)·Fyi ) + ẋM·θ̇M − aMgy ,
                         (1/I)·Σ_{i=1}^{4}( sin(βi − γi)·κi·Fxi + cos(βi − γi)·κi·Fyi ) ]T   (4.30)

where:

Fyi = −(Cf1 + Cf2·VM)·αi   and   Fxi = τr,i/rw   (4.31)

αi = { π − α*i     for π/2 ≤ α*i < 3π/2
       α*i − 2π    for 3π/2 ≤ α*i
       α*i         for α*i < π/2 }   (4.32)

α*i = tan⁻¹( ( ẏM + θ̇M·κi·cos(γi + θM) ) / ( ẋM − θ̇M·κi·sin(γi + θM) ) ) − βi − θM   (4.33)

Actuators:

τr,i = ( Km(d)/Ra(d) )·( 4.5·τref,gain/100 )·τref,i − ( K²m(d)/Ra(d) + b(d) )·φ̇i   (4.34)

βi = βref,i   (4.35)


The transfer function between βi and βref,i is chosen as one, as the dynamics of the steering actuators are very fast compared to the rest of the API. The propulsion actuator model depends on the angular velocity of the wheel. The equation relating the velocity of the API to the individual angular velocities of the wheels is:

φ̇i = Vwi / rw   (4.36)

where:
rw is the radius of the wheels [m].
Vwi is the translational velocity of the wheel [m/s]:

Vwi = cos(βi + θM)·(ẋM − Vtxi) + sin(βi + θM)·(ẏM + Vtyi)   (4.37)
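The acceleration equation (4.30), with its translational, rotational, centripetal and gravity contributions, can be sketched in one function. This is a sketch under stated assumptions, not the thesis simulation code: the per-wheel friction forces Fyi are passed in precomputed (e.g. via Eq. (4.17)), and the parameter values come from Table 4.1.

```python
import math

M_KG, I_KGM2 = 211.5, 83.5                     # mass and inertia, Table 4.1
KAPPA = [0.707] * 4                            # wheel arms [m], Table 4.1
GAMMA = [math.radians(a) for a in (45, 135, 225, 315)]

def chi_ddot(x_dot, y_dot, theta, theta_dot, beta, fx, fy, ag=(0.0, 0.0)):
    """Acceleration of the robot in the M frame, sketching Eq. (4.30).

    `beta`, `fx`, `fy` are per-wheel lists (steering angle, propulsion
    force, sideways friction force); `ag` is the gravity term of
    Eq. (4.26), zero on level ground.
    """
    ax = ay = a_rot = 0.0
    for i in range(4):
        c, s = math.cos(beta[i] + theta), math.sin(beta[i] + theta)
        ax += (c * fx[i] - s * fy[i]) / M_KG
        ay += (s * fx[i] + c * fy[i]) / M_KG
        a_rot += KAPPA[i] * (math.sin(beta[i] - GAMMA[i]) * fx[i]
                             + math.cos(beta[i] - GAMMA[i]) * fy[i]) / I_KGM2
    # centripetal and gravity contributions, Eqs. (4.25)-(4.26)
    ax += -y_dot * theta_dot - ag[0]
    ay += x_dot * theta_dot - ag[1]
    return ax, ay, a_rot
```

With all wheels straight and equal propulsion forces, the symmetric wheel placement makes the rotational terms cancel, so the robot accelerates purely forward, a quick consistency check on the geometry of Table 4.1.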

4.3.5 Summary

The kinematic and dynamic models each describe the posture vector (χ) in different ways. However, the dynamic model has the potential to be more accurate, as it factors in the forces acting on the robot, such as the sideways friction forces and the gravity force. This can also be seen in the assumptions made for the kinematic model in Sec. 4.2 on page 35. Therefore the dynamic model is chosen as the primary model used throughout the rest of the project. The kinematic model does, however, contribute some useful results as well. The dynamic model and the ICR part of the kinematic model are used together when the API is moving on a circle with radius R, as illustrated in Fig. 4.14.

Figure 4.14: The dynamic model.

4.4 Hybrid Modeling

As the dynamic model described in the previous section is nonlinear, a linear model based approach to FDI is not directly applicable. To facilitate the use of linear model based FDI, a hybrid model of the non-linear system is developed. The concept is to make a complete hybrid approximation of a given path before the API robot begins to traverse the field.

The hybrid model will contain enough states to make it possible for the API to travel on a predetermined path and have a linear model based FDI scheme detect and partially isolate a number of faults, as defined in Cha. 6 on page 61.


4.4.1 Typical scenarios

The API will spend most of its operation time traversing a field by following the crop rows running the length of the field, as seen in Fig. 4.15. These crop rows will have different

Figure 4.15: Typical field row placement.

distances between them, and the fields will be oriented differently geographically, but the robot will typically traverse a field in the same way: driving in straight lines along the length of the field and turning in circles at the ends of the rows, as shown in Fig. 4.16.

[Sketch: straight runs along the rows joined by circular turns about ICR points at the row ends; the M-frame heading θM is indicated.]

Figure 4.16: Typical driving scenario.

Group 1032b 47


4.4.2 Path approximation

When a path has been selected for the API robot, the next step is to approximate the non-linear behavior of the model with a number of linear models.

To facilitate this, an algorithm has been developed in MATLAB which takes the different ICRs and angles the API will be travelling on, finds the desired number of working points, and then linearizes the dynamic model in these working points as shown in App. D.1 on page 145. The linear model can be seen in Eq. (D.5). The result is a finite number of linear models which approximate the entire planned path.

The path on which the API will be travelling can be described with straight lines and circle segments. The straight lines can be described by a single working point, but as seen in Fig. 4.17 the inherent non-linearities of the circle segments make it necessary to have several working points to describe the API robot's behavior accurately.

[Plot: xM and yM velocities [m/s] and heading θM [rad] vs. time [s]; the velocities are sinusoidal and θM passes through π/2, π and 3π/2 over the turn.]

Figure 4.17: Sinusoidal behaviors in Turning mode scenario.

Figures 4.18 to 4.19 on the facing page show a comparison of different numbers of discrete states over a full revolution of the API robot.

In this case 16 states are deemed the minimum for describing the motion when driving on a circle: one state for each π/8 section of the circle.
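The selection of working points can be sketched as follows; the segment representation is hypothetical, but the rule matches the text: one working point per straight line, and one per π/8 of arc (16 per full revolution).

```python
import math

def working_points(kind, theta0, theta1):
    """Headings at which the dynamic model is linearized for one path
    segment: a single point for a straight line, one point per pi/8 of
    heading change for a circle segment."""
    if kind == "line":
        return [theta0]
    step = math.pi / 8                                   # 16 points per revolution
    n = max(1, math.ceil(abs(theta1 - theta0) / step))   # number of arc sections
    # Place each working point at the middle of its section of the arc.
    return [theta0 + (k + 0.5) * (theta1 - theta0) / n for k in range(n)]
```

A full revolution yields 16 working points; a quarter turn yields 4.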

4.4.3 Defining the Hybrid Model

To define the hybrid model, a hybrid tuple is used. The hybrid tuple defines the different outputs and inputs as well as the states of the model in a standardized way as described in the literature [2].

H1 = (Q, X, U, Y, Init, f), (4.38)


[Two plots comparing the hybrid approximation with the nonlinear system over one turn.]

Figure 4.18: The hybrid system with 8 discrete states per turn compared with the nonlinear system.

Figure 4.19: The hybrid system with 16 discrete states per turn compared with the nonlinear system.

where

Q1 = {q1, q2, ..., qn} (4.39)

X1 = {x, y, θ, θ̇} (4.40)

U1 = UD1 ∪ UC1 = {β1, β2, β3, β4, τ1, τ2, τ3, τ4} (4.41)

Y1 = YD1 ∪ YC1 = {x, y, θ, θ̇} (4.42)

Init = {q1, x = x0, y = y0, θ = θ0, θ̇ = θ̇0} (4.43)

f = { f1(q1, (x, y, θ, θ̇), (β1, β2, β3, β4, τ1, τ2, τ3, τ4)),
      f2(q2, (x, y, θ, θ̇), (β1, β2, β3, β4, τ1, τ2, τ3, τ4)),
      ...,
      fn(qn, (x, y, θ, θ̇), (β1, β2, β3, β4, τ1, τ2, τ3, τ4)) }
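The tuple maps naturally onto a small data structure with one vector field f_q per discrete state; the sketch below is illustrative only, with a toy f in place of the linearized dynamics of Eq. (D.5).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = List[float]   # continuous state (x, y, theta, theta_dot)
Input = List[float]   # inputs (beta_1..beta_4, tau_1..tau_4)

@dataclass
class HybridModel:
    """Sketch of H = (Q, X, U, Y, Init, f): one dynamics function per
    discrete state q along the planned path."""
    Q: List[str]                                   # discrete states q1..qn
    f: Dict[str, Callable[[State, Input], State]]  # per-state vector fields
    init: Tuple[str, State]                        # (q0, x0)

    def step(self, q: str, x: State, u: Input, dt: float) -> State:
        """Euler-integrate the active discrete state's dynamics."""
        dx = self.f[q](x, u)
        return [xi + dt * dxi for xi, dxi in zip(x, dx)]
```

Switching between discrete states then amounts to looking up a different f_q as the robot moves along the planned path.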

4.4.4 Test case

Choosing three different ICRs, a test path for the hybrid model is formulated. The aim is to prove the hybrid concept for the API robot model by selecting enough states to be able to test and verify the method. The path is shown in Fig. 4.20 on the next page.

In order to test the hybrid concept a hybrid observer is needed, which will be designed later. The hybrid model is tested with a hybrid observer in Sec. 11.1 on page 110.

4.4.5 Partial conclusion

A hybrid model has been defined for the API robot. It is designed to estimate the API robot's behavior within a predefined working area. A working algorithm has been developed to facilitate this. The actual test is performed once the observer has been designed later in the project.


[Plot: the planned test path of the non-linear model in the xy-plane, composed of straight lines and circle segments.]

Figure 4.20: The test path for the API robot used to verify the hybrid model.

4.5 Model Verification

This section deals with the verification of the dynamic model described in Sec. 4.3 on page 39. The model is verified using the path shown in Fig. 4.21 on the facing page with the control signals shown in Fig. 4.22 on the next page. The test is performed without a path controller and on asphalt assumed to be level. As some modifications have been made to the API since the last parameter estimation was performed, a number of parameters have changed. These parameters are the mass m, the moment of inertia I, the propulsion actuator gain τrefgain as well as the two variables describing the cornering stiffness, Cf1 and Cf2. These variables are used in the parameter estimation, meaning that they comprise both the actual physical value and model variations. The new values for the parameters are:

• Mass: m = 231.5 kg.

• Moment of inertia: I = 83.5 kg·m².

• Propulsion actuator gain: τrefgain = 0.74.

• Cornering stiffness: Cf1 = 40 N/rad and Cf2 = 4000 N·s/(rad·m).

The results of the verification run can be seen in Fig. 4.23 on page 52. As is easily seen, the dynamics of the simulated velocities match the measured velocities. The only difference is an offset on the xM and yM velocities, which has proven difficult to remove with parameter estimation using the existing model. This can be caused by the assumptions made in the model as well as physical influences on the API which are not included in the model. This includes the suspension on the front wheels of the robot, which is not modelled. The surface on which the API runs is also significant, and the absence of an accurate


[Plot: the verification path in the xM–yM plane [m].]

Figure 4.21: The path chosen to verify the dynamic model.

[Plot: steering references βref,1..4 [deg.] and propulsion torque references τref,1..4 [%] vs. time [s].]

Figure 4.22: The reference signals used for the path chosen to verify the dynamic model.


[Plot: measured vs. simulated xM, yM and θM velocities over 30 s.]

Figure 4.23: Comparison of the measured and simulated velocities.

model of the friction and stiction between each tire and a given surface has an impact on model performance. The noise on the sensor measurements is also a significant factor at the slow speed this test was performed at. A model verification performed by a previous group at a higher speed showed a better match between the measured and simulated response of the API.
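The constant offset seen on the velocities can be separated from the remaining mismatch with a simple metric, sketched here (not part of the thesis toolchain): the mean difference captures the offset, and the RMSE of the residual after removing it captures how well the dynamics match.

```python
def fit_metrics(measured, simulated):
    """Mean offset between two equally long traces, and the RMSE of the
    residual once that constant offset has been removed."""
    n = len(measured)
    offset = sum(m - s for m, s in zip(measured, simulated)) / n
    rmse = (sum((m - s - offset) ** 2
                for m, s in zip(measured, simulated)) / n) ** 0.5
    return offset, rmse
```

A small RMSE together with a non-zero offset corresponds to the behavior observed on the xM and yM velocities.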

Furthermore, the steering actuators have different dynamics as well as a small time offset in relation to one another. An example is shown in Fig. 4.24 on the next page. The differences are not intentional and are caused by incorrectly tuned steering controllers, or by a steering controller tuned to a different surface than the one this test is performed on. That being said, the dynamics of the turning wheels are much faster than the rest of the system, and it is not deemed necessary to model them more precisely.

4.5.1 Conclusion

It can be concluded that the model fits reality well and could possibly be made to fit even better as indicated in a previous project [4]. The uncertainties discussed are deemed sufficient to account for the deviations between model and reality.

In this thesis the focus is on model based FDI, which means that an accurate model is important. The level of accuracy needed depends on what kind of faults are to be detected and how well the model is able to replicate them. Even though the faults are not verified on the model, it is a reasonable assumption that if the model works in the fault-free case, then it will also work in the faulty case if the fault can be represented in the model.


[Plot: measured steering positions β1..β4 [deg.] between t = 10 s and t = 12 s.]

Figure 4.24: The measured position of the steering actuators. The steering reference on all four wheels is changed at the same time.


Part III

Fault Detection and Isolation


Chapter 5

Introduction

The ability to detect, isolate and, if possible, accommodate faults or failures is especially important in an autonomous system because it is meant to run unsupervised for long periods of time. To ensure safe operation and to maximize the time the system can continue operating, an FDI scheme must be designed and implemented in the API robot. This task has already been undertaken once by another group, which laid the foundation for the continued work in the current project. As seen in Fig. 5.1 the API robot has a number of different sensors, actuators and communication interfaces. They all have the potential

[Diagram: the PC-104 on-board computer connected via the CAN bus to the compass, GPS, inclinometer, gyro, proximity sensors, Doppler radar and the LH28 units, which host the propulsion and steering controllers, actuators and sensors.]

Figure 5.1: The different subsystems of the API robot.

to fail at some point. The purpose of the FDI scheme described in this part is mainly to


provide a way to detect and isolate faults that occur in the steering and propulsion subsystems of the API robot using model based methods. The remaining subsystems, apart from the inclinometer and proximity sensors, are already covered in a previous project [4] and will thus not be covered in the current FDI scheme.

The newly implemented inclinometer and proximity sensors are not included in the model based FDI, but the inclinometer has diagnosing capabilities. The proximity sensors will receive a simple sanity check in the implementation.

The following will be included in the FDI part of this master thesis, all concerning the fault detection and isolation of the steering and propulsion system of the API robot:

Fault Analysis and Severity Assessment: The steering and propulsion system of the API robot is analysed for faults, and simulations are carried out which, together with empirical knowledge of the system, help determine their severity as well as how the faults propagate in the system.

Each fault is then ranked according to its severity and occurrence (SO-index). The SO-index is found by giving each fault a value from 1-10 according to its severity and a value according to its predicted occurrence frequency. Multiplying these values yields the SO-index.
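As a worked example, the SO-index is simply the product of the two ratings; the example values below are taken from Table 6.15.

```python
def so_index(severity, occurrence):
    """SO-index: severity rating (1-10) times predicted occurrence rating."""
    return severity * occurrence

# 'Max. positive actuation' on a steering actuator: severity 8,
# occurrence 5, giving an SO-index of 40 (cf. Table 6.15).
```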

Isolation Analysis: This analysis is carried out in order to determine how different faults affect the system. Using the linearization algorithm described in Sec. D.2 on page 145, the model is linearized in two different working points in order to determine the isolability of input faults in the API.

Linear model based FDI: This chapter holds the analysis and test of a linear FDI scheme for the API robot. It is a hybrid scheme which uses a hybrid state observer and linear FDI. During the chapter, three different hybrid state observers are tested as well as two different continuous linear FDI methods.

Non-linear Particle based FDI: This chapter holds a proof of concept of how a non-linear particle filter based method could be used to detect and isolate faults occurring on the robot.

Active Fault Isolation Supervisor: This chapter describes the design of an active fault isolation supervisor able to complete the isolation of a given fault in the steering and propulsion system of the robot by actuating on the API robot.

FDI scheme

The above methods can be used alone or as part of an FDI scheme combining the best from each method. One option is to apply only the linear FDI scheme. It is relatively lightweight but cannot isolate faults completely. Another possibility is to use the particle filter method alone, but it is very computationally demanding and needs an exact model of each fault to isolate it. In order to use the lightweight characteristics of the linear FDI and still have the ability to isolate faults, the following FDI scheme is proposed:


When a path is planned for the API robot, a hybrid representation of that path is made using the hybrid model. The path is then run with linear FDI activated. When a fault is detected, the API is stopped and the next step is initiated.

This step consists of running an active FI routine. This should result in an isolated fault, and if possible remedial actions can be initiated. The scheme will only be run as long as the usual sanity checks on the rate and value of the sensors stay fault free. The overall scheme can be seen in Fig. 5.2.

[Flowchart: Path planned -> Create hybrid states for planned path -> Run path with linear FDI -> Check for fault event -> Decision: if No Fault, continue running; if Fault, Stop API robot -> Run active fault isolation -> Fault Isolated -> Initiate possible remedial action.]

Figure 5.2: Overall FDI scheme.
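The flow of Fig. 5.2 can be sketched as a supervisory loop; `robot`, `linear_fdi` and `active_fi` below are hypothetical hooks, not the thesis implementation.

```python
def fdi_supervisor(path, robot, linear_fdi, active_fi):
    """Overall FDI scheme: drive the planned path with linear FDI running;
    on a fault event, stop the robot, isolate actively, and initiate a
    remedial action. All callables are hypothetical hooks."""
    states = robot.make_hybrid_states(path)   # hybrid states for planned path
    for event in linear_fdi(path, states):    # yields None while fault free
        if event is not None:
            robot.stop()                      # stop the API robot
            fault = active_fi(event)          # run active fault isolation
            robot.remedial_action(fault)      # initiate possible remedial action
            return fault
    return None                               # path completed fault free
```

The hybrid states are created once per planned path, while the detection loop runs continuously during traversal.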


Chapter 6

Fault Analysis

The following contains a fault analysis of the steering and propulsion system of the API robot as well as the newly implemented inclinometer and proximity sensors. This includes the sensors and actuators shown in Fig. 6.1. Faults on all these subsystems are considered except

• Four propulsion actuators

• Four propulsion sensors

• Two proximity sensors

• Eight wheel stoppers

• Four steering actuators

• Four steering sensors

• One Inclinometer

Figure 6.1: Sensors and actuators fault analyzed.

for the eight wheel stoppers. These are safety features and assumed to be fault free. All steering and propulsion faults are simulated, and possible causes and effects are discussed. The faults are simulated in open loop, although the robot will be running with a speed and path controller in the real world and will thus behave a bit differently. However, the wheel controllers are taken into consideration in the fault modelling, so the simulations should give a good indication of the severity of the different faults. The faults and their effect on the API are based on the assumption that all wheel controllers are active.

The conditions for the simulations are:

Straight driving (ICR = 0 m):

• Simulation time = 60 seconds.

• Fault time = 3 seconds.

Turning (ICR = 5 m):

• Simulation time = 75 seconds.

• Fault time = 3 seconds.

All faults are single faults, as two simultaneously occurring faults are very unlikely. Furthermore, all faults are abrupt faults; that is, they do not grow over time as faults resulting from wear and tear do. The faults are rated for severity and occurrence using


the SO-index for each fault. This helps determine which faults are the most important to detect.

Propulsion actuators

The propulsion actuators consist of four Heinzman DC-motors which are incorporated into the hub of each wheel. They have built-in speed sensors and are run by a signal from one of the LH28 micro controllers dedicated to each wheel. The faults considered in the propulsion actuators are described in Table 6.1.

Fault | Description/Local effect

No actuation | The wheel is not applied any torque no matter what speed reference is sent to the controller. The wheel can still rotate (free run).

Max. positive actuation | The wheel spins at maximum positive speed no matter what speed reference is set.

Max. negative actuation | The wheel spins at maximum negative speed no matter what speed reference is sent to the controller.

Actuator offset | The gain of the propulsion actuator changes to be more or less than expected.

Table 6.1: Effects of propulsion actuator faults.

No actuation has an impact on the state of the API robot as seen in Fig. E.1–E.2 on page 148 and Fig. E.3–E.4 in App. E. The effect is not severe, and it will more than likely be possible to compensate with the rest of the propulsion actuators if the fault is correctly detected and isolated. Max. positive actuation and Max. negative actuation are more severe, although probably less likely. These also have the potential to blow the main fuses of the API robot, which is a worst case scenario since all subsystems will fail completely due to lack of power. It is important to detect and isolate such a fault fast. The fault Actuator offset is probably the least severe of the propulsion actuator faults, since it can be removed by a controller as long as the offset is small enough to allow for the compensation. The effects can be seen in Fig. E.9–E.10 in App. E.

Propulsion sensors

The propulsion sensors are hall effect sensors integrated into the Heinzman units situated in each wheel hub; they output the current speed of a given wheel to the LH28 micro controllers dedicated to each wheel. The Heinzman has a built-in safety feature which means that if the sensor output does not look into a specific impedance, the motor goes into a free-run state and thus no longer provides torque to the wheel. This will to some extent guard against electrical malfunctions. The propulsion sensors are not used for control of the wheels in the current configuration of the robot, as its speed is controlled through an estimate of the current speed. This estimate depends mostly on the GPS measurements. That means that when the GPS is in action, a fault in a wheel sensor will have almost no impact on the API robot's performance. Therefore the faults are not simulated but only


Fault | Possible cause

No actuation | Faulty relay, broken wires in either the interface or the DC-motor itself. Mechanical causes include a broken shaft or bearing. It could also be caused by a bug in LH28 software or software on the OBC. Impedance has changed on the sensor output.

Max. positive actuation | Possible causes are a bug in software on the LH28 or software on the OBC.

Max. negative actuation | Possible causes are a bug in software on the LH28 or software on the OBC.

Actuator offset | A change in the actuator gain due to wear and tear or a damaged bearing or shaft. A bug in the software on the LH28 or software on the OBC could also cause this fault.

Table 6.2: Causes of propulsion actuator faults.

described. The faults in the propulsion sensors considered for detection and isolation are shown in Table 6.3.

Fault | Description

No output | The signal from the sensor is zero no matter what speed reference is sent to the propulsion controller.

Sensor offset | The signal from the sensor is offset from the real value.

Non sense output | The signal from the sensor is outside the possible range of the sensor.

Table 6.3: Considered propulsion sensor faults.

Fault | Possible cause

No output | Possible causes include broken wires or a sensor circuit malfunction. It could also be caused by a bug in LH28 software or software on the OBC.

Sensor offset | A change in the sensor gain due to a mechanical failure where one of the magnetic elements is not detected, leading to an average speed lower than the gain would suggest. A bug in the software on the LH28 or software on the OBC could also cause this.

Non sense output | Sensor circuit malfunction. Loose or partially broken wires. Bug in LH28 software or software on the OBC.

Table 6.4: Causes of propulsion sensor faults.


Steering actuators

The steering actuators are four Maxon DC-motors, each with a worm reduction gear. Each wheel is PWM controlled by the LH28 micro controller through an H-bridge which amplifies the current, setting the speed and torque at which a wheel is turned. Faults in the steering actuators considered for detection and isolation are shown in Table 6.5.


Fault | Description/Local effect

No actuation | The wheel does not rotate no matter what reference position is sent to the steering controller.

Max. positive actuation | The wheel rotates to the full positive position no matter what reference position is sent to the steering controller.

Max. negative actuation | The wheel rotates to the full negative position no matter what reference position is sent to the steering controller.

Actuator offset | The wheel position is offset in relation to the reference position.

Table 6.5: Considered steering actuator faults.

Fig. E.11–E.12 in App. E show how the No actuation fault affects the API robot. The No actuation fault is modelled as if a wheel is fixed at 0, so the fault does not show up when the robot is moving in a straight line. It only turns up when the wheels are turned to change the direction of the robot. It is however a severe fault and has the potential to blow the main fuse. The Max. positive actuation and Max. negative actuation faults are shown in Fig. E.13–E.16 in App. E. They are equally severe and also have the potential to blow the main fuse. The last fault is the Actuator offset fault, which is shown in Fig. E.17–E.18 in App. E. The severity is proportional to the size of the offset, but the fault is generally considered severe, as the robot cannot continue normal operation if it occurs.

Fault | Possible cause

No actuation | Possible causes include a faulty relay, broken wires or a faulty H-bridge. Mechanical causes include a broken shaft or bearing. It could also be caused by a bug in LH28 software or software on the OBC.

Max. positive actuation | A fault in the steering control loop can cause this fault. Possible causes are a bug in software on the LH28 or software on the OBC.

Max. negative actuation | A fault in the steering control loop can cause this fault. Possible causes are a bug in software on the LH28 or software on the OBC.

Actuator offset | An offset on the sensor output in the control loop or a bug in the software on the LH28.

Table 6.6: Causes of steering actuator faults.


Steering sensors

The steering sensors are optic tachometers which count the holes passing on a rotating disc, enabling the LH28 microcontrollers to determine the distance travelled. They do not measure an absolute position but rather a relative one. This makes it important to carry out a calibration before operation of the API robot starts and to keep exact track of where the wheel moves during operation. Faults in the steering sensors considered for detection and isolation are shown in Table 6.7.

Fault | Description

No output | The signal from the sensor is zero no matter what reference signal is sent to the controller.

Sensor offset | The signal from the sensor is at an offset to the real value.

Non sense output | The sensor outputs a non-sense signal instead of the real signal. The signal is characterized by being outside the known operating area in terms of slew rate and value.

Table 6.7: Considered steering sensor faults.

Fault | Local effect

No output | The controller will turn the wheel to a maximum position.

Sensor offset | The controller will turn the wheel according to the offset sensor position.

Non sense output | The controller will try to follow the noise signal, causing the wheel to rotate erratically.

Table 6.8: Effects of steering sensor faults.

Simulation of the global effects of the No output fault can be found in Fig. E.13–E.14 and Fig. E.15–E.16 in App. E. The fault can cause either Max. Negative Actuation or Max. Positive Actuation depending on the value of the position reference. The effects are very severe depending on the speed of the API robot when the fault occurs. At low speeds the robot will either come to an almost complete stop, as seen in the simulations, or move in a small circle. At higher speeds the No output fault will blow the main fuse and possibly damage the robot severely. The fault is however not very likely, because the position is the consequence of an integration of the holes in a tacho disc, so the only cause is a software bug. Sensor offset is shown in Fig. E.17–E.18 in App. E with a moderate offset, which is less severe than the No output fault, although the Sensor offset fault could be equally severe. No output is more likely as it has more causes, as seen in Table 6.9 on the next page. See Table 6.10 on the following page for an overview of how the steering sensor faults propagate.

Proximity Sensors

The proximity sensors are mounted in the front end of the API robot as described in Sec. 3.1 on page 25 and can observe if an obstacle is present in the path of the robot. They


Fault | Possible cause

No output | Disconnected cable or broken wires causing power loss, or sensor circuit malfunction. It could also be caused by a bug in LH28 software or software on the OBC.

Sensor offset | Temporary failure to detect holes in the tacho disc. A loose tacho disc. Temporary power loss. Uncalibrated wheels.

Non sense output | Sensor circuit malfunction. Loose or partially broken wires. Bug in LH28 software or software on the OBC.

Table 6.9: Causes of steering sensor faults.

Steering sensor fault | Local effect (with wheel controller)

No output | Max. positive or Max. negative actuation

Max. positive output | Max. negative actuation

Max. negative output | Max. positive actuation

Sensor offset | Actuator offset

Non sense output | Erratic wheel movement

Table 6.10: Cause and effects on the steering system with wheel controller.

work by the principle of active sonar, where a sound pulse is emitted and the time from emission until an echo is received from an obstacle is measured. This time is proportional to the distance to the obstacle. The considered faults in the proximity sensors are shown in Table 6.11.
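The time-of-flight relation can be made concrete: the pulse travels to the obstacle and back, so the distance is half the round-trip time times the speed of sound (taken here as 343 m/s in air at roughly 20 degrees Celsius; the sensors' actual calibration is not assumed).

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def sonar_distance(echo_time_s):
    """One-way distance to the obstacle from a round-trip echo time."""
    return SPEED_OF_SOUND * echo_time_s / 2.0
```

A 20 ms round trip corresponds to about 3.4 m.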

Fault | Description

No output | The signal from the sensor is zero even though there is an obstacle in range of a given sensor.

Fixed output | The signal from the sensor is fixed at a certain output.

Non sense output | The sensor outputs a non-sense signal instead of the real signal. The signal is characterized by being outside the known operating area in terms of slew rate and value.

Table 6.11: Considered proximity sensor faults.

As the proximity sensors are essentially safety features, a fault could be very severe.


If it goes undetected, a faulty proximity sensor could cause the API robot to crash into obstacles in its path and damage itself and/or the obstacle. Worse yet is the possibility of humans or animals being hit by the robot. Another consequence of a faulty proximity sensor is that it can stop the operation of the API robot completely if an obstacle is erroneously detected in front of the API.

Fault | Possible cause

No output | Disconnected cable or broken wires, power loss or sensor circuit malfunction. It could also be caused by a bug in LH28 software or software on the OBC.

Fixed output | The sensor has become detached from its housing and is sensing the inside of the housing. It could also be caused by a bug in LH28 software or software on the OBC.

Non sense output | Sensor circuit malfunction. Loose or partially broken wires.

Table 6.12: Causes of proximity sensor faults.

Inclinometer

The inclinometer is of the type SCA100T-D02 [20], mounted in the GC of the API robot, and serves to measure the pitch and roll of the vehicle. Its purpose is to compensate for the inclination of the robot in the API model and in the GPS and compass measurements. Generally the inclinometer measurements are important for an accurate estimate of the

Fault | Description

No output | The signal from the sensor is zero.

Sensor offset | The signal is offset from the true value.

Non sense output | The sensor outputs a non-sense signal instead of the real signal. The signal is characterized by being outside the known operating area in terms of slew rate and value.

Table 6.13: Considered inclinometer sensor faults.

χM-vector when the robot is on an incline. They become increasingly important if the GPS signal is lost. Faulty measurements are severe if they go undetected, since they would make the χM-vector estimate unreliable.
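The kind of compensation meant here can be sketched as rotating a body-frame measurement into the level frame using roll and pitch; the rotation order (roll about x, then pitch about y) is an assumption for illustration, not the thesis convention.

```python
import math

def level_frame(v, roll, pitch):
    """Rotate a body-frame vector into the gravity-aligned frame using
    inclinometer roll and pitch (roll about x, then pitch about y assumed)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    x, y, z = v
    y, z = cr * y - sr * z, sr * y + cr * z   # undo roll about x
    x, z = cp * x + sp * z, -sp * x + cp * z  # undo pitch about y
    return (x, y, z)
```

With zero roll and pitch the measurement passes through unchanged, as expected on level ground.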

6.1 Conclusion

In conclusion, the different subsystems are rated for severity and occurrence using the SO-index as seen in Table 6.15 on page 69. The proximity sensor faults rank highest on the SO-index, and with good reason. All three faults should get priority in detection and isolation. Fortunately they are all easy to detect with a sanity check when running. This will not be investigated further in the current project, since the focus is on model based FDI. Next in line are faults on the steering sensors and actuators, which are also a high


Fault             Possible cause

No output         Disconnected cable or broken wires, power loss or sensor
                  circuit malfunction. It could also be caused by a bug in
                  the sensor board software or the software on the OBC.

Sensor offset     A fault in the temperature sensor would cause a small
                  offset. The mechanical mounting could get shifted or turned.

Non-sense output  Sensor circuit malfunction. Loose or partially broken wires.

Table 6.14: Causes of inclinometer sensor faults.

priority. As described, the behaviour of these two subsystems is highly correlated, which gives them equally high SO-indices. This leaves the propulsion actuators and sensors as well as the inclinometer. The propulsion sensors have the lowest priority because they are not really used in the current control scheme.

It would, however, be worthwhile to find faults in the propulsion actuators and the inclinometer, as they have a significant impact on the API robot's performance. In the current FDI scheme all faults will be investigated except faults on the propulsion sensor and the non-sense outputs.

68 Aalborg University 2007


Sensor/Actuator      Fault                    Severity  Occurrence  SO-index

Propulsion actuator  No actuation             4         4           16
                     Max. positive actuation  8         3           24
                     Max. negative actuation  8         3           24
                     Actuator offset          2         4           8

Propulsion sensor    No output                2         4           8
                     Sensor offset            2         4           8
                     Non-sense output         2         3           6

Steering actuator    No actuation             8         4           32
                     Max. positive actuation  8         5           40
                     Max. negative actuation  8         5           40
                     Actuator offset          4–8       4           16–36

Steering sensor      No output                8–9       3           24–27
                     Sensor offset            4–9       4           16–36
                     Non-sense output         4–8       4           16–32

Proximity sensor     No output                7         4           28
                     Fixed output             7         6           42
                     Non-sense output         7         5           35

Inclinometer         No output                4         4           16
                     Fixed output             4         4           16
                     Non-sense output         5         4           20

Table 6.15: Severity assessment.


Chapter 7

Isolability analysis

The purpose of the isolability analysis is to determine how different inputs to the system affect the state of the system: specifically, to identify inputs that affect the system in different ways, and whether this changes with the state of the system. This determines whether input faults can be isolated.

In this chapter this is done using linearized versions of the dynamic model of the API robot. By looking at the input matrices of each model it is possible to assess isolability in different working points and compare them. There are basically two different types of states, straight driving and turning, and it will be explored how these states differ with respect to isolability. Later this knowledge can be exploited in the design of the FDI scheme.

Straight driving states

The first case is when the robot is driving in a straight line along the xM-axis with 10% torque. To do this the linear state space system is introduced as described in App. D.1 on page 145. In this case it is a linearization of the non-linear API model in the working point:

$$\begin{gathered}
\beta_{ref,1} = 0 \quad \beta_{ref,2} = 0 \quad \beta_{ref,3} = 0 \quad \beta_{ref,4} = 0 \\
\tau_{ref,1} = 10 \quad \tau_{ref,2} = 10 \quad \tau_{ref,3} = 10 \quad \tau_{ref,4} = 10 \\
\theta_{wp} = 0 \quad \dot{x}_{wp} = 0.2664 \quad \dot{y}_{wp} = 0 \quad \dot{\theta}_{wp} = 0
\end{gathered} \tag{7.1}$$

Using this working point the linearized system has the following input matrix:

$$B = \begin{bmatrix}
0.0134 & 0.0134 & 0.0134 & 0.0134 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 4.8805 & 4.8805 & 4.8805 & 4.8805 \\
-0.0160 & -0.0160 & 0.0160 & 0.0160 & 5.8180 & -5.8180 & -5.8180 & 5.8180 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} \tag{7.2}$$

Looking at Eq. (7.2) it can be seen that some columns of the input matrix B have the


same direction, as specified below:

$$b_{\tau_{ref,1}} = b_{\tau_{ref,2}}, \qquad
b_{\tau_{ref,3}} = b_{\tau_{ref,4}}, \qquad
b_{\beta_{ref,1}} = b_{\beta_{ref,4}}, \qquad
b_{\beta_{ref,2}} = b_{\beta_{ref,3}} \tag{7.3}$$

This means that when the API is moving in the designated working point it is only possible to isolate a propulsion actuator fault down to either wheel 1 or 2, or wheel 3 or 4. The same holds for the steering: here faults register in such a way that the FDI can only isolate steering actuator faults down to either wheel 1 or 4, or wheel 2 or 3. The wheel pairs are shown in Fig. 7.1. This limitation has nothing to do with the modeling itself but is rather a limitation in the physical system which cannot be overcome using passive FDI. Additionally it can be concluded that this limitation will exist for any θ, as the coupling between the wheels is independent of the current heading of the API robot.
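As a quick numeric sanity check of Eq. (7.3), the column directions of the B matrix in Eq. (7.2) can be compared pairwise. This is a minimal sketch in Python; the matrix values are transcribed from the text, and the helper names are our own:

```python
from math import sqrt

# Columns of the input matrix B from Eq. (7.2), one per input
# (order: tau_ref,1..4, beta_ref,1..4), values transcribed from the text.
B_cols = {
    "tau1":  [0.0134, 0.0, -0.0160, 0.0],
    "tau2":  [0.0134, 0.0, -0.0160, 0.0],
    "tau3":  [0.0134, 0.0,  0.0160, 0.0],
    "tau4":  [0.0134, 0.0,  0.0160, 0.0],
    "beta1": [0.0, 4.8805,  5.8180, 0.0],
    "beta2": [0.0, 4.8805, -5.8180, 0.0],
    "beta3": [0.0, 4.8805, -5.8180, 0.0],
    "beta4": [0.0, 4.8805,  5.8180, 0.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def parallel_pairs(cols, tol=1e-9):
    """Input pairs whose B columns share a direction: their faults are
    indistinguishable from each other by passive FDI in this working point."""
    names = list(cols)
    return [(names[i], names[j])
            for i in range(len(names)) for j in range(i + 1, len(names))
            if abs(cosine(cols[names[i]], cols[names[j]]) - 1.0) < tol]

print(parallel_pairs(B_cols))
# -> [('tau1', 'tau2'), ('tau3', 'tau4'), ('beta1', 'beta4'), ('beta2', 'beta3')]
```

The four pairs recovered match the four equalities of Eq. (7.3).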

Figure 7.1: The wheel pairs and the possible isolation of faults.

Turning states

After the straight driving scenario, next in line is the turning scenario. The linearization of the non-linear API model turning in a circle around an ICR of 0.6 m yielded the following working point:

$$\begin{gathered}
\beta_{ref,1} = 1.3734 \quad \beta_{ref,2} = -1.3734 \quad \beta_{ref,3} = -0.4266 \quad \beta_{ref,4} = 0.4266 \\
\tau_{ref,1} = 10 \quad \tau_{ref,2} = 10 \quad \tau_{ref,3} = 10 \quad \tau_{ref,4} = 10 \\
\theta_{wp} = 6.0868 \quad \dot{x}_{wp} = 0.1563 \quad \dot{y}_{wp} = -0.0314 \quad \dot{\theta}_{wp} = 0.2665
\end{gathered} \tag{7.4}$$


Using this working point the input matrix has the following values:

$$B = \begin{bmatrix}
0.0051 & 0.0000 & 0.0109 & 0.0130 & -2.8184 & 3.0559 & 1.7249 & -0.6767 \\
0.0124 & -0.0134 & -0.0078 & 0.0031 & 1.1819 & 0.0159 & 2.4087 & 2.8844 \\
0.0125 & 0.0125 & 0.0211 & 0.0211 & 4.2966 & -4.2986 & -1.7608 & 1.7530 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$

It can be seen that none of the columns have exactly the same direction, and that it is therefore theoretically possible to isolate a faulty input by looking at the output when the robot is in the given working point. It can also safely be assumed that the same is the case when turning in other circles.
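The same pairwise direction check can be applied to this turning-state matrix; here no two columns should be parallel. The values below are transcribed from the matrix above (the zero fourth row is omitted, since it never affects a column's direction), and the threshold choice is ours:

```python
from math import sqrt

# Columns of the turning-state input matrix, transcribed from the text
# (order: tau_ref,1..4, beta_ref,1..4); zero row dropped.
cols = [
    [0.0051,  0.0124,  0.0125],
    [0.0000, -0.0134,  0.0125],
    [0.0109, -0.0078,  0.0211],
    [0.0130,  0.0031,  0.0211],
    [-2.8184, 1.1819,  4.2966],
    [3.0559,  0.0159, -4.2986],
    [1.7249,  2.4087, -1.7608],
    [-0.6767, 2.8844,  1.7530],
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Largest |cosine| over all distinct column pairs: strictly below 1 means
# every input fault has its own residual direction in this working point.
worst = max(abs(cosine(cols[i], cols[j]))
            for i in range(len(cols)) for j in range(i + 1, len(cols)))
print(f"max |cos| between distinct columns: {worst:.3f}")
```

The closest pair of directions is still clearly separated from exact parallelism, supporting the isolability claim above.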

Partial conclusion

It is concluded that, using passive FDI, in the very best case it is only possible to isolate input faults in straight driving down to the fault groups shown in Table 7.1.

Fault group 1: propulsion actuators on wheels 1 and 2
Fault group 2: propulsion actuators on wheels 3 and 4
Fault group 3: steering actuators on wheels 1 and 4
Fault group 4: steering actuators on wheels 2 and 3

Table 7.1: Isolability in straight driving.

Fortunately it can also be concluded that it is theoretically possible to isolate faults to a specific input when the robot is turning. This does however have its own problems, because when the robot is running it will only pass through a given turning working point briefly before the turn carries it onwards. Even with the limitations in straight driving, it is still worthwhile to minimize the amount of active FDI needed for the final isolation after a fault is detected and partially isolated.


Chapter 8

Linear Model based FDI of Steeringand Propulsion System

This chapter deals with the design and implementation of a linear model based FDI scheme. As described in Sec. 4.4 on page 46 the behavior of the API robot is highly non-linear. This means that in order to take advantage of the many methods for linear FDI, a hybrid approach is proposed in order to make a piecewise linear model. The hybrid scheme consists of a hybrid observer which observes the current hybrid state of the non-linear API system. This hybrid state is then used in a continuous observer which estimates the continuous state of the system. Since the hybrid system is an approximation of the behavior of the non-linear model, the observed χM-vector will be the true state plus a linearization error. When the continuous state is estimated, the residual of the real and estimated state is evaluated and the faults detected and, if possible, isolated. The scheme is summed up in Fig. 8.1 on the following page.

Several methods of designing hybrid and continuous state observers will be investigated, followed by an evaluation and selection of the most effective approach.

8.1 Hybrid State Observer

The hybrid system is described using the ICR of each turn as well as the position of the robot on the turn circle. The ICR is determined by the angles of the wheels. Since one of the goals of the FDI is to detect steering encoder faults, the measurements of the steering angles are not suitable for use in the hybrid state observer. Instead the steering references βref,i as well as the propulsion references τref,i are used, when relevant.

Three different methods for state observation are designed, implemented and evaluated on the non-linear model, with the best suited being considered for implementation on the API robot.

The three methods are:

Modified Multiple Hypothesis Testing (MHT), whose state observation is based on the statistical properties of both the linearized models of each discrete state and the properties provided by the linear Kalman filter.

Directional State Observer (DSO), which uses the directional vector spanned by the elements in the continuous state vector and its correlation with the vector spanned by the sensor readings.

Figure 8.1: The FDI structure of the propulsion and steering system.

ICR State Observer, which, based on the velocity of the robot and the rotational speed around the ICR, can calculate the radius of the circle the API is traveling on.

The requirement set for the state observers is that they have to accurately detect all states described in Sec. 4.4 on page 46, meaning multiple parallel driving states as well as multiple turning states.

In order to test the state observers a number of discrete states has been selected: a parallel driving state and a turning state. The turning state is based on an ICR of 1 meter. Only the first three states are selected, forming an arc from 0 to 60°.

8.1.1 MHT State Observer

The basic idea of Multiple Hypothesis Testing [13] is that each model is a candidate for the true model, or, in the case of a hybrid system, that each discrete state is a candidate for the current discrete state of the system. MHT is mostly used for multi-model estimation and sensor fusion. The method requires implementation of a Kalman filter for each model, and that those filters are run in parallel. The Kalman filter used in this method is described in App. I on page 173. The structure of the state observer can be seen in Fig. 8.2 on the next page.


Figure 8.2: Structure of the Modified Multiple Hypothesis Test State Observer (one Kalman filter per discrete state, feeding a state observer that outputs the discrete state estimate q).

In order to perform MHT two assumptions are made:

1. The Kalman filter based on the current discrete state is among the proposed candidates.

2. The system has been in the same discrete state since t = 0.

The second assumption presents some limitations when dealing with systems whose discrete state changes. These limitations will be addressed later in this chapter.

A number of hypotheses is then formed, one for each state: Hi = "State qi is correct". Due to assumption one the null hypothesis is impossible and is omitted. The probability that state qi is correct at time tk, conditioned on the measurements Zk, is

$$\mu_i(k) \triangleq p(q_i \mid Z_k) \tag{8.1}$$

The probability can then be defined recursively [1] using Bayes' rule and the previous probability values:

$$\mu_i(k) = \frac{\lambda_i(k)\,\mu_i(k-1)}{\sum_{j=1}^{n} \lambda_j(k)\,\mu_j(k-1)} \tag{8.2}$$

where

$$\lambda_i(k) = \frac{1}{(2\pi)^{m/2}\,\det\!\big(S_i(k|k-1)\big)^{1/2}}\; e^{-\frac{1}{2} r_i^T(k)\, S_i^{-1}(k|k-1)\, r_i(k)} \tag{8.3}$$

$$S_i(k|k-1) = C_i(k) P_i(k) C_i^T(k) + R_i(k) \tag{8.4}$$

$$r_i(k) \triangleq z(k) - C_i \hat{x}_i(k|k-1) \tag{8.5}$$

The elements of Eq. (8.2) are:

• The innovation covariance Si(k|k − 1) is calculated using a linear Kalman filter.

• The estimated dynamic state x̂i(k) is calculated by the linear Kalman filter, one for each of the discrete states.

• The innovation vector ri(k) is the measurement residual.

• m is the dimension of the innovation vector.


• The likelihood λi(k) of the innovation vector is the probability that the observation z(k) would be made, given that the discrete state qi is the correct state.

• The output matrix Ci.

The Kalman filter that has the highest probability μi(k) is then assumed to use the discrete state and dynamic model that best describes the real system:

$$\mu_i(k) = \max_j\big(\mu_j(k)\big) \;\mapsto\; \mathrm{state}(k) = i, \qquad i \in \{1, 2, \ldots, n\} \tag{8.6}$$

where the ↦ operator describes the relation between the statement on the left and the statement on the right. The use of a recursive probability function makes the state observer more robust to sudden changes in measurements, or to two states having the same probability in the same sample. This has the consequence that the probability over time will converge to a single state, as embodied in the second assumption: that the system has been in the same state since k = 0. As the hybrid model describing the API robot is expected to change state frequently, due to the behaviour described in the discrete states, a way must be found to force the MHT observer to react to state changes without sacrificing the robustness of the current method.

The chosen solution is to reset the Kalman filters and the MHT every 2 seconds. This interval can be increased to provide a more stable discrete state estimate, or decreased to allow faster state changes. As the full state vector is available as sensor measurements, the Kalman filters are reset by setting x̂i(k − 1), i ∈ {1, 2, …, n}, to the current sensor reading of the state vector minus the working point of each linear model, and by resetting the estimate covariance Pi(k − 1) to Pi(0), i ∈ {1, 2, …, n}. The MHT is reset by setting μi(k) = 1/n, i ∈ {1, 2, …, n}. The results of the turning scenario can be seen in Fig. 8.3. The parallel driving state is also correctly detected but is not shown.

8.1.2 Directional State Observer

The Directional State Observer is based on the vector correlation between the directional vector spanned by the continuous output vector $\hat{\chi}^M_i$ of the linear model, where i represents the ith discrete state in the hybrid system, and the vector spanned by the sensor measurements corresponding to the χM vector.

The vector correlation is defined by:

$$\mathrm{corr}_i(t) = \frac{\chi^M(t)^T \cdot \hat{\chi}^M_i(t)}{|\chi^M(t)| \cdot |\hat{\chi}^M_i(t)|}, \qquad i \in \{1, 2, \ldots, n\} \tag{8.7}$$

The estimated $\hat{\chi}^M_i$ vector is calculated using the linear model derived for each discrete state in the hybrid system.

The predictor used in this approach is shown in Eq. (8.8):

$$\dot{\hat{x}} = A_i \hat{x} + B_i u, \qquad \hat{\chi}^M_i = C_i \hat{x} \tag{8.8}$$

Figure 8.3: Simulation results of the MHT State Observer when driving around an ICR @ 1 m.

Each predictor is run concurrently and the correlation of each $\hat{\chi}^M_i$ vector is calculated. The model which shows the largest correlation with the measurements is deemed the "true" state vector, and the corresponding discrete state is then the "true" state of the hybrid system. This is shown mathematically in Eq. (8.9):

$$\mathrm{corr}_i(t) = \max_j\big(\mathrm{corr}_j(t)\big) \;\mapsto\; \mathrm{state}(t) = i, \qquad i \in \{1, 2, \ldots, n\} \tag{8.9}$$

The simulation results of the turning scenario are shown in Fig. 8.4. The parallel driving state is also correctly detected but is not shown.
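Eqs. (8.7)–(8.9) amount to a cosine-similarity argmax, which can be sketched as follows. The χM estimates and the measurement below are hypothetical illustration values, not outputs of the thesis' predictors:

```python
from math import sqrt

def vec_corr(meas, est):
    """Vector correlation of Eq. (8.7): cosine of the angle between the
    measured chi_M vector and one model's estimated chi_M vector."""
    dot = sum(a * b for a, b in zip(meas, est))
    nm = sqrt(sum(a * a for a in meas))
    ne = sqrt(sum(b * b for b in est))
    return dot / (nm * ne)

def dso_state(meas, estimates):
    """Eq. (8.9): the discrete state whose predictor output correlates
    most strongly with the measurement."""
    corrs = [vec_corr(meas, est) for est in estimates]
    return max(range(len(corrs)), key=lambda i: corrs[i])

# Hypothetical chi_M = [x_dot, y_dot, theta_dot] outputs of three linear
# predictors; the measurement points almost exactly along predictor 2.
estimates = [[0.26, 0.00, 0.00], [0.24, 0.05, 0.15], [0.20, 0.10, 0.27]]
measured = [0.21, 0.09, 0.26]
state = dso_state(measured, estimates)
print(state)   # -> 2
```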

8.1.3 ICR State Observer

The third method is to calculate the radius of the circle the API robot is moving on, using $r_{ICR} = \sqrt{\dot{x}^2 + \dot{y}^2}/\dot{\theta}$, and to compare the radius with the circles described in the hybrid model. Using the information about the models describing the current trajectories, the corresponding discrete state can be found by comparing the position on the circle (θ) to the working points of the linear models. This is then used in the continuous observers. The process is performed in the following order:

1. The current ICR is calculated using Eq. (8.10).

2. In order to find the closest match to the ICRs defined in the hybrid model, the difference between ICRs is calculated and the smallest is chosen as the "true" candidate for an ICR. See Eq. (8.11).

3. Using Eq. (8.12), all discrete states which are defined using the "true" ICR are compared with regard to θM and the θwp defined in the discrete state. Selecting the discrete state closest to the current θM as the "true" state completes the process.


Figure 8.4: Simulation results of the Directional State Observer when driving around an ICR @ 1 m.

$$r_{ICR_{current}} = \frac{\sqrt{(\dot{x}^M)^2 + (\dot{y}^M)^2}}{\dot{\theta}^M} \tag{8.10}$$

$$r_{ICR_{wp}} = r_{ICR_{current}} - \min_j\big(r_{ICR_{current}} - r_{ICR_j}\big) \tag{8.11}$$

$$\theta_{wp_i} = \min_j\big(|\theta^M - \theta_{wp_j}|\big) \;\mapsto\; \mathrm{state}(t) = i, \qquad i \in j \;\;\forall\, r_{ICR_{wp}} = r_{ICR_j} \tag{8.12}$$

where:

• rICRcurrent is the current radius of the circle the API is moving on.

• rICRwp is the closest match to the current ICR among the ICRs defined in the hybrid model (rICRj).

• θwpi is the closest match to the position of the API with regard to the θwp defined in the hybrid model.

The simulation results are shown in Fig. 8.5. The parallel driving state is also correctly detected but is not shown.
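The three steps above can be sketched directly from Eqs. (8.10)–(8.12). The hybrid-model radii and working-point headings below are hypothetical stand-ins, not the thesis' actual state set:

```python
from math import pi, sqrt

def icr_state(x_dot, y_dot, theta_dot, theta, model_icrs, wp_thetas):
    """ICR state observer, Eqs. (8.10)-(8.12).

    model_icrs : ICR radius attached to each discrete state
    wp_thetas  : working-point heading theta_wp of each discrete state
    """
    r_current = sqrt(x_dot ** 2 + y_dot ** 2) / theta_dot        # Eq. (8.10)
    # Eq. (8.11): the modelled radius closest to the measured one.
    r_wp = min(set(model_icrs), key=lambda r: abs(r_current - r))
    # Eq. (8.12): among the states defined on that circle, the one whose
    # working-point heading is closest to the current heading wins.
    candidates = [i for i, r in enumerate(model_icrs) if r == r_wp]
    return min(candidates, key=lambda i: abs(theta - wp_thetas[i]))

# Hypothetical hybrid model: three states on a 1 m circle with working
# points at pi/16, 3*pi/16 and 5*pi/16 (cf. the boundaries in Sec. 8.1.4).
icrs = [1.0, 1.0, 1.0]
thetas = [pi / 16, 3 * pi / 16, 5 * pi / 16]
# Measurements of a robot turning on roughly a 1 m circle, heading 0.55 rad:
state = icr_state(x_dot=0.25, y_dot=0.05, theta_dot=0.26, theta=0.55,
                  model_icrs=icrs, wp_thetas=thetas)
print(state)   # -> 1
```

The low arithmetic cost of this lookup is what makes the ICR observer attractive for the OBC, as concluded below.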

8.1.4 Partial Conclusion

The simulation of the state observers shows that all three observers successfully detect the three turning states, although at different times. The observers also correctly detect the parallel driving state. The MHT is slower to detect a state change due to an inherent property of the hypothesis testing algorithm, namely assumption two: "The system has been in the same discrete state since t = 0."


Figure 8.5: Simulation results of the ICR State Observer when driving around an ICR @ 1 m.

Further simulations show that they are equally successful when turning around other ICRs.

Figure 8.6 shows the comparison between the reference discrete state and the state estimates from each of the three state observers. The reference state is based on the information concerning the working area of each state, which for states 1, 2 and 3 is equal to $0 \le \theta^M < \frac{1}{8}\pi$, $\frac{1}{8}\pi \le \theta^M < \frac{1}{4}\pi$ and $\frac{1}{4}\pi \le \theta^M < \frac{3}{8}\pi$, resulting in state changes at $\frac{1}{16}\pi$, $\frac{3}{16}\pi$ and $\frac{5}{16}\pi$.

The comparison shows that the ICR observer follows the hybrid system the closest, although the difference compared to the directional observer is minimal, and it is therefore decided to use the ICR state observer in the hybrid FDI. Furthermore, the ICR observer is the least complex in terms of computing power and method complexity and is therefore ideal for implementation on a less powerful computer, such as the OBC.

8.2 Continuous State Observer

The purpose of the continuous state observer is to estimate the continuous state using a model. Because of the non-linearities of the system, a number of linear models are chosen to cover different areas of the state space. The models are defined in Sec. 4.4 on page 46. The current discrete state and the corresponding model are chosen based on the state provided by the hybrid state observer designed in the previous section. Two different methods for continuous state observation are designed, implemented and evaluated on the non-linear model, with the best suited being considered for implementation on the API robot. The two methods are:

Unknown Input Observers, an approach with which it is possible to perform linear disturbance decoupling, which can also be used for isolation.

Beard Fault Detection Filter, a standard state observer with detection and isolation based on the "fault event direction".

Figure 8.6: Comparison of the discrete state of the three methods, when driving around an ICR @ 1 m.

As the observers are to be used for FDI, the area of interest is not the continuous state itself, but rather the residual, from which it is possible to detect and isolate a given fault. Both methods take as their starting point a linear model of the following form:

$$\dot{x}(t) = Ax(t) + Bu(t) \tag{8.13}$$
$$y(t) = Cx(t) \tag{8.14}$$

where

$$u(t) = [\tau_{ref,1}(t)\;\; \tau_{ref,2}(t)\;\; \tau_{ref,3}(t)\;\; \tau_{ref,4}(t)\;\; \beta_{ref,1}(t)\;\; \beta_{ref,2}(t)\;\; \beta_{ref,3}(t)\;\; \beta_{ref,4}(t)]^T \tag{8.15}$$

$$x(t) = \big[\dot{x}(t)\;\; \dot{y}(t)\;\; \theta(t)\;\; \dot{\theta}(t)\big]^T \tag{8.16}$$

There are a number of proven linear methods for generating residuals with FDI in mind. In the next sections two different approaches are presented and evaluated to determine which is better in the current case.

8.2.1 Unknown Input Observers

Unknown Input Observers (UIOs) make it possible to decouple the observed state from known disturbances and thereby also from the residual. This property improves the robustness of the observer and can be used to help isolate faults in a system. The theory of the UIO can be found in App. F on page 157.


UIOs are designed for the API robot system to detect and isolate faults in the propulsion and steering system, using a linearized version of the non-linear API robot model. See Sec. D.2 on page 145 for details. The linear model has the following structure:

$$\dot{x}(t) = Ax(t) + Bu(t) + Ed(t) \tag{8.17}$$
$$y(t) = Cx(t) \tag{8.18}$$

where E is the fault distribution matrix.

To design the UIOs, eight different models are needed, each one decoupling a different part of the model inputs. In this case each of the wheel angle references, βref,i, is decoupled, along with each of the propulsion torque references, τref,i. The decoupled models are presented below:

$$\begin{aligned}
\dot{x}_1 &= A_1 x + B_1 [\tau_{ref,2}\; \tau_{ref,3}\; \tau_{ref,4}\; \beta_{ref,1}\; \beta_{ref,2}\; \beta_{ref,3}\; \beta_{ref,4}]^T + E_1 \tau_{ref,1}\\
\dot{x}_2 &= A_2 x + B_2 [\tau_{ref,1}\; \tau_{ref,3}\; \tau_{ref,4}\; \beta_{ref,1}\; \beta_{ref,2}\; \beta_{ref,3}\; \beta_{ref,4}]^T + E_2 \tau_{ref,2}\\
\dot{x}_3 &= A_3 x + B_3 [\tau_{ref,1}\; \tau_{ref,2}\; \tau_{ref,4}\; \beta_{ref,1}\; \beta_{ref,2}\; \beta_{ref,3}\; \beta_{ref,4}]^T + E_3 \tau_{ref,3}\\
\dot{x}_4 &= A_4 x + B_4 [\tau_{ref,1}\; \tau_{ref,2}\; \tau_{ref,3}\; \beta_{ref,1}\; \beta_{ref,2}\; \beta_{ref,3}\; \beta_{ref,4}]^T + E_4 \tau_{ref,4}\\
\dot{x}_5 &= A_5 x + B_5 [\tau_{ref,1}\; \tau_{ref,2}\; \tau_{ref,3}\; \tau_{ref,4}\; \beta_{ref,2}\; \beta_{ref,3}\; \beta_{ref,4}]^T + E_5 \beta_{ref,1}\\
\dot{x}_6 &= A_6 x + B_6 [\tau_{ref,1}\; \tau_{ref,2}\; \tau_{ref,3}\; \tau_{ref,4}\; \beta_{ref,1}\; \beta_{ref,3}\; \beta_{ref,4}]^T + E_6 \beta_{ref,2}\\
\dot{x}_7 &= A_7 x + B_7 [\tau_{ref,1}\; \tau_{ref,2}\; \tau_{ref,3}\; \tau_{ref,4}\; \beta_{ref,1}\; \beta_{ref,2}\; \beta_{ref,4}]^T + E_7 \beta_{ref,3}\\
\dot{x}_8 &= A_8 x + B_8 [\tau_{ref,1}\; \tau_{ref,2}\; \tau_{ref,3}\; \tau_{ref,4}\; \beta_{ref,1}\; \beta_{ref,2}\; \beta_{ref,3}]^T + E_8 \beta_{ref,4}
\end{aligned} \tag{8.19}$$

The decoupling causes each of the corresponding UIO residuals to become insensitive to a fault in the decoupled input, so each fault presents with a different set of residuals. Ideally a fault should have no effect on the corresponding decoupled residual. However, due to the inherent non-linearities, it is necessary to evaluate the residuals in the following way:

$$i = \arg\min_j\big(r_j(t)\big) \tag{8.20}$$

where i is the index of the residual least affected by a given fault. The residual ri is given the logic value 0 and the rest are given the logic value 1.

In Eq. (8.21) the different faults are presented with their corresponding logical residual patterns:

$$\begin{array}{c|cccccccc}
\text{Fault in} & r_1 & r_2 & r_3 & r_4 & r_5 & r_6 & r_7 & r_8\\\hline
\beta_1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
\beta_2 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1\\
\beta_3 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1\\
\beta_4 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1\\
\tau_1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1\\
\tau_2 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1\\
\tau_3 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1\\
\tau_4 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0
\end{array} \tag{8.21}$$
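The argmin evaluation of Eq. (8.20), combined with the threshold test of Eqs. (8.22)–(8.23), can be sketched as follows. The residual norms and threshold are hypothetical illustration values:

```python
def uio_isolate(residual_norms, threshold):
    """Evaluation logic of Eqs. (8.20)-(8.23): detect a fault when any
    residual exceeds the threshold, then assign logic value 0 to the
    smallest residual (its decoupled input is the fault candidate) and 1
    to the rest, giving one row of the pattern in Eq. (8.21)."""
    if max(residual_norms) < threshold:
        return None, [0] * len(residual_norms)          # fault-free case
    i = min(range(len(residual_norms)), key=lambda j: residual_norms[j])
    return i, [0 if j == i else 1 for j in range(len(residual_norms))]

# Hypothetical residual norms from the eight UIOs; the observer that
# decouples tau_1 (index 0) is barely affected, so tau_1 is the candidate.
inputs = ["tau1", "tau2", "tau3", "tau4", "beta1", "beta2", "beta3", "beta4"]
norms = [0.0002, 0.0021, 0.0024, 0.0023, 0.0019, 0.0022, 0.0025, 0.0020]
idx, pattern = uio_isolate(norms, threshold=0.001)
print(inputs[idx], pattern)   # -> tau1 [0, 1, 1, 1, 1, 1, 1, 1]
```

In practice the test in Sec. 8.2.2 shows that two residuals can be almost equally unaffected (τ1 and τ2 in straight driving), which is exactly the pairwise ambiguity established in the isolability analysis.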

These decoupled systems are then used to design eight different UIOs. The process of UIO design is explained by an example in App. F.1 on page 159.


Detection is performed by simple threshold logic:

$$|r(t)| < \text{Threshold} \quad \text{for the fault-free case} \tag{8.22}$$
$$|r(t)| \geq \text{Threshold} \quad \text{for the faulty case} \tag{8.23}$$

8.2.2 Test of UIO method

This test will verify the method of detecting and isolating faults using the Unknown Input Observer method. It is a simulation test where the non-linear model is used at a predefined working point. The working point is given by:

$$\begin{gathered}
\beta_1 = 0 \quad \tau_1 = 10 \quad \dot{x} = 0.2664 \\
\beta_2 = 0 \quad \tau_2 = 10 \quad \dot{y} = 0 \\
\beta_3 = 0 \quad \tau_3 = 10 \quad \theta = 0 \\
\beta_4 = 0 \quad \tau_4 = 10 \quad \dot{\theta} = 0
\end{gathered} \tag{8.24}$$

That is, the API robot is driving straight ahead with a 10% actuation on all wheels. Thetest is performed without a path controller and all faults are introduced at 3 seconds.

It must be noted that, as it is only possible to see actuator faults directly, only the effects of the different actuator faults are tested. For example, a No actuation fault could have several different causes, as described in Cha. 6 on page 61. In this case a fault is isolated by looking at which residual has the lowest value. This follows from the assumption that the residual where a given input is decoupled in the UIO will be the least affected by a fault in this input. It is observed in Fig. 8.7 that within 10 ms of the introduction of a fault, the residuals of UIO-τ1 and UIO-τ2 (B1 and B2 respectively) are affected less than the rest. This can also be observed in Fig. 8.8, where the fault isolation indicates a fault in the inputs τ1 or τ2. Furthermore it can be seen that the indication remains constant for at least 1.5 seconds. As previously established, it is not possible to isolate the fault further using passive model based FDI, because of the inherent dependencies between the two inputs.

Figures G.1 to G.7 on pages 163–165 show the isolation of the rest of the fault effects, as indicated by each figure.

8.2.3 Beard Fault Detection Filter method

The Beard Fault Detection Filter (BFDF) method calls for the use of a full order state observer [9] to generate directional residuals. The residual can then be used to isolate a given fault based on its direction. The method is based on an approach presented in [5, pages 87–98] and can be extended with UIO theory to help decouple disturbances. In the current case nothing is known about the disturbances of the system, so instead a standard full order observer is applied.

The current linear system can be described as:

$$\dot{x}(t) = Ax(t) + Bu(t) + b_i f_{a_i}(t)$$
$$y(t) = Cx(t) + I_j f_{s_j}(t) \tag{8.25}$$

where:


Figure 8.7: UIO residues following the propulsion actuator fault: No actuation on wheel 1.

• bi is the ith column of the input matrix B.

• fai(t) denotes a fault in the ith actuator.

• Ij is the jth column of the unit matrix I.

• fsj(t) denotes a fault in the jth sensor.

The residual of a full order observer is described by:

$$\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + K\big(y(t) - C\hat{x}(t)\big)$$
$$r(t) = y(t) - C\hat{x}(t) \tag{8.26}$$

In the current system all sensor faults will present as actuator faults, because of the feedback in the steering and propulsion loops. As a consequence the model can be reduced to:

$$\dot{x}(t) = Ax(t) + Bu(t) + b_i f_{a_i}(t)$$
$$r(t) = Cx(t) \tag{8.27}$$

It is known that if a fault occurs in the ith actuator it will have a signature direction denoted by bi, also called the fault event direction.

When the full order observer is designed, there exists some design freedom in the form of the feedback gain K. The feedback gain must stabilize the error system associated with the observer:

$$\dot{e}(t) = (A - KC)e(t) + b_i f_{a_i}(t)$$
$$r(t) = Ce(t) \tag{8.28}$$

Figure 8.8: UIO decision function following the propulsion actuator fault: No actuation on wheel 1.

Since there is not a single solution to this, the remaining design freedom can be used to ensure that the residual r(t) maintains a fixed direction. This can be done as described in [5, page 89] by ensuring that:

$$\mathrm{rank}\big[b_i \;\; (A - KC)b_i \;\; \cdots \;\; (A - KC)^{n-1} b_i\big] = 1, \tag{8.29}$$

i.e. that the controllability matrix of the pair (A − KC, bi) has rank one.

After the observer is designed a way of evaluating the residual vector has to be found.One way is to observe how the current residual correlates with the different fault eventdirections. In this case there are eight fault event directions so there are eight correlationswhich can be described by:

CORR_i(t) = |(C·b_i)^T·r(t)| / ( ‖C·b_i‖₂ · ‖r(t)‖₂ )    (8.30)

If CORR_j > CORR_k then the fault is more likely to be in actuator j than in actuator k. Detection is performed by simple threshold logic:

‖r(t)‖ < threshold for the fault-free case    (8.31)

‖r(t)‖ ≥ threshold for the faulty case    (8.32)
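The residual evaluation of Eqs. 8.30–8.32 can be sketched as follows; the matrices and the threshold below are illustrative values, not the API model.

```python
import numpy as np

def fault_correlations(r, C, B):
    """Correlation of the residual with each fault event direction C b_i (Eq. 8.30)."""
    dirs = C @ B                      # column i is the output-space signature C b_i
    num = np.abs(dirs.T @ r)
    den = np.linalg.norm(dirs, axis=0) * np.linalg.norm(r)
    return num / den

def isolate(r, C, B, threshold):
    """Threshold logic of Eqs. 8.31-8.32: None if fault-free, else the most likely input."""
    if np.linalg.norm(r) < threshold:
        return None                   # fault-free case
    return int(np.argmax(fault_correlations(r, C, B)))

# Hypothetical example: the residual is nearly aligned with the signature of input 1
C = np.eye(3)
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(isolate(np.array([0.0, 2.0, 0.1]), C, B, threshold=0.5))
```

A small residual norm returns None (fault-free), while a residual dominated by one signature returns that input's index.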

8.2.4 Test of Beard Fault Detection Filter method

This test will verify the method of detecting and isolating faults using the BFDF method. It is a simulation test where the non-linear model is used at a predefined working point.

86 Aalborg University 2007


The working point is given by:

β₁ = 0   τ₁ = 10   ẋ = 0.2664
β₂ = 0   τ₂ = 10   ẏ = 0
β₃ = 0   τ₃ = 10   θ = 0
β₄ = 0   τ₄ = 10   θ̇ = 0    (8.33)

That is, the API robot is driving straight ahead with a 10% actuation on all wheels. The test is performed without a path controller and all faults are introduced at 3 seconds.

It must be noted that, as it is only possible to see actuator faults directly, only the effects of the different actuator faults are tested. For example, a No actuation fault could indicate several different causes, as described in Cha. 6 on page 61.

Figure 8.9: BFDF correlation following the propulsion actuator fault: No actuation on wheel 1. (Eight panels, BFDF-τ1 to BFDF-τ4 and BFDF-β1 to BFDF-β4, each showing the correlation between 0 and 1 over 0–5 s.)

In this case a fault is isolated by looking at which residual has the highest correlation with b₁. It is observed in Fig. 8.9 that within 10 ms of the introduction of the fault at the 3 second mark, the correlations with τ₁ and τ₂ (b₁ and b₂ respectively) increase to values higher than the rest. This can also be observed in Fig. 8.10 on the following page, where the fault isolation indicates a fault in BFDF-τ₁ or BFDF-τ₂. Furthermore it can be seen that the indication remains constant for at least 2 seconds.

This is not a factor in the isolability, but it says something about the robustness of the method, and also that the fault might not be severe enough to make the API leave its working point. As previously established, it is not possible to isolate the fault further using passive model based FDI because of the inherent dependencies between the two inputs.


Figure 8.10: BFDF decision function following the propulsion actuator fault: No actuation on wheel 1. (Faulty input indicated over 0–5 s; candidates: no fault, τ1–τ4, β1–β4.)

Fig. G.8 to Fig. G.14 on pages 166–168 show the isolation of the rest of the fault effects, as indicated by each figure.

8.2.5 Partial Conclusion

Both methods are tested with the straight driving linear model and work as expected. The full state observer seems like a good choice since only one observer is needed, as opposed to the UIO method. When inspecting the time that a given fault remains isolated, it is apparent that the BFDF method is more robust against the non-linearities that arise when a fault occurs. It outperforms the UIO method in every fault situation, as can be seen in App. G.2 on page 166 and App. G.1 on page 163. The tests used the non-linear model and give a good indication of how the methods will work on the real system. There are, however, uncertainties with regards to noise levels and model accuracy, which will have an impact particularly on the performance of the fault isolation. Comparing the two methods, it can be concluded that the UIO method is more computationally demanding because it needs to run eight linear models simultaneously, while the BFDF method only needs eight correlation calculations. Since the UIO method does not provide any other significant advantages, it has been chosen to continue with the BFDF method in the hybrid FDI proposed for the API robot.

8.3 Conclusion

The purpose of the linear FDI part of the thesis was to design a method enabling the use of linear FDI on the API robot. The proposed method called for two parts: first, a hybrid model with a hybrid state observer to ensure that the current linear model is the best fitting of the available models; second, a continuous linear FDI observer to detect and, if possible, isolate faults that occur in the API system.

In Sec. 8.1 on page 75 a hybrid observer was successfully designed and tested after three different methods were explored, as described in Sec. 8.1.4 on page 80. This also tested the hybrid model defined in Sec. 4.4 on page 46. The conclusion is that the state observer maintains a good fit to the non-linear model's state trajectory and functions as it was designed to.


Sec. 8.2 on page 81 described the design and test of two different linear FDI methods: the BFDF method and the UIO method. Both were tested on the non-linear model and found suitable for implementation in the linear FDI scheme, and the BFDF method was selected for implementation as described in Sec. 8.2.5 on the preceding page.

The last part was to combine and test the ICR observer and BFDF methods to make up the hybrid FDI method described in Cha. 8 on page 75. The test was carried out in Cha. 11 on page 109 and showed that the method had some problems detecting and isolating some of the faults described in Cha. 6 on page 61, but could detect the No actuation faults on wheels 1 and 3.


Chapter 9

Nonlinear Particle Filter Based FDI of Steering and Propulsion System

This chapter describes the design and implementation of a particle filter based approach to FDI. The focus of the project until now has been the linear approach, due to its relative maturity compared to a non-linear approach such as the particle filter FDI method. The non-linear approach is chosen as a Proof-of-Concept due to computational limitations, which will be explained in detail later. The FDI method described in this chapter is a particle filtering based likelihood ratio approach [12], which combines the Log Likelihood Ratio (LLR) with multi model particle filters. The method will in this chapter be abbreviated as PF-FDI (Particle Filter-FDI). The structure of the PF-FDI method can be seen in Fig. 9.1.

Figure 9.1: The structure of the PF-FDI method. (Sensor measurements and inputs feed a bank of particle filters (No Faults, Fault 1, ..., Fault N) whose outputs are combined in a Fault Diagnosis block that outputs the fault.)

9.1 Requirements for Particle Filter-FDI

As mentioned in the introduction, the particle filter approach is based on multiple non-linear models. Normally the PF is used for state estimation, requiring only the nominal model of the system. But as this method is used for FDI, an additional number of models is required; more specifically, a model describing each of the fault scenarios subject to detection and isolation. This entails the requirement that all faults must be described by a non-linear stochastic state space model. This excludes the fault scenario Erratic


wheel movement, since a random change of the parameters of the model cannot be described adequately, due to the inherent difference between the modelled random parameter and the actual parameter. A special case is the fault scenarios Actuator offset and Sensor offset. These scenarios are possible to model accurately, but as the size of the offset is not constant, a large number of models is required to cover the range of possible offsets. As a consequence, all Offset faults are excluded from use in the PF-FDI method.

In the case of the API there are 6 fault scenarios, each applicable to any one of the wheels, resulting in a total of 25 models required for full FDI of all singular faults. The 6 fault scenarios can be seen in Cha. 6 on page 61. The PF-FDI can be extended to include multiple faults occurring or present at the same time; this, however, requires models for all possible combinations of two concurrent faults. The scenarios, and therefore models, selected for implementation in this case are:

No Faults: The nominal model.

Fault 1: Propulsion Actuator Fault: No Actuation on wheel 1.

Fault 2: Propulsion Actuator Fault: No Actuation on wheel 3.

Fault 3: Steering Actuator Fault: Max. Negative Actuation on wheel 2.

The level of isolability described in Cha. 7 on page 71 is also valid for the PF-FDI method, as the couplings are due to the physical design of the API and not to limitations in the modeling.

The models are based on the dynamic model described in Sec. 4.3 on page 39, which is displayed below for reference:

ẍ_M = (1/m)·Σ_{i=1}^{4} ( cos(β_i + θ_M)·F_xi − sin(β_i + θ_M)·F_yi ) − ẏ_M·θ̇_M − a_Mgx
ÿ_M = (1/m)·Σ_{i=1}^{4} ( sin(β_i + θ_M)·F_xi + cos(β_i + θ_M)·F_yi ) + ẋ_M·θ̇_M − a_Mgy
θ̈_M = (1/I)·Σ_{i=1}^{4} ( sin(β_i − γ_i)·κ_i·F_xi + cos(β_i − γ_i)·κ_i·F_yi )    (9.1)

where:

F_yi = −(C_f1 + C_f2·V_M)·α_i    (9.2)

F_xi = τ_r,i / r_w    (9.3)

τ_r,i = (K_m(d) / R_a(d))·V_(d),i − (K²_m(d) / R_a(d) + b_(d))·φ̇_i    (9.4)

The faults are introduced by setting τ_r,i = −(K²_m(d) / R_a(d) + b_(d))·φ̇_i, where i is 1 for fault 1 and 3 for fault 2. The model for fault 3 is obtained by setting β₂ = −π/2.

Choosing the dynamic model has one shortcoming, as the model is dependent on θ_M, which causes the different models to diverge more quickly due to the strong effect of the orientation of the robot. This shortcoming is solved by using the sensor measurement of the API orientation as θ_M in all models, making all models point in the same direction while still allowing the induced faults to affect the χ_M vector.


The models are described by the following set of nonlinear state space models, indexed by m = 0, 1, …, M:

x_k^(m) = f^(m)( x_{k−1}^(m), w_{k−1}^(m) )    (9.5)

y_k^(m) = h^(m)( x_{k−1}^(m), v_{k−1}^(m) )    (9.6)

where

• x is the state vector.

• w is a zero mean white process noise with known PDF.

• f(x,w) is the non-linear model.

• y is the output measurements vector.

• v is a zero mean white measurement noise with known PDF.

• h(x, v) is the non-linear measurement model.

The process noise w for the steering actuator is considered to be insignificant, due to the quality of the steering encoder used for feedback in the steering angle controller on the LH28s. The remaining actuator is the propulsion actuator. As no actuator noise determination has been performed, the process noise is selected based on the propulsion sensor. This is made possible by the way the sensor noise is estimated [4]: the sensor noise is estimated without correction with regards to the actual speed of the wheel, which means that both sensor and actuator noise are present in the noise measurements. Since the ratio of actuator noise to sensor noise is unknown, the worst case scenario is chosen and the estimated noise is attributed entirely (100%) to the actuator. The propulsion actuator noise can then be written as:

w_propulsion = N(0, σ²_propulsion)    (9.7)

σ²_propulsion = 5·10⁻⁴·φ̇    (9.8)

The new actuator model is:

τ_r,i = (K_m(d) / R_a(d))·V_(d),i − (K²_m(d) / R_a(d) + b_(d))·(φ̇_i + N(0, σ²_propulsion))    (9.9)

The measurement noise v is selected as the noise on the three sensors used in the output measurement vector [4]: y = [x_M,GPS  y_M,GPS  θ_M,Gyro]^T. The noise is:

v_xM = N(0, σ²_xGPS)    (9.10)
v_yM = N(0, σ²_yGPS)    (9.11)
v_θM = N(0, σ²_θGyro)    (9.12)

σ²_xGPS = 1·10⁻⁴ [m]    (9.13)
σ²_yGPS = 1·10⁻⁴ [m]    (9.14)
σ²_θGyro = 1.1·10⁻⁵ [°/s]    (9.15)


The basic idea of this multi-model approach is to calculate the probability of each model being the correct model. This probability is called the conditional probability p(x_k^(m)|Z_k), where Z_k is the set of measurements up to time k. Comparing the probabilities to each other and selecting the model with the highest probability would then provide the correct model, thereby detecting and isolating the fault or confirming nominal operation. Using linear models, these probabilities could be obtained using Kalman filters, but when dealing with non-linear models this is not possible and a more advanced solution is required, such as particle filters.

9.2 Particle Filter Design

The PF generates an approximation of p(x_k^(m)|Z_k) using a swarm of particles, each propagated through the non-linear models. The particle filter uses the following algorithm:

0. Initial conditions: N particles x_0^(m)(i), i = 1, 2, …, N, sampled from the PDF p(x_0^(m)|Z_0).

1. Prediction: Draw N samples w_{k−1}^(m)(i), i = 1, 2, …, N, from the noise vector w. These samples can then be propagated, together with the particles from the previous time step x_{k−1}^(m)(i), i = 1, 2, …, N, through the non-linear model:

x_{k|k−1}^(m)(i) = f_{k−1}^(m)( x_{k−1}^(m)(i), w_{k−1}^(m)(i) )    (9.16)

2. Update: Each particle is assigned a weight, which results in the posterior PDF p(x_k^(m)|Z_k) being represented in terms of weighted particles. The weights are given for each particle i = 1, 2, …, N by:

w_k(i) = p(y_k | x_{k|k−1}(i)) / Σ_{j=1}^{N} p(y_k | x_{k|k−1}(j))    (9.17)

The probability p(y_k|x_{k|k−1}) is called the likelihood and is defined by:

p(y_k|x_{k|k−1}) = (2π)^(−3/2)·|Σ_v|^(−1/2)·exp( −½·(y_k − h_k(x_{k|k−1}))^T·Σ_v⁻¹·(y_k − h_k(x_{k|k−1})) )    (9.18)

where y_k is the sensor measurement for the current time step and where the covariance matrix Σ_v is defined by:

Σ_v = diag( σ²_xGPS, σ²_yGPS, σ²_θGyro )    (9.19)

3. Resample: In order to ensure that the particles accurately describe the PDF of the different models, the particles that have drifted so far away that their contribution to the PDF is negligible are excluded from further propagation. The chosen method is Systematic Resampling [6, 8]. The remaining particles are then used as the basis for the next time step of the algorithm.

4. Steps 1, 2 and 3 form a single iteration and are performed for each time step k.
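As an illustration, one iteration of the algorithm above can be sketched for a scalar toy system; the model, noise levels and measurements below are made up for the example and are not the API dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(weights, rng):
    """Systematic resampling: N evenly spaced pointers with one random offset."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def particle_filter_step(particles, y, f, h, sigma_w, sigma_v, rng):
    """One predict/update/resample iteration of a bootstrap particle filter."""
    # 1. Prediction: propagate every particle through the model with process noise
    particles = f(particles) + rng.normal(0.0, sigma_w, size=particles.shape)
    # 2. Update: Gaussian likelihood of the measurement given each particle
    w = np.exp(-0.5 * ((y - h(particles)) / sigma_v) ** 2)
    w /= w.sum()
    # 3. Resample: keep particles in proportion to their weights
    return particles[systematic_resample(w, rng)]

# Toy scalar system, NOT the API model: x' = 0.9 x + 1, measured directly.
f = lambda x: 0.9 * x + 1.0
h = lambda x: x
particles = rng.normal(0.0, 1.0, size=200)   # initial particle cloud
for y in [1.0, 1.9, 2.7, 3.4]:               # synthetic measurements
    particles = particle_filter_step(particles, y, f, h, 0.1, 0.5, rng)
```

After the four measurements the particle mean should track the final measurement of about 3.4; in the PF-FDI bank the same loop runs once per model m.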


9.3 FDI using Particle Filters

The FDI part of the PF-FDI combines the log likelihood ratio with multiple hypothesis testing and particle filters. The method described is an adaptation of the Generalized Likelihood Ratio [22], made to function with PFs instead of the linear Kalman filters used in the original method. The decision function for detecting a single fault can be written as:

g_k = max_j S_j^k  ≷_{H0}^{H1}  λ    (9.20)

where:

• S_j^k is the joint log likelihood ratio of the PDFs of the nominal and faulty model, from time j to k.

• H0 and H1 are the no-change and change hypotheses.

• λ is the threshold between the two hypotheses.

The PF-FDI approach extends the decision function with particle based LLRs and multiple hypothesis testing.

The extended joint LLR for model (m) is defined as:

S_j^k(m) = Σ_{r=j}^{k} ln( p(y_r | H_m, Z_{r−1}) / p(y_r | H_0, Z_{r−1}) )    (9.21)

The extended joint LLR is also known as the cumulative sum.

The likelihood p(y_r|H_m, Z_{r−1}) is defined using the likelihood of each particle, calculated in the Update step of the PF algorithm in Sec. 9.2 on the preceding page:

p(y_r | H_m, Z_{r−1}) = (1/N)·Σ_{i=1}^{N} p( y_r | x_{r|r−1}^(m)(i) )    (9.22)

The decision function for the multi-model case can then be written as:

g_k = max_{1≤j≤k} max_{1≤m≤M} S_j^k(m)  ≷_{H0}^{H1}  λ    (9.23)

The fault threshold λ is selected as 200 in order to avoid small spikes in the joint LLR triggering detections; furthermore, all LLR values exceeding 5000 are ignored, as larger values are considered outside the expected range of the LLR under both nominal and faulty operation.

Fault detection is obtained when g_k exceeds the threshold, with fault isolation being achieved by determining which fault index m is responsible for the change of g_k.

As the decision function requires a linearly growing number of calculations, since S_j^k must be calculated for each possible fault time from 1 to k, a sliding window of width W is applied to the function, resulting in the final decision function:

g_k = max_{k−W+1≤j≤k} max_{1≤m≤M} S_j^k(m)    (9.24)


The window must be wide enough to provide detection and isolation of the selected faults. A suitable width is found by trial and error, which in this implementation has resulted in a window size of 25 samples, equal to a period of 0.5 seconds.
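The windowed decision function of Eq. 9.24 can be sketched as follows, assuming the per-step LLR increments ln(p(y_r|H_m)/p(y_r|H_0)) have already been produced by the particle filters; the synthetic input in the test is illustrative.

```python
import numpy as np

def decision_function(llr_increments, window=25):
    """Sliding-window decision function, a sketch of Eq. 9.24.

    llr_increments: array of shape (M, K) with per-step log likelihood
    ratio increments for M fault models over K time steps.
    Returns g_k and the maximizing fault index m for each k.
    """
    M, K = llr_increments.shape
    g = np.empty(K)
    best_m = np.empty(K, dtype=int)
    for k in range(K):
        j0 = max(0, k - window + 1)
        seg = llr_increments[:, j0:k + 1]
        # S[m, p] = sum of increments from candidate fault time j0+p up to k
        S = np.cumsum(seg[:, ::-1], axis=1)[:, ::-1]
        m, j = np.unravel_index(np.argmax(S), S.shape)
        g[k] = S[m, j]
        best_m[k] = m
    return g, best_m
```

Feeding it increments that become positive for one model after a fault time makes g_k grow for that model only, which is then compared against λ.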

The design of the PF-FDI method is now complete.

9.4 Performance Parameters with regards to Particle Filter-FDI

The PF-FDI algorithm is computationally heavy with regards to CPU usage due to its recursive and multi model nature, as well as the underlying PFs. The implementation uses the following parameters:

No. of models: M = 4. One nominal model and 3 fault models.

No. of particles: N = 7. The number of particles needed for estimating the PDF of the different models, decided using a common rule of thumb: 2 times the number of states plus one. The non-linear model used consists of three states.

Sample rate: 50 Hz (T_s = 0.02 s). The sample rate used throughout the API control, FDI and estimation software.

Numerical integration method: Runge-Kutta. The chosen integration method requires 4 passes through the nonlinear model in order to estimate the result of the integration: int_runs = 4.

Complexity of the nonlinear model: The dynamic model used is complex due to its trigonometric functions and multiplications.

This adds up to the following equations, which illustrate the number of times the nonlinear model is calculated:

no. of dynamic model runs = M·N·int_runs    (9.25)
= 112 runs/sample · 1/T_s    (9.26)
= 5600 runs/second    (9.27)

The full implementation of the PF-FDI multiplies this number by roughly a factor of 6 due to the 25 models. The FDI part of the algorithm also depends on the constants listed above, but is less demanding due to the absence of numerical integration and trigonometric functions.
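The load estimate can be reproduced with simple arithmetic; the factor of roughly 6 for the full bank follows from 25 models versus the 4 implemented here.

```python
# Reproducing the load estimate of Eqs. 9.25-9.27 and the scaled-up figure.
M, N, int_runs = 4, 7, 4     # models, particles, Runge-Kutta passes
fs = 50                      # sample rate [Hz], Ts = 0.02 s

runs_per_sample = M * N * int_runs
runs_per_second = runs_per_sample * fs

# Full 25-model bank (24 fault models + the nominal model):
full_runs_per_second = 25 * N * int_runs * fs
print(runs_per_sample, runs_per_second, full_runs_per_second)
```

This gives 112 runs per sample and 5600 runs per second for the 4-model bank, and 35000 runs per second for the full bank, i.e. 25/4 = 6.25 times more.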

As mentioned in the introduction, the PF-FDI is only a Proof-of-Concept, as the current OBC is not powerful enough to compute the required number of passes through the dynamic model with its 150 MHz CPU. Therefore the test of the PF-FDI method is performed offline, using the non-linear model as the real system.


9.5 Preliminary Test of Particle Filter-FDI Method

This section functions as a preliminary test of the PF-FDI. The scenario from the hybrid model chapter is the basis of the simulation of the PF-FDI method; the path can be seen in Fig. 4.20 on page 50. The fault tested in this section is Propulsion Actuator Fault: No Actuation on wheel 1 (Fault 1) and is introduced at time = 25 s. The result can be seen in Fig. 9.2 on the next page.

The figure shows the behaviour of the different fault models and how the measurements change from following the no fault particles to following the fault 1 particles. The joint LLR can be seen in Fig. 9.3 on the following page. The output of the decision function can be seen in Fig. 9.4.

The results show that the fault detected was not the fault introduced, although it was of the same type as the real fault. Previous simulations have also shown false alarms occurring before any fault was introduced. This is caused by the behaviour of the different nonlinear models: when going from driving straight to turning, or from nominal operation to faulty operation, the velocities cross each other, causing the joint LLR to indicate a high likelihood of one or more fault models. The largest of the joint LLRs is detected by the decision function, resulting in a false alarm or a wrong fault type. These crossings are of short duration, and therefore an additional requirement is added to the decision function: for fault detection, a fault must be indicated by the original decision function continually over a period of 2 seconds.
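The added persistence requirement can be sketched as a simple debounce on the decision function output; the sample time and hold time follow the text, while encoding the decisions as integers (0 = no fault) is an assumption made for the example.

```python
def persistent_fault(decisions, dt=0.02, hold_time=2.0):
    """Report a fault only if the same non-zero decision persists for
    hold_time seconds, suppressing short-lived LLR crossings."""
    need = int(hold_time / dt)   # e.g. 100 consecutive samples at 50 Hz
    run, current = 0, 0
    for d in decisions:
        if d != 0 and d == current:
            run += 1             # same fault index as last sample
        else:
            current, run = d, (1 if d != 0 else 0)
        if current != 0 and run >= need:
            return current       # fault confirmed
    return 0                     # nothing persisted long enough
```

A spike of a few samples, or a decision that flips between fault indices, never reaches the required run length and is ignored.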

This concludes the design of the PF-FDI method.

9.6 Preliminary Conclusion

In this chapter the PF-FDI method was designed. The fault scenarios and the nominal scenario were implemented as Particle Filters using the non-linear dynamic model. The FDI part of the method was also implemented and can be concluded to function as expected, although with the added requirement that a fault must be present for at least 2 seconds before detection. The implementation of the method was designed to allow easy addition of the remaining faults.


Figure 9.2: Simulation results of the Particle Filters with Fault 1: Propulsion Actuator Fault: No Actuation on wheel 1 @ t=25. (Three panels show x_M, y_M and θ_M over 0–50 s for the measurements and the No Faults, Fault 1, Fault 2 and Fault 3 particle sets.)

Figure 9.3: Joint Log Likelihood of the Particle Filters with Fault 1: Propulsion Actuator Fault: No Actuation on wheel 1 @ t=25. (Log Likelihood Ratio of Fault 1, Fault 2 and Fault 3 over 0–50 s, with the real fault time marked.)


Figure 9.4: Decision function of the Particle Filters with Fault 1: Propulsion Actuator Fault: No Actuation on wheel 1 @ t=25. (Fault index over 0–50 s, with the real and detected fault times marked.) The detected fault is not the fault occurring in the simulation.


Chapter 10

Active Fault Isolation Supervisor

As described in Cha. 7 on page 71, the chosen FDI scheme is only able to isolate faults down to which wheel pair is faulty when driving straight, as seen in Fig. 7.1 on page 72, whereas only fault detection is possible when turning. In order to provide complete fault isolation, down to which sensor or actuator is responsible for the deviation of the API, an Active Fault Isolation (AFI) scheme is to be implemented. The purpose of the AFI is to take control of the API in case of a detected fault. The AFI will then perform a series of tests in order to isolate the fault.

The method is divided into two parts, one responsible for isolating steering faults and the second responsible for the propulsion faults. The AFI supervisor is placed between the FD supervisor and the FTC supervisor.

An overview of the method can be seen in Fig. 10.1 on the next page. The figures describing the AFI method use double ringed circles as start states, thick bordered circles as stop states and thin circles as intermediate states. This reflects the discrete event nature of the method.

An added value of the active testing of the different actuators and sensors is that false alarms can be detected and consequently ignored. This is an improvement over the passive FDI, which, due to only observing the system, is vulnerable to detecting and isolating a non-existing fault, i.e. a false alarm. The passive FDI then propagates the false alarm to the FTC supervisor, resulting in poor performance of the control system due to the disabling of working actuators or sensors.

10.1 Active Isolation of Steering Faults.

Isolation of steering faults, both sensor and actuator, is based on the electro-mechanical stoppers implemented on the API. The stoppers are designed as a safety measure in order to prevent turning the wheels outside the safe area, as well as to avoid breaking the cables inside the joint due to twists. The procedure for the isolation of steering faults can be seen in Fig. 10.2 on page 103 and is described in App. H.1 on page 169.


Figure 10.1: Flowchart of the AFI process. (From Active FI Supervisor Activated, a Steering Fault Detected event leads to Steering Fault Isolation and a Propulsion Fault Detected event leads to Propulsion Fault Isolation; each branch ends in Fault Isolated or No Fault Isolated, after which the Active FI Supervisor is Disabled.)

10.2 Active Isolation of Propulsion Faults.

The isolation of propulsion faults is based on the velocity of the API and the kinematic model:

V^B = sqrt( (ẋ^B)² + (ẏ^B)² )
V^B_ref = sqrt( (ẋ^B_ref)² + (ẏ^B_ref)² )
ẋ^B_ref = (1/4)·Σ_{i=1}^{4} φ̇_i·cos(β_i)
ẏ^B_ref = (1/4)·Σ_{i=1}^{4} φ̇_i·sin(β_i)
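The kinematic reference velocity can be sketched as follows; wheel speeds are taken directly in the units of the wheel speed measurements, with any wheel-radius scaling omitted as in the expressions above.

```python
import math

def reference_velocity(wheel_speeds, steering_angles):
    """Kinematic reference velocity of the platform from the four per-wheel
    speeds and steering angles (the V_ref expression used by the AFI)."""
    assert len(wheel_speeds) == len(steering_angles) == 4
    vx = sum(w * math.cos(b) for w, b in zip(wheel_speeds, steering_angles)) / 4.0
    vy = sum(w * math.sin(b) for w, b in zip(wheel_speeds, steering_angles)) / 4.0
    return math.hypot(vx, vy)  # sqrt(vx^2 + vy^2)
```

With all wheels pointing straight and commanded equally, the reference equals the common wheel speed; a dead wheel (No actuation) drops it by a quarter, which is the discrepancy the AFI looks for.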

The propulsion faults that are considered for detection are:

Non-model Based The following faults can be isolated by setting the wheels to zero actuation and comparing the V^M and wheel sensor measurements.

• Max. positive actuation

• Max. negative actuation

• Max. positive output

• Max. negative output

A special case is the No actuation fault, which requires that the wheel is given a non-zero reference, as the isolation method is to verify that the API is moving when subjected to actuation.


Figure 10.2: Flowchart of AFI, when isolating steering faults. (Each wheel is moved to its nearest stopper; depending on whether the min or max stopper is activated at the start, whether a stopper is activated at all, whether the output changes, and whether max positive or negative output occurs with or without a stopper activated, the outcome is classified as No Output, Max. Positive Output, Max. Negative Output, Max. Positive Actuation, Max. Negative Actuation, Sensor Offset or No Actuation; the procedure continues to the next wheel until a fault is isolated or no wheels remain.)


Model Based The remaining faults require a model based approach, which involves driving back and forth and comparing the measured velocity V^M with the model based velocity V^B_ref:

• No output

• Sensor offset

Since the propulsion sensor is not used in the control of the propulsion torque, only the isolation of actuator faults will be implemented.

The procedure for the isolation of propulsion faults can be seen in Fig. 10.3 on the facing page and is described in App. H.2 on page 170.

The speed of the robot V^M can be measured in two different ways. One is to use the velocity of the API as reported by the GPS. The other is to use the equipped Doppler radar, which measures the speed of the API directly. During the implementation it became clear that the Doppler radar is not able to distinguish which direction the API is moving, as the measured speed is always positive. If used in the isolation of propulsion faults, the two scenarios Max. positive actuation and Max. negative actuation would both be detected as Max. positive actuation. Therefore the speed measurement will be calculated using the GPS velocities.

10.3 Partial Conclusion

The AFI for isolation of steering faults has been implemented fully, whereas for the propulsion part only the propulsion actuator faults have been implemented. The isolation of the steering sensor faults Max. positive output and Max. negative output has been implemented but cannot be tested on the API. Instead, the algorithm for detecting these two faults has been verified by setting the variables to values that mimic the effect of the two faults. This simulation has shown that the algorithm works and is able to isolate the faults. During the implementation of the steering faults, a design fault was detected in the LH28 software. The error lies in the way the status of the electro-mechanical stoppers is made available to the OBC: the stoppers are part of an FDI scheme on the LH28, and their status is only transmitted when the controller is active and the wheel is actuated. This affects the detection and isolation of Max. positive actuation and Max. negative actuation on the steering actuators, although only when the fault is simulated by fixing the steering control signal to the max. or min. value. The two faults are detected as No actuation, which means that even though the fault type is not correct, a fault is detected. It is expected that a real fault would enable the status of the stoppers to be sent, thereby enabling full fault isolation.

The remaining faults are not affected and are correctly isolated.


Max.

Positive

OutputMax

Positive

Actuation

Max

Negative

Actuation

Max.

Negative

Output

Propulsion

Fault

Isolated

on

Wheel i

VM

Smaller

Then

Thredshold

VM

Greater

than

Threshold

VM

= 0

&

Vi = min

VM

= 0

&

Vi = max

Wheels remaining

Continue

to

Wheel i+1No Fault

Isolated Model

Based

FI

No Wheels remaining

Fault

Isolated

No OutputSensor

Offset

No

Actuation

VM

= Vref

&

Vi = 0

VM

= Vref

&

Vi <> Vi,ref

VM

= 0

&

Vi = 0

No

Fault

Isolated

i

VM

Vref

Vi

Vi,ref

VM

= Vref

&

Vi = Vi,ref

Continue

to

Wheel i+1

= Index i is the wheel, being testet

= The current velocity of the API

= The calculated velocity of the API

= The velocity of wheel i

= The calculated velocity of wheel i

Propulsion

Fault

Isolation

Propulsion

Fault

Isolation

Figure 10.3: Flowchart of AFI, when isolating propulsion faults.
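A plausible reading of the per-wheel checks in Fig. 10.3 can be sketched as below. The condition-to-fault mapping, the tolerance and the velocity limits are assumptions distilled from the flowchart, not the actual implementation:

```python
def isolate_propulsion_fault(v_m, v_ref, v_i, v_i_ref, tol=0.05,
                             v_min=-1.0, v_max=1.0):
    """One step of the AFI propulsion isolation for wheel i (a sketch).

    v_m, v_ref     : measured and calculated API velocity
    v_i, v_i_ref   : measured and calculated velocity of wheel i
    tol, v_min, v_max are assumed values; the thesis does not list them.
    """
    tracking = abs(v_m - v_ref) < tol     # API follows its reference
    stopped = abs(v_m) < tol              # API does not move
    if tracking and abs(v_i) < tol:
        return "No Output"                # wheel sensor reads zero while driving
    if tracking and abs(v_i - v_i_ref) >= tol:
        return "Sensor Offset"            # wheel sensor deviates from its reference
    if stopped and abs(v_i) < tol:
        return "No Actuation"             # wheel does not move when commanded
    if stopped and abs(v_i - v_max) < tol:
        return "Max. Positive Actuation"
    if stopped and abs(v_i - v_min) < tol:
        return "Max. Negative Actuation"
    return None                           # no fault isolated; continue with wheel i+1
```

The supervisor would call this for each wheel in turn and stop as soon as a fault is isolated.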


Part IV

Conclusion



Chapter 11

Accept Test of Linear FDI

The two methods that make up the linear FDI scheme are tested in this chapter and the results analysed. The two methods are integrated into a SIMULINK block as described in App. C.1.3 on page 141 and tested using the scenario described in Sec. 4.4 on page 46.

As it was decided to only detect faults while the robot is driving straight, a simple way of detecting this was implemented in the MATLAB code: a threshold on θ was used to detect whether the robot was turning or driving in a straight line. A second threshold was placed on the Euclidean norm of the observer residual in order to detect when the BFDF isolation should be activated.
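The gating logic described above can be sketched as follows; both threshold values are illustrative assumptions, as the thesis does not list the tuned values:

```python
import numpy as np

STRAIGHT_THRESHOLD = 0.02   # threshold on the heading signal (assumed value)
RESIDUAL_THRESHOLD = 0.1    # threshold on the residual norm (assumed value)

def should_run_bfdf(theta, residual):
    """Gate the BFDF isolation as described in the text.

    theta    : the heading signal, thresholded to detect turning
    residual : the observer residual vector
    Returns True only when driving straight AND the residual is large.
    """
    driving_straight = abs(theta) < STRAIGHT_THRESHOLD
    fault_suspected = np.linalg.norm(residual) > RESIDUAL_THRESHOLD
    return bool(driving_straight and fault_suspected)
```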

It proved to be a challenge to tune the different thresholds in the BFDF so that the different faults are detected correctly. It was, however, possible to get the FDI scheme to detect and isolate a few faults reliably. Fig. 11.1 on the following page and Fig. 11.2 show the results of No Actuation faults being introduced after 3 seconds on wheels 1 and 3.

It can be seen that the two scenarios detect the given fault correctly. As was discovered in Sec. 7 on page 71, it is not possible to isolate input faults further than indicated in the results. Fig. 11.1 on the following page should indicate a fault in τ1 or τ2 at 3 seconds, which it does. Fig. 11.2 should indicate a fault in τ3 or τ4, and while less apparent, it also indicates the correct faults initially.

It was discovered during the test that it was difficult to separate faults from state changes, and this interfered with the fault isolation. However, a 20-sample delay on the state observer was enough to give the BFDF algorithm time to detect a given fault. Still, it was difficult to get the algorithm to detect the different faults using identical thresholds.

Linear FDI Conclusion

It can be concluded that, while the hybrid FDI scheme implementation has some problems which need to be dealt with, it detects the No Actuation fault on propulsion correctly on all wheels.


[Plot omitted: the residuals BFDF-τ1 … BFDF-τ4 and BFDF-β1 … BFDF-β4 together with the states xM, yM and θM over time t.]

Figure 11.1: The results of the No Actuation fault on wheel 1.

11.1 Hybrid State Observer

This section will determine if the hybrid state observer designed in Sec. 8.1 on page 75 is able to follow the test path set forward in Sec. 4.4 on page 46. In addition to testing the state observer, the hybrid model is also tested. The results of the test can be seen in Fig. 11.3 on page 112 and Fig. 11.4 on page 113.

Partial Conclusion

The hybrid model, and thereby the hybrid state observer, is shown to follow the non-linear model. As the hybrid model is linearized in a finite number of working points, a complete match between the hybrid model and the non-linear model is impossible. If the deviation is deemed too large, it is trivial to divide the state space into smaller areas. The current


[Plot omitted: the residuals BFDF-τ1 … BFDF-τ4 and BFDF-β1 … BFDF-β4 together with the states xM, yM and θM over time t.]

Figure 11.2: The No Actuation fault on propulsion on wheel 3.

number of states for a turning scenario is 16, which for the purpose of FDI is deemed sufficient. The final conclusion can only be made after the FDI method has been tested.

11.2 Continuous State Observer

This section describes the accept test of the linear FDI method, the BFDF.



Figure 11.3: The xy plot of the Non-linear model compared to the Hybrid model.


[Plot omitted: the velocity states xM, yM and θM of the non-linear and hybrid models over time, with the discrete state at the bottom.]

Figure 11.4: The velocities of the Non-linear model compared to the Hybrid model. At the bottom the discrete state is shown.


Chapter 12

Accept Test of Particle Filter-FDI Method

This chapter presents the accept test of the Particle Filter-FDI method designed and implemented in Cha. 9 on page 91.

The scenario from the hybrid model chapter is the basis of the simulation of the PF-FDI method. The path can be seen in Fig. 4.20 on page 50. The faults are introduced 25 seconds into the simulation. The faults tested in this section are the three fault scenarios listed below:

Fault 1: Propulsion Actuator Fault: No Actuation on wheel 1.

Fault 2: Propulsion Actuator Fault: No Actuation on wheel 3.

Fault 3: Steering Actuator Fault: Max. Negative Actuation on wheel 2.
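The underlying PF-FDI idea of running one particle population per fault hypothesis and declaring the hypothesis that best explains the measurements can be sketched generically as below. This is not the thesis implementation; the one-dimensional state layout, the noise levels and the Gaussian likelihood model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_fdi_step(particles, weights, measurement, dynamics, noise_std=0.1):
    """One update step of a multiple-model particle filter for FDI (a sketch).

    particles : dict mapping fault mode -> (N, state_dim) particle array
    weights   : dict mapping fault mode -> scalar mode probability
    dynamics  : dict mapping fault mode -> state transition function
    Returns updated particles, mode probabilities, and the most likely mode.
    """
    likelihoods = {}
    for mode, x in particles.items():
        x = dynamics[mode](x) + rng.normal(0.0, noise_std, x.shape)  # propagate
        err = measurement - x[:, 0]                                  # innovation on the measured state
        lik = np.exp(-0.5 * (err / noise_std) ** 2)                  # Gaussian measurement likelihood
        particles[mode] = x
        likelihoods[mode] = weights[mode] * lik.mean()
    total = sum(likelihoods.values())
    weights = {m: l / total for m, l in likelihoods.items()}         # normalise mode probabilities
    return particles, weights, max(weights, key=weights.get)
```

A mode whose dynamics match the (possibly faulty) plant keeps its particles close to the measurements, so its probability grows and it is declared as the active fault.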

The results of Fault 1 detection can be seen in Fig. 12.1 on the following page.

The simulation shows that the fault was detected at approximately 28 seconds, 3 seconds after fault induction. Furthermore, the fault was correctly isolated as Fault 1.

The simulation results for the remaining faults can be seen in Fig. 12.2 to 12.3 on pages 116–117.

12.1 Conclusion on Particle Filter-FDI

Fig. 12.1 to 12.3 on pages 116–117 show that the PF-FDI is able to detect and identify all implemented faults in under 4 seconds, or 200 samples, after fault induction. According to the specifications listed in Sec. 1.2 on page 17, more specifically requirement 1, this is well below the maximum time the API is allowed to deviate from its path.

Consequently, the PF-FDI is a valid method for detecting and isolating faults on the API. The only remaining problem is therefore the processing requirement mentioned in Sec. 9.4 on page 96, which the current OBC of the API cannot satisfy. If the OBC is replaced, the designed PF-FDI method is easily extended to include all fault scenarios, as well as all combinations of faults if so desired.


[Plot omitted: the measurements and the fault-hypothesis estimates (No Faults, Fault 1–3) for xM, yM and θM, with the detected fault index and the real and detected fault times at the bottom.]

Figure 12.1: The simulation results of the Particle Filter FDI method with Fault 1: Propulsion Actuator Fault: No Actuation on wheel 1 @ t=25.

[Plot omitted: same signals as Fig. 12.1.]

Figure 12.2: The simulation results of the Particle Filter FDI method with Fault 2: Propulsion Actuator Fault: No Actuation on wheel 3 @ t=25.


[Plot omitted: same signals as Fig. 12.1.]

Figure 12.3: The simulation results of the Particle Filter FDI method with Fault 3: Steering Actuator Fault: Max. Negative Actuation on wheel 2 @ t=25.


Chapter 13

Accept Test of Active Fault Isolation Supervisor

This chapter describes the accept test of the Active FI supervisor designed and implemented in Cha. 10 on page 101.

As the AFI method is divided into two parts, the steering and propulsion faults will be tested in two separate tests.

13.1 Test of Steering Fault Isolation

The test is performed by introducing faults to the wheels by means of the fault simulation code designed and implemented by a previous group [4].

The first scenario is performed with no faults active in order to establish a baseline for the test. The faults introduced to the wheels are:

• No output

• No actuation

• Sensor offset

The results of the tests are presented as the steering angle as measured by the steering sensor. The results of the no faults test can be seen in Fig. 13.1 on the following page. The results show that the wheels are moved from side to side in order to test each fault possibility. The electromechanical stoppers were activated as expected. The detected fault was No fault, i.e. no false alarm was raised.

The No output test in Fig. 13.2 on the next page shows that the steering angle measurement is not changing, although the stoppers are activated as expected. This behaviour is correctly detected and isolated as the No output fault.

The No actuation test in Fig. 13.3 on page 121 shows that the steering angles are constant and that the stoppers are not activated. This behaviour is correctly detected and isolated as the No actuation fault.

The Sensor offset test in Fig. 13.4 on page 121 shows that the steering angles are offset by approximately 40° and that the stoppers are activated. This behaviour is correctly detected and isolated as the Sensor offset fault.
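The three test outcomes above can be condensed into a simplified decision table. This is a distillation of the test descriptions, not the actual supervisor code, and the offset threshold is an assumption:

```python
def classify_steering_fault(angle_changed, stoppers_activated, angle_offset):
    """Classify the steering-test outcome as described in Sec. 13.1 (a sketch).

    angle_changed      : True if the measured steering angle followed the sweep
    stoppers_activated : True if the electromechanical stoppers triggered
    angle_offset       : measured offset in degrees at the known stop positions
    The 5-degree offset threshold is an assumed value.
    """
    if not angle_changed and stoppers_activated:
        return "No output"        # sensor frozen although the wheel moves
    if not angle_changed and not stoppers_activated:
        return "No actuation"     # wheel never moves at all
    if stoppers_activated and abs(angle_offset) > 5.0:
        return "Sensor offset"    # sensor follows but is biased (about 40 deg in the test)
    return "No fault"
```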


[Plot omitted: steering angle [°], steering PWM, and the electromechanical stopper signals (Stopper Max / Stopper Min) versus time [s].]

Figure 13.1: The input and output of the no faults test.

[Plot omitted: same signals as Fig. 13.1.]

Figure 13.2: The input and output of the No output test.


[Plot omitted: same signals as Fig. 13.1.]

Figure 13.3: The input and output of the No actuation test.

[Plot omitted: same signals as Fig. 13.1.]

Figure 13.4: The input and output of the Sensor offset test.


13.2 Test of Propulsion Fault Isolation

The test is performed by introducing faults to the wheels by means of the fault simulation code designed and implemented by a previous group [4].

The three propulsion actuator faults were successfully detected and isolated. All wheels except the wheel subjected to the isolation were shut down. In the case of the No actuation fault, an attempt to move the API with the tested wheel was performed.

13.3 Conclusion

The steering part of the AFI scheme was able to detect and isolate all introduced faults, as well as correctly reporting no fault when the baseline test was performed. The propulsion actuator faults were likewise detected and isolated.


Chapter 14

Conclusion

In this final conclusion of the report, the objectives described in the introduction, Cha. 1 on page 15, are evaluated and concluded upon.

Objective 1: Fault Analysis

Objective 1 was to perform a fault analysis and severity assessment of all wheel, proximity sensor and inclinometer faults. The fault analysis was performed on all selected faults. The severity assessment showed that the faults with the highest severity index were the proximity sensor faults. The wheel faults with the highest severity index were the steering sensor and actuation faults, due to the strong correlation between the sensor and actuator faults. The remaining actuator faults and the inclinometer faults were concluded to have a lower severity index, but were included due to their effect on the API's performance. The propulsion sensors were excluded from further investigation, as were all nonsense-output faults.

Objective 2: Linear FDI

Objective 2 was to design and implement a linear model-based FDI method. The objective consists of two parts: a hybrid state observer and a continuous state observer. Three hybrid state observers were designed and implemented on the non-linear model. The best suited was selected as the state observer used to provide the continuous state observer with a linear model. The state observer was successfully able to follow a path selected as representative of an actual path.

Objective 3: Critical Faults

Objective 3 was to verify the detection of critical faults in the API. This has not been performed, because the FDI scheme was not fully implemented on the API robot. Critical faults were, however, simulated and found as intended.

Objective 4: Non-linear FDI

Objective 4 was to design and implement a non-linear FDI method. The chosen method was a particle filter-based approach, which was implemented for a subset of faults.


The approach was successfully tested, with all implemented faults detected and isolated. In order for the particle filter method to be implemented on the API, a more powerful OBC must be present.

Objective 5: Active FDI

Objective 5 was to design and implement an active FDI method. The method was fully implemented and tested for all steering actuator and sensor faults. The propulsion actuator faults were also implemented and tested. The active FDI method was able to detect and isolate all the desired faults.

Objective 6: Proximity Sensors

Objective 6 was to design and implement proximity sensors on the API. The proximity sensors have been implemented and are capable of detecting obstacles in front of the API at such a distance that the API is able to brake and stop before hitting the object.

Objective 7: Inclinometer

Objective 7 was to design and implement a separate inclinometer in order to provide pitch and roll measurements. The inclinometer was implemented and is able to provide pitch and roll measurements at much greater accuracy than the built-in inclinometer of the compass.

Objective 8: Remote Shutdown

Objective 8 was to design and implement relays for disconnecting individual wheels. The necessary hardware and software has been implemented. The shutdown boxes are able to turn each wheel on or off individually.


Bibliography

[1] T. Bak. Lecture notes – estimation and sensor information fusion. Available online at http://www.control.aau.dk/~tb/Teaching/Courses/Estimation/sensfusion.pdf, November 2000.

[2] Thomas Bak and Roozbeh Izadi-Zamanabadi. Lecture notes – hybrid systems. Available online at http://www.control.auc.dk/~tb/hybrid/hs.pdf, 2004.

[3] Jens Biltoft, Johnny Nielsen, and Peter Thomsen. Autonom robot til markanalyse. Master's thesis, Aalborg University, 2001.

[4] Morten Bisgaard, Dennis Vinter, and Kasper Zinck Østergaard. Modelling and fault-tolerant control of an autonomous wheeled robot. Master's thesis, Aalborg University, 2004.

[5] Jie Chen and R. J. Patton. Robust Model-based Fault Diagnosis for Dynamic Systems. Kluwer Academic Publishers, 1999.

[6] Z. Chen. Bayesian filtering: From Kalman filters to particle filters, and beyond. Technical report, The Natural Sciences and Engineering Research Council of Canada, 2006. Available online at http://soma.crl.mcmaster.ca/~zhechen/download/ieeebayesian.ps.

[7] Svend Christensen. API project. Aalborg University, 2000. Available online at http://www.cs.auc.dk/api/.

[8] Randal Douc and Olivier Cappe. Comparison of resampling schemes for particle filtering. Image and Signal Processing and Analysis, 2005. ISPA 2005. Proceedings of the 4th International Symposium on, pages 64–69, 2005.

[9] Gene F. Franklin, J. David Powell, and Michael Workman. Digital Control of Dynamic Systems. Addison-Wesley, 1998.

[10] Digital Compass Module HMR3000. Honeywell, 2004. Available online at http://www.ssec.honeywell.com/magnetic/datasheets/hmr3000.pdf.

[11] Anders Joergensen, Michael Kristensen, Peter Egtoft Nielsen, and Rene Soerrensen. Navigation af autonom markrobot. Master's thesis, Aalborg University, 2003.

[12] Ping Li and Visakan Kadirkamanathan. Particle filtering based likelihood ratio approach to fault diagnosis in nonlinear stochastic systems. IEEE Transactions on Systems, Man and Cybernetics, 31(3):337–343, August 2001.

[13] D. D. Magill. Optimal adaptive estimation of sampled stochastic processes. IEEE Transactions on Automatic Control, 10(4), 1965.

[14] PIC16F87X Data-sheet. Microchip Technology Inc., 2001. Available online at http://www.senscomp.com/specs/7000%20electrostatic%20spec.pdf.

[15] B. H. Nolte and N. R. Fausey. Soil compaction and drainage. Available online at http://ohioline.osu.edu/b301/index.html, 2001.

[16] Eric J. Rossetter and J. Christian Gerdes. A Study of Lateral Vehicle Control Under a 'Virtual' Force Framework. Stanford University, 2002.

[17] 6500 Series Ranging Modules. SensComp, Inc., Sep 2004. Available online at http://www.senscomp.com/specs/6500%20module%20spec.pdf.

[18] Series 7000 Transducer. SensComp, Inc., Sep 2004. Available online at http://www.senscomp.com/specs/7000%20electrostatic%20spec.pdf.

[19] AGCO Global Technologies. Fieldstar. Available online at http://www.fieldstar.dk/Agco/FieldStar/FieldStarUK/, 2007.

[20] The SCA100T dual axis inclinometer series. VTI Technologies, rev. A edition, May 2007. Available online at http://www.vti.fi/midcom-serveattachmentguid-e69426306a371172SCA100T_inclinometer_datasheet_8261800A.pdf.

[21] Greg Welch and Gary Bishop. An introduction to the Kalman filter, July 2006.

[22] Alan S. Willsky and Harold L. Jones. A generalized likelihood ratio approach to the detection and estimation of jumps in linear systems. Automatic Control, IEEE Transactions on, 21(1):108–112, February 1976.


Part V

Appendix



Appendix A

Additional Hardware

The API is equipped with a number of wheel actuators and sensors, each of which is controlled or sampled by one of the four LH28 computers. The interface between the LH28 computers and the wheels consists of four interface electronics boxes with additional components enabling the communication between the different types of signals. The boxes are responsible for:

• Connecting the steering PWM signal from the LH28 with the H-bridge.

• Connecting the propulsion PWM signal from the LH28 with the Heinz motors.

• Providing a safety measure capable of disabling the actuation of the steering actuator in one direction if the corresponding electromechanical stopper has been activated.

• Connecting the steering encoders to the LH28.

• Connecting the electromechanical stoppers to the LH28.

• Removing non-linearities in the propulsion control signal.

Previously the boxes have existed only as prototypes and have not been completely documented in their current configuration. The prototype can be seen in Fig. A.1 on the next page.

The prototypes have previously been a source of problems. The use of fragile wires throughout the prototype, together with the use of 24 Vcc taken directly from the batteries, increases the likelihood of serious short circuits. The 24 Vcc used in the boxes is also used for the actuation of the wheels and is therefore protected with a 60 A fuse. This causes the majority of short circuits on the 24 Vcc to either destroy the components in the affected circuit or burn a wire or path away on the prototype board.

To avoid such situations, the prototype boards have been reverse engineered and reimplemented as printed circuit boards.

The diagram of one of the boxes can be seen in Fig. A.2 on page 133.

The inputs and outputs of the different connectors are listed below:


Figure A.1: The prototype of the interface electronics boxes


[Schematic omitted: the interface electronics, with connectors for the LH Agro (LH28), Heinz motor, H-bridge, steering encoder and electromechanical stoppers, 74LS08/74LS14 logic, TLC2201 op-amps, BC547/BD139 transistors and associated passives.]

Figure A.2: Diagram of the interface electronics boxes.

LH28 connector

3 Electromechanical stopper left output

15 Electromechanical stopper right output

5 Steering PWM input

6 Steering PWM input

7 Steering Encoder output

8 Steering Encoder output

10 Propulsion PWM output

11 Wheel speed output

12 Propulsion direction output

Heinz connector

1 Propulsion direction input

3 Wheel speed input

5 Propulsion PWM input

10 Status output


Steering encoder connector

3 Steering PWM output

4 Steering PWM output

Electromechanical stoppers connector

1 Electromechanical stopper left output

5 Electromechanical stopper right output

H-bridge connector

3 Steering Encoder output

5 Steering Encoder output

The final version of the interface electronics boxes can be seen in Fig. A.3.

Figure A.3: The final version of the interface electronics Boxes

The new boxes work identically to the old ones and have subsequently been used as the standard instrumentation for the API.


Appendix B

Hardware Test

In order to verify that the two implemented sensors function correctly, and to get statistical data, a number of tests are carried out on the proximity sensor as well as the implemented inclinometer.

B.1 Inclinometer Test

The inclinometer test is divided into two parts: a function test and a vibration test. The purpose of the function test is to determine if the inclinometer functions as intended. The vibration test is carried out in order to determine if the new MEMS-based sensor can handle the vibrations of the API robot better than the existing fluid-based inclinometer.

B.1.1 Function test

Test setup

The inclinometer is placed on a surface which can be tilted on one axis, with its x-axis along the edge of the surface. The surface is then tilted to an approximated angle, which was found using a ruler and simple geometry, and the measurements are taken. The surface is tilted 3 times for each axis, 20 samples are recorded for each angle and the average is taken. The test is not intended as an accuracy test, as the equipment for this is not available, but rather as an integrity test of the measurements from the SCA100T inclinometer [20].

Results

Approx. angle. Inclinometer output

0 -2.72

6 3.14

14 9.12

24 20.37

Table B.1: Y-axis inclinometer test.

Approx. angle. Inclinometer output

0 0

6 5.88

14 11.85

24 23.10

Table B.2: Y-axis test corrected for offset.


Approx. angle. Inclinometer output

0 -3.33

6 2.63

14 9.81

24 20.73

Table B.3: X-axis inclinometer test.

Approx. angle. Inclinometer output

0 0

6 5.96

14 13.15

24 24.06

Table B.4: X-axis test corrected for offset.

As can be seen in Table B.2 on the preceding page and Table B.4, the measurements fit the angles well when compensated for the initial offset, and therefore the data integrity is assumed to be good.
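The offset compensation that produces Tables B.2 and B.4 from Tables B.1 and B.3 is a simple subtraction of the reading taken at 0°. A sketch using the Table B.1 data (the tabulated values were presumably computed from unrounded measurements, so the last digits differ slightly):

```python
def correct_offset(readings, zero_reading):
    """Remove the constant mounting offset from inclinometer readings
    by subtracting the reading taken at a known 0-degree tilt."""
    return [r - zero_reading for r in readings]

# Y-axis data from Table B.1 (zero reading -2.72 degrees):
corrected = correct_offset([-2.72, 3.14, 9.12, 20.37], -2.72)
# -> approximately [0.0, 5.86, 11.84, 23.09]
```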

B.1.2 Vibration test

The previous group [4] experienced some problems with the built-in inclinometer of the HMR3000 compass module implemented on the API robot [10]. The sensor was far too noisy to be used for compensating the compass when the robot was moving. A new module was proposed, as described in Sec. 3.2 on page 26. The inclinometer needs to be tested after implementation to verify its noise sensitivity. It is expected that the MEMS-based SCA100T [20] will handle vibration noise better than the old sensor could, but being accelerometer based it will probably suffer in accuracy when the API robot accelerates.

Test setup

The API robot is placed on an asphalt surface which is close to level. All wheels are turned to 0° and the robot is actuated with 20% torque on all wheels for 10 seconds while the measurements from each sensor are logged by the OBC.

Results

Sensor Roll Pitch

HMR3000 [10] 4.51 2.48

SCA100T [20] 126.41 36.43

Table B.5: Comparison of the variance calculated for the two different sensor types.
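The numbers in Table B.5 are per-channel sample variances over the 10 s logs. A sketch of the computation, assuming the OBC logs are available as lists of samples per channel (the dict layout is an assumption about the log format):

```python
import statistics

def vibration_variances(log):
    """Per-channel sample variance of a logged vibration test.

    log : dict mapping channel name (e.g. 'roll', 'pitch') to a list of
          samples in degrees.
    """
    return {channel: statistics.variance(samples)
            for channel, samples in log.items()}
```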

When comparing data from the old inclinometer, seen in Fig. B.1 on the next page, with the new one, shown in Fig. B.2 on page 138, it is easily observed that the new sensor is less noisy than the old sensor.

B.2 Proximity Sensor Test

The proximity sensors are implemented on the API robot in order to make it safer for its surroundings, as well as to avoid damaging the API if it encounters an obstacle. The purpose of this test is to verify that the API will detect obstacles which are placed in its path. This includes a test to see if it is possible for a stationary object to get hit by the

136 Aalborg University 2007


[Figure: roll [°], pitch [°] and speed [m/s] plotted against time [s] over 10 s.]

Figure B.1: Data from the HMR3000 inclinometer when moving on asphalt. The speed of the API is shown in the bottom graph.

robot when it travels in a straight line.

Test setup

The assumption is that if the proximity sensors can detect an object inside the lines formed by the edges of the left and right wheels at a distance of 2 m, it will not be possible to hit this object when the robot runs in a straight line. Additionally, an object will be moved to different places inside the area in front of the robot to try to find a blind spot. The obstacle chosen for this test is a metal cylinder, 1.1 m tall and 0.03 m in diameter. The setup can be seen in Fig. B.3 on the next page.

Results

As described, the metal cylinder was placed at a distance of 2 m from the proximity sensors, as seen in Fig. B.3 on the following page. Different locations across the beams were tested and no blind spots were found inside the edges of the left and right wheels. Additionally, the rod was moved to one side until it was no longer in sight of the sensors to find the real detection angle at that distance. Using simple geometry, the angle is found to be approximately 13°.
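The "simple geometry" is the arctangent of the lateral offset over the range. A Python sketch, where the 0.46 m offset is an assumed value chosen to reproduce the ~13° result (the measured offset is not quoted in the text):

```python
# Detection half-angle from the lateral offset at which the rod leaves
# the beam, and the range to the sensor. The 0.46 m offset is assumed.
import math

def detection_half_angle(lateral_offset_m, range_m):
    """Half-angle of the detection cone in degrees."""
    return math.degrees(math.atan2(lateral_offset_m, range_m))

print(round(detection_half_angle(0.46, 2.0)))  # -> 13
```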

Group 1032b 137


[Figure: roll [°], pitch [°] and speed [m/s] plotted against time [s] over 10 s.]

Figure B.2: Data from the SCA100T inclinometer when moving on asphalt. The speed of the API is shown in the bottom graph.

[Figure: top view; detection half-angle θ = 13° to each side at a range of 2 m, bounded by the edges of the left and right wheels.]

Figure B.3: Top view of the detection area sensitivity angles on the SensComp 7000 ultrasonic transducers. The measured detection angle is shown.


Appendix C

Implemented Software

This chapter describes the software modified and added during the project. The full source code can be found on the attached CD. The software can be divided into three groups: Simulink blocks, comprising both control and FDI blocks, standalone programs, and sensor/actuator interfaces. A list of the added software is shown below:

Simulink Blocks:

• Proximity Supervisor

• Active FDI Supervisor

• Hybrid FDI and State Observer

Standalone Programs:

• Proximity Supervisor

• Automatic Calibration of Steering Actuators

Interfaces:

• Inclinometer Interface

• Proximity Sensor Interface

• Remote Shutdown Interface

The following sections describe the design and implementation of the software.

C.1 Simulink blocks

These Simulink blocks provide additional features and functionalities to the Simulink controllers, FDI and state estimators. They cannot be run by themselves.


C.1.1 Proximity Supervisor

The proximity supervisor is responsible for the safe operation of the API with regard to objects in front of the robot when it is operating autonomously. The supervisor continually monitors the distances reported by the proximity sensors and, in case of an object inside the safety zone of 2 meters, instructs the controller to enter a halt state. This state disables the controller, sets the LH28 controllers into standby and sets the propulsion reference to 0%. This effectively stops the API from moving until the object is moved or moves itself away. A hysteresis of 0.3 meters is added in order to avoid a situation where the API and the object are moving in the same direction, resulting in multiple starts and stops as the API catches up to the object and then stops.
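The halt/resume behaviour described above can be sketched as a two-state machine. This is an illustration in Python, not the Simulink block; only the 2 m zone and 0.3 m hysteresis are taken from the text:

```python
# Stop/go logic of the proximity supervisor: halt inside the 2 m safety
# zone, resume only once the object has cleared it by the 0.3 m hysteresis.
STOP_DIST = 2.0   # [m] safety zone
HYSTERESIS = 0.3  # [m] extra clearance required before resuming

class ProximitySupervisor:
    def __init__(self):
        self.halted = False

    def update(self, distance_m):
        """Return True while the controller must stay in the halt state."""
        if self.halted:
            # Hysteresis avoids repeated starts/stops behind a moving object.
            if distance_m > STOP_DIST + HYSTERESIS:
                self.halted = False
        elif distance_m < STOP_DIST:
            self.halted = True
        return self.halted

sup = ProximitySupervisor()
print([sup.update(d) for d in [3.0, 1.8, 2.1, 2.4, 3.0]])
# -> [False, True, True, False, False]
```

Note that at 2.1 m the supervisor stays halted even though the object is outside the 2 m zone; only at 2.4 m, beyond the hysteresis band, does it resume.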

The proximity supervisor is placed before the controller and the FDI in the controller structure shown in Fig. C.1.

[Figure: signal flow from the Proximity Supervisor into the Controller and the FDI/FTC blocks of the steering & propulsion controller.]

Figure C.1: The placement of the Proximity Supervisor in the control structure of the API.

The supervisor has been tested: it stops the API when an object is inside the safety zone and returns to normal operation when the object is moved.

C.1.2 Active FDI supervisor

The Active FDI Supervisor described in Cha. 10 on page 101 has been implemented in Simulink. The supervisor is placed after the FDI methods described in the following section. The supervisor uses the wheel pairs identified by the FDI methods as potentially faulty and performs a series of tests in order to isolate the faults. The output of the supervisor is compatible with the FTC supervisor implemented by a previous group. As the tests involved in isolating faults require disabling wheels, thereby disabling their braking, a safety check is implemented that requires the API to be both stationary and on a level surface before the tests can be performed. The block can be seen in Fig. C.2 on the next page.
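The safety check can be sketched as a simple predicate. The speed and inclination tolerances below are assumptions for illustration; the text only states that the API must be stationary and on a level surface:

```python
# Gate for the active FDI isolation tests: wheels may only be disabled
# when the API is stationary and on a (near-)level surface.
SPEED_TOL = 0.01   # [m/s] "stationary" threshold (assumed)
INCLINE_TOL = 2.0  # [deg] "level surface" threshold (assumed)

def active_fdi_allowed(speed, pitch, roll):
    """True when it is safe to run the wheel-disabling isolation tests."""
    stationary = abs(speed) < SPEED_TOL
    level = abs(pitch) < INCLINE_TOL and abs(roll) < INCLINE_TOL
    return stationary and level

print(active_fdi_allowed(0.0, 0.5, -0.3))  # -> True
print(active_fdi_allowed(0.2, 0.5, -0.3))  # -> False (still moving)
```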

The inputs are:

• The status of the electromechanical stoppers

• The speed of the wheels

• The steering angle of the wheels

• A control signal enabling the supervisor


[Figure: Simulink block with inputs Stoppers, Dphi, Beta, Enable, Steering Pairs, Propulsion Pairs, Speed and Inclination, and outputs Fault, Fault Detected, Wheel Shutdown, Steering Open Loop, Beta Ref, Propulsion PWM, Fault 07gr1032b, Active FDI in Progress and Steer PWM.]

Figure C.2: The Simulink block: Active FDI Supervisor

• The wheel pair to be tested for steering faults

• The wheel pair to be tested for propulsion faults

• The speed of the API

• The inclination of the API

The outputs are:

• The fault detected as used by the previous FTC

• The current state of the API: FAULT, NOFAULTS or HALT.

• The wheels that are to be disabled by the Remote Shutdown block.

• The status of the wheel controllers: Open loop or closed loop

• The steering reference signal

• The propulsion PWM signal

• The fault detected as described in this project

• A signal indicating the activation of the supervisor

• The steering PWM signal for use in open loop control

C.1.3 Hybrid FDI and State observer

The hybrid state observer selected in Sec. 8.1 on page 75 and the linear FDI method selected in Sec. 8.2 on page 81 have been implemented as a Simulink block. The block can be seen in Fig. C.3 on the following page.

The inputs are:

• Inputs given to the API robot. Includes torques and wheel angles for each wheel.


[Figure: Simulink block with inputs u, dchi-vector+theta and time, and outputs residual, corr, fault, state, hybrid state, fault detected and Detect only.]

Figure C.3: The Simulink block: Linear FDI (BFDF)

• Current estimated state.

• Time

The outputs are:

• The Euclidean norm of the residual.

• The correlation vector that indicates correlation between a fault and a given input.

• The current isolated fault or faults.

• The linear state of the current linear model.

• The current hybrid state.

• Fault detected port. TRUE if a fault is detected.

• Indicates whether the detect-only state is activated.

C.2 Standalone Programs

These programs can be used without the controllers active.

C.2.1 Proximity Supervisor

The standalone proximity supervisor is identical to the proximity supervisor described in Sec. C.1 on page 139 in regards to the functionality it provides. The standalone supervisor can be run concurrently with the manual control. If an object is detected, a signal is sent to the previously implemented CAN communication program, “canif.c”, which subsequently issues a standby command to the LH28s. This stops the API from moving until the object is moved.

The source code for this program is located in “c-code\proximity_supervisor.c” for the main program and “c-code\canif.c” for the extension to the CAN communication program.


C.2.2 Automatic Calibration of Steering Actuators

The current software on the LH28s loses all information about the steering position when it is terminated or the API is powered down. As the encoder on the steering actuator is measured relative to the previous position, the wheel is left at the angle it had when the software closed. When the API is restarted, the current, and potentially wrong, position is used as the origin. This results in an offset on the steering. In most cases this only results in an offset of the position and speed of the API, which can be suppressed by the controller, but it can also result in a complete shutdown of the robot. This happens when the initial position is more than 45° from the real origin, which allows a wheel to move to a position where it is perpendicular to the direction of the robot, causing the wheels to draw too much current and the fuses to blow. To prevent this situation from happening, the wheels had to be individually and manually calibrated with either the joystick or the keyboard. This is a time-consuming process. In order to avoid this, an automatic calibration of the steering wheels is needed.

The designed method uses the electro-mechanical switches installed at the outer limits of the wheel. These stoppers are activated when a magnet on the wheel moves close to them. The stoppers then prevent the wheels from turning in that direction by disabling the PWM signal to the H-bridge driving the steering. The status of the stoppers is available on the OBC.

The algorithm for the automatic calibration is as follows:

1. Turn the wheel in one direction until a stopper is activated.

2. Reset the LH28s measurement of the steering angle to zero.

3. Using the now known position of the wheels and the distance from the stopper to the real origin, move the wheel in the opposite direction until the halfway point between the two opposite stoppers is reached.

4. Reset the LH28s steering angle to the now correct origin.

5. Repeat steps 1–4 for each wheel.
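For a single wheel, steps 1–4 amount to re-zeroing the relative encoder at a known mechanical reference. The Python sketch below runs against a simulated wheel; the 45° stopper-to-origin distance is an assumed platform constant used only for illustration:

```python
# Steps 1-4 of the calibration for one wheel: drive to the stopper,
# re-zero the relative encoder there, then move back by the known
# stopper-to-origin distance. The 45 deg figure is assumed.
STOPPER_ANGLE = 45.0  # [deg] stopper position relative to the true origin (assumed)

def calibrate(turn_to_stopper, turn_by):
    turn_to_stopper()        # 1. turn until the stopper trips
                             # 2. the encoder is re-zeroed at this position
    turn_by(-STOPPER_ANGLE)  # 3. move back by the known distance
                             # 4. the wheel now sits at the true origin

class Wheel:
    """Simulated wheel whose true angle is unknown at power-up."""
    def __init__(self, true_angle):
        self.true = true_angle
    def to_stopper(self):
        self.true = STOPPER_ANGLE
    def by(self, delta):
        self.true += delta

w = Wheel(true_angle=17.0)  # arbitrary angle left over from the last run
calibrate(w.to_stopper, w.by)
print(w.true)  # -> 0.0, back at the true origin
```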

The algorithm has been implemented and is capable of calibrating the wheels to within a couple of degrees when standing on the laboratory floor, on asphalt or on hard gravel. If the API is placed on grass, the steering actuators are not powerful enough to turn the wheels and the calibration fails to return the wheels to the correct position. The problem with the steering actuators is known and has been documented by a previous group [4].

The program is implemented as a standalone program, requiring it to be run manually before the use of any controllers. The algorithm can easily be converted to a Simulink block, allowing the controller to initialize the calibration when needed.

The source code for this program is located in “c-code\calibrate_wheels.c”.

C.3 Interfaces

This section describes the interfaces designed and implemented for the use of the new sensors and actuators. The interface is defined as both the interface to the actual sensor


measurements or actuator control signals as well as the interfaces to Simulink in the form of Simulink blocks. The new interfaces are:

Inclinometer Interface This interface samples the inclinometer data received from the FSAC box. The Simulink block makes the pitch and roll measurements available to the controller and the state estimator.

Proximity Sensor Interface This interface samples the proximity sensors and calculates the distance from the API to the nearest object. The Simulink interface makes the measurements available to the proximity supervisor.

Remote Shutdown Interface This interface is responsible for turning the wheels on or off as requested by the Active FDI Supervisor or a FTC supervisor. The interface uses the parallel port to send a signal to the Remote Shutdown Boxes as described in Sec. 3.4 on page 29.

Furthermore, an interface for the status of the electromechanical stoppers has been made, giving the Active FDI supervisor a known position of the wheels.

The Simulink blocks can be seen in Fig. C.4.

[Figure: four interface blocks — Stoppers (output: Stoppers), Remote Shutdown (input: Disable Wheel), Proximity Sensor (outputs: Distance: Front Left, Distance: Front Right) and Inclinometer (outputs: Pitch, Roll).]

Figure C.4: The interface blocks used in Simulink

The source code for the new interfaces is located in “c-code\fsac_collect.c” for the Inclinometer and Proximity Sensor interfaces and “c-code\remote_shutdown.c” for the Remote Shutdown interface.


Appendix D

Linear FDI

D.1 Linear Model

To be able to use linear observers for FDI on the API robot, a linear model is needed. The current nonlinear model can be found in Cha. 4.3 on page 39 and has the form:

$$\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{\theta} \end{bmatrix} = \begin{bmatrix} f_1(\dot{x}, \dot{y}, \dot{\theta}, \theta, \beta_1, \beta_2, \beta_3, \beta_4, \tau_{ref,1}, \tau_{ref,2}, \tau_{ref,3}, \tau_{ref,4}) \\ f_2(\dot{x}, \dot{y}, \dot{\theta}, \theta, \beta_1, \beta_2, \beta_3, \beta_4, \tau_{ref,1}, \tau_{ref,2}, \tau_{ref,3}, \tau_{ref,4}) \\ f_3(\dot{x}, \dot{y}, \dot{\theta}, \theta, \beta_1, \beta_2, \beta_3, \beta_4, \tau_{ref,1}, \tau_{ref,2}, \tau_{ref,3}, \tau_{ref,4}) \end{bmatrix} \tag{D.1}$$

To linearize the model, a first-order Taylor approximation is applied to the nonlinear model. The Taylor approximation is expressed formally as:

$$f(x) \approx f(\bar{x}) + \left.\frac{df(x)}{dx}\right|_{x=\bar{x}} \cdot \tilde{x} \tag{D.2}$$

where $\bar{x}$ is the constant value of the operating point and $\tilde{x}$ is the changing small-signal value, or in other terms:

$$x = \bar{x} + \tilde{x} \tag{D.3}$$

For functions of two variables the principle is the same:

$$f(x, y) \approx f(\bar{x}, \bar{y}) + \left.\frac{\partial f(x, y)}{\partial x}\right|_{x=\bar{x},\, y=\bar{y}} \cdot \tilde{x} + \left.\frac{\partial f(x, y)}{\partial y}\right|_{x=\bar{x},\, y=\bar{y}} \cdot \tilde{y} \tag{D.4}$$

After applying this approximation (the first-order Taylor approximation), the operating-point values are subtracted from the equation, after which only the small-signal terms remain.
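The quality of the approximation can be illustrated numerically: the error of the first-order model is second order in the deviation. A Python stand-in (f(x) = x² is not the robot model):

```python
# First-order Taylor approximation around an operating point x_bar:
# the neglected terms scale with x_tilde**2.
def f(x):
    return x * x

x_bar = 2.0          # operating point
df = 2.0 * x_bar     # df/dx evaluated at the operating point

x_tilde = 0.01       # small-signal deviation
linear = f(x_bar) + df * x_tilde
exact = f(x_bar + x_tilde)
print(abs(exact - linear))  # about 1e-4 = x_tilde**2
```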

D.2 Matlab Implementation

Because the model is linearized in many different working points and because the model is fairly complicated, the process of linearizing the model is simplified by implementing a MATLAB function, which takes a set of working points as input and generates a linear model from these working points.

The function does the following:


1. Calculates the operating points for $\dot{x}_{wp}$, $\dot{y}_{wp}$, $\dot{\theta}_{wp}$ based on choices of $\beta_{wp,i}$, $\tau_{wp,ref,i}$ and $\theta_{wp}$.

2. Differentiates the equations with respect to $\dot{x}$, $\dot{y}$, $\dot{\theta}$, $\beta_i$, $\tau_{ref,i}$ and $\theta$.

3. Evaluates the differentiated equations at $\dot{x}_{wp}$, $\dot{y}_{wp}$, $\dot{\theta}_{wp}$, $\beta_{wp,i}$, $\tau_{wp,ref,i}$ and $\theta_{wp}$, giving the gains for every input and state.

4. Places the resulting gains in the linear matrices A, B, C and D.
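The differentiate-and-evaluate steps can also be sketched numerically with a central-difference Jacobian (the thesis function works symbolically in MATLAB; the toy two-state model below is only an illustration):

```python
# Numerical linearization: central-difference Jacobian of a nonlinear
# state function at a working point. Toy model, not the API dynamics.
def jacobian(f, x0, eps=1e-6):
    """Jacobian of f: R^n -> R^m at x0 (lists of floats)."""
    n, m = len(x0), len(f(x0))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x0), list(x0)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

def f(x):  # toy dynamics: xdot = [x1*x2, x1**2]
    return [x[0] * x[1], x[0] ** 2]

A = jacobian(f, [1.0, 2.0])  # working point (1, 2)
print([[round(v, 6) for v in row] for row in A])  # -> [[2.0, 1.0], [2.0, 0.0]]
```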

The result is a linear model for every working point with the following state-space structure:

$$\dot{x} = Ax + Bu$$
$$y = Cx + Du \tag{D.5}$$

where

$$A = \begin{bmatrix} \ddot{x}_{\dot{x},g} & \ddot{x}_{\dot{y},g} & \ddot{x}_{\dot{\theta},g} & \ddot{x}_{\theta,g} \\ \ddot{y}_{\dot{x},g} & \ddot{y}_{\dot{y},g} & \ddot{y}_{\dot{\theta},g} & \ddot{y}_{\theta,g} \\ \ddot{\theta}_{\dot{x},g} & \ddot{\theta}_{\dot{y},g} & \ddot{\theta}_{\dot{\theta},g} & \ddot{\theta}_{\theta,g} \\ 0 & 0 & 1 & 0 \end{bmatrix} \tag{D.6}$$

$$B = \begin{bmatrix} \ddot{x}_{\tau_{ref,1},g} & \ddot{x}_{\tau_{ref,2},g} & \ddot{x}_{\tau_{ref,3},g} & \ddot{x}_{\tau_{ref,4},g} & \ddot{x}_{\beta_1,g} & \ddot{x}_{\beta_2,g} & \ddot{x}_{\beta_3,g} & \ddot{x}_{\beta_4,g} \\ \ddot{y}_{\tau_{ref,1},g} & \ddot{y}_{\tau_{ref,2},g} & \ddot{y}_{\tau_{ref,3},g} & \ddot{y}_{\tau_{ref,4},g} & \ddot{y}_{\beta_1,g} & \ddot{y}_{\beta_2,g} & \ddot{y}_{\beta_3,g} & \ddot{y}_{\beta_4,g} \\ \ddot{\theta}_{\tau_{ref,1},g} & \ddot{\theta}_{\tau_{ref,2},g} & \ddot{\theta}_{\tau_{ref,3},g} & \ddot{\theta}_{\tau_{ref,4},g} & \ddot{\theta}_{\beta_1,g} & \ddot{\theta}_{\beta_2,g} & \ddot{\theta}_{\beta_3,g} & \ddot{\theta}_{\beta_4,g} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{D.7}$$

$$C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \tag{D.8}$$

$$D = 0 \tag{D.9}$$

and

$$x = \begin{bmatrix} \dot{x}_M & \dot{y}_M & \dot{\theta}_M & \theta_M \end{bmatrix}^T \tag{D.10}$$

$$u = \begin{bmatrix} \tau_{ref,1} & \tau_{ref,2} & \tau_{ref,3} & \tau_{ref,4} & \beta_{ref,1} & \beta_{ref,2} & \beta_{ref,3} & \beta_{ref,4} \end{bmatrix}^T \tag{D.11}$$

$$y = \dot{\chi}_M = \begin{bmatrix} \dot{x}_M & \dot{y}_M & \dot{\theta}_M \end{bmatrix}^T \tag{D.12}$$


Appendix E

Fault Analysis

This appendix shows the behaviour of the API under fault scenarios. The simulated faults are described in Cha. 6 on page 61. As mentioned before, all faults manifest themselves as actuator faults due to the controllers placed on the LH28s, so only actuator faults are simulated in this appendix. Due to the different behaviour of the API when turning compared to when driving straight, two nominal scenarios are chosen as the basis for the simulation. The chosen scenarios are straight driving and turning around an ICR @ 5 meters. The figures show both scenarios side by side, with the nominal behaviour marked in each figure.

All faults are valid for all four wheels, but due to the similar behaviour only the first fault scenario is simulated on all wheels. The remaining faults are only simulated for wheels one and three. The faults are introduced 3 seconds into the simulation and remain throughout the simulation. The conditions for the tests are:

Straight driving ICR = 0m.

• β1−4 = 0

• τ1−4 = 10%

• Simulation time = 60 seconds.

• Fault time = 3 seconds.

• Controller = Wheel controller.

Turning ICR = 5m.

• β₁₋₄ = [0.1107, −0.1107, −0.0907, 0.0907]

• τ1−4 = 10%

• Simulation time = 75 seconds.

• Fault time = 3 seconds.

• Controller = Wheel controller.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.1: Simulation of the Propulsion Actuator Fault: No Actuation on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.2: Simulation of the Propulsion Actuator Fault: No Actuation on wheel 2, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.3: Simulation of the Propulsion Actuator Fault: No Actuation on wheel 3, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.4: Simulation of the Propulsion Actuator Fault: No Actuation on wheel 4, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.5: Simulation of the Propulsion Actuator Fault: Max. Positive Actuation on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.6: Simulation of the Propulsion Actuator Fault: Max. Positive Actuation on wheel 3, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.7: Simulation of the Propulsion Actuator Fault: Max. Negative Actuation on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.8: Simulation of the Propulsion Actuator Fault: Max. Negative Actuation on wheel 3, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.9: Simulation of the Propulsion Actuator Fault: Actuator Offset on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.10: Simulation of the Propulsion Actuator Fault: Actuator Offset on wheel 3, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.11: Simulation of the Steering Actuator Fault: No Actuation on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.12: Simulation of the Steering Actuator Fault: No Actuation on wheel 3, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.13: Simulation of the Steering Actuator Fault: Max. Positive Actuation on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.14: Simulation of the Steering Actuator Fault: Max. Positive Actuation on wheel 3, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.15: Simulation of the Steering Actuator Fault: Max. Negative Actuation on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.16: Simulation of the Steering Actuator Fault: Max. Negative Actuation on wheel 3, when driving straight and turning around an ICR @ 5 m.


[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.17: Simulation of the Steering Actuator Fault: Actuator Offset on wheel 1, when driving straight and turning around an ICR @ 5 m.

[Figure: y_N [m] vs. x_N [m] trajectories, nominal and faulty; straight driving (left) and turning (right).]

Figure E.18: Simulation of the Steering Actuator Fault: Actuator Offset on wheel 3, when driving straight and turning around an ICR @ 5 m.


Appendix F

Theory of UIO

A linear system with all possible faults and additive unknown disturbances can be described by a state-space model as [5]:

$$\dot{x}(t) = Ax(t) + Bu(t) + Ed(t) + R_1 f(t)$$
$$y(t) = Cx(t) + R_2 f(t) \tag{F.1}$$

where $x(t)$ is the state vector, $y(t)$ is the output vector, $u(t)$ is the input vector, $d(t)$ is the unknown disturbance vector and $f(t)$ is a fault vector. $A$, $B$ and $C$ are system matrices, $E$ is a disturbance distribution matrix, and $R_1$ and $R_2$ are fault distribution matrices which represent the effect of system faults.

The structure of a full-order observer for this system, Eq. (F.1), is described as [5, page 71]:

$$\dot{z}(t) = Fz(t) + TBu(t) + Ky(t)$$
$$\hat{x}(t) = z(t) + Hy(t) \tag{F.2}$$

where $\hat{x}(t)$ is the estimated state vector, $z(t)$ is the state of the full-order observer, and $F$, $T$, $K$ and $H$ are unknown matrices that have to be designed to achieve unknown-input de-coupling and other design requirements. The full-order observer is illustrated in Fig. F.1 on the following page.

When the UIO-based residual generator Eq. (F.2) is applied to the system Eq. (F.1), the state estimation error will be:

$$e(t) = x(t) - \hat{x}(t)$$

$$\begin{aligned} \dot{e}(t) &= \dot{x}(t) - \dot{\hat{x}}(t) \\ &= \dot{x}(t) - \dot{z}(t) - H\dot{y}(t) \\ &= \dot{x}(t) - \dot{z}(t) - H(C\dot{x}(t) + R_2\dot{f}(t)) \\ &= (I - HC)\dot{x}(t) - Fz(t) - TBu(t) - Ky(t) - HR_2\dot{f}(t) \end{aligned} \tag{F.3}$$

where K is defined as:

$$K = K_1 + K_2$$

$$\begin{aligned} Ky(t) &= K_1 y(t) + K_2 y(t) \\ &= K_1(Cx(t) + R_2 f(t)) + K_2 y(t) \end{aligned} \tag{F.4}$$


[Figure: block diagram of the full-order unknown input observer ż(t) = F z(t) + TB u(t) + K y(t), x̂(t) = z(t) + H y(t), with the system driven by the input u(t), the disturbance E d(t) and the faults R₁ f(t), R₂ f(t), and the residual r(t) formed from y(t) and C x̂(t).]

Figure F.1: The structure of a full-order unknown input observer.

Then substituting Eq. (F.4) into Eq. (F.3) yields:

$$\begin{aligned} \dot{e}(t) &= (I - HC)[Ax(t) + Bu(t) + Ed(t) + R_1 f(t)] - Fz(t) \\ &\quad - TBu(t) - K_1 Cx(t) - K_1 R_2 f(t) - K_2 y(t) - HR_2\dot{f}(t) \\ &= [(I - HC)A - K_1 C]x(t) + [(I - HC)B - TB]u(t) \\ &\quad + (I - HC)Ed(t) - Fz(t) - K_2 y(t) \\ &\quad + [(I - HC)R_1 - K_1 R_2]f(t) - HR_2\dot{f}(t) \end{aligned} \tag{F.5}$$

By adding Eq. (F.6) to Eq. (F.5)

$$\begin{aligned} 0 &= -[(I - HC)A - K_1 C]\hat{x}(t) + [(I - HC)A - K_1 C]\hat{x}(t) \\ &= -[(I - HC)A - K_1 C]\hat{x}(t) + [(I - HC)A - K_1 C](z(t) + Hy(t)) \end{aligned} \tag{F.6}$$

yields:

$$\begin{aligned} \dot{e}(t) &= [(I - HC)A - K_1 C]e(t) + [(I - HC)B - TB]u(t) \\ &\quad + (I - HC)Ed(t) + ([(I - HC)A - K_1 C] - F)z(t) \\ &\quad + ([(I - HC)A - K_1 C]H - K_2)y(t) \\ &\quad + [(I - HC)R_1 - K_1 R_2]f(t) - HR_2\dot{f}(t) \end{aligned} \tag{F.7}$$


The residual can be generated as:

$$\begin{aligned} r(t) &= y(t) - \hat{y}(t) \\ &= Cx(t) + R_2 f(t) - C\hat{x}(t) \\ &= Ce(t) + R_2 f(t) \end{aligned} \tag{F.8}$$

From Eq. (F.7) it can be seen that if the following relations hold true:

$$(I - HC)E = 0 \tag{F.9}$$

$$T = I - HC \tag{F.10}$$

$$F = (I - HC)A - K_1 C \tag{F.11}$$

$$K_2 = FH \tag{F.12}$$

then the residual will be independent of the disturbance (Eq. (F.9)), the inputs (Eq. (F.10)), the outputs (Eq. (F.12)) and the internal states of the observer (Eq. (F.11)). The design of the UIO therefore amounts to solving Eqs. (F.9)–(F.12) while making all the eigenvalues of the matrix F stable, so that the estimation error e(t) approaches zero asymptotically.

There are some necessary and sufficient conditions for Eq. (F.2) to be a UIO for the system defined by Eq. (F.1), and these are:

(i) rank(CE) = rank(E)

(ii) $(C, A_1)$ is a detectable pair, where

$$\begin{aligned} A_1 &= A - E[(CE)^T CE]^{-1}(CE)^T CA \\ &= A - HCA \\ &= TA \end{aligned} \tag{F.13}$$
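For the common special case where E is a single column, the pseudoinverse in the expression for A₁ collapses to a scalar division, H = E(CE)ᵀ/‖CE‖², and the de-coupling condition (I − HC)E = 0 can be checked directly. The Python sketch below uses toy A, C and E values, not the thesis model:

```python
# Rank-one UIO construction: H = E (CE)^T / ||CE||^2 gives (I - HC)E = 0,
# then T = I - HC and A1 = T A. Toy numbers, not the API model.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

A = [[0.0, 1.0], [-2.0, -3.0]]
C = eye(2)        # full state measurement (toy choice)
E = [1.0, 0.5]    # single disturbance column

CE = matvec(C, E)
assert any(abs(v) > 0 for v in CE)  # rank(CE) = rank(E) = 1 holds

scale = sum(v * v for v in CE)  # ||CE||^2
H = [[E[i] * CE[j] / scale for j in range(2)] for i in range(2)]
HC = [[sum(H[i][k] * C[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
T = [[eye(2)[i][j] - HC[i][j] for j in range(2)] for i in range(2)]
A1 = [[sum(T[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# De-coupling check: T E = (I - HC) E should vanish.
print([round(abs(v), 12) for v in matvec(T, E)])  # -> [0.0, 0.0]
```

With the disturbance direction annihilated by T, the remaining design step is the stabilizing gain K₁ for (C, A₁), which the example below performs with MATLAB's place.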

F.1 UIO Design Example

Step 1: Check the rank condition

Scenario 1:

$$\mathrm{rank}(E_1) = 1 \tag{F.14}$$

$$\mathrm{rank}(CE_1) = 1 \tag{F.15}$$

$\mathrm{rank}(E_1) = \mathrm{rank}(CE_1)$ ⇒ a UIO exists.

Step 2: Compute H, T and A₁

Scenario 1:

$$H = E_1[(CE_1)^T CE_1]^{-1}(CE_1)^T \tag{F.16}$$

$$T = I - HC \tag{F.17}$$

$$A_1 = TA \tag{F.18}$$


where

$$H = \begin{bmatrix} 0.4340 & 0.0000 & -0.4956 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ -0.4956 & 0.0000 & 0.5660 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & 0.0000 \end{bmatrix}$$

$$T = \begin{bmatrix} 0.5660 & 0.0000 & 0.4956 & 0.0000 \\ 0.0000 & 1.0000 & 0.0000 & 0.0000 \\ 0.4956 & 0.0000 & 0.4340 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & 1.0000 \end{bmatrix}$$

$$A_1 = \begin{bmatrix} -1.1879 & -9.9978 & -22.980 & 6.6772 \\ 0.0008 & -71.701 & -7.7937 & 47.746 \\ -1.0401 & -8.7543 & -20.122 & 5.8468 \\ 0.0000 & 0.0000 & 1.0000 & 0.0000 \end{bmatrix}$$

Step 3: Check observability

Scenario 1:

Observability matrix:

$$\mathrm{obs}_{A_1 C} = \begin{bmatrix} 1.0000 & 0.0000 & 0.0000 & 0.0000 \\ 0.0000 & 1.0000 & 0.0000 & 0.0000 \\ 0.0000 & 0.0000 & 1.0000 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & 1.0000 \\ -1.1879 & -9.9978 & -22.980 & 6.6772 \\ 0.0008 & -71.701 & -7.7937 & 47.746 \\ -1.0401 & -8.7543 & -20.122 & 5.8468 \\ 0.0000 & 0.0000 & 1.0000 & 0.0000 \\ 25.306 & 929.90 & 574.32 & -619.64 \\ 8.0484 & 5209.3 & 763.37 & -3469.0 \\ 22.159 & 814.25 & 502.89 & -542.58 \\ -1.0401 & -8.7543 & -20.122 & 5.8468 \\ -626.69 & -71956 & -20005 & 47926 \\ -799.42 & -3.8027\cdot10^{5} & -59614 & 2.5324\cdot10^{5} \\ -548.74 & -63007 & -17517 & 41965 \\ 22.159 & 814.25 & 502.89 & -542.58 \end{bmatrix} \tag{F.19}$$

$$\mathrm{rank}(\mathrm{obs}_{A_1 C}) = 4 \tag{F.20}$$

$$\mathrm{length}(A_1) = 4 \tag{F.21}$$

$\mathrm{rank}(\mathrm{obs}_{A_1 C}) = \mathrm{length}(A_1)$ ⇒ a UIO exists.


Step 4: Compute K1

Scenario 1:

The MATLAB command “place” is used in order to place the poles of the observer:

K1 = place(A1, C, [-10 -10 -10 -10]); (F.22)

$$K_1 = \begin{bmatrix} 8.8121 & -9.9978 & -22.980 & 6.6772 \\ 0.0008 & -61.701 & -7.7937 & 47.746 \\ -1.0401 & -8.7543 & -10.122 & 5.8468 \\ 0.0000 & 0.0000 & 1.0000 & 10.000 \end{bmatrix}$$

Step 5: Compute F and K

$$F = A_1 - K_1 C \tag{F.23}$$

$$K = K_1 + K_2 = K_1 + FH \tag{F.24}$$

$$F = \begin{bmatrix} -10.000 & 0.0000 & 0.0000 & 0.0000 \\ 0.0000 & -10.000 & 0.0000 & 0.0000 \\ 0.0000 & 0.0000 & -10.000 & 0.0000 \\ 0.0000 & 0.0000 & 0.0000 & -10.000 \end{bmatrix}$$

$$K = \begin{bmatrix} 4.4723 & -9.9978 & -18.024 & 6.6772 \\ 0.0008 & -61.701 & -7.7937 & 47.746 \\ 3.9161 & -8.7543 & -15.783 & 5.8468 \\ 0.0000 & 0.0000 & 1.0000 & 10.000 \end{bmatrix}$$


Appendix G

FDI Method Test

G.1 Test of UIO method

Figures G.1 to G.7 on pages 163–165 show the isolation of the fault effects as indicated by each figure.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.1: Isolated fault following the propulsion actuator fault: No actuation on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.2: Isolated fault following the propulsion actuator fault: Max. positive actuation on wheel 1.


[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.3: Isolated fault following the propulsion actuator fault: Max. negative actuation on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.4: Isolated fault following the propulsion actuator fault: Sensor offset on wheel 1.


[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.5: Isolated fault following the steering actuator fault: Max. positive actuation on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.6: Isolated fault following the steering actuator fault: Max. negative actuation on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.7: Isolated fault following the steering actuator fault: Actuator offset on wheel 1.


G.2 Test of Beard Fault Detection Filter methodFigures from Fig. G.8 to G.14 on pages 166–168 show the isolation of the rest of the faulteffects as indicated by each figure.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.8: Isolated fault following the propulsion actuator fault: ‘No actuation’ on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.9: Isolated fault following the propulsion actuator fault: ‘Max. positive actuation’ on wheel 1.


[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.10: Isolated fault following the propulsion actuator fault: ‘Max. negative actuation’ on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.11: Isolated fault following the propulsion actuator fault: ‘Sensor offset’ on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.12: Isolated fault following the steering actuator fault: ‘Max. positive actuation’ on wheel 1.


[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.13: Isolated fault following the steering actuator fault: ‘Max. negative actuation’ on wheel 1.

[Plot: isolated-fault indicator (no fault, τ1–τ4, β1–β4) vs. time, 0–5 s]

Figure G.14: Isolated fault following the steering actuator fault: ‘Actuator offset’ on wheel 1.


Appendix H

Active FI Supervisor

This appendix describes the procedure involved in the active isolation of faults in the steering and propulsion actuators and sensors.

H.1 Active Isolation of Steering Faults

Isolation of steering faults, both sensor and actuator, is based on the electro-mechanical stoppers implemented on the API. The stoppers are designed as a safety measure to prevent the wheels from turning outside the safe area and to avoid breaking the cables inside the joint due to twisting. The procedure for isolating steering faults is as follows:

1. When a fault regarding steering is detected, the Active FDI Supervisor is activated.

2. All wheels are powered down via the hardware implemented in App. 3.4 on page 29.

3. The following procedure is applied to each of the four wheels in which the FDI has detected a possible fault:

3.a The wheel is turned on and the steering controller is set to open loop.

3.b The wheel is turned until the nearest stopper is activated, indicating full actuation.

3.c If the stopper is not activated within 5 s, the wheel is identified as having a faulty steering actuator, specifically the No actuation fault.

3.d If the stopper indicating Max. actuation is activated at the start time of the AFI, and the stopper is not released when turning the wheel in the opposite direction, the wheel is identified as having a faulty steering actuator, specifically the Max. positive actuation fault.

3.e If the stopper indicating Min. actuation is activated at the start time of the AFI, and the stopper is not released when turning the wheel in the opposite direction, the wheel is identified as having a faulty steering actuator, specifically the Max. negative actuation fault.

3.f If one of the stoppers is activated, but the steering sensor shows no change from the sensor measurement taken at the start of the AFI, the wheel is identified as having a faulty steering sensor, specifically the No output fault.

169

Page 170: Autonomous Agricultural Robot - Aalborg Universitetkom.aau.dk/group/09gr723/gruppe1032_CD/master.pdf · Autonomous Agricultural Robot towards robust autonomy Group members of IAS10-1032b

Active FI Supervisor

3.g If the sensor measurement shows that the wheel is at its maximum positive position, without the stopper indicating Max. actuation being activated, the wheel is identified as having a faulty steering sensor, specifically the Max. positive output fault.

3.h If the sensor measurement shows that the wheel is at its maximum negative position, without the stopper indicating Min. actuation being activated, the wheel is identified as having a faulty steering sensor, specifically the Max. negative output fault.

3.i If one of the stoppers is activated, but the steering sensor shows an offset between the known position of the stopper and the position measured by the sensor, the wheel is identified as having a faulty steering sensor, specifically the Sensor offset fault.

3.j The wheel is turned off.

4. The isolated faults are returned to the FDI supervisor.
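The branching in steps 3.c–3.i can be summarised as a single decision function. The sketch below is a hypothetical encoding of that logic; the flag names and the 2° offset tolerance are assumptions for illustration, not the thesis implementation.

```python
def classify_steering(stuck_max, stuck_min, timeout, sensor_start, sensor_end,
                      stopper_angle, tol=2.0):
    """Hypothetical encoding of steps 3.c-3.i (names and tolerance assumed).
    stuck_max/stuck_min: stopper still active after commanding the opposite turn.
    timeout:             no stopper reached within 5 s.
    sensor_*:            steering-sensor readings [deg] at AFI start and end.
    stopper_angle:       known mechanical position of the reached stopper [deg]."""
    if timeout:
        return "actuator: no actuation"
    if stuck_max:
        return "actuator: max. positive actuation"
    if stuck_min:
        return "actuator: max. negative actuation"
    if abs(sensor_end - sensor_start) < tol:
        return "sensor: no output"          # stopper reached, but sensor unchanged
    if abs(sensor_end - stopper_angle) > tol:
        return "sensor: sensor offset"      # sensor disagrees with known stopper angle
    return "no fault isolated"

assert classify_steering(False, False, True, 0.0, 0.0, 45.0) == "actuator: no actuation"
assert classify_steering(False, False, False, 0.0, 45.0, 45.0) == "no fault isolated"
```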

H.2 Active Isolation of Propulsion Faults

The isolation of propulsion faults is based on the velocity of the API:

VM = √(ẋM² + ẏM²)

1. When a fault regarding propulsion is detected, the Active FDI Supervisor is activated.

2. All wheels are returned to β = 0.

3. All wheels are powered down via the hardware implemented in App. 3.4 on page 29.

4. The following procedure is applied to each of the four wheels in which the FDI has detected a possible fault:

4.a The wheel is turned on and the propulsion controller is set to zero torque reference: τref,i = 0.

4.b If the velocity VM is greater than a set threshold, the wheel is identified as having a faulty propulsion actuator, specifically the Max. positive actuation fault.

4.c If the velocity VM is smaller than a set threshold, the wheel is identified as having a faulty propulsion actuator, specifically the Max. negative actuation fault.

4.d If the velocity VM shows that the API is stationary and the propulsion sensor shows the max. positive speed, the wheel is identified as having a faulty propulsion sensor, specifically the Max. positive output fault.

4.e If the velocity VM shows that the API is stationary and the propulsion sensor shows the max. negative speed, the wheel is identified as having a faulty propulsion sensor, specifically the Max. negative output fault.

4.f The propulsion controller is set to 25 % torque reference: τref,i = 25 %.

4.g If the velocity VM shows that the API is stationary, the wheel is identified as having a faulty propulsion actuator, specifically the No actuation fault.


4.h The wheel is turned off.

5. If no propulsion actuator faults are isolated, the remaining propulsion sensor faults can be isolated:

5.a All wheels are turned on and the propulsion controller is set to τref,i = 25 % for 2 seconds, after which the reference is set to zero.

Using the kinematic model for straight driving, as seen in Eq. (H.1):

ẋB = (1/4) · Σi=1..4 φ̇i cos(βi)
ẏB = (1/4) · Σi=1..4 φ̇i sin(βi)    (H.1)

5.b If one of the propulsion sensors shows zero output and the velocity VM is equal to the calculated velocity, the wheel is identified as having a faulty propulsion sensor, specifically the No output fault.

5.c If the velocity VM is equal to the calculated velocity, and the propulsion sensor is not equal to the estimated wheel speed, which is calculated based on Eq. (4.36), the wheel is identified as having a faulty propulsion sensor, specifically the Sensor offset fault.

5.d The wheels are turned off.

6. The isolated faults are returned to the FDI supervisor.
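The straight-driving kinematic model of Eq. (H.1) can be sketched as follows, treating φ̇i as linear rim speeds [m/s] for simplicity (function and variable names are illustrative):

```python
from math import cos, sin, radians, sqrt

def body_velocity(phi_dot, beta):
    """Eq. (H.1): average the four wheel velocity vectors to obtain (xdot_B, ydot_B).
    phi_dot: wheel speeds (treated here as linear rim speeds [m/s]);
    beta:    steering angles [rad]."""
    n = len(phi_dot)
    xdot = sum(p * cos(b) for p, b in zip(phi_dot, beta)) / n
    ydot = sum(p * sin(b) for p, b in zip(phi_dot, beta)) / n
    return xdot, ydot

# All four wheels at 1 m/s, steered straight ahead:
xd, yd = body_velocity([1.0] * 4, [0.0] * 4)
v_calc = sqrt(xd ** 2 + yd ** 2)   # the calculated velocity compared against V_M
assert abs(v_calc - 1.0) < 1e-12
```

In step 5.b/5.c this calculated velocity is compared against the measured VM to expose a sensor that reads zero or with an offset.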


Appendix I

Kalman Filter

The Kalman filter used in the MHT state observer designed in Sec. 8.1 on page 75 is the linear discrete Kalman filter [21]. The Kalman filter is a recursive state observer that uses knowledge of the system model, the process noise and the measurement noise. The model of the system can be written in discrete state-space form:

xk = Axk−1 +Buk + wk (I.1)

zk = Cxk + vk

The A, B and C matrices can be found in App. D.1 on page 145. The process noise w is introduced as the change in τr,i resulting from the noise in the propulsion sensor [4]. The propulsion actuator noise is:

wpropulsion = N(0, σ²propulsion) (I.2)

σ²propulsion = 5 · 10⁻⁴ · φ (I.3)

The process noise covariance matrix is Q = I · σ²propulsion.

The measurement noise v is selected as the noise on the four sensors used in the output measurement vector [4]: y = [θM,Compass  xM,GPS  yM,GPS  θ̇M,Gyro]ᵀ. The noise is:

vθM = N(0, σ²θCompass) (I.4)
vxM = N(0, σ²xGPS) (I.5)
vyM = N(0, σ²yGPS) (I.6)
vθ̇M = N(0, σ²θGyro) (I.7)

where

σ²θCompass = 0.016 [°] (I.8)
σ²xGPS = 1 · 10⁻⁴ [m] (I.9)
σ²yGPS = 1 · 10⁻⁴ [m] (I.10)
σ²θGyro = 1.1 · 10⁻⁵ [°/s] (I.11)


The measurement noise covariance is:

R =

σ²θCompass   0         0         0
0            σ²xGPS    0         0
0            0         σ²yGPS    0
0            0         0         σ²θGyro
                                           (I.12)

The Kalman filter process is divided into two steps:

• The prediction step, which uses the previously estimated state and the linear model to predict the value of the next state as well as the state estimate covariance:

xk|k−1 = Axk−1|k−1 + Buk (I.13)
Pk|k−1 = APk−1|k−1Aᵀ + Q

• The update step, which uses the current measurement of the output together with the statistical properties of the model to correct the state estimate. The values calculated are the innovation covariance and the Kalman gain, resulting in the updated state estimate and state estimate covariance:

Sk = CPk|k−1Cᵀ + R (I.14)
Kk = Pk|k−1CᵀSk⁻¹
xk|k = xk|k−1 + Kk(zk − Cxk|k−1)
Pk|k = (I − KkC)Pk|k−1

The two steps are repeated for every sample: k = 1, 2, . . . , K.
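The predict/update cycle can be sketched as follows. The example is a deliberately minimal scalar filter estimating a constant position from noisy measurements, not the four-state API filter; all numbers are illustrative.

```python
import random

def kf_step(x, P, u, z, A, B, C, Q, R):
    """One predict/update cycle of the discrete Kalman filter (scalar case)."""
    # Prediction step (Eq. I.13)
    x = A * x + B * u
    P = A * P * A + Q
    # Update step (Eq. I.14)
    S = C * P * C + R           # innovation covariance
    K = P * C / S               # Kalman gain
    x = x + K * (z - C * x)     # updated state estimate
    P = (1 - K * C) * P         # updated estimate covariance
    return x, P

# Illustrative scalar example: estimate a constant position from noisy
# measurements, with A = C = 1 and B = 0.
random.seed(0)
x_true, x, P = 5.0, 0.0, 10.0
for _ in range(200):
    z = x_true + random.gauss(0.0, 0.5)        # measurement noise, R = 0.25
    x, P = kf_step(x, P, 0.0, z, 1.0, 0.0, 1.0, 1e-6, 0.25)
assert abs(x - x_true) < 0.3                   # estimate converges near the truth
```

As the covariance P shrinks, the gain K decreases and the filter weights its prediction more heavily than each new measurement, which is the mechanism the MHT observer relies on.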


Appendix J

Operational Requirements

A number of requirements for the operational performance of the API have been set by previous groups:

1. The robot must be able to follow a straight line connecting two way points with aprecision better than ±10 cm.

2. The robot must be able to arrive at a way point in a set with an orientation within ±10° of the designated orientation.

3. The robot must be able to determine its position and orientation with a precision better than ±5 cm and ±5°.

4. When the robot is operating nominally and driving on a field planted with sensitive crops, or with crops with sufficient width between the rows to fit the wheels of the robot, it must never drive on top of the crops.

5. When a fault occurs, the robot is allowed to deviate from its course, for instance driving on top of the rows, for a maximum of 30 seconds.

6. The robot must be able to continue operation with at least one sensor/actuator fault.

7. The robot must be kept operational as long as possible under faulty conditions.

8. Operating under both normal and faulty operation the robot must not be potentiallydangerous to its environment.

As the API is designed for versatility, a single set of requirements is impossible. The requirements listed above are selected based on the considerations listed below:

Requirement 1 is selected as a path traversing a field is expected to have an overlap of approximately 10 cm in order to ensure full coverage.

Requirement 2 is selected based on the fact that the API can drive in all directions. In order to avoid the API arriving at a way point in an undesirable orientation, the API must be within ±10° of the chosen orientation.


Requirement 3 is chosen based on the precision of the sensors responsible for the measurements against which the requirements are compared. The sensors in question are the compass and the GPS, with the following specifications:

• Compass: ±1.5°

• GPS: ±3 cm

These values hold under optimal conditions, and the selected requirement values are therefore set higher.

Requirements 4 and 5 are selected as a worst-case scenario, which involves driving on a field with sensitive crops. It is decided that a controller capable of driving on that kind of field is easily adapted to driving on less sensitive crops.

Requirements 6 and 7 are general requirements, as an autonomous robot is always expected to function for as long as possible. Furthermore, if the robot fails in the middle of a field, there are additional disturbances caused by locating and retrieving the robot, as it is unlikely that anything but the most simple faults can be repaired in the field.

Requirement 8 is based on the environment the robot is designed to function in. The reason is that the robot cannot be assumed to be the only entity on the field. Among the other entities are tractors and animals, both domesticated and wild, and, in the case of faults or service to the robot, also the farmer himself or a service technician.


Appendix K

Pitch and Roll Compensation of GPS,Gyro and Compass

The three sensors GPS, gyro and compass all measure in the M or B frame. Since the inertial frame is the N frame, the sensor measurements must be compensated for the tilt of the robot. The inclinometer described in Sec. 3.2 on page 26 provides a pitch and roll measurement, which describes the tilt of the robot with regard to the B frame. In order to compensate for measurements in the M frame, the pitch and roll measurement must be rotated using the rotation matrices listed in Sec. 4.1 on page 33. The rotated inclinometer measurement is shown in the following equation:

[ψx  ψy]ᵀ = R(B→M)ᵀ · [pitch  roll]ᵀ (K.1)

The ψx and ψy angles describe the tilt of the M frame.

K.1 GPS

The GPS system provides the position of the API. The GPS uses information from three or more satellites to determine the position. This position is centered in the GPS antenna, placed in the GC, 107 cm above the ground. When driving on level terrain, it is correct to assume that the position reported by the GPS is the position of the GC in relation to the ground. This changes when the API is placed on a tilted surface. If the API is placed on a hill, the vector projected from the GPS antenna to the ground does not cross the GC as when level. This means that the position reported by the GPS does not correspond to the actual position of the API. The offset of the position measurement can be calculated using the tilt of the surface and the height of the GPS antenna, as shown in Eq. (K.2).

[∆xGPS(t)  ∆yGPS(t)]ᵀ = [sin(ψx) · hGPS  sin(ψy) · hGPS]ᵀ (K.2)
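A minimal sketch of the tilt compensation of Eq. (K.2); the function and variable names are illustrative, and the 107 cm antenna height is taken from the text above:

```python
from math import sin, radians

H_GPS = 1.07  # GPS antenna height above ground [m] (107 cm, from Sec. K.1)

def gps_tilt_offset(psi_x_deg, psi_y_deg, h=H_GPS):
    """Eq. (K.2): horizontal offset of the reported GPS position due to tilt."""
    return sin(radians(psi_x_deg)) * h, sin(radians(psi_y_deg)) * h

# A 5 degree pitch shifts the reported position by roughly 9.3 cm:
dx, dy = gps_tilt_offset(5.0, 0.0)
assert abs(dx - 0.0933) < 1e-3
assert dy == 0.0
```

Such an offset is of the same magnitude as the ±5 cm positioning requirement, which is why the compensation is needed.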

K.2 Gyro

The gyro measures the angular velocity θ̇ of the API in the B frame. To convert the gyro measurements from the B frame to the N frame, two rotations are needed. The tilt-corrected


angular velocities can be seen in the equation below:

θ̇N = R(M→N)ᵀ · R(B→M)ᵀ · θ̇B (K.3)

K.3 Compass

The compass measures the orientation θ of the API with regard to magnetic north. This means that the compass measurement is performed in the M frame. The compensated compass measurement can be seen in the equation below:

θN = R(M→N)ᵀ · θM (K.4)
