DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2017

Comparative study on road and lane detection in mixed criticality embedded systems

Evaluation of performance on Altens mixed criticality platform

SANEL FERHATOVIC

KTH, SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT




Comparative study on road and lane detection in mixed criticality embedded systems

Evaluation of performance on Altens mixed criticality platform

Master Thesis
Royal Institute of Technology
Stockholm, Sweden

Sanel Ferhatovic
[email protected]

June 22, 2017


Examensarbete MMK2017:156 MDA614

Comparative study on road and lane detection in mixed criticality embedded systems

Sanel Ferhatovic

Approved: 2017-06-22
Examiner: Martin Törngren
Supervisor: De-Jiu Chen
Commissioner: Alten
Contact person: Detlef Scholle

Abstract. One of the main challenges for advanced driver assistance systems (ADAS) is the environment perception problem. One factor that makes ADAS hard to implement is the large number of different conditions that have to be handled. The main sources of condition diversity are lane and road appearance, image clarity issues and poor visibility conditions. A review of current lane detection algorithms has been carried out, and based on it a lane detection algorithm has been developed and implemented on a mixed criticality platform. The thesis is part of a larger group project in which five master thesis students create a demonstrator for autonomous platoon driving. The final lane detection algorithm consists of preprocessing steps in which the image is converted to grayscale and everything except the region of interest (ROI) is cut away. OpenCV, a library for image processing, has been utilized for edge detection and the Hough transform. An algorithm for error calculation has been developed which compares the center and direction of the lane with the actual vehicle position and direction during real experiments. The lane detection system is implemented on a Raspberry Pi which communicates with a mixed criticality platform through UART. The demonstrator vehicle can achieve a measured speed of 3.5 m/s with reliable lane keeping using the developed algorithm. The bottleneck appears to be the lateral control of the vehicle rather than lane detection; further work should focus on vehicle control and possibly on extending the ROI to detect curves at an earlier stage.

Keywords: Lane detection, Image processing, Raspberry Pi 3, Platoon driving


Examensarbete MMK2017:156 MDA614

Jämförande studie av olika väghållningsalgoritmer

Sanel Ferhatovic

Godkänt: 2017-06-22
Examinator: Martin Törngren
Handledare: De-Jiu Chen
Uppdragsgivare: Alten
Kontaktperson: Detlef Scholle

Sammanfattning. En stor utmaning för avancerade förarstödsystem (ADAS) är problemet med uppfattning av miljön runt omkring. En faktor som gör ADAS svårt att implementera är den stora mängd olika förhållanden som måste tas hand om. De största källorna till olikheter är utseendet på körfältet och vägen, dåliga siktförhållanden samt otydliga bilder. En granskning av nuvarande algoritmer för körfältsdetektering har utförts och baserat på den har en körfältsdetekteringsalgoritm utvecklats och implementerats på ett blandkritiskt system. Avhandlingen är en del av ett större grupprojekt bestående av fem mastersstudenter som ska skapa en demonstrator för autonom konvojkörning. Den slutgiltiga körfältsdetekteringsalgoritmen består av förbehandlingssteg, där bilden konverteras till gråskala och allt utom intresseområdet klipps bort. OpenCV, ett bibliotek för bildbehandling, har använts för kantdetektering och Hough-transformation. En algoritm som jämför körfältets mittpunkt och riktning med fordonets faktiska position och riktning har utvecklats och används i experiment för kontroll av fordonet. Körfältsdetekteringsalgoritmen har implementerats på en Raspberry Pi som kommunicerar med en blandkritisk plattform genom UART. Demofordonet kan uppnå en uppmätt hastighet på 3,5 m/s med pålitlig väghållning med den utvecklade algoritmen. Det verkar som att flaskhalsen är kontroll av fordonet i sidled och inte körfältsdetektering; ytterligare arbete bör fokusera på kontroll av fordonet och eventuellt utöka synfältet för att detektera kurvor i ett tidigare skede.

Nyckelord: Körfältsdetektering, Bildbehandling, Raspberry Pi 3, Konvojkörning


ACKNOWLEDGEMENTS

I would first like to thank my academic supervisor De-Jiu Chen for the guidance and support throughout the project. The examiner Martin Törngren also deserves recognition for his involvement in the project. Further, I would like to express gratitude to my industrial supervisor Detlef Scholle for giving me the opportunity to write my thesis at Alten and for the support during the project.

I would also like to thank the team that I have had the privilege to be part of. Emil, Erik, Daniel and Hanna, it has truly been a pleasure to work with you, and you have all made a great contribution to this thesis.

Finally, I must express my very profound gratitude to my family: first of all my parents, Zlatko and Edina, my sister Amela, and my girlfriend Jenny, for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without them. Thank you.

Sanel Ferhatovic


Contents

Acknowledgements ii
Abbreviations vii

1 Introduction 1
1.1 Background 1
1.2 Problem statement 2
1.3 Purpose 3
1.4 Goals 4
1.4.1 Team goal 4
1.4.2 Individual goal 4
1.5 Use case 4
1.6 Delimitations 5
1.7 Method description 5
1.8 Ethical considerations and sustainability 6

2 Literature review 7
2.1 SAE level 7
2.2 Lane keeping 9
2.2.1 Modalities for environment perception 10
2.3 Flowchart of image processing 12
2.3.1 Image acquisition 13
2.3.2 Preprocess 13
2.3.3 Feature extraction 13
2.3.4 Road model 14
2.3.5 Model fitting 15
2.3.6 Time integration 15
2.3.7 Lateral control 15
2.4 Platooning 15
2.5 OpenCV 16
2.6 Mixed-criticality systems 17
2.6.1 Scheduling 17

3 Implementation 19
3.1 Altens mixed criticality platform specification 19
3.2 Raspberry Pi 19
3.2.1 Pi camera 21
3.3 System architecture 22
3.4 System identification 22
3.5 Lane detection algorithm 25
3.6 Lateral control 29
3.7 Integration with Altens mixed criticality platform 31
3.7.1 Tasks 31
3.7.2 Priority 32

4 Results 33
4.1 Evaluation of algorithm speed 33

5 Discussion 36
5.1 Demonstrator 36
5.2 Lane keeping system 36
5.3 Camera input 37
5.4 Research questions 37

6 Future work 38
6.1 Zynq-7000 integration 38
6.2 Image acquisition 38
6.3 Variable speed 38


List of Figures

1.1 Software architecture of Altens mixed criticality platform 3
2.1 Flowchart of a general lane detection system 12
2.2 Road model in two different perspectives 14
3.1 Raspberry Pi 3 20
3.2 System architecture 22
3.3 Angle measurement 23
3.4 PWM and angle correlation 24
3.5 Developed system 25
3.6 Input image before and after grayscale filter 26
3.7 Thresholded image using Canny filter 27
3.8 Detected lines 28
3.9 False positive line 29
3.10 Angle calculation 30
3.11 Sequence diagram 31
4.1 Average and WCET timings of the different elements in the algorithm 34
4.2 The modified RC-car 35

List of Tables

2.1 SAE levels description 8
3.1 Raspberry Pi 3 Model B specifications 20
3.2 Steering angle and direction 24
4.1 System End-to-end time 33

Abbreviations

ADAS    Advanced Driver Assistance Systems
ASIL    Automotive Safety Integrity Level
ECU     Electronic Control Unit
GPS     Global Positioning System
HDV     Heavy Duty Vehicle
HT      Hough Transform
LIDAR   Light Detection and Ranging
MC      Mixed Criticality
ROI     Region Of Interest
RTOS    Real-Time Operating System
SAE     Society of Automotive Engineers
V2V     Vehicle-to-Vehicle
WCET    Worst-Case Execution Time


Chapter 1

Introduction

This chapter introduces the subject of road and lane detection, and mixed criticality, to the reader, as well as the problems that exist in the field and the purpose of this degree project.

1.1 Background

There is a global trend to make vehicles more autonomous in order to reduce human error and workload. Most modern vehicles include safety-critical systems, where a failure can cause great damage to both humans and the environment. When the implementation is a safety-critical system, it is important to be aware of the risks that are present and how to cope with them. Another increasingly important trend in the design of real-time and embedded systems is the integration of components with different criticality onto the same hardware platform [14].

The EMC2 project [2] is an initiative to drive the development of "Embedded Multi-Core systems for Mixed Criticality applications in dynamic and changeable real-time environments". One focus of the project is on automotive applications, for example Advanced Driver Assistance Systems (ADAS). ADAS are systems designed to help the driver and to increase safety when driving. One example is a lane detection system that helps keep the car within its lane [11]. What differentiates mixed criticality systems from regular systems is that two components with different criticality run on the same hardware platform. One example could be to run the ADAS and the infotainment system of the vehicle on the same electronic control unit (ECU).

"Road vehicles - Functional safety", ISO 26262, is an international standard for the automotive industry regarding the electronic systems of vehicles. ISO 26262 defines four automotive safety integrity levels: ASIL A, B, C and D. ASIL A has the lowest integrity requirements and ASIL D the highest. The problem when implementing two applications of different criticality on the same platform is that both applications need to be certified to the level of the application with the highest safety requirements. This means that, in the case of integrating the ADAS and the infotainment system on the same hardware platform, one would need to certify the infotainment system to ASIL D, which is a very tedious and thus expensive task.

It would, however, be possible to isolate the two applications using a technique called virtualization, where applications are run on virtual hardware rather than on bare metal. This approach would not require any extra certification work compared to running the applications on separate ECUs.

The work performed in this thesis project aims at implementing a lane detection system for an autonomous vehicle on a mixed criticality platform.

1.2 Problem statement

Today there is a lot of research on ADAS, where everything from lane departure warning (LDW) to full autonomous driving is investigated [11], [26], [22].

However, there is a need for research on the integration of safety-critical and non-safety-critical applications on a mixed criticality platform where the two applications are isolated from each other using virtualization. For example, AUTOSAR, a partnership for software development founded by major players in the automotive industry, addresses mixed criticality systems in the sense that it recognizes that such standards must be supported on its platforms [14] [1].

This thesis will investigate different techniques for road and lane detection and how they can be implemented on the real-time operating system (RTOS) of a mixed criticality system.


1.3 Purpose

The purpose of the literature study is to give insight into the subject and answer the research question:

1. How does a modern lane keeping system function, and how do differentsystems compare to each other?

After the literature study is done, the information will be analyzed, conclusions will be drawn, and hopefully the above research question can be answered. From there, an implementation phase will begin where the lane detection algorithm is implemented on Altens mixed criticality platform, which consists of two operating systems on a Xilinx Zynq-7000 board. Figure 1.1 shows how the software architecture of Altens mixed criticality platform is set up. ARM TrustZone provides hardware isolation that prevents non-secure software in the Linux OS from accessing the secure memory resources that are available to the RTOS. This guarantees that Linux cannot interfere with the RTOS, called FMP [27]. SafeG is a hypervisor which decides which operating system should run and when. SHAPE is a cloud service for communication between different nodes. Currently, SHAPE only works for the Linux OS.

Figure 1.1: Software architecture of Altens mixed criticality platform
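As a conceptual illustration of this dual-OS arrangement (not Alten's actual SafeG code, whose API is not described here), the toy model below captures the basic invariant a SafeG-style hypervisor enforces: the secure RTOS guest always preempts the non-secure Linux guest, so Linux only runs in RTOS idle time. The function name and timeline are invented for this sketch.

```python
# Toy model of a SafeG-style dual-OS switch: the secure RTOS guest
# always wins; the non-secure Linux guest runs only when the RTOS
# is idle. Names and timings are illustrative, not from the thesis.

def schedule(timeline_rtos_busy):
    """For each time slot, pick which guest OS gets the CPU."""
    return ["RTOS" if busy else "Linux" for busy in timeline_rtos_busy]

# The RTOS needs the CPU in slots 0, 1 and 4 (e.g. a periodic control task).
slots = schedule([True, True, False, False, True])
print(slots)  # ['RTOS', 'RTOS', 'Linux', 'Linux', 'RTOS']
```

The key property is that Linux can never delay the RTOS; the reverse direction (the RTOS starving Linux) is permitted, which is why only non-critical software runs on the Linux side.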

The goal of the implementation phase is to evaluate how well a practical implementation of a lane detection system can perform on a mixed criticality platform. Questions to be answered after the implementation:


1. How can we guarantee the performance of the lane detection system?

2. What frame rate can a lane detection system achieve on Altens mixedcriticality platform?

1.4 Goals

In this project there are five master thesis students working together on the same demonstrator. This means that there are both individual goals and a team goal, which do not necessarily align with each other.

1.4.1 Team goal

The team goal is to develop a demonstrator consisting of two small RC vehicles that are supposed to group into a vehicle convoy, where the first vehicle follows a path marked on the ground and the other vehicle follows the first, to demonstrate platoon driving.

1.4.2 Individual goal

The individual goal and expected outcome of this thesis is a study of existing road and lane detection systems, comparing different systems to determine which is suitable for implementation in the safety-critical system that the group is developing. The last part of the project is to implement the lane keeping algorithm on the RTOS of the mixed criticality system to demonstrate the functionality.

1.5 Use case

This thesis project is part of a larger project conducted by Alten which aims at developing a complete prototype of an intelligent transport system (ITS). In this ITS there will be two vehicles showing the concept of vehicle platooning. The vehicles will be fully autonomous and connected to the infrastructure. The project is part of a large EU project called SafeCOP, which stands for Safe Cooperating Cyber-Physical Systems using Wireless Communication. By using wireless communication one can send commands to the vehicles in the platoon. For example, if the conditions are satisfied, e.g. a good connection, the ITS can send a command to the vehicles to engage platoon mode. When they are in platoon mode and one vehicle detects a slippery road surface, it can communicate this to the rest of the platoon, and the distance between the vehicles can be increased to some predefined safety distance.

The vehicles' main computing board will contain components of different criticality, which means that it is a mixed-criticality system. In a mixed criticality system it is important that the non-safety-critical components cannot in any way interfere with the safety-critical components.

This thesis project focuses on the perception problem and the lateral control of the vehicles. The goal is to develop a system that can keep the vehicles within the lane boundaries while maintaining a satisfactory forward speed. The investigation will be of an experimental nature, and an evaluation of whether this platform is appropriate for future use in similar applications will be carried out.

1.6 Delimitations

The thesis is produced at Alten and is constrained to the Xilinx Zynq-7000¹. The scope of this work extends to investigating lane detection and platoon driving for small vehicles operating in a constructed environment. Machine learning approaches for lane detection are not within the scope of the project. The results will to some extent depend on the platform that the use case is built upon. In the case of objects on the track, a collision avoidance system will be developed, which will initially only consist of an emergency brake for the vehicle.

1.7 Method description

This degree project complies with the applied research methodology, where information is gathered from accepted and well-known sources and applied to solve specific problems [17]. To gain knowledge in the field of lane detection systems, a literature study will be performed, which will guide the development direction of the project.

According to Håkansson [17], an experimental research method is often used and well suited when investigating system performance. In this degree project the data will be measured and the results evaluated on the developed demonstrator.

¹ https://www.xilinx.com/products/silicon-devices/soc/zynq-7000.html


1.8 Ethical considerations and sustainability

The work performed in this thesis project is carried out in as ethical and sustainable a way as possible. As always when dealing with automation, it is important to consider how the system will be used and how the people involved will be affected. One big concern when dealing with automated vehicles is how decisions are made in situations where accidents occur. In fact, these will not even be accidents in the traditional sense, but rather the result of decisions made by the computer in the car that led to the situation. The era of automated vehicles will also introduce completely new security threats, as the computers in the vehicles can be hacked and taken over, which can lead to injuries and death. Only when the system has been confirmed as safe and secure can it be deployed to real production vehicles.


Chapter 2

Literature review

This chapter presents the literature study that has been carried out and introduces the different topics to the reader.

2.1 SAE level

When talking about Advanced Driver Assistance Systems (ADAS) and autonomous vehicles, it is important to define what these terms actually mean. SAE International is a professional association and standards developing organization for the transport industries. They have developed a standard for autonomous driving called "J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems". This standard defines six levels of driving automation, from no automation to full automation, described in more detail in table 2.1 below [6] [7].


Table 2.1: SAE levels description

Level 0, No Automation: The human driver does all the work.

Level 1, Driver Assistance: The vehicle helps out by performing a single task. One example is cruise control, where the car holds a reference speed.

Level 2, Partial Automation: The vehicle can assist with both steering and acceleration/deceleration at the same time, for example lane centering combined with adaptive cruise control, while the human driver monitors the environment and performs all remaining driving tasks.

Level 3, Conditional Automation: The first level that is considered an automated driving system. At this level the vehicle is able to make decisions such as overtaking other vehicles and navigating. Humans are only the fall-back option; if something fails, the vehicle will request the human to intervene.

Level 4, High Automation: At level 4, the vehicle is able to operate entirely by itself; there does not need to be a human behind the wheel as a fall-back. What differs this level from full automation is that it operates in a geographically limited area, such as a town center, company area or college campus.

Level 5, Full Automation: Level 5 is where fully automated driving is reached. The vehicle can handle all operating modes. There is no steering wheel or pedals; just let the vehicle know where you want to go.

When developing automated vehicles there are many functional safety requirements that must be fully verified and validated. One important area is the vehicle actuation systems, which are totally controlled by electronic systems. As the actuators are controlled by electronic systems, they are strongly linked to other so-called by-wire systems. Two examples are drive-by-wire and brake-by-wire. These systems do not have any mechanical coupling between the different elements but instead utilize sensors that read the position of the brake pedal or steering wheel. In the development of these systems, aspects such as redundancy of the ECUs, sensors, actuators and power supply are required [24]. The standard ISO 26262 is the most recent standard available concerning functional safety of electrical systems in the automotive industry. The standard requires the determination of safety goals as part of hazard analysis and risk assessment. Once all the safety goals are defined, functional safety requirements can be formulated.

According to Stolte [24], measures that go beyond the state of the art of modern production vehicles need to be adopted to ensure the functional safety of automated vehicles. The authors point out that, despite the importance of series deployment of automated vehicles, there is not much discussion about safety requirements within the ITS community.

2.2 Lane keeping

In its basic setting, the lane detection problem seems like a simple one. The only thing needed is to detect the host lane, and only for a short distance ahead. For a human, driving may seem like a simple process in which two basic tasks are involved: the first is to keep the vehicle on the road and the second is to avoid collisions. But in reality driving is not so trivial; a driver needs to continuously analyze the road scene and choose and execute the appropriate maneuvers to deal with the current situation. To help drivers with these tasks, Driver Assistance Systems (DAS) have been developed. These systems can, for example, help the driver perceive blind areas on the road. An extension is the Advanced Driver Assistance System (ADAS), which can perform tasks like lane following, lane keeping assistance, lane departure warning, lateral control, intelligent cruise control, collision warning and ultimately autonomous driving.

The main bottleneck in the development of ADAS is the perception problem, which has two elements: road and lane perception, and obstacle detection. This degree project focuses on the first element and investigates the current state-of-the-art research.

A simple Hough transform-based algorithm solves the problem in 90% of highway cases [11]. But the impression that the problem is easy is misleading, and building a useful system requires a huge R&D effort. One of the reasons is the high reliability demands: in order to be useful, the system needs to reach very low error rates. The exact number of acceptable false alarms for a lane departure warning system is still a subject of research [11]. At 15 frames per second, one false alarm per hour means only one error in 54,000 frames.
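To make the Hough transform concrete, the sketch below implements a minimal (rho, theta) voting scheme in pure Python. It illustrates the general technique only, not the thesis implementation (which uses OpenCV); the synthetic point set, the resolution parameters and the `hough_peak` helper are invented for this example.

```python
import math
from collections import Counter

def hough_peak(points, theta_steps=180, rho_res=1.0):
    """Vote in (rho, theta) space and return the strongest detected line.

    Each edge point (x, y) votes for every line rho = x*cos(theta) +
    y*sin(theta) passing through it; collinear points pile votes onto
    one (rho, theta) bin.
    """
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)] += 1
    (rho_bin, t), votes = acc.most_common(1)[0]
    return rho_bin * rho_res, math.pi * t / theta_steps, votes

# Synthetic "edge pixels" on the vertical line x = 10.
pts = [(10, y) for y in range(20)]
rho, theta, votes = hough_peak(pts)
print(rho, theta, votes)  # strongest line: rho ~ 10, theta ~ 0, 20 votes
```

Real implementations add an accumulator threshold and non-maximum suppression so that several lane markings can be returned per frame instead of a single peak.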

One factor that makes ADAS hard to implement on a large scale is the large number of different conditions that have to be taken care of. The main sources of condition diversity are:

• lane and road appearance

• image clarity issues

• poor visibility conditions

When driving on freeways or large highways, the diversity of the road scene appearance is minimized, which makes it easier to implement lane detection functions and ultimately automated driving. This is one of the reasons why long-haul trucks are the focus of a large portion of the research concerning autonomous driving.

2.2.1 Modalities for environment perception

In this section, the modalities used for road and lane detection are described in more detail.

Today there are several different sensing modalities used for lane detection. Some examples are monocular vision, stereo vision, LIDAR, IMU data and GPS.

Monocular vision

Vision is the most prominent research area, due to the fact that road signs and markings are made for human vision. Vision sensors provide good position estimation on the road without the need for any other modalities. However, there are situations where vision sensors simply cannot perform well, for example in extreme weather conditions or when driving off-road. In these kinds of situations it is possible to use sensor fusion with other sensor modalities to provide a better position estimate, which is the reason why LIDAR and GPS are important complements to vision for reaching full autonomous driving.


The monocular vision system is frequently used for road and lane detection. It is, simply put, one camera mounted on the vehicle. The required resolution can be derived from

Np = C * d / w

where Np is the number of horizontal pixels, C is the camera field-of-view width in radians, d is the distance to the lane mark, and w is the lane mark width in meters [11].
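As a quick sanity check of this formula, the following sketch uses illustrative values that do not come from the thesis: a field of view of C = 0.7 rad (about 40 degrees), a lane mark width of w = 0.1 m, seen at a distance of d = 30 m.

```python
# Sanity check of Np = C*d/w with illustrative values (not from the thesis):
# C = 0.7 rad field of view, lane mark width w = 0.1 m, distance d = 30 m.

def required_pixels(c_fov_rad, distance_m, mark_width_m):
    """Horizontal pixels needed to resolve a lane mark: Np = C*d/w."""
    return c_fov_rad * distance_m / mark_width_m

print(round(required_pixels(0.7, 30, 0.1)))  # -> 210 horizontal pixels
```

So a camera with these assumed parameters would need roughly 210 horizontal pixels across the lane-mark region, which is why even modest resolutions can suffice for lane detection.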

When humans drive we continuously look at the road boundaries, the lane markings and the road texture, among other things. These road boundaries are designed to be visible to human drivers in all driving conditions. Self-driving vehicles that are supposed to share the road with human drivers will therefore most likely have to rely on the same perceptual cues as humans.

LIDAR

Light detection and ranging (LIDAR) is a modality that has been used to a large extent in the development of autonomous vehicles for research purposes. A LIDAR measures the environment around the vehicle in 3D by sending out light pulses and measuring the time they take to return. As it is an active light source, it does not depend on good natural lighting the way a regular camera does.

LIDAR sensors can perform well in certain situations, for example in rural areas to detect road boundaries [11], but are not well suited for multilane roads without vision data. As the LIDAR only measures 3D structure it is not able to detect road markings, although some research has been done on intensity measurement with LIDAR [18] [20], which would make it possible to detect lane markings to some extent. One huge drawback with this modality is that the sensors are still very expensive and thus not yet an alternative for implementation in regular passenger vehicles.

Stereo imaging

Stereo imaging is the use of two cameras instead of one in order to obtain 3D information about the surroundings. It is a step between monocular vision and LIDAR, as it is much cheaper to implement than LIDAR but generally performs worse in terms of accuracy and reliability. A stereo imaging system also generally requires greater processing power and is more prone to errors compared to LIDAR.


GPS, IMU

The global positioning system (GPS) is currently widely used for navigation systems. According to Wing [25], current commercial consumer-grade GPS receivers can achieve an accuracy of 1.5-5 m. This works sufficiently well for map navigation when a human drives the vehicle, but is simply not accurate enough to fully control a vehicle based on GPS alone. The GPS also does not give any information about the environment, e.g. other vehicles or pedestrians. This means that GPS will always need to be supplemented by a camera or LIDAR.

One problem with GPS is reliability. GPS requires a connection with enough satellites to function properly, and that connection can be lost for many reasons. Some loss of connection can be tolerated by using an inertial measurement unit (IMU). With the IMU it is possible to calculate the current position and integrate it with the GPS to get a more reliable estimate when the connection to the satellites is weak.

2.3 Flowchart of image processing

Figure 2.1: Flowchart of a general lane detection system


2.3.1 Image acquisition

The image acquisition typically comes from a camera that is mounted in the center of the vehicle.

2.3.2 Preprocess

The preprocess is a step where the image is prepared for the next steps, for example in terms of image resolution, where a lower resolution is often preferred due to the high computational load that high-resolution images bring [26]. Everything that is not part of the region of interest (ROI) is often removed; this typically means removing the region above the horizon. Grayscale images are often preferred over color images due to the reduced data load [26]. Removal of unwanted disturbances such as shadows is also often done in the preprocess. As mentioned in the section about road models, inverse perspective mapping is commonly done in the preprocess to get rid of the perspective effect [12].

2.3.3 Feature extraction

There are several features that can be used for road and lane detection. The most typically used are color, texture and edges. For structured roads with clear line markings, edges are the most common feature used for lane detection. An edge is defined as the gradient of the intensity function [26]. The output of an edge-based method is a set of candidates for lane boundaries, since edge-based methods are able to find where the image brightness changes sharply. There are some well-known edge detection methods (Prewitt, Roberts, Sobel), but one method, the Canny edge detector, stands out and is still, 30 years after it was first developed, considered a state-of-the-art edge detector; some even argue that it is an optimal edge detection algorithm [13]. The Canny edge detector starts by applying a Gaussian filter to smooth the image in order to remove noise. The next step is to scan the image for intensity gradients with a gradient operator and then apply a filter to suppress noise but keep edges in the image. The image is then analyzed for non-maximum points to further remove pixels that are not actual edges. The next step is to threshold the image and lastly finalize the detection of edges with a hysteresis threshold, which suppresses all the weak edges that are not connected to other edges.
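The gradient step described above can be illustrated with a small pure-Python sketch (the 5x5 test image is an illustrative example, not thesis code) that applies the 3x3 Sobel operator to a vertical step edge:

```python
# A minimal sketch of the first edge-detection step: estimating the intensity
# gradient with the 3x3 Sobel operator. Pure Python, no OpenCV; the test
# image values are illustrative, not from the thesis implementation.
import math

def sobel_at(img, x, y):
    """Return the gradient magnitude at pixel (x, y) using 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    gx = sum(kx[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
    gy = sum(ky[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
    return math.hypot(gx, gy)

# Vertical step edge: dark road surface (20) next to a bright lane mark (200).
img = [[20, 20, 200, 200, 200]] * 5

print(sobel_at(img, 1, 2))  # on the edge -> 720.0
print(sobel_at(img, 3, 2))  # inside the bright region -> 0.0
```

Pixels with a large gradient magnitude are the edge candidates that the later non-maximum suppression and hysteresis steps then prune.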

A Hough transform (HT) is then applied to the thresholded image in order to determine which edge pixels lie on a common line. The Hough transform was originally invented in 1962 and has since been refined into the form that is universally used today. The HT works by converting the white pixels of the thresholded input image into points in a parameter coordinate space, meaning that they are represented using a direction theta and a distance r instead of x and y.

Each edge pixel casts a vote, and each point in the parameter space keeps a count of the votes it receives. Edge pixels with the same theta and r values are assumed to define a line in the image. To compute the frequency of each line, theta and r are quantized into a number of so-called bins. When all the edge pixels have been converted to parameter space, these bins can be analyzed, and the ones with the largest number of votes correspond to the most prominent lines in the image. Usually a threshold is set, where counts that do not exceed the threshold are ignored and only the most prominent lines are accepted [16].
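The voting scheme can be sketched in a few lines of pure Python (bin sizes and the test points are illustrative choices, not the thesis implementation):

```python
# A minimal Hough-transform sketch: vote in (theta, r) bins for each edge
# pixel and return the most prominent line. The theta resolution, bin size
# and test pixels are illustrative values, not the thesis implementation.
import math
from collections import Counter

def hough_lines(edge_pixels, n_theta=180, r_step=1.0):
    """Accumulate votes; return ((theta_index, r_bin), votes) for the best bin."""
    votes = Counter()
    for x, y in edge_pixels:
        for t in range(n_theta):                      # theta in [0, pi)
            theta = math.pi * t / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(r / r_step))] += 1
    return votes.most_common(1)[0]

# Edge pixels on the vertical line x = 5 -> theta index 0, r = 5.
pixels = [(5, y) for y in range(0, 200, 10)]
(best_theta, best_r), count = hough_lines(pixels)
print(best_theta, best_r, count)  # -> 0 5 20
```

The final thresholding step described above corresponds to discarding every bin whose count falls below a chosen minimum number of votes.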

2.3.4 Road model

The majority of lane detection systems initially propose a model of the road. This model can be something simple such as straight lines, or more complex splines. Some researchers make the assumption that the road consists of two parallel lines in the image; this can be done after an operation called inverse perspective mapping, which produces a bird's-eye view perspective [12]. Another common method is to assume that the lanes have a common vanishing point where both lanes meet, and use that as a reference for the lines in the image [26] [19]. Both of these perspectives can be seen below in figure 2.2.

(a) Inverse perspective mapping (b) Vanishing point

Figure 2.2: Road model in two different perspectives


2.3.5 Model fitting

As mentioned in the road model section, a road model is very often used and fitted to the information observed in the feature extraction step.

The extracted data from the previous step will typically contain both inliers, i.e. data that can be fitted to a line, and outliers, i.e. data that cannot be fitted onto the same line [23]. Assuming that the extracted data contains data that can be fitted to one of the models chosen initially, several different approaches have been proposed for model fitting [11]. Some researchers use the least squares method, a mathematical procedure for fitting a function to a set of observed values. The idea behind the method is to construct the function in such a way that the sum of the squared differences between the observed values and the function is minimized [3].

Other research proposes the use of "RANdom SAmple Consensus", known as the RANSAC algorithm [18] [10] [21]. This method is stated to be superior to the least squares method due to its ability to fit a line to the inliers only, without the outliers influencing the result. The disadvantage of this method is that the computational time is usually longer compared to the least squares method and depends heavily on the number of outliers in the image [5].
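A minimal RANSAC sketch is shown below (the iteration count, tolerance and test data are illustrative assumptions, not the thesis implementation): sample two points, count inliers, keep the best model, and finally refit with least squares on the inliers only.

```python
# A minimal RANSAC line-fitting sketch (illustrative parameters, not the
# thesis implementation): repeatedly fit a line through two random points
# and keep the model with the most inliers, then refit on those inliers.
import random

def fit_ransac(points, n_iter=200, tol=0.5, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # skip vertical sample pairs
        k = (y2 - y1) / (x2 - x1)
        m = y1 - k * x1
        inliers = [(x, y) for x, y in points if abs(y - (k * x + m)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refit on the inliers only (closed-form line fit).
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers); sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers); sxy = sum(x * y for x, y in best_inliers)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    m = (sy - k * sx) / n
    return k, m, len(best_inliers)

# 20 points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 40), (10, -30)]
print(fit_ransac(pts))  # -> (2.0, 1.0, 20)
```

Note how the two outliers do not affect the fitted slope or intercept at all, whereas a plain least-squares fit over all 22 points would be pulled away from the true line.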

2.3.6 Time integration

The last step that is important for a reliable lane detection system is to be able to incorporate some knowledge from previous frames. This is done in order to increase the reliability of the system and decrease the computational load.

2.3.7 Lateral control

The lateral control task makes use of all the knowledge gathered from the previous steps to actually steer the vehicle and keep it within the lane boundaries.

2.4 Platooning

As traffic intensity increases around the world, the problem of traffic congestion comes with it. In and around large cities today there are already huge problems due to heavy traffic. The situation leads to increasing emissions of greenhouse gases such as carbon dioxide. One way to ease the problem of traffic congestion and reduce the fuel consumption of vehicles is vehicle platooning. The concept of vehicle platooning is to reduce the distance between the vehicles on the road and thus reduce the wind resistance acting on each vehicle. Today most of the research covers heavy duty vehicles (HDV), where trucks form a platoon on highways. Modern commercially available driver assistance systems such as adaptive cruise control use radar measurements to obtain the relative distance and velocity to the preceding vehicle and adjust the vehicle's own velocity automatically. This strategy works sufficiently well if the distance between the vehicles is long enough, due to the delays from measurement of the preceding vehicle to actuation of accelerating or braking torque at the wheels.

One effort to reduce the distance between the vehicles in the platoon while maintaining the safety requirements is to send a brake signal through wireless communication to the other vehicles in the platoon. This allows for faster actuation of the brakes compared to only using radar. In research done by [9], it is stated that if two identical vehicles are in a platoon on a highway driving at 90 km/h, they can hold a minimum relative distance of 1.2 m without endangering safety. In a scenario where a worst-case delay of 500 ms is present in the system, a minimum distance of 2 m should be kept. This distance is significantly shorter than what a modern adaptive cruise control keeps in order to maintain a safe distance to the preceding vehicle.

2.5 OpenCV

OpenCV is an open source computer vision and machine learning software library. The library has a large number of optimized algorithms for computer vision. A few areas where OpenCV is used are face recognition, object detection, tracking of moving objects and lane detection. Because OpenCV is a BSD-licensed product, companies all over the world are free to both use and modify the code. Companies like Google, Microsoft, Intel, Honda and Toyota employ the library in various applications. OpenCV has C++, C, Python, Java and Matlab interfaces and supports all the major operating systems [4].

In this project several OpenCV functions have been utilized, mainly in the image processing part. More information about the implementation follows in the implementation chapter.


2.6 Mixed-criticality systems

A trend in modern embedded systems is to take advantage of all the processing power available in a multicore processor chip. This can be done by combining different subsystems on one chip, which makes it possible to achieve higher CPU utilization and thus reduce hardware cost and power consumption. Sometimes these embedded systems contain components of different criticality. For the automotive industry that this thesis focuses on, one example could be to run the ADAS and the infotainment system of the vehicle on the same electronic control unit (ECU). If these two components are integrated onto a single hardware platform, the response time of the ADAS system should not be affected by the infotainment system. By scheduling these two components onto the same computing platform, one creates a mixed-criticality system.

Each industry (automotive, aerospace, railway, etc.) has certain safety and security regulations that mixed criticality systems need to comply with. There are several different criticality levels in each industry, depending on factors such as the environment of operation and the danger to human life [27]. According to Thane [8], safety can be defined as the absence of unacceptable risk: a system is safe if the risk associated with the system is acceptable.

2.6.1 Scheduling

Every task that is implemented has a worst-case execution time (WCET). This is the maximum amount of time that the task can take to execute on the hardware platform. The WCET is used to guarantee that the temporal constraints will not be violated.

A scheduler can be either preemptive or non-preemptive. A preemptive scheduler can interrupt a task during execution if a task with higher priority is ready for execution; a non-preemptive scheduler will wait for the running task to complete [15]. Some of the most common scheduling algorithms are:

Fixed priority

Every task has a fixed priority assigned by the developer, and the processor will execute the highest-priority task of those that are ready to be executed.


Earliest deadline first

Earliest deadline first is a dynamic scheduling algorithm that always checks which task in the queue has the shortest time to its deadline and executes that task next.

Rate-monotonic

Rate-monotonic scheduling is a static priority scheduler where the priorities of the tasks are assigned according to the task cycle times. The highest-priority task is the one with the shortest cycle time and the lowest-priority task is the one with the longest cycle time.

Deadline-monotonic

Deadline-monotonic priority assignment is, just like rate-monotonic, a static priority scheme, with the difference that priorities are assigned according to deadlines instead of cycle times. The task with the shortest deadline is the one with the highest priority.
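The difference between the static and dynamic policies above can be sketched as follows. The task names echo the tasks later implemented in this project, but the periods and deadlines are hypothetical values, not measured ones.

```python
# Sketch of the two priority policies above (hypothetical task set):
# rate-monotonic assigns static priorities by cycle time, while EDF picks
# the ready task with the nearest absolute deadline at run time.

def rate_monotonic_order(tasks):
    """Return task names from highest to lowest priority (shortest period first)."""
    return [name for name, _ in sorted(tasks.items(), key=lambda kv: kv[1])]

def edf_pick(ready):
    """Return the name of the ready task with the earliest absolute deadline."""
    return min(ready, key=ready.get)

# Hypothetical periods in milliseconds.
periods = {"data_aggregation": 10, "communication": 20, "lateral": 40, "longitudinal": 80}
print(rate_monotonic_order(periods))

# At some instant, hypothetical absolute deadlines (ms) of the ready tasks:
print(edf_pick({"lateral": 35, "communication": 18, "longitudinal": 75}))
```

The rate-monotonic order is fixed once at design time, whereas the EDF choice changes from one scheduling decision to the next as deadlines approach.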


Chapter 3

Implementation

This chapter will present the implementation phase of the thesis project and explain the main concepts in the implementation.

The literature review provided good insight into the subject of lane detection systems and paved the way for the implementation phase of the project. However, one important issue was brought up regarding image acquisition on the Zynq-7000 board due to the lack of camera drivers. A decision was made to use a separate node for the image processing part, which is described in more detail in this chapter.

3.1 Altens mixed criticality platform specification

The mixed criticality platform that Alten uses can be either a Zedboard or an EMC2 board. Both boards are built around a Xilinx Zynq-7000, which consists of a dual-core ARM Cortex-A9 processor as well as a programmable logic part (FPGA).

3.2 Raspberry Pi

This section describes the single-board computer that is used for lane detection in this degree project. The chosen board is a Raspberry Pi 3, a credit-card-sized computer. The third generation of the Raspberry Pi has seen some major hardware updates compared to earlier versions. The one used in this project has the following specifications:


Table 3.1: Raspberry Pi 3 Model B specifications

Model:            Raspberry Pi 3 Model B
Operating system: Raspbian Jessie
Processor:        ARM Cortex-A53 1.2 GHz 64-bit quad-core
Hardware ports:   40 GPIO pins, 4 USB ports, HDMI port, Ethernet port,
                  3.5 mm audio jack, camera interface, display interface,
                  micro SD card slot

The main computer in this project is the Zynq-7000 board, and thus it would be preferred to utilize it for lane detection as well. However, with no camera drivers available for the Zynq-7000 board, it would be difficult to manage the image acquisition. A search for hardware more suitable for the task was carried out, and the Raspberry Pi was chosen because it is widely used in computer vision projects and because of its affordable price point.

Figure 3.1: Raspberry Pi 3


3.2.1 Pi camera

The Raspberry Pi camera module has been chosen as the image acquisition device for this project. The camera module has a five-megapixel image sensor and a maximum resolution of 2592 x 1944 pixels. This camera was chosen because it is made specifically for the Raspberry Pi and is very easy to use. The one used has a very wide-angle lens, which turned out to entail both advantages and disadvantages. The positive side of a wide-angle lens is the wide image that the camera can capture, so it can see the road at almost all angles. The negative side is that the image is quite distorted at the edges, which makes the angle calculations less accurate.


3.3 System architecture

In figure 3.2 below, all components implemented on the demonstrator vehicle are shown.

Figure 3.2: System architecture

The focus of this thesis is the lateral control task and, more specifically, lane detection. The process of implementing the algorithms is described later in this chapter.

3.4 System identification

To know how the system behaves when fed with different PWM values, an experimental setup was developed, and 136 different PWM inputs and corresponding steering angle outputs were measured.


Figure 3.3: Angle measurement

The mean value of the angle outputs has been calculated, and a first-order polynomial has been fitted to the data points using the least squares method. The calculated line is shown in figure 3.4 and has the equation

y = -14.26x + 195.96
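The identified line can be used in both directions, as the following sketch shows. It assumes (as the angle-error formula in section 3.6 suggests) that x is the PWM input value and y the resulting steering angle in degrees; that interpretation, and the numeric results below, are illustrative rather than taken from the thesis measurements.

```python
# Sketch of using the identified line y = -14.26x + 195.96, assuming x is
# the PWM input and y the resulting steering angle in degrees (an
# interpretation consistent with the angle-error formula in section 3.6).

def expected_angle(pwm):
    """Steering angle predicted by the fitted first-order polynomial."""
    return -14.26 * pwm + 195.96

def pwm_for_angle(angle):
    """Invert the fitted line to get the PWM value for a desired angle."""
    return (angle - 195.96) / -14.26

straight = pwm_for_angle(90)             # 90 degrees = straight forward
print(round(straight, 3))                # -> 7.431
print(round(expected_angle(straight)))   # -> 90
```

Inverting the fitted line like this is how a desired steering angle can be mapped back to a PWM command for the servo.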


Figure 3.4: PWM and angle correlation

The angles in figure 3.4 are defined as in table 3.2 below.

Table 3.2: Steering angle and direction

Angle   Direction
<90     Right turn
90      Straight forward
>90     Left turn

The calculated line equation is used in the steering control system to evaluate the steering angle against the identified line angles. This process is described later in this chapter. It is used in combination with the position control to eliminate the deviation from the centerline and thus improve the performance of the system.


3.5 Lane detection algorithm

This section describes the lane detection algorithms that have been implemented on the demonstrator in this project.

The algorithm that has been implemented on the demonstrator so far consists of the following steps, shown in figure 3.5:

Figure 3.5: Developed system

1. The lane detection process starts with grabbing a frame from the Raspberry Pi camera and applying a few preprocessing steps to the image.

2. The first step is to crop the image to only contain the region of interest (ROI). This is a camera setting that can be predefined so that the camera only grabs the ROI, and thus the image does not need to be cropped after the frame is grabbed.

3. The following step is to convert the image to gray scale to prepare it for the coming operations. Figure 3.6 below shows how the acquired image looks in the first stages of the lane detection process.


(a) Input image (b) Converted to grayscale

Figure 3.6: Input image before and after grayscale filter

The grayscale image is the input to the Canny edge detection function. As described in the state-of-the-art section, the output of the Canny function is a thresholded image where all pixels that are part of edges are set to white and all pixels that are not are set to black. Using the OpenCV library function Canny, figure 3.7 is obtained.


Figure 3.7: Thresholded image using the Canny filter

This thresholded image is used as input to the Hough transform function that is used for line detection.

The two figures below show the input image with lines drawn in different colors. The colors indicate what kind of line it is: the red lines are all the lines that the lane detection algorithm finds; from the red lines that are close to each other, blue lines indicate the center of each road marking; the green line shows the center of the road lane. The concept behind the lane detection algorithm is described below figure 3.8.


Figure 3.8: Detected lines

There were a lot of problems when developing the algorithm in terms of false positives when evaluating the lines in the image. A solution was developed to eliminate the false lines in the image and keep only the lines that are part of a road lane.

The concept behind this lane detection algorithm is to group lines in the image that are very close to each other. For instance, if several lines are found on both the left and the right lane markings of the road, these form two groups of lines, because the lines within each group are close to each other. If there are other lines in the image that are not very close to these two lane markings, they are put in separate groups. There can be multiple groups, depending on how many false lines are detected. In the end the groups are evaluated, and the two groups containing the largest number of lines are the ones regarded as lanes. The short red horizontal line visible in figure 3.8 is the threshold distance for lines to be grouped together.
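The grouping concept can be sketched as follows. This is a hypothetical simplification in which each detected line is reduced to the x-coordinate where it crosses the bottom image row, and the 40-pixel threshold is an illustrative value, not the one used on the demonstrator.

```python
# Sketch of the line-grouping idea (hypothetical representation: each
# detected line is reduced to its x-position on the bottom image row;
# the 40-pixel grouping threshold is an illustrative value).

def group_lines(xs, threshold=40):
    """Group sorted x-positions whose neighbors lie within `threshold` pixels."""
    groups = []
    for x in sorted(xs):
        if groups and x - groups[-1][-1] <= threshold:
            groups[-1].append(x)
        else:
            groups.append([x])
    return groups

def pick_lanes(xs):
    """Keep the two largest groups and return the mean x of each (left, right)."""
    groups = sorted(group_lines(xs), key=len, reverse=True)[:2]
    return sorted(sum(g) / len(g) for g in groups)

# Hough output: several hits on the left mark, several on the right,
# and one false positive in between.
detected = [98, 102, 105, 471, 300, 468, 475, 101]
print([round(c, 1) for c in pick_lanes(detected)])  # -> [101.5, 471.3]
```

The isolated false positive at x = 300 ends up alone in its own group, so it is discarded when the two largest groups are selected.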

This gives a very robust lane detection algorithm that disregards false positives from the Hough transform.

What happens if a new line is introduced in the image that is not part of the road lanes?

Figure 3.9: False positive line

In the image above one extra line has been introduced and is detected by the system. But due to the concept of grouping lines and evaluating which ones are the most prominent, the system in this case disregards the new line and indicates the center and direction of the road correctly.

3.6 Lateral control

Now that the lines are detected, the vehicle needs to be controlled in some way using the information from the lane detection. So far all of these steps are done on the Raspberry Pi, thanks to its easy camera integration and its support for OpenCV.


There are two errors that are calculated and from which the vehicle is controlled. The positional error is calculated by splitting the image into two halves and making the assumption that there is one lane marking in each half. A centerline is calculated from the two lane markings, and by measuring the distance from the centerline to the middle of the image, the error is calculated as

error_pos = center of camera - position of vehicle

The other error used for control is the actual angle at which the vehicle is travelling forward, compared to the expected angle when looking at the road.

Figure 3.10: Angle calculation

In the captured images from the camera, the angle of the centerline can be calculated using the known x and y values. The x in the picture is calculated as

x = abs(x1 - x2)

and y is calculated as

y = abs(y1 - y2)

and the angle is obtained using

tan(α) = y / x

and thus

α = arctan(y / x)

This α angle is compared to the known angle from the system identification, and an angle error is calculated as

error_angle = α - (u * (-14.26) + 195.96)

The two errors are sent to the Zynq-7000 via serial communication, where the lateral control is scheduled and executed on the mixed criticality platform.

A PID controller for the steering servo is developed using the z-transform. The output signal u to the steering servo is calculated as

u = (PWM_min + PWM_max) / 2 - a_angle * error_angle + a_pos * error_pos
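The two error calculations and the steering-output formula above can be sketched as follows. The PWM limits, controller gains and pixel coordinates here are illustrative assumptions, not the thesis calibration, and the sketch implements the formula exactly as written (a static linear combination of the two errors).

```python
# Sketch combining the error calculations and the steering-output formula
# above. PWM limits, gains and pixel coordinates are illustrative values,
# not the calibration used on the demonstrator.
import math

def angle_error(x1, y1, x2, y2, pwm_u):
    """Centerline angle from two of its pixel endpoints, minus the expected
    angle from the identified line: expected = -14.26*u + 195.96."""
    alpha = math.degrees(math.atan2(abs(y1 - y2), abs(x1 - x2)))
    return alpha - (pwm_u * -14.26 + 195.96)

def position_error(camera_center_x, vehicle_x):
    return camera_center_x - vehicle_x

def steering_output(err_angle, err_pos, pwm_min=1000, pwm_max=2000,
                    a_angle=2.0, a_pos=0.1):
    """u = (PWM_min + PWM_max)/2 - a_angle*err_angle + a_pos*err_pos."""
    return (pwm_min + pwm_max) / 2 - a_angle * err_angle + a_pos * err_pos

e_ang = angle_error(160, 0, 160, 120, pwm_u=7.43)  # vertical centerline -> alpha = 90
e_pos = position_error(160, 180)                   # vehicle 20 px right of center
print(round(e_ang, 2), e_pos, round(steering_output(e_ang, e_pos), 2))
# -> -0.01 -20 1498.02
```

With a near-zero angle error, the output sits close to the PWM midpoint, offset only by the position term, which is the intended behavior of the formula.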

3.7 Integration with Altens mixed criticality platform

3.7.1 Tasks

As shown in figure 3.11, there are currently four tasks scheduled that run on the real-time operating system.

Figure 3.11: Sequence diagram

The tasks implemented on the board are described below.


Communication

In this task all vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication is done. The information that is sent is:

• Current state of vehicle

• Current speed of vehicle

• Distance to vehicle in front

• Longitudinal control signal

• Lateral control signal

Lateral control

In this task the lateral control described earlier is executed.

Data aggregation

The data aggregation task combines sensor data in order to detect anomalies, which could mean slippage of one wheel or other information that can be useful to share with other vehicles within the ITS.

Longitudinal control

This task controls the distance to the preceding vehicle in the platoon. The distance is measured using a LIDAR.

3.7.2 Priority

All the tasks are scheduled with fixed priorities. The priority of the tasks is listed below from high to low:

1. Data aggregation

2. Communication

3. Lateral control

4. Longitudinal control


Chapter 4

Results

This chapter will present the results to the reader.

4.1 Evaluation of algorithm speed

Table 4.1 shows the end-to-end time for the lane detection system currently implemented on the Raspberry Pi. Since the lane detection system needs to run in real time, the speed of the algorithm is of great importance. The measurement includes the time for the image processing steps as well as the control structure and the steering signal that goes to the servo motor controlling the steering. The mean fps is calculated as 1 / (mean time).

Table 4.1: System End-to-end time

Image resolution   384x288   640x480
Mean time (s)      0.057     0.0786
Mean fps           17.54     12.724


(a) Average timings

(b) WCET timings

Figure 4.1: Average and WCET timings of the different elements in thealgorithm

Figure 4.1 shows how much time the different elements of the lane detection algorithm take. In the test the smaller resolution of 384x288 was used. In figure 4.1a it is clear that the image processing parts consume the largest amount of time. Figure 4.1b shows the worst-case execution times for the same elements.

Figure 4.2: The modified RC-car

Figure 4.2 above shows the demonstrator vehicle with the mounted LIDAR and the Raspberry Pi with its camera. The main board behind the Raspberry Pi is a Zedboard with the Xilinx Zynq-7000 chip.


Chapter 5

Discussion

In this chapter the results produced during this thesis project will be discussed.

5.1 Demonstrator

First and foremost, the performance of the lane keeping system on the demonstrator vehicle can be discussed. It works very well at low speeds and keeps the vehicle within the lanes without much oscillation around the center line. The problems arise when the speed of the vehicle rises, and naturally the curves are the part of the road where the vehicle starts having problems and cannot fully keep within the lanes. The demo vehicle drives at a constant speed with no regard to whether there is a sharp curve ahead or not. A human driver would naturally slow down before the curve and speed up again after it. It would be interesting to integrate the speed of the vehicle as a parameter in the lateral control. The correlation should be that when the angle of the centerline increases, the speed of the vehicle should decrease.

5.2 Lane keeping system

The implemented lane keeping method can be improved and extended to include more of the functionalities of the state-of-the-art lane detection systems described in the literature review. It has been shown on the mixed criticality platform that non-critical components do not disturb or interfere with critical ones [27]. It would also be very interesting to implement the edge detection filters on the FPGA of Alten's hardware platform, which could reduce the computation time of the task.
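The edge detection filters are good FPGA candidates because each output pixel depends only on a small fixed window of inputs. The thesis itself uses OpenCV for this stage; the explicit loop form below is a plain-NumPy reference sketch of a 3x3 Sobel gradient magnitude, written to make the per-pixel data flow (and hence the line-buffer structure an FPGA pipeline would use) visible:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(gray):
    """Gradient magnitude via two 3x3 convolutions (valid region only).

    Each output pixel needs only a 3x3 neighbourhood of the input, so the
    whole stage can stream through an FPGA with three line buffers.
    """
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float64)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = gray[y:y + 3, x:x + 3].astype(np.int32)
            gx = int((patch * SOBEL_X).sum())
            gy = int((patch * SOBEL_Y).sum())
            out[y, x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On the CPU this loop form is slow and one would use the library routine instead; the point is that the dependency structure is fixed and local, which is exactly what makes hardware offload attractive.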


5.3 Camera input

One thing that has been tricky is the wide-angle lens of the Raspberry Pi camera. The lens is convex, which results in a distorted image at the edges. The effect is called barrel distortion, and such lenses are more commonly known as fisheye lenses. This makes the angle calculations from the acquired images less accurate. The decision to use the wide-angle lens also has advantages, however. The most important advantage, and the deciding factor for using it, is that the camera on the demo vehicle is mounted close to the ground and requires a wide field of view to be able to see the lanes of the road.
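Why this biases the angle calculation can be illustrated with a one-parameter radial distortion model: points far from the optical centre are displaced much more than points near it, so a straight lane marking maps to a curve in the image. The coefficient k1 below is a made-up value for illustration, not a calibration of the actual camera:

```python
def barrel_distort(x, y, k1=-2e-7):
    """One-parameter radial (barrel) model.

    (x, y) are pixel coordinates relative to the optical centre. With a
    negative k1, points are pulled toward the centre, and the pull grows
    with the squared radius, so image edges are affected the most.
    """
    r2 = x * x + y * y
    scale = 1 + k1 * r2
    return x * scale, y * scale
```

In practice the image would be rectified with an estimated camera model, e.g. OpenCV's camera calibration and undistortion functions, before the lane angle is computed from pixel positions.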

5.4 Research questions

One of the research questions stated in the beginning was how the performance of the lane detection system can be guaranteed. One thing that can be said is that the deadline of the lateral control task is always met.

During tests it has been obvious that speed has a great impact on lane keeping performance. At lower speeds the vehicle has no problem keeping a good position on the road and manages the curves well, but at higher speeds it sometimes loses track in the curves. I believe this is more of a control problem than a lane detection problem, since the system appears to capture the lane properly but cannot steer fast enough.


Chapter 6

Future work

This chapter contains thoughts and ideas for future work building on this thesis.

6.1 Zynq-7000 integration

The main and most obvious direction for this project to proceed in is to integrate the whole lane detection system into the Zynq-7000 platform to get one uniform system. This would not guarantee a better system, though, as the image processing is the most computation-heavy part of the lane detection system and is currently done in parallel with the tasks running on the Zynq-7000 board.

6.2 Image acquisition

One thing that could potentially improve the vehicle's ability to keep within the lane at higher speeds is to change the camera to one that distorts the image less.

6.3 Variable speed

As mentioned in the discussion chapter when evaluating the lane keeping system, it was clear that the curves were the problem and that the car had no problem keeping within the lane on the straight parts of the track. One reason is that the speed of the car was constant, regardless of whether a curve or a straight part of the track lay ahead. This parameter would be very interesting to integrate into the demo, e.g. by making the speed of the vehicle dependent on the error from the centerline.


Bibliography

[1] AUTomotive Open System ARchitecture. http://www.autosar.org. Accessed: 2017-02-28.

[2] Embedded Multi-Core systems for Mixed Criticality applications in dynamic and changeable real-time environments. https://www.artemis-emc2.eu/. Accessed: 2017-02-28.

[3] Least Squares Regression. http://www.itl.nist.gov/div898/handbook/pmd/section1/pmd141.htm. Accessed: 2017-05-30.

[4] OpenCV. http://opencv.org/about.html. Accessed: 2017-04-18.

[5] RANSAC. http://soe.rutgers.edu/~meer/UGRAD/cv9lsransac.pdf. Accessed: 2017-05-28.

[6] SAE International. https://www.sae.org/misc/pdfs/automated_driving.pdf. Accessed: 2017-03-14.

[7] SAE International. https://www.sae.org/news/3550/. Accessed: 2017-03-14.

[8] Safety Integrity. http://swell.weebly.com/uploads/1/4/3/4/1434953/swell_safety_and_verification_20111007d.pdf. Accessed: 2017-05-22.

[9] Assad Alam, Ather Gattami, Karl H Johansson, and Claire J Tomlin. Guaranteeing safety for heavy duty vehicle platooning: Safe set computations and experimental evaluations. Control Engineering Practice, 24:33–41, 2014.

[10] Mohamed Aly. Real time detection of lane markers in urban streets. In Intelligent Vehicles Symposium, 2008 IEEE, pages 7–12. IEEE, 2008.


[11] Aharon Bar Hillel, Ronen Lerner, Dan Levi, and Guy Raz. Recent progress in road and lane detection: a survey. Machine Vision and Applications, 25(3):727–745, 2014.

[12] Massimo Bertozzi and Alberto Broggi. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Transactions on Image Processing, 7(1):62–81, 1998.

[13] HS Bhadauria, Annapurna Singh, and Anuj Kumar. Comparison between various edge detection methods on satellite image.

[14] Alan Burns and Robert Davis. Mixed criticality systems - a review. Department of Computer Science, University of York, Tech. Rep., 2013.

[15] Maryline Chetto. Real-time Systems Scheduling 1. John Wiley & Sons, 2014.

[16] E Roy Davies. Computer and machine vision: theory, algorithms, practicalities. Academic Press, 2012.

[17] Anne Hakansson. Portal of research methods and methodologies for research projects and degree projects. In Proceedings of the International Conference on Frontiers in Education: Computer Science and Computer Engineering (FECS), page 1. The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), 2013.

[18] Albert S Huang, David Moore, Matthew Antone, Edwin Olson, and Seth Teller. Finding multiple lanes in urban road networks with vision and lidar. Autonomous Robots, 26(2):103–122, 2009.

[19] Wang Jingyu and Duan Jianmin. Lane detection algorithm using vanishing point. In Machine Learning and Cybernetics (ICMLC), 2013 International Conference on, volume 2, pages 735–740. IEEE, 2013.

[20] Soren Kammel and Benjamin Pitzer. Lidar-based lane marker detection and mapping. In Intelligent Vehicles Symposium, 2008 IEEE, pages 1137–1142. IEEE, 2008.

[21] Hao Li and Fawzi Nashashibi. Lane detection (part i): Mono-vision based method. PhD thesis, INRIA, 2013.

[22] Joel C McCall and Mohan M Trivedi. Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Transactions on Intelligent Transportation Systems, 7(1):20–37, 2006.


[23] Rahul Raguram, Jan-Michael Frahm, and Marc Pollefeys. A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus. In European Conference on Computer Vision, pages 500–513. Springer, 2008.

[24] Torben Stolte, Gerrit Bagschik, and Markus Maurer. Safety goals and functional safety requirements for actuation systems of automated vehicles. In Intelligent Transportation Systems (ITSC), 2016 IEEE 19th International Conference on, pages 2191–2198. IEEE, 2016.

[25] Michael G Wing. Consumer-grade GPS receiver measurement accuracy in varying forest conditions. Res J For, 5(2):78–88, 2011.

[26] Sibel Yenikaya, Gokhan Yenikaya, and Ekrem Duven. Keeping the vehicle on the road: A survey on on-road lane detection systems. ACM Comput. Surv., 46(1):2:1–2:43, July 2013.

[27] Youssef Zaki. An embedded multi-core platform for mixed-criticality systems: Study and analysis of virtualization techniques. Master's thesis, KTH, School of Information and Communication Technology (ICT), 2016.


TRITA MMK 2017: 156 MDA 614

www.kth.se