Desk demo poster book

Page 1: Desk demo poster book
Page 2: Desk demo poster book
Page 3: Desk demo poster book

Roadmap: 2012–2014 ADAS functions, platform specifications; 2016 vehicle prototypes; 2018–2022 DESERVE deployment: standardisation and marketing, industry implementation, 2nd generation tool available.

Design and develop a Tool Platform for embedded ADAS

• exploiting the benefits of cross-domain software reuse

• standardising the automotive software component interfaces

• easy integration of heterogeneous modules

A low-cost, highly reliable, standardized Tool Platform that can seamlessly integrate different functions, sensors, actuators and HMI to enable the development of a new generation of ADAS applications.

• Methodology for the common software platform
• Development of the selected ADAS functions
• HMI design, driver model and driver monitoring
• Verification of the tool platform methodology
• Implementation of the 5 demonstrators

Budget: 25 M€
EU funding: 4.2 M€
National funding: 7.2 M€
Schedule: Sep 2012 – Feb 2016
Programme: ECSEL Joint Undertaking
Coordinator: Matti Kutila, VTT
Partners: 23 partners from 9 countries
Contact: [email protected]
Website: www.deserve-project.eu

Page 4: Desk demo poster book

Frank Badstübner [email protected]

DESERVE Platform:

„The journey is the reward“

ADAS application portfolio (compliant with DESERVE module concept)

Common Modules:

Lane course,

VRU detector,

Vehicle detector, ….

Functions:

Emergency brake,

VRU protection,

InterUrban Assist, ….

ADAS rapid

prototyping

framework

Specifications and requirements for System-on-Chip (SoC) next-generation embedded ADAS systems

Model-Based DSE: Matlab/Simulink/ADTF/RTMaps

Validation and Test

Cost prediction (silicon area, throughput, Pv, …)

HIL, MIL, PIL test bench: MicroAutoBox, FPGA board, embedded PC, Aurix, …

Iterative process: > 15% speed-up

A Generic Platform Concept

Development of future ADAS systems with maximum reuse of modules and components, thanks to well-defined processes and standardisation of the architecture:

• Provides a flexible framework reaching from PC-based pre-development to close-to-production HW implementation

• Constructs a tool chain that allows modelling and evaluation via virtual testing

• Common in-vehicle platform for future ADAS functions

• Enables the integration of safety (ISO 26262) and security (ISO 27001) mechanisms for pre-certification

DESERVE platform concept to speed up the ADAS development process by more than 15% compared to state-of-the-art ADAS development levels

Page 5: Desk demo poster book

Design and Development Process

• Development framework to seamlessly support the ADAS development levels

• Infineon focus: level 3

[Diagram] DESERVE design and development process (V-model): business requirements, driver assistance design ("is this the driver assistance we want?"), function design and development ("does the function behave as it should?"), model-based design, software implementation, embedded SW / programmable HW implementation ("does the function perform in real-time?"), compilation and configuration, function configuration, Model-in-the-Loop, Software-in-the-Loop and Hardware-in-the-Loop testing, function testing & validation, user acceptance ("does the vehicle driver accept and use the assistance?").

Development levels: Level 1 – PC platform; Level 2 – rapid prototyping; Level 3 – fully embedded, AUTOSAR compatible

DESERVE platform enabled design and development process

Dedicated requirements for embedded multicore platform with FPGA

Development level 3 goes significantly beyond levels 1 and 2 in terms of fulfilling "critical" requirements:
• AUTOSAR compatibility
• SPICE compliance
• Functional safety (ISO 26262; ASIL D requires FMEDA and a safety concept)
• Reliability

Key achievements
• FMEDA for the multicore processor for use in DESERVE development levels 3 (and 4) completed
• Basic elements of the safety mechanisms published (e.g. hierarchical modularization, Safety Element out of Context – SEooC)
• Work done for the "non-industrialised" platform can be carried over to the "industrialised" DESERVE platform

Frank Badstübner [email protected]

Page 6: Desk demo poster book

David González

INRIA [email protected]

Arbitration process applied to automated driving -WP24 Generic ADAS Control-

Context

An algorithm for control sharing between the driver and the automated control is presented. Once the limits for safe driving are defined, the automated vehicle can act as a supervisor of the driver's performance. Haptic feedback is given to the driver through the steering wheel; the force of this feedback is determined by the risk assessment and the attentiveness of the driver.
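As a rough illustration of this arbitration idea (a minimal sketch under stated assumptions, not the project's control code), a haptic steering-wheel torque can be scaled by the assessed risk and the driver's attentiveness; the gain law and torque limit below are invented for the example:

```python
# Hypothetical sketch: scale haptic steering-wheel torque by risk and attentiveness.
# The clamp limits and gain law are illustrative assumptions, not DESERVE values.

def haptic_torque(risk: float, attentiveness: float,
                  max_torque_nm: float = 3.0) -> float:
    """Return a steering-wheel feedback torque in Nm.

    risk          -- normalized risk assessment in [0, 1] (1 = imminent hazard)
    attentiveness -- normalized driver attentiveness in [0, 1] (1 = fully attentive)
    """
    risk = min(max(risk, 0.0), 1.0)
    attentiveness = min(max(attentiveness, 0.0), 1.0)
    # Less attentive drivers receive stronger guidance for the same risk level.
    gain = risk * (1.0 - 0.5 * attentiveness)
    return max_torque_nm * gain

if __name__ == "__main__":
    print(haptic_torque(risk=0.8, attentiveness=0.2))  # strong feedback
    print(haptic_torque(risk=0.8, attentiveness=1.0))  # softer feedback
```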

Haptic Steering Wheel -communications bridge-

Arbitration and control modules

Conclusions

Our approach can effectively determine the risk of the situation at hand by evaluating the scenario and the driver status. Haptic feedback informs the driver of the decisions of the automated systems, allowing smooth interaction between the two and enhancing safety.

Data from CRF demo vehicle

(DESERVE)

Page 7: Desk demo poster book

David Gonzalez

INRIA [email protected]

Path planning with obstacle avoidance and different speed profiles
-WP24 Arbitration/Control and WP44 Automated functions-

Context

A continuous-curvature planning algorithm with obstacle avoidance capabilities is presented, considering different speed profiles to improve comfort and reduce lateral accelerations during driving.

The automated system generates a collision-free path that considers the vehicle's constraints, the road and different obstacles inside the horizon of view. The path planning was developed in the framework of WP24 and WP44. The results were simulated in PROSivic and RTMaps, using real data.

Experimental Results

Conclusions

By implementing parametric curves, different intersections are handled. By searching the positions of the Bézier control points, a smooth path that fits the vehicle and road constraints can be planned. The user is able to set a level of comfort in the navigation system, as this module calculates the longitudinal speed profile.
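For illustration only (a hedged sketch, not the DESERVE implementation): a cubic Bézier segment can be sampled from four control points, and the longitudinal speed profile bounded by a comfort lateral-acceleration limit; the control points, the acceleration limit and the speed cap below are assumptions.

```python
# Illustrative sketch of continuous-curvature path generation with a cubic Bezier
# segment and a comfort-limited speed profile. All numeric values are assumptions.
import numpy as np

def bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve defined by control points p0..p3."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def curvature(path):
    """Approximate curvature along a sampled 2-D path."""
    dx, dy = np.gradient(path[:, 0]), np.gradient(path[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-9)

def comfort_speed(path, a_lat_max=2.0, v_max=13.9):
    """Speed profile (m/s) limited by lateral acceleration a_lat_max (m/s^2)."""
    kappa = curvature(path)
    return np.minimum(v_max, np.sqrt(a_lat_max / np.maximum(kappa, 1e-9)))

if __name__ == "__main__":
    ctrl = [np.array(p, dtype=float) for p in [(0, 0), (10, 0), (15, 5), (20, 10)]]
    path = bezier(*ctrl)
    print(comfort_speed(path)[:5])
```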

Page 8: Desk demo poster book

André Rolfsmeier

Christian Lindemann

dSPACE GmbH [email protected]

[email protected]

DESERVE Platform Concept

ADAS Development System

ADAS Development Process

Page 9: Desk demo poster book

Romain Rossi

Clément Galko

Hariharan Narasimman

Xavier Savatier [email protected]

[email protected]

Page 10: Desk demo poster book

Jens Klimke [email protected]

Frederic Christen [email protected]

Context

[Diagram] PELOPS (PC) — Driver Model (virtual driver), Environmental Model (virtual environment), Vehicle Model (virtual vehicle) — connected via CAN to the DESERVE platform: Perception Platform, Application Platform, IWI Platform, control function.

Implementation

Requirements
• Inter-urban driving behaviour (incl. safe passing)
• Route-motivated lane changes
• Full intersection behaviour without pedestrians
• Predictive driving (speed adaptation, ...)
• Realistic mapping of the human driving motivations
• Re-use of validated driver model approaches
• As Simulink model (easy to implement, easy to debug)

Motivation    | Conscious                              | Unconscious              | Action
Free moving   | Desired velocity                       | Acceleration             | Pedal value
Following     | Desired distance                       | Acceleration             | Pedal value
Lane keeping  | Fix-points                             | Yaw rate                 | Steering wheel angle
Lane change   | Two-step LC: lat. offset / fix points  | Lat. velocity / yaw rate | Steering wheel angle
Stopping      | Stop distance                          | Acceleration             | Pedal value
Standing      | –                                      | Pedal value              | –
Safe passing  | Fix points                             | Yaw rate                 | Steering wheel angle

Concept

• Well-known structures (Donges, Rasmussen)
• Validated driver models (Wiedemann, Ehmanns, ...)
• Global parameterisation
• Transparent data management
• Separation between planning, manoeuvre, conscious and unconscious decisions

[Diagram] Driver model structure: perception of radio/global traffic information, of traffic/road/environment and of the vehicle (HMI state, pedals/steering) feeds info processing on the navigation, guidance and stabilisation levels; planning (route), manoeuvre decision and conscious/unconscious implementation produce reference variables and driver actions, supported by memory, state and parameters.

Results
1. Driver behaviour prediction

2. Simulation driver model

Page 11: Desk demo poster book

Context

[Diagram] PELOPS (PC) — Driver Model (virtual driver), Environmental Model (virtual environment), Vehicle Model (virtual vehicle) — connected via CAN to the DESERVE platform: Perception Platform, Application Platform, IWI Platform, control function.

Requirements
• Control mechanisms using the longitudinal and lateral actuators
• Demonstration of the usability and of the increase in development performance with the DESERVE platform

Architecture

Prototype definition
• Development close to the DESERVE hardware (here: Matlab/Simulink model on the dSPACE MicroAutoBox II)
• Inter-urban ACC with advanced lane keeping control (control of target distance, speed limits, curve speed, passing distance, passing speed, curve cutting)
• Hand-over to the driver before intersection areas
• Trajectory-based combined longitudinal and lateral control

Toolchain:
• MiL/HiL simulation with PELOPS
• Implementation of a realistic driver model for intersections
• Implementation of Driver-ADAS interaction
• Sensor data simulation
• Controller optimisation with AVL Cameo

Jens Klimke [email protected]

Frederic Christen [email protected]

[Diagram] Module architecture across the Perception, Application and IWI platforms. Perception: radar, frontal camera, GPS, map and vehicle data feed Frontal Object Perception (FOP), VRU, Lane Recognition (LR), Lane Course (LC), ADASIS Horizon (AH), Enhanced Vehicle Position (EVP), Relative Positioning to the Road (RPR), Vehicle Filter/State (VFS) and vehicle trajectory calculation. Application: target selection/classification and strategy/trajectory (lane asphalt potential, curve potential, object potential, residual potential), vehicle control. IWI: IWI manager, display, brake, powertrain.

Page 12: Desk demo poster book

Fabio Tango – CRF [email protected]

Mauro Baldi - POLITO [email protected]

Chiara Ferrarini - ICOOR [email protected]

To develop a driver model starting from the analysis of existing solutions. The driver model module consists of two elements:
• Virtual Driver: will be used to faithfully simulate driver behaviour and as a tool to test the DESERVE development platform.
• Driver Intention Detection Module (DIDM): will be used within the platform to predict the driver's intentions.

Aim
The DIDM test is aimed at collecting driving data by means of car log files and socio-demographic & driving style questionnaires for 46 users. The test focuses on gathering data about 900 complete overtaking manoeuvres (each including 2 lane change (LC) manoeuvres) using the CRF demo vehicle.

DIDM test

Preliminary results were computed in Matlab with the help of the Pattern Recognition Toolbox (PRT). We trained the following classifiers on a large dataset with 3 features, binary and multiple target variables, 1035 rows for the training set and 450 rows for the test set:
• Support Vector Machines (SVM) with Gaussian kernel
• Relevance Vector Machines (RVM)
• Hidden Markov Models (HMM)
The dataset has the following features:
• Speed
• Steering angle
• Time to collision
The effectiveness of the classifiers is expected to be > 80%. Classification on data from the selected drivers is underway.
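The poster names Matlab and the PRT; purely for illustration, here is a hedged Python sketch of the same idea (an RBF-kernel SVM trained on the three named features). The synthetic data only mirrors the row counts quoted above and is not the project's dataset.

```python
# Illustrative sketch (not the project's Matlab/PRT code): train an SVM with a
# Gaussian (RBF) kernel on the three features named in the poster.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-in data: columns = speed, steering angle, time to collision.
X_train = rng.normal(size=(1035, 3))
y_train = rng.integers(0, 2, size=1035)        # binary target: lane change yes/no
X_test = rng.normal(size=(450, 3))
y_test = rng.integers(0, 2, size=450)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```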

Preliminary results

Driver Model

DIDM will be used within the platform to predict the driver's intention with respect to some specific manoeuvres:
• Lane change / overtaking
• Car following
Inputs: vehicle dynamics data, outputs from the FOP module, direction of the user's head from the internal camera.
Outputs: probability of the next manoeuvre the driver intends to perform and a level of confidence.

DIDM

[Figure] Manual calibration of the SVM with RBF kernel: accuracy over a parameter grid, ranging from 0.95 to 1.0.

Page 13: Desk demo poster book

SomnoAlert® Sensor+ for Smartphone is an app developed to detect "not apt to drive" states using physiological signals such as the thoracic effort signal.

An external thoracic effort sensor sends the respiration data to the smartphone, where it is processed to evaluate the attention state and warn if the driver's drowsiness becomes dangerous.

Additionally, this system is connected to a server that allows the online control and management of a professional fleet. The fleet can use this tool to improve general safety by analysing the drowsiness status, routes, working hours, working shifts, etc. of the different professional drivers.

• It evaluates the level of attention with physiological signals in order to achieve an objective measurement of attention
• Detection of patterns to identify drowsy or inattentive states while driving
• Memory of the basal thoracic effort signal in a wake state to analyse its degradation over time
• It is car-independent
• It can reduce accidents

Base concept

Based on a characterisation of different thoracic effort patterns related to the driver state, the drowsiness level can be estimated with high specificity. The thoracic effort signal, either extracted from the video signal or acquired by an inductive band, is analysed with algorithms based on the quantification of the respiratory rate variability.
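As a rough illustration of the base concept (a hedged sketch under simple assumptions, not FICOSA's algorithm): breath-to-breath intervals can be estimated from peaks in the thoracic effort signal, and their variability used as a drowsiness indicator. The sampling rate, peak parameters and threshold below are invented for the example.

```python
# Illustrative sketch of respiratory rate variability quantification from a
# thoracic effort signal. Sampling rate, peak parameters and the variability
# threshold are assumptions, not values from the SomnoAlert system.
import numpy as np
from scipy.signal import find_peaks

FS = 25.0  # sampling frequency in Hz (assumed)

def breath_intervals(thoracic_effort: np.ndarray) -> np.ndarray:
    """Return breath-to-breath intervals in seconds from inhalation peaks."""
    peaks, _ = find_peaks(thoracic_effort, distance=int(2.0 * FS))
    return np.diff(peaks) / FS

def rr_variability(thoracic_effort: np.ndarray) -> float:
    """Coefficient of variation of breath intervals (higher = more variable)."""
    intervals = breath_intervals(thoracic_effort)
    if len(intervals) < 2:
        return 0.0
    return float(np.std(intervals) / np.mean(intervals))

if __name__ == "__main__":
    t = np.arange(0, 120, 1 / FS)
    signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(len(t))
    v = rr_variability(signal)
    print("variability:", v, "drowsy?", v > 0.3)   # 0.3 is an assumed threshold
```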

Specificity: 0.98 – Sensitivity: 0.94

Phase 0 (Alert): characterised by symmetry and stability in amplitude and frequency
Phase 1 (Fatigue): characterised by loss in amplitude and the appearance of sighs and yawns
Phase 2 (Drowsy): characterised by high-variability patterns

Benefits

Description


Description

SomnoAlert® Sensor+: detects an inadequate driver state based on thoracic effort information.
SomnoAlert® Contactless: detects an inadequate driver state based on thoracic effort information extracted from video.

HW and SW features

• Route information via GPS
• Sends the data to a server in order to evaluate and help the driver
• A database server stores the information sent by the Android application for off-line processing
• The mobile app shows information directly to the driver about their attention state
• A website shows a history of travelled routes and the drowsiness state along the route

The SomnoAlert® Sensor+ device comprises an inductive band located around the thorax and connected to a transmitter that sends the respiratory data to the smartphone, where it is analysed with algorithms based on the quantification of the respiratory rate variability. As SW features, the system includes the items listed above.

The SomnoAlert® Contactless device comprises a video camera located in front of the driver and recording a specific section of the thorax. The camera is connected to a PC that analyses the video data in order to extract the respiration signal. Once the respiration signal is extracted, it is analysed with algorithms based on the quantification of the respiratory rate variability.

An Adasens PAC16 camera device located in front of the driver records the upper body and is connected to a thoracic effort signal extraction algorithm running in RTMaps that provides the respiration signal. Variability quantification algorithms analyse the respiration signal and provide an estimation of the attention degree of the driver.

Objective: Improvement of a general driver model module that will be integrated into the DESERVE platform.

The module combines a Distraction Detector and a Drowsiness Detector based on eyelid movements from partner CONTI with the drowsiness detector from partner FICOSA based on the analysis of the driver's respiratory rate.

The FICOSA Drowsiness Detector (SomnoAlert®) comes in two different versions:

- SomnoAlert Sensor+: driver monitoring system based on a contact-band thoracic effort sensor

- SomnoAlert Contactless: driver monitoring system based on a contactless video respiration sensor

WP3: DRIVER MONITORING

[Diagram] WP3 driver monitoring architecture across the Perception, Application and IWI platforms. Perception: front camera (LDW, CONTI), interior camera on the driver's face (CONTI), interior camera on the thorax (FICOSA), inductive band with Bluetooth communication (FICOSA), vehicle data; eyelid motion, eye gaze and head pose (CONTI), image respiration extraction (optional, FICOSA). Application: distraction detection (CONTI), drowsiness detection through eyelid motion (CONTI), drowsiness detection by respiration rate (FICOSA), drowsiness detection fusion, impairment detection algorithm, driver impairment diagnostic, driver intention (overtaking / lane change), target selection, threat assessment, vehicle control. IWI: IWI manager, audio, display, brake, haptic system. Interior camera validated in simulations; inductive band in the demo vehicle.

Andrea Saccagno, P.M. Research - ADAS, FICOMIRRORS ITALIA S.r.l., C.so Cuneo 15, 10078 Venaria Reale (TO), Italy, [email protected], Tel. +39-011-0130165

Page 14: Desk demo poster book

Focus group participants expressed their opinion on the proposed DESERVE HMI concepts, evaluating the strategy for the drowsiness alert (explicit vs implicit).

The 2 HMI concepts identified were tested in a driving simulator with 30 users to identify the preferred one. Furthermore, 9 icons expressing the DROWSINESS concept were tested with 63 users.

HMI CONCEPT 2 – IMMERSIVE HMI The interaction is distributed along the dashboard and the windscreen.

HMI CONCEPT 1 – HOLISTIC HMI All the HMI elements are centralized in front of the driver. The cluster is the main visual output channel, while the steering wheel is the main input channel.

Elena Maria Bianco CRF [email protected]

Chiara Ferrarini ICOOR [email protected]

Elisa Landini RE:LAB [email protected]

Eva M. García Quinteiro CTAG [email protected]


The HMI FINAL CONCEPT with explicit drowsiness merges the winning features of concept 1 (unique display) and concept 2 (central warning info).

HMI CONCEPT 1 with explicit drowsiness

HMI CONCEPT 2 with explicit drowsiness

HMI CONCEPT 3 – SMART HMI The dashboard display is replaced with a nomadic device (e.g. smartphone/ tablet).

Page 15: Desk demo poster book

Dr.-Ing. Martin Kunert

Robert Bosch GmbH [email protected]

Modular FPGA-based, embedded Radar Prototyping Platform

Acronyms: FDM-Frequency Division Multiplexing OFDM-Orthogonal FDM MIMO-Multiple Input Multiple Output CFAR-Constant False Alarm Rate RCS-Radar Cross Section (★) source: Prof. Blume | Institute for Microelectronic Systems, Hannover (Germany)

Objectives and Task Description
• Higher sensor raw-data rates of up to tens of Gbit/s are new aspects that can't be handled on PC-based development systems
• Real-time signal processing on a powerful and flexible prototyping framework, with a high percentage of transfer to the final target hardware, is required for future radars
• Both low-level, massively parallel number-crunching work and very complex, more stream-oriented and often branching high-level tasks require application-specific, mixed processing architectures
• Cost-model-based design space exploration (DSE) methods will enable the definition of the final hardware architecture at a very early development stage

Methods and Steps
• Future automotive radar systems need much higher angular resolution, realized by:
  - higher speed resolution (range-Doppler processing)
  - novel modulation concepts like fast-chirp, FDM, OFDM
  - new antenna concepts like MIMO antennas and conformal antennas
• In MIMO chirp-sequence radars, data rates turn out to become extremely demanding

Results and Examples
• With the flexible and powerful DESERVE development framework this challenge can be solved
• An operative MIMO radar prototype framework is operational at Bosch within the DESERVE project
• High-resolution range-Doppler measurements are now possible, enabling radar-based road user classification

Fig. 1 Trend towards heterogeneous HW platforms (★)

Fig. 2 Design space exploration for digital signal processing (★) Fig. 5 Block diagram of a three-phase MIMO radar framework

Fig. 4 Data rates w.r.t. the key radar system parameters

Fig. 3 Lateral speed requirements

Fig. 6 The DESERVE MIMO Radar prototypes

Fig. 7 Range-Doppler measurement of different road users

Fig. 8 Fast chirp MIMO radar signal data evaluation chain

Fig. 9 Wheel motion detection of an approaching bicycle

[Fig. 6 annotations] 8TX/16RX frontend with MATLAB interface; 4TX/8RX frontend with real-time FPGA signal processing.

[Fig. 9 annotations] Time instants T1, T2, T3: both wheels are rotating; rear wheel is locked; both wheels are rotating.

[Fig. 8 panels] Real-world scenario, range-Doppler spectrum, CFAR threshold, peak selection, angle determination. Axes: range [m] (2–12), velocity [m/s] (-1 to 4); colour scale: RCS [dBsqm] (0 to -50).

Dipl.-Ing. Frank Meinl

Robert Bosch GmbH [email protected]

Page 16: Desk demo poster book

Main objective: Assist the driver on connection roads between cities (highways, country roads, rural roads), especially during the night.

The Inter-Urban Assist uses state-of-the-art digital maps, radar and image processing to provide advanced vision enhancement.

It incorporates modules already in series production and – additionally – newly developed functionalities.

Functionalities

Inter-Urban Assist

IUA Demonstrator

Augmented night vision system

N. von Egloffstein [email protected]

Mercedes Benz S-Class.

Demonstrator vehicle

Module architecture

[Diagram] Radar, front cameras, digital map and vehicle data feed Lane Course, Scene Labeling, 3D Reconstruction and Self Calibration; raw data, calibration data, labeled images, trajectory, spatial structure, feedback and ADASIS information are exchanged between the modules.

Road course shown in night view display: driver guidance on curvy rural roads. Realized by localization in the digital map, based on Scene Labeling.

Pedestrians overlaid with distance colouration: driver warning for immediate perception. Classified by Scene Labeling, reconstructed with stereo.

High beam turns according to the road course: perfect illumination on curvy rural roads. Realized by localization in the digital map, based on Scene Labeling.

The Inter-Urban Assist is a complex multi-platform system, realized in a highly heterogeneous architecture. Well-defined interfaces and the ADTF message bus ensure smooth communication between modules, across multiple instances of ADTF running on multiple PCs.

[Diagram] Distributed ADTF architecture with modules: ADASIS, LR, ARS310, NV3_CAM1, NV3_CAM2, VEHICLE, COMBI, HLC, NV3-BoB, ARS310-BoB, SYNCBOX, IMG_HUB_CUT, MAP-PC, MPC; interfaces PC_INTFCE, MAP_INTFCE, RADAR_INTFCE, VEH_INTFCE, IMG_INTFCE, HLC_INTFCE, DISP_INTFCE, SGM_INTFCE; lane course stages LC1_SP, LC2_UM, LC3_MM, LC4_EM, LC5_PC; AUGM, 3D-R1_R, 3D-R2_SM, SGM-FPGA; scene labeling stages SL1_PP, SL2_NR, SL3_CMLGC; self-calibration stages SC1_FE, SC2_FM, SC3_BA; FPGA, PC1, PC2.

Predictive front light

Dr. R. Schweiger [email protected]

Dr. L. Krüger [email protected]

Page 17: Desk demo poster book

Main objective: Scene Labeling is used to detect pixel regions in camera images. It is used by the augmented night vision system to detect pedestrians and the road for lane course estimation. Scene Labeling is achieved by a per-pixel classification using a multiscale Convolutional Neural Network, which is trained on several thousand hand-labeled images in an offline process.
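For orientation only, here is a hedged PyTorch sketch of the multiscale per-pixel classification idea (a shared CNN applied to an image pyramid, feature maps upsampled and classified per pixel). Layer sizes, scales and class count are assumptions, not the trained DESERVE network.

```python
# Illustrative multiscale CNN scene-labeling sketch (assumed sizes, untrained weights).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleSceneLabeler(nn.Module):
    def __init__(self, num_classes: int = 4, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        # Shared feature extractor applied at every pyramid level.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 7, padding=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Per-pixel classifier over concatenated multiscale features.
        self.classifier = nn.Conv2d(32 * len(scales), num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        maps = []
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            f = self.features(xs)
            # Upsample every scale back to the input resolution.
            maps.append(F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False))
        logits = self.classifier(torch.cat(maps, dim=1))
        return logits.argmax(dim=1)  # per-pixel class labels

if __name__ == "__main__":
    img = torch.rand(1, 3, 128, 256)
    print(MultiscaleSceneLabeler()(img).shape)  # torch.Size([1, 128, 256])
```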

Scene Labeling Workflow based on Deep Learning

Scene Labeling for the IUA

M. Limmer [email protected]

[Diagram] Workflow: input from camera, preprocessing, image pyramid construction, application of the multiscale CNN (local normalization, convolution layers, pooling layers), upsampling and classification, class membership probability maps, pixel classification.

Applications

Implementations

Lane course estimation: pixels classified as road are fitted into a relative localization model to estimate the close-range lane course.

Object detection: coherently classified pixel regions are segmented into objects, tracked and highlighted.

FPGA Implementation Efficient implementation on a Virtex 6 based ML605 evaluation board from Xilinx (IMS Leibniz Universität Hannover)

GPU Implementation High speed implementation for the CUDA based GPU nVidia GTX Titan (driveU Ulm University)

N. von Egloffstein [email protected]

Decomposition of the multiscale CNN: Further identification and decomposition of the above-mentioned convolution, pooling, upsampling and classification layers into generic building blocks was performed. These are used to implement a hardware realization after performing a cost evaluation and design space exploration based on quantitative cost models. (IMS Leibniz Universität Hannover)

Dr. R. Schweiger [email protected]

Page 18: Desk demo poster book

Connection of ADTF and FPGA Emulation Board Model-Based Design Space Exploration

Dipl.-Ing. Nico Mentzer

Institute of Microelectronic Systems

[email protected]

Prof. Dr.-Ing. Holger Blume

Institute of Microelectronic Systems

[email protected]

Jun.-Prof. Dr.-Ing. Guillermo Payá Vayá

Institute of Microelectronic Systems

[email protected]

Institute of Microelectronic Systems

M. Sc. Florian Giesemann

Institute of Microelectronic Systems

[email protected]

[Diagram] Hardware-in-the-loop setup: a PC running ADTF filters is connected via Ethernet to an FPGA board (processing elements, memory controller, memory), which also receives sensor data. Supported boards: Altera DE2, Xilinx ML605, Altera PCIe 385N, Xilinx VC707.

Youtube video[2]

Hardware in the Loop [1]

• Continuous improvements in advanced driver assistance systems (ADAS) require increasingly sophisticated multimedia algorithms, which rely on enormous processing power
• A high innovation rate in automotive applications leads to rapid changes and many different ADAS algorithms
• State-of-the-art software environments, e.g. ADTF, facilitate ADAS implementation
• The development process for automotive applications and the HW/SW co-design process can be accelerated by implementing complex algorithms on FPGA-based rapid prototyping platforms
• Integration of FPGA emulation platforms in the ADTF software development framework increases simulation speed for algorithms and thereby supports the development process

ADTF filter with underlying FPGA board (orange filter)

ADTF software development framework with case study application

• Connection realized via Ethernet
• The ADTF filter communicates with the emulation board
• Ethernet module in hardware, communication library for software
• The underlying framework supports different boards and is extensible

[Diagram] Design space exploration model: algorithmic, architectural, device and technology constraints and characteristics enter a model library of quantitative cost models; intrinsic and extrinsic requests yield the intrinsic and extrinsic characteristics of a design.

• For a rapid evaluation of hardware platforms by comparing different realizations, and to find the best possible implementations with fixed resources, a design space exploration is necessary
• The evaluation of hardware platforms and the derivation of the associated hardware costs is a laborious and time-consuming task
• To shorten the design space exploration, quantitative cost models of hardware modules can be employed
• The results are intrinsic and extrinsic characteristics of a possible design, which lead to a reduced design space
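A hedged sketch of what such a cost-model-driven exploration could look like (the candidate design points, constraints and cost figures are invented; the actual DESERVE cost models are far more detailed):

```python
# Illustrative design space exploration over quantitative cost models.
# Candidate design points and constraints are invented for demonstration.
from dataclasses import dataclass

@dataclass
class DesignPoint:
    name: str
    luts: int          # estimated FPGA look-up tables
    cycles: int        # estimated execution cycles per frame

def explore(points, max_luts, max_cycles):
    """Keep only points that satisfy the constraints, sorted by cycle count."""
    feasible = [p for p in points if p.luts <= max_luts and p.cycles <= max_cycles]
    return sorted(feasible, key=lambda p: p.cycles)

if __name__ == "__main__":
    candidates = [
        DesignPoint("serial",       luts=20_000,  cycles=9_000_000),
        DesignPoint("4x parallel",  luts=55_000,  cycles=2_400_000),
        DesignPoint("16x parallel", luts=180_000, cycles=700_000),
    ]
    for p in explore(candidates, max_luts=100_000, max_cycles=5_000_000):
        print(p)
```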

Intrinsic requests: properties which are predetermined by the application itself (algorithmic constraints, architectural constraints).

Extrinsic requests: properties which are given by the used device and the employed technology node.

Case Study – Cost Models for SIFT Feature Extraction

Support of multiple FPGA Evaluation Boards

MCPA board BEEcube BEE4

Compose application from building blocks

Model component with several model functions

[Figure] Design space for LUTs: exemplary design space for the number of LUTs (10³) related to the execution cycles (10⁶). The design specifications are indicated by the orange lines, the possible hardware setups are highlighted by the orange box, and the theoretical design optimum is depicted by the purple circle.

[2] https://www.youtube.com/watch?v=pX-fFX7kr5M


[1] F. Giesemann, G. Paya Vaya, H. Blume, M. Limmer, W. Ritter: A Comprehensive ASIC/FPGA Prototyping Environment for Exploring Embedded Processing Systems for Advanced Driver Assistance Applications, SAMOS 2014

Page 19: Desk demo poster book

FPGA-Based Extraction and Matching of Image Features

Application and Algorithm SW/HW-Codesign ADTF Integration and Results

Prof. Dr.-Ing. Holger Blume

Institute of Microelectronic Systems

[email protected]

Jun.-Prof. Dr.-Ing. Guillermo Payá Vayá

Institute of Microelectronic Systems

[email protected]

Dipl.-Ing. Nico Mentzer

Institute of Microelectronic Systems

[email protected]

Institute of Microelectronic Systems

Performance

SIFT SW-mode: ~1 fps @ 95 W
SIFT HW-mode: ~7 fps @ <4 W
Matching SW-mode: >1,000 fps @ 95 W

FPGA Resources* (building of pyramids, location of feature points, computation of gradients)

Wide-Baseline Camera Systems in Vehicles [1][2]
• The motion and vibration of a vehicle lead to decalibration of wide-baseline camera systems
• State-of-the-art algorithms for the calibration of wide-baseline camera systems rely on robust sets of corresponding 2D image points of the same unknown 3D points
• Computation of sparse pixel correspondence maps by extraction and matching of robust SIFT image features for online camera calibration
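For illustration (a hedged sketch using OpenCV on a PC rather than the FPGA implementation described on this poster): SIFT features are extracted in two wide-baseline images and matched with a ratio test to obtain sparse pixel correspondences; the file names are placeholders.

```python
# Illustrative sketch: SIFT feature extraction and matching for sparse pixel
# correspondences, using OpenCV on a PC (the poster describes an FPGA version).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

# Brute-force matching with Lowe's ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_l, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Sparse pixel correspondences usable for online camera calibration.
correspondences = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in good]
print(len(correspondences), "correspondences")
```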

• Building of Gaussian scale space
• Location of feature points in the scale space
• Assignment of orientation based on local image gradient directions
• Computation of keypoint descriptor based on the local image region around keypoint positions

[Diagram] Processing chain: extraction of image features, geometry-based feature matching, sparse pixel correspondences, online camera calibration. SIFT feature extraction stages (building of pyramid, locating of feature points, assignment of orientation, computation of descriptor) range from regular arithmetic to high control complexity.

ML605 Evaluation Board with Xilinx Virtex6 FPGA

Heterogeneous SIFT system:
• FPGA: building of pyramid, locating of feature points, computation of gradient angle and gradient magnitude
• Processor (connected via Ethernet): refinement of feature points, assignment of orientation, computation of descriptor, geometry-based feature matching

ADTF Filters:
• SIFT filter: SW/HW mode; adjust SIFT parameters for algorithmic examination
• Matching filter: choose out of 5 different matchers; adjust matching threshold and size of the search window for algorithmic examination
• Visualization: display execution times of feature extraction & matching; choose between 3 different visualization modes

#Reg: 19,167 (6.3%) | #LUT: 26,995 (18.6%) | #RAMB8: 162 (19.5%) | #DSP48: 176 (23.0%) | 100 MHz system clock
*Xilinx Virtex-6 XC6VLX240T, PAR report

Improvement of quality for feature matching
• Global matching vs geometry-based matching: overlay of the left and right input images with correct and false matches (colour code: yellow = correct, blue = false matches)
• The significant increase of matching quality leads to robust post-processing of pixel correspondences
• Computationally intensive tasks run on the FPGA, control-intensive tasks on the processor
• Accelerated computing of SIFT features by exploiting platform-specific advantages

DESERVE Platform
• The flexible DESERVE platform leads to interchangeability of functional modules
• The rapid prototyping approach supported by FPGAs ensures sufficient processing power for state-of-the-art and future ADAS algorithms

[1] N. Mentzer, G. Payá Vayá, H. Blume: Analyzing the Performance-Hardware Trade-off of an ASIP-based SIFT Feature Extraction, Journal of Signal Processing Systems 2015

[2] N. Mentzer, G. Payá Vayá, H. Blume, N. von Egloffstein, W. Ritter: Instruction-Set Extension for an ASIP-based SIFT Feature Extraction, SAMOS 2014

2nd set of features

Page 20: Desk demo poster book

FPGA-Based Scene Labeling using Neural Networks

Scene Labeling Soft-Core Implementation ADTF Integration and Results

M. Sc. Florian Giesemann Institute of Microelectronic Systems [email protected]

Jun.-Prof. Dr.-Ing. Guillermo Payá Vayá Institute of Microelectronic Systems [email protected]

Prof. Dr.-Ing. Holger Blume Institute of Microelectronic Systems [email protected]

Institute of Microelectronic Systems

Scene Labeling: classification of every pixel in a scene
• Object/obstacle detection
• Road and road-lane detection
• Night-vision augmentation

[Diagram] Neural network topology: three scales (dataS, dataM, dataL) each pass through convolution and pooling layers twice, followed by a fully connected (fc) classification layer.

Convolutional Neural Networks
• State-of-the-art classifier
• Layers of different operations: convolution, pooling, squashing
• Computation-intensive: 6.8×10⁹ operations per frame (image size 1024×512 pixels)
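To make the per-frame figure tangible, a hedged back-of-the-envelope estimate (the layer shapes below are assumptions; only the order of magnitude is comparable to the quoted 6.8×10⁹ operations):

```python
# Back-of-the-envelope estimate of per-frame operations for a small CNN.
# Layer shapes are assumptions, not the actual DESERVE network.
def conv_ops(h, w, k, c_in, c_out):
    """Operations (multiply + add) of a stride-1 k x k convolution layer."""
    return 2 * h * w * k * k * c_in * c_out

# Assumed toy topology: one conv at full resolution, one after 2x2 pooling.
ops = conv_ops(512, 1024, 7, 3, 16) + conv_ops(256, 512, 5, 16, 64)
print(f"{ops:.2e} operations per frame")   # ~9e9, same order as the quoted 6.8e9
```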

[Diagram] TUKUTURI processor architecture: program counter and instruction memory, instruction decoders for two issue slots, multi-shared register file (64 registers), pipeline stages IF/DE, DE/RA, RA/EX1, EX1/EX2, EX2/WB with forwarding and write-back paths, ALU and reconfigurable functional unit (RFU), data memory, DMA controller, shared memory and on-chip bus, reconfigurable interconnection port matrix, and an attached reconfigurable co-processor unit (MAC / convolution engine) with on-the-fly computations.

TUKUTURI [1] Soft-Core Processor
• Generic architecture template
• VLIW: two issue slots
• SIMD: 64-bit registers (1×64, 2×32, 4×16, 8×8)
• DMA for data transfer between external & internal memory

ADTF-FPGA Coupling
• Connection of ADTF filters with the FPGA development board
• Filter input data is transferred to the FPGA board
• Computing-intensive tasks implemented in hardware

Implementation Results
• Speed-up with instruction-set extension: ~5.1
• Speed-up due to memory optimizations: ~2.4
• Current framerate: 0.99 fps

Example of Labeled Image

Application-Specific Extensions
• Co-processor for convolution: programmable filter coefficients, computes 16 filters in parallel, 4×16 subwords in parallel, pipelined (one result per cycle)
• Extended DMA controller: block transfers, queued transfers

Neural Network Topology

TUKUTURI Architecture and Instruction-Set Extension

ADTF Project for Scene Labeling

Performance of Hardware Implementation

[Figure] Framerate (fps, 0–1.2 scale) for the basic implementation, with convolution engine, and with memory optimizations.

[1] Payá Vayá, G.; Burg, R.; Blume, H.: Dynamic Data-Path Self-Reconfiguration of a VLIW-SIMD Soft-Processor Architecture. Workshop on Self-Awareness in Reconfigurable Computing Systems (SRCS), Int. Conf. on Field Programmable Logic and Applications (FPL 2012), 2012.

Giesemann, F.; Payá-Vayá, G.; Blume, H.; Limmer, M.; Ritter, W.: Deep Learning for Advanced Driver Assistance Systems, in: Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems – the DESERVE Approach. Book to be published.

Page 21: Desk demo poster book

Joshué Pérez Rastelli

INRIA [email protected]

Trajectory Generator for urban intersections

-WP44 Automated functions- Context

An algorithm for dynamic path generation in urban environments is presented, taking into account structural and sudden changes in straight and bend segments.

The results show improvements in path generation (previously hand-plotted) by using parametric equations and continuous-curvature algorithms, which guarantee a comfortable lateral acceleration. This work is focused on smooth and safe path generation using road and obstacle detection information.

Segmentation process in intersections

Experimental Results

Conclusions

An Intelligent Trajectory Generator is presented, which achieves real-time path generation, adapting safe planning to the specific characteristics of the vehicle and of the intersections. In the experiments, the longitudinal speed was adapted to keep the acceleration within the comfort limit.

Page 22: Desk demo poster book

Abhishek Ravi

AVL LIST GMBH [email protected]

Model-based Tuning of a Map-Adaptive ACC Function
Velocity control to safely enter/exit curves with comfort and in optimum manoeuvre time.
Parameters tuned: max. acceleration, start of deceleration and control gain.
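A hedged sketch of the underlying idea (not AVL's tuned controller): the target curve speed follows from a comfort lateral-acceleration limit and the map curvature, and the deceleration onset from a maximum deceleration; all numeric values are assumptions.

```python
# Illustrative map-adaptive curve speed and deceleration-onset calculation.
# a_lat_comf, a_decel_max and v_max are assumed tuning parameters.
import math

def curve_speed(curvature: float, a_lat_comf: float = 2.0, v_max: float = 27.8) -> float:
    """Comfortable speed (m/s) for a given map curvature (1/m)."""
    if curvature <= 0.0:
        return v_max
    return min(v_max, math.sqrt(a_lat_comf / curvature))

def decel_start_distance(v_now: float, v_target: float, a_decel_max: float = 1.5) -> float:
    """Distance before the curve (m) at which deceleration must start."""
    if v_now <= v_target:
        return 0.0
    return (v_now ** 2 - v_target ** 2) / (2.0 * a_decel_max)

if __name__ == "__main__":
    v_t = curve_speed(curvature=1 / 150.0)             # 150 m radius curve
    print(v_t, decel_start_distance(v_now=25.0, v_target=v_t))
```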

Page 23: Desk demo poster book

Dr. Matti Kutila Tel. +358 40 820 8334

Email: [email protected]

VTT test vehicle: Fiat 500L

Current DESERVE functions
- Blind spot detection (VRU safety)
- Driver distraction warning
- Low speed collision avoidance
The EU projects CoMoSef, VRUITS and TEAM also utilised the vehicle in co-operative function tests.

Environment perception:
- 1 Bosch LLR2 (16/4, 200 m)
- 1 IBEO Lux lidar (110/3.2, 200 m)
- 1 FLIR PathFinder 320×240 (24/18)
- 1 Vislab 3DV-E29 640×480 (72/54, 65 m)
- 2 Continental SRR 208-2 (150/12, 50 m)

Driver monitoring: - 2 full-HD autofocus cameras

Driver HMI: - Display

Devices

Backoffice

[Diagram] Vehicle setup: GPS and RTK-GPS base station, HMI, V2V & V2I co-operative functions over 802.11p, 4G (LTE) and WLAN to the backoffice; stereo camera, thermal camera, 77 GHz radar and laser scanner.

HMI states:
- No pedestrians nor obstacles, perception working OK: all obstacle arrows OFF
- Pedestrian(s) and obstacle(s): pedestrian warning, relevant arrow(s) ON
- Only obstacle(s): relevant arrow(s) ON, general warning
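A hedged sketch of this warning logic (the enum names and the mapping are illustrative, not VTT's implementation):

```python
# Illustrative HMI warning-state selection for the blind spot / VRU function.
# State names follow the table above; everything else is an assumption.
from enum import Enum, auto

class Warning(Enum):
    NONE = auto()               # all obstacle arrows OFF
    GENERAL = auto()            # relevant arrows ON, general warning
    PEDESTRIAN = auto()         # relevant arrows ON, pedestrian warning

def hmi_state(pedestrian_detected: bool, obstacle_detected: bool) -> Warning:
    if pedestrian_detected:
        return Warning.PEDESTRIAN
    if obstacle_detected:
        return Warning.GENERAL
    return Warning.NONE

if __name__ == "__main__":
    print(hmi_state(pedestrian_detected=False, obstacle_detected=True))  # Warning.GENERAL
```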

Page 24: Desk demo poster book

Erik Nordin

Volvo Technology [email protected]

Volvo driving simulator: Volvo FH 6x2

DESERVE functions
- Adaptive Cruise Control (ACC with S&G)
- Collision Warning and Emergency Braking (CWEB)

Context

[Diagram] PreScan provides the driver model, environmental model and Volvo FH vehicle model (chassis and powertrain), connected to the DESERVE platform: Perception Platform, Application Platform, IWI Platform.

Page 25: Desk demo poster book

Erik Nordin

Volvo Technology [email protected]

Volvo concept truck: Volvo FH 6x2

DESERVE functions
- Adaptive Cruise Control (ACC with S&G)
- Collision Warning and Emergency Braking (CWEB)

Platform Integrated

Page 26: Desk demo poster book

Mr. Aarno Lybeck Tel. +358 44 714 3763

Email: [email protected]

TTS concept vehicle: Iveco Stralis 560 e6

Current DESERVE functions
- Blind spot detection
- Driver distraction warning
- Low speed collision avoidance
The DESERVE platform has been used to develop these functions for training professional truck drivers.

HMI states:
- No obstacles, system working OK: perception working OK; 360° view, no side lights; all obstacle arrows OFF
- Obstacle(s) or pedestrian(s): detections (person & pole) and side lights; general warning; relevant arrow(s) and corresponding side light(s) ON

Obstacle illumination is part of the HMI; illuminated obstacles are more obvious in the driver display, especially at night-time. Driver gaze monitoring is utilised e.g. for factual feedback to driver candidates.

Environment perception: - 3 Vislab 3DV-E29 640x480 - 5 Continental SRR 208-2 - 8 Ultrasonic sensors

Driver monitoring: - 3 full-HD autofocus cameras

Driver HMI: - Obstacle warning screen - ASL 360 camera system screen - Iveconnect integrated screen

Devices

Page 27: Desk demo poster book

Nereo Pallaro

C.R.F. [email protected]

CRF concept car: Fiat 500L

Function: Autonomous Emergency Braking (AEB) for Pedestrian with integrated Driver Distraction
Enhanced warning strategies with integrated Driver Distraction
DESERVE platform: from MIL/SIL simulation to on-vehicle RCP

[Diagram] Escalation towards a potential crash: no danger, pedestrian in danger detected, Visual Warning + PreFill (without distraction) or Audio-Visual Warning + PreFill (with distraction), no driver reaction, autonomous brake.
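A hedged sketch of such an escalation strategy (the threshold values, timing and action names are assumptions, not CRF's calibrated logic):

```python
# Illustrative AEB-for-pedestrian escalation with integrated driver distraction.
# TTC thresholds and the action names are assumptions for demonstration only.
def aeb_action(ttc_s: float, pedestrian_in_danger: bool,
               driver_distracted: bool, driver_reacted: bool) -> str:
    if not pedestrian_in_danger:
        return "no action"
    if ttc_s < 0.8 and not driver_reacted:
        return "autonomous brake"
    if ttc_s < 2.0:
        # Distracted drivers get an additional audio channel for the warning.
        return "audio-visual warning + brake prefill" if driver_distracted \
            else "visual warning + brake prefill"
    return "monitor"

if __name__ == "__main__":
    print(aeb_action(1.5, True, driver_distracted=True, driver_reacted=False))
```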

Functional block diagram and system architecture

[Diagram] LDW camera (CONTI) and interior camera (CONTI) on a private LDW CAN, MRR radar (SMS) on a local CAN, Peak PCAN router bridging C-CAN and B-CAN, dSPACE prototyping unit with digital I/O, demo display connected via VGA/USB/RS232. Functional blocks: Perception platform (radar, front camera, interior camera, vehicle data, driver distraction, vehicle trajectory), Application platform (obstacle selection, threat assessment, vehicle control), IWI platform (IWI manager, audio, display, powertrain, brake).

Warning icons by CTAG

Page 28: Desk demo poster book

Toolchain for the integrated development & validation of ADAS functions

[Diagram] Simulation toolchain (MIL, HIL step 1, HIL step 2): environment & vehicle dynamics (PreScan, ASM, dynamic simulator), perception platform (RTMaps) and application/IWI platform (Matlab/Simulink), executed on PCs for MIL and migrated to the embedded PC, the embedded controller and the mid-size vehicle for HIL.

Environment & vehicle dynamics

Function: Autonomous Emergency Braking (AEB) for Pedestrian.

Simple scenario

Complex scenario

Paolo Denti C.R.F. [email protected]

HIL – Step 2 MIL

Page 29: Desk demo poster book
Page 30: Desk demo poster book
Page 31: Desk demo poster book
Page 32: Desk demo poster book