
Multi-modal Sensor Data and Information Fusion for Localization in Indoor Environments

Lasse Klingbeil*, Richard Reiner*, Michailas Romanovas*†, Martin Traechtler* and Yiannos Manoli*†

*Hahn-Schickard-Gesellschaft, Institute of Microsystems and Information Technology (HSG-IMIT), Wilhelm-Schickard-Straße 10, 78052 Villingen-Schwenningen, Germany, email: [email protected]

†University of Freiburg, Department of Microsystems Engineering (IMTEK), Chair of Microelectronics, Georges-Köhler-Allee 101, 79110 Freiburg, Germany

Abstract—The work presents the development of a framework for sensor data and complementary information fusion for localization in indoor environments. The framework is based on a modular and flexible sensor unit, which can be attached to a person and which contains various sensor types, such as range sensors, inertial and magnetic sensors or barometers. All measurements are processed within Bayesian Recursive Estimation algorithms and combined with available a priori knowledge such as map information or human motion models.

I. INTRODUCTION

There has been much interest in the localization of people in indoor environments in recent years. This has been driven by international research programs in the area of health and elderly care [1] (patient tracking, monitoring of people with dementia) as well as in the area of safety (localization of first responders [2], underground mining safety [3], etc.). There are numerous localization methods and techniques, including inertial navigation, multilateration or multiangulation based on radio or acoustic signals, and optical methods such as image processing or laser range scanning (see [4]). However, most of these methods provide sufficient performance only for special types of applications within controlled or restricted environments. Localization of people in indoor environments has two challenging factors: 'people' and 'indoor'. For example, the straightforward application of strapdown inertial navigation algorithms is difficult due to the complexity of human movements and accumulating errors. Also, any sensor system needs to be wearable, which rules out certain sensors, such as the laser range scanners that are successfully used in robotic applications [5]. Radio based methods usually suffer from significant errors induced by multi-path wave propagation, which results in highly complex behavior in indoor applications.

In the presented work we follow the approach that a robust and usable indoor localization system cannot be based on one of those methods alone, but has to combine various complementary sensor data and any other available information in order to provide a reliable position estimate. Therefore we developed a modular wearable sensor system that enables an easy incorporation of various sensors, and supplemented it with a corresponding estimation algorithm for the fusion of the sensor data with other a priori knowledge such as indoor maps and given motion constraints.

II. SENSOR SYSTEM

Figure 1 (left) shows the concept of the proposed sensor system. Several PCBs ('Bricks'), each designed to fulfill a certain task (e.g. providing power, enabling wireless communication) or to contain a certain sensor group (e.g. inertial sensors, GPS, ultrasound sensors), can be stacked together to form the setup best fitting the requirements of the particular application. New Bricks can be easily developed and attached, since all Bricks are connected to a common bus for data, signals and power.

Fig. 1. Sensor System based on ’Bricks’.

In the current version of the system (see Fig. 2) we use acceleration, angular rate and magnetic field sensors, batteries and a Nanotron™ radio transceiver module [6]. The Nanotron™ module enables simultaneous wireless communication and time-of-flight based distance measurement between sender and receiver. The distance measurement accuracy is about 1 m under ideal measurement conditions. We attach several of those units to known positions within the building ('anchors') and one unit to the person to be tracked (Fig. 1, right). All sensor information, including range measurements


to the fixed sensor units, is transferred to a PC for further processing.

Fig. 2. Current version of the sensor unit, containing a Power Brick, an Inertial Brick (accelerometers, gyroscopes and magnetic field sensors) and a Wireless Brick (Nanotron™ ranging and communication module).

III. ESTIMATION ALGORITHMS

A. Recursive Bayesian Estimation

Recursive Bayesian estimation algorithms are often used [5], [7] to estimate the state $x_k$ of a system at the time $t_k$ based on all measurements $Z_k = \{z_0, \ldots, z_k\}$ up to that time. The state estimate is represented as a probability density function $p(x_k \mid z_0, \ldots, z_k)$ and can be calculated using Bayes' rule:

$$ p(x_k \mid Z_k) = \frac{p(Z_k \mid x_k)\, p(x_k)}{p(Z_k)}. \qquad (1) $$

This can be transformed into a recursive equation:

$$ p(x_k \mid Z_k) = \frac{p(z_k \mid x_k)\, p(x_k \mid Z_{k-1})}{p(z_k \mid Z_{k-1})} = \eta \cdot p(z_k \mid x_k)\, p(x_k). \qquad (2) $$

The term $\eta = 1/p(z_k \mid Z_{k-1})$ is a normalization factor. The term $p(x_k \mid Z_k)$ is called the a posteriori probability and describes the current state estimate using all measurements up to now. The term

$$ p(x_k) = p(x_k \mid Z_{k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid Z_{k-1})\, dx_{k-1} \qquad (3) $$

is called the a priori probability and describes the current estimate using all but the current measurement. It contains the a posteriori estimate $p(x_{k-1} \mid Z_{k-1})$ of the last time step and the term $p(x_k \mid x_{k-1})$, which represents the process model describing the knowledge about the dynamics of the system and the corresponding uncertainties. The term $p(z_k \mid x_k)$ represents the measurement model and relates the observations of the system (the measurements) to the state, considering also the sensor uncertainties.

With all these terms in mind, any recursive Bayesian estimation cycle is performed in two steps:

Prediction The a priori probability is calculated from the last a posteriori probability using the process model (3).

Correction The a posteriori probability is calculated from the a priori probability using the measurement model (2) and the current measurement.

Various implementations of recursive Bayesian estimation algorithms differ in the way the probabilities are represented and transformed in the process and measurement models. If the models are linear and the probabilities are Gaussian, the Kalman filter (KF) is an efficient and optimal solution in the least-squares sense. If the models are nonlinear, Unscented (UKF) [8], [9] or Extended Kalman Filters (EKF) [5] can be used. In the Extended Kalman Filter the models are linearized using Jacobian matrices. In the Unscented Kalman Filter the probability distribution is approximated using a set of deterministically chosen points in the state space, which preserves the Gaussian properties of the distribution under nonlinear transformations. In Sequential Monte Carlo filters, such as the particle filter (PF), the probability densities are represented by a set of random state space samples drawn from the corresponding distribution. Although these filters require higher computational resources, they are especially useful for non-Gaussian or unknown probability distributions and for the incorporation of higher level or empirical information such as maps.
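To make the two-step cycle concrete, the following minimal sketch (ours, not from the paper) implements the prediction and correction steps for the linear-Gaussian case, where the recursion reduces to the Kalman filter; the matrices F, Q, H and R (process model, process noise, measurement model and measurement noise) are assumed to be supplied by the application.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction: propagate the state estimate through the process model."""
    x = F @ x            # predicted (a priori) mean
    P = F @ P @ F.T + Q  # covariance grows by the process noise
    return x, P

def kf_correct(x, P, z, H, R):
    """Correction: update the a priori estimate with the measurement z."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y                     # a posteriori mean
    P = (np.eye(len(x)) - K @ H) @ P  # a posteriori covariance
    return x, P
```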

B. Particle Filter based Localization

While the equations above describe the general concept of recursive Bayesian estimation, we now focus on its application to localization problems and on its implementation via particle filter. The state $x$ is represented by a number of weighted samples $x^{[i]}$, $i = 1, \ldots, N$ with weights $w^{[i]}$ as:

$$ p(x) \approx \sum_i w^{[i]} \delta(x - x^{[i]}), \qquad (4) $$

with $\delta(x)$ being the Dirac delta function. The state variable contains all information necessary to describe the current state of the system and to predict the future state based on the process model. For the simplest case of localization of an object (e.g. a person) in a single-floor building, this can be just the 2D coordinates of the object, $x = (p_x, p_y)^T$. Depending on the application, the state could also contain full 3D coordinates, velocities, accelerations, higher level motion states such as 'walking', 'standing', 'flying', or, in the case of a SLAM (Simultaneous Localization and Mapping) problem [10], information about the map built during the estimation process.

Starting from the initial probability $p(x_0)$, which may be represented as uniformly distributed samples with equal weights, the recursive update is performed as follows:

Prediction Every sample $(x_{k-1}^{[i]}, w_{k-1}^{[i]})$ of the last a posteriori probability $p(x_{k-1} \mid Z_{k-1})$ is replaced by a new sample according to the process model $p(x_k \mid x_{k-1})$, resulting in a new set of samples $(x_k^{[i]}, w_k^{[i]})$, representing the a priori probability at time $k$.

Correction The weight $w^{[i]}$ of every sample of the a priori probability is updated according to the measurement model:

$$ w_k^{[i]} = w_k^{[i]} \cdot p(z_k \mid x_k^{[i]}), \qquad \sum_i w_k^{[i]} = 1. \qquad (5) $$

This reweighted set of samples now approximates the a posteriori probability.

Resampling In this additional step, a new set of samples is drawn with replacement from the prior set, with the probability of a sample being drawn given by its weight factor. The final set represents the new posterior as well, but now the samples are equally weighted. At this point we can start with a new prediction step.

During the resampling step, unlikely samples are omitted and replaced by more likely samples, which leads to many duplicates in the final set. To avoid ending up with only a single sample (algorithm degeneracy), noise is introduced during the prediction step to separate samples that have the same values.
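The following sketch (ours, not from the paper) assembles the three steps into one bootstrap particle filter cycle; process_model and likelihood are placeholders for the application-specific models $p(x_k \mid x_{k-1})$ and $p(z_k \mid x_k)$ discussed in the next subsection.

```python
import numpy as np

def particle_filter_step(particles, weights, z, process_model, likelihood):
    """One predict/correct/resample cycle of a bootstrap particle filter.

    process_model(particles) draws new samples from p(x_k | x_{k-1})
    (including the noise that prevents degeneracy); likelihood(z, particles)
    evaluates p(z_k | x_k^[i]) for every sample.
    """
    # Prediction: move every sample according to the process model.
    particles = process_model(particles)

    # Correction: reweight samples by the measurement likelihood, eq. (5).
    weights = weights * likelihood(z, particles)
    weights /= np.sum(weights)

    # Resampling: draw N samples with replacement, probability = weight.
    n = len(weights)
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)  # resampled set is equally weighted
    return particles, weights
```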

C. Process and Measurement Models

The main challenge in designing localization algorithms is the formulation of the process and the measurement models. Various sensor modalities, as used in the sensor system described before, and their possible representation in the process and measurement models of a localization algorithm are listed below. It should be noted here that a sensor reading does not have to be treated as a measurement or observation, but can also be treated as a control input. In this case the sensor reading is part of the process model and therefore affects the prediction step of the estimation cycle. This is often done in the case of incremental sensors, such as odometers or angular rate sensors. Some sensor readings also do not have to be used directly: for example, the data of a set of inertial sensors can be processed in a separate algorithm to estimate the orientation of an object, which then can be used as an observation for a higher level filter.

1) Inertial Sensors: Inertial sensors (accelerometers and gyroscopes) can be used for various purposes in a localization system. In general they can be employed to estimate the 3D orientation of an object in space (mostly in combination with a magnetometer to compensate gyroscope drift) [11]. This information is helpful for extracting the pure translational acceleration signal without the gravity components, which are also sensed by the accelerometers. This acceleration signal can be integrated to a velocity estimate. The accelerometer signal can also be used to derive higher level information, such as motion states (e.g. 'walking', 'standing', 'running') [12], for switching between different process models, or to detect steps and the corresponding stride length for walking speed estimation [13]. Any orientation information in combination with a velocity estimate can be used either as a control input or as an observation of the system, depending on the actual implementation of the state variable and the process model.
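As a hypothetical illustration of the gravity compensation mentioned above, the following sketch assumes the orientation filter provides a body-to-world rotation matrix R_bw; the function name and sign convention are ours, not from the paper.

```python
import numpy as np

def translational_acceleration(a_body, R_bw, g=9.81):
    """Remove gravity from an accelerometer reading.

    a_body: 3-axis accelerometer reading in the body frame (m/s^2);
    R_bw:   3x3 rotation matrix from body to world frame, e.g. from an
            orientation filter such as the one in [11].
    Returns the translational acceleration in the world frame.
    """
    a_world = R_bw @ a_body
    a_world[2] -= g  # subtract gravity along the world z-axis (z up)
    return a_world
```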

2) Distance Measurements: Distance measurements between the tracked object and known or unknown positions in the environment can be performed with ultrasonic sensors [14], [15], laser range scanners, radio waves or, more recently, time-of-flight cameras [16]. Radio based techniques include ultra-wideband (UWB) ranging [17]–[19], Chirp Spread Spectrum modulation [20] (as used in the system described here) and range estimation based on the Received Signal Strength (RSS) of any radio signal [21], [22]. Single distances or precomputed positions based on multiple distances (multilateration) are considered as observations of the system and need to be described in the measurement model.

3) GPS: A GPS receiver provides 3D position and velocity information of an object and is therefore directly observing the state (or parts of it) in a localization system. If the pseudo-ranges measured by the receiver are used in the algorithm, the information can be considered as a distance measurement. However, GPS receivers cannot be considered as the main measurement modality for indoor localization systems.

4) Pressure Sensor: A pressure sensor can serve as a barometer estimating the vertical position of an object, providing a direct observation of a part of the state variable.

5) Camera: Cameras are often used to detect and track moving objects while observing the tracking volume from fixed positions. This outside-in approach is not considered here, since all necessary information in the proposed system is gathered by the sensing unit itself while it is attached to the object or person (inside-out). A moving camera can detect features, such as corners, colors or optical markers. Combined with a camera model and a feature map, camera data are incorporated into the algorithm via the measurement model. There are also optical flow based methods, which can be used as a control input in the prediction step (visual odometry [23]).

6) Map Information: Maps contain information about the tracking environment. This could be simply the positions of anchor nodes when multilateration is used (similar to the GPS almanac, which contains the positions of all satellites), or it could be a grid covering the tracking area with information about free and occupied spaces, such as walls and doors (occupancy grid). Maps can also contain the positions of features detected by cameras or laser range scanners (e.g. optical markers or corners), or they can describe the RSS profile of the tracking environment [24]. These maps are mostly part of the measurement model, since they relate the state prediction with sensor readings. A map can be either known or determined before the actual estimation process, or it can be estimated as part of the state vector during the process (SLAM).

All sensors and measurement principles described above have been used in various localization systems. We propose that, in principle, all of them can be employed in a single system at once. That alone may not be very rewarding, but the possibility to set up any combination of various sensing modalities, both from the hardware point of view and from the algorithmic point of view, enables easy access to the development, evaluation and adaptation of localization systems


and algorithms for various applications. In the following section we describe the work with an early version of the sensor system, which uses radio based distance measurements, inertial and magnetic sensors and an indoor map to localize people in an office environment.

IV. IMPLEMENTATION EXAMPLE

The sensor unit contains 3-axis accelerometers, gyroscopes and magnetometers and a Nanotron™ wireless transceiver chip. The unit is attached to or carried by the person to be tracked. Four sensor units containing only the transceiver chip are fixed at known positions in the tracking area (anchor nodes). While the person is moving through the building, the sensor data are locally preprocessed and then sent to a PC for further processing. At the same time, consecutive distance measurements are performed between the moving sensor unit and the anchor nodes using the transceiver chip, and the data are also transferred to the PC. All information is processed and visualized on the PC using MATLAB™. Since all measurements are gathered by the sensor unit, it is in principle possible to do all filtering locally and send only position data to a central station. However, this approach requires higher computational resources on the sensor unit and is not considered at the current phase of the work.

A. Prediction

The state vector is given by the position of the person in two dimensions. In the prediction step all samples are updated using the process model:

$$ x_k^{[i]} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}_k^{[i]} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}_{k-1}^{[i]} + v_k^{[i]} \cdot \Delta t_k \cdot \begin{pmatrix} \sin(h_k^{[i]}) \\ \cos(h_k^{[i]}) \end{pmatrix}, \qquad (6) $$

where $v_k^{[i]}$ and $h_k^{[i]}$ are the velocity and heading measurements estimated using the inertial and magnetic sensors, as explained below. Together with the time step $\Delta t_k$ they represent an incremental position step and are therefore treated as a control input to the process. As mentioned before, it is important to introduce noise into the prediction model to avoid degeneracy of the filter. This is done in a natural way using the uncertainty of the velocity and heading information:

$$ v_k^{[i]} = v_k + n_v^{[i]}, \qquad n_v^{[i]} \sim N(0, \sigma_v^2) \qquad (7) $$

$$ h_k^{[i]} = h_k + n_h^{[i]}, \qquad n_h^{[i]} \sim N(0, \sigma_h^2). \qquad (8) $$

The velocity $v_k$ is calculated using the accelerometers on the sensor system. A simple integration of the accelerometers to determine velocity is not suitable in the case of walking or running humans, since the needed 'forward' acceleration component is disturbed by components such as orientation changes, walking steps and other body movements, which are usually in the same frequency and amplitude range. Therefore a step detection algorithm in combination with an assumed fixed step length is used to estimate the speed of the person.

The description of this algorithm is beyond the scope of this paper, but it is similar to the one used in [25]. There are other methods to derive the walking distance, such as double integration of a foot-mounted accelerometer in combination with Zero Velocity Updates, or step length estimation based on the accelerometer signal [26]. They potentially provide a better accuracy in estimating the traveled distance, but have not been implemented at this stage of the work.

The heading $h_k$ of the person is calculated from the orientation of the sensor unit, which is estimated using a quaternion based UKF as described in [11]. It should be mentioned here that using this three-degrees-of-freedom orientation filter only to estimate the heading of the person may seem an unnecessary effort, since the heading can also be measured by projecting the Earth's magnetic field vector onto the Earth tangential plane. However, the Earth's magnetic field can be heavily disturbed within buildings, and the inclusion of the gyroscope information helps to cope with certain short-term magnetic disturbances. Also, it may be of general interest to know the full orientation of the device attached to the person, for example for the detection of motion states other than walking (e.g. 'lying on the floor').
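A minimal sketch of this prediction step, implementing (6)-(8) for an array of samples; the noise levels sigma_v and sigma_h are illustrative placeholders, since the paper does not state the values used.

```python
import numpy as np

def predict(particles, v_k, h_k, dt, sigma_v=0.1, sigma_h=0.2):
    """Prediction step of eqs. (6)-(8): propagate 2D position samples.

    particles: (N, 2) array of positions; v_k, h_k: velocity (from step
    detection) and heading (from the orientation filter); sigma_v and
    sigma_h are illustrative noise levels, not values from the paper.
    """
    n = len(particles)
    v = v_k + np.random.normal(0.0, sigma_v, n)  # eq. (7)
    h = h_k + np.random.normal(0.0, sigma_h, n)  # eq. (8)
    particles[:, 0] += v * dt * np.sin(h)        # eq. (6)
    particles[:, 1] += v * dt * np.cos(h)
    return particles
```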

B. Correction

During the correction step, the range measurements and the indoor map are used to derive the a posteriori probability from the a priori one determined during the prediction step. According to (5), the weights $w_k^{[i]}$ of the samples are updated using the likelihood $p(z_k \mid x_k^{[i]})$ of a measurement $z_k$, given the predicted state samples $x_k^{[i]}$. For every sample $x_k^{[i]}$ of the predicted sample set, its distance $d_{n,k}^{[i]} = \| X_n - x_k^{[i]} \|$ to every fixed sensor node position $X_n$ is compared to the range $\hat{d}_{n,k}$, which was actually measured. In the ideal case, when Gaussian zero mean measurement noise is assumed, the likelihood would be proportional to a normal distribution $N(0, \sigma^2)$.

Fig. 3. Sketch of the distribution of ranging errors. Even in the line-of-sight case, the errors are not normally distributed. The overlaid polar plot shows a subset of the range measurements, distributed over the section where they have been taken, at a fixed known distance of 5 m.


However, especially in indoor non-line-of-sight environments, the measured range is usually larger than the actual distance. This is due to the occlusion of the direct path by walls and corners and the resulting reflection of the signal (multi-path propagation). Figure 3 illustrates the distribution of ranging errors. Several range measurements have been taken from various positions in the environment at a distance of 5 m to the anchor node. It can be seen that even in the line-of-sight case the errors are not normally distributed. In fact they are time correlated, because the errors due to multi-path propagation are similar within certain areas. This is not considered in our model at the moment, but as an improvement over the usual normal distribution we assume an asymmetric Gaussian distribution $N(0, \sigma_1^2, \sigma_2^2)$ as an error model:

$$ p(z_k \mid x_k) = p(\hat{d}_{n,k} \mid x_k, X_n) = \begin{cases} N(0, \sigma_1^2), & (d_{n,k}^{[i]} - \hat{d}_{n,k}) \ge 0 \\ N(0, \sigma_2^2), & (d_{n,k}^{[i]} - \hat{d}_{n,k}) < 0 \end{cases} \qquad (9) $$

For the positive part of $d_{n,k}^{[i]} - \hat{d}_{n,k}$, that is, the measurement being smaller than its prediction, we assume a relatively small error of about $\sigma_1 = 1$ m. For the negative part we assume an error $\sigma_2 = 0.5 \cdot \hat{d}_{n,k}$, which depends on the measured distance itself, allowing the measured range to be much larger than the actual distance. Other error models for the distance measurement in multi-path environments, and the prediction of errors based on geometric constraints given by the environment, are subjects of our current research.
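A sketch of the asymmetric error model (9) as an unnormalized likelihood evaluated over all samples; the function name and vectorized form are ours, with sigma1 = 1 m and sigma2 = 0.5 times the measured range as stated above.

```python
import numpy as np

def range_likelihood(d_pred, d_meas, sigma1=1.0):
    """Unnormalized asymmetric Gaussian likelihood of eq. (9).

    d_pred: predicted distances ||X_n - x_k^[i]|| for all samples (array);
    d_meas: the actually measured range to anchor n.
    A small error (sigma1 = 1 m) applies when the measurement is smaller
    than the prediction; a distance-dependent error (sigma2 = 0.5 * d_meas)
    applies when the multipath-prone measurement overshoots.
    """
    err = d_pred - d_meas
    sigma2 = 0.5 * d_meas
    sigma = np.where(err >= 0.0, sigma1, sigma2)
    return np.exp(-0.5 * (err / sigma) ** 2)
```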

Fig. 4. Occupancy grid as representation of the indoor map.

We also use a map of the indoor environment as an additional input to the localization algorithm. It can be assumed that people usually do not walk through walls, so a plan of the existing walls in the environment can be used to naturally constrain the estimate of a person's position. In the presented system this is done using an occupancy grid of the environment (see Fig. 4). The environment is represented as a matrix, with each entry corresponding to a 10 cm x 10 cm area. The value '0' in the matrix means 'free' or 'walkable' (coded as black in the figure) and '1' means 'occupied' (coded as white). There are other methods to represent the environment, such as topological maps [5], which are not considered here. After every prediction step, the path increment $x_k^{[i]} - x_{k-1}^{[i]}$ of every sample is checked for crossing an 'occupied' part of the map. If it does, the weight of that sample is set to a very small value; otherwise it remains unchanged. This is done as part of the correction step and applied before the incorporation of the range measurements.
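A possible implementation of this check, sampling the path increment at sub-cell resolution rather than using an exact grid line traversal; the names, the sampling scheme and the omitted bounds handling are our assumptions, not details from the paper.

```python
import numpy as np

def crosses_wall(grid, p_old, p_new, cell=0.1):
    """Check whether a path increment crosses an occupied grid cell.

    grid: 2D array, 0 = free, 1 = occupied (10 cm x 10 cm cells);
    p_old, p_new: sample positions in metres. The segment is sampled at
    half-cell spacing; positions outside the grid are not handled here.
    """
    steps = max(2, int(np.linalg.norm(p_new - p_old) / (0.5 * cell)) + 1)
    for t in np.linspace(0.0, 1.0, steps):
        p = p_old + t * (p_new - p_old)
        i, j = int(p[1] / cell), int(p[0] / cell)  # row = y, column = x
        if grid[i, j] == 1:
            return True
    return False
```

A sample for which crosses_wall returns True would then have its weight set to a very small value before the range update.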

V. MEASUREMENTS

Figure 5 illustrates the principle of the particle filter and the influence of the occupancy grid on the particle distribution.

Fig. 5. Estimated path, particle distribution, range measurements and heading at one particular time step.

The green dots are the particles and the blue dots mark the estimated path so far, where the mean of all particles has been calculated for every estimation step (more elaborate ways to extract the position from the sample set are possible, but are not considered here). It can be seen that the cloud is separated by an obstacle into two subsets representing two possible walking trajectories which are consistent with the measurements. Eventually one cloud will disappear if the measurements do not support the sample positions, or, depending on the type of obstacle, the clouds may merge after passing it.

The arrow represents the heading measurement, which is assumed to be the moving direction. The red stars correspond to the positions of the anchor nodes and the red circles show the current range measurements to those nodes. The particles which are not clustered around the current estimate are 'free' particles, which are distributed randomly over the whole tracking area in every prediction step to avoid the 'kidnapping problem' (the track of the object is lost and the object reappears in a different area without a chance of being tracked again) and to avoid an object being trapped in the corner of a room or behind a wall due to map constraints.

Figure 6 shows a number of estimated paths of a person walking through an office environment. The same measurement data set is used, but the algorithm was repeated multiple times. In (b) the map information is used in the correction step; in (a) this information is ignored. It can be seen that the map improves the trajectory accuracy significantly. Despite the map constraints, there are still trajectories passing through walls. This is due to the fact that the constraints are applied to single particles and not to the mean estimate calculated from the particle cloud. Further improvements are possible here. It is also important to note that the variability of the trajectories is reduced. In particle filters, unlike Kalman filters, random sample generation is involved, which can potentially lead to varying trajectories when repeating the estimation


Fig. 6. Multiple (10) algorithm iterations using the same dataset, in (a) using map information and in (b) without the map. The red line shows roughly the reference path. The plot covers an area of 35 m x 15 m.

on the same data set. This effect is reduced due to the mapconstraints.

VI. CONCLUSION AND FUTURE WORK

We presented a system for sensor data and information fusion to localize people in indoor environments. The system consists of both a modular hardware setup and a modular algorithm framework, which can be adapted to the needs of various applications. Some example measurements were shown, where a person is tracked in an office environment based on inertial, magnetic field and range sensors as well as on an indoor map.

The implementation of other sensors (GPS receivers for seamless indoor/outdoor tracking, ultrasound sensors for higher ranging accuracy or radar type information, a barometer for tracking in multi-storey buildings) and various modifications of the estimation algorithms (use of topological maps, better motion and measurement models), as well as an evaluation of the whole system, are subjects of current and future work.

REFERENCES

[1] U. Varshney, "Pervasive healthcare and wireless health monitoring," Mobile Networks and Applications, vol. 12, pp. 113–127, 2007.

[2] M. Klann, "Tactical navigation support for firefighters: The LifeNet ad-hoc sensor-network and wearable system," in Mobile Response, ser. Lecture Notes in Computer Science. Springer Berlin/Heidelberg, 2009, pp. 41–56.

[3] A. Chehri, P. Fortier, and P. M. Tardif, "UWB-based sensor networks for localization in mining environments," Ad Hoc Networks, vol. 7, no. 5, pp. 987–1000, 2009.

[4] J. Hightower and G. Borriello, "Location systems for ubiquitous computing," Computer, vol. 34, no. 8, pp. 57–66, 2001.

[5] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, September 2005.

[6] Realtime Location Systems (Whitepaper), Nanotron Technologies GmbH, 2007. [Online]. Available: www.nanotron.com

[7] J. V. Candy, Bayesian Signal Processing: Classical, Modern and Particle Filtering Methods. New York, NY, USA: Wiley-Interscience, 2009.

[8] S. Julier, J. Uhlmann, and H. Durrant-Whyte, "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 477–482, Mar. 2000.

[9] R. van der Merwe, "Sigma-point Kalman filters for probabilistic inference in dynamic state-space models," in Workshop on Advances in Machine Learning, Montreal, June 2003.

[10] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: Part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99–110, June 2006.

[11] M. Romanovas, L. Klingbeil, M. Traechtler, and Y. Manoli, "Efficient orientation estimation algorithm for low cost inertial and magnetic sensor systems," in 2009 IEEE Workshop on Statistical Signal Processing. Cardiff, Wales, UK: IEEE, 2009.

[12] J. Parkka, M. Ermes, P. Korpipaa, J. Mantyjarvi, J. Peltola, and I. Korhonen, "Activity classification using realistic data from wearable sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 119–128, Jan. 2006.

[13] S.-W. Lee and K. Mase, "Activity and location recognition using wearable sensors," IEEE Pervasive Computing, vol. 1, pp. 24–32, 2002.

[14] M. Hazas and A. Hopper, "Broadband ultrasonic location systems for improved indoor positioning," IEEE Transactions on Mobile Computing, vol. 5, pp. 536–547, 2006.

[15] J. Gonzalez and C. Bleakley, "High-precision robust broadband ultrasonic location and orientation estimation," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 5, pp. 832–844, Oct. 2009.

[16] A. Prusak, O. Melnychuk, H. Roth, I. Schiller, and R. Koch, "Pose estimation and map building with a time-of-flight camera for robot navigation," Int. J. Intell. Syst. Technol. Appl., vol. 5, no. 3/4, pp. 355–364, 2008.

[17] D. Harmer, A. Yarovoy, N. Schmidt, K. Witrisal, M. Russell, E. Frazer, T. Bauge, S. Ingram, A. Nezirovic, A. Lo, L. Xia, B. Kull, and V. Dizdarevic, "An ultra-wide band indoor personnel tracking system for emergency situations (EUROPCOM)," in European Radar Conference (EuRAD 2008), Oct. 2008, pp. 404–407.

[18] M. Segura, V. Mut, and H. Patino, "Mobile robot self-localization system using IR-UWB sensor in indoor environments," in IEEE International Workshop on Robotic and Sensors Environments (ROSE 2009), Nov. 2009, pp. 29–34.

[19] Z. Guoping and S. Rao, "Position localization with impulse ultra wide band," in Proceedings of Wireless Communications and Applied Computational Electromagnetics, April 2005, pp. 17–22.

[20] B. Neuwinger, U. Witkowski, and U. Ruckert, "Ad-hoc communication and localization system for mobile robots," in Proceedings of the FIRA RoboWorld Congress 2009 on Advances in Robotics. Berlin, Heidelberg: Springer-Verlag, 2009, pp. 220–229.

[21] A. Paul and E. Wan, "RSSI-based indoor localization and tracking using sigma-point Kalman smoothers," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 5, pp. 860–873, Oct. 2009.

[22] S. Mazuelas, A. Bahillo, R. Lorenzo, P. Fernandez, F. Lago, E. Garcia, J. Blas, and E. Abril, "Robust indoor positioning provided by real-time RSSI values in unmodified WLAN networks," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 5, pp. 821–831, Oct. 2009.

[23] D. Nister, O. Naroditsky, and J. Bergen, "Visual odometry," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2004, pp. 652–659.

[24] C. M. Takenga and K. Kyamakya, "Robust positioning system based on fingerprint approach," in MobiWac '07: Proceedings of the 5th ACM International Workshop on Mobility Management and Wireless Access. New York, NY, USA: ACM, 2007, pp. 1–8.

[25] L. Klingbeil and T. Wark, "A wireless sensor network for real-time indoor localisation and motion monitoring," in Proc. International Conference on Information Processing in Sensor Networks (IPSN '08), 22–24 April 2008, pp. 39–50.

[26] Q. Ladetto, "On foot navigation: Continuous step calibration using both complementary recursive prediction and adaptive Kalman filtering," in Proceedings of the ION GPS, Salt Lake City, Utah, 2000.
