
Review Article
A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

L. Payá, A. Gil, and O. Reinoso

Universidad Miguel Hernández de Elche, Avda. de la Universidad s/n, Elche, Spain

Correspondence should be addressed to O. Reinoso; o.reinoso@goumh.umh.es

Received 19 October 2016; Revised 19 February 2017; Accepted 15 March 2017; Published 24 April 2017

Academic Editor: Lucio Pancheri

Copyright © 2017 L. Payá et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi Journal of Sensors, Volume 2017, Article ID 3497650, 20 pages. https://doi.org/10.1155/2017/3497650

Nowadays, the field of mobile robotics is experiencing a rapid evolution, and a variety of autonomous vehicles is available to solve different tasks. The advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment, localizing within them, and navigating are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems; how researchers have addressed them by means of omnidirectional vision; the main frameworks they have proposed; and how they have evolved in recent years.

1. Introduction

Over the last few years, the range of applications of mobile robots has increased significantly, and they can be found in diverse environments, such as household, industrial, and educational settings, where they are able to carry out a variety of tasks. The improvements in both perception and computation have contributed substantially to widening the range of environments where mobile robots can be used.

In order to be fully functional, a mobile robot must be capable of navigating safely through an unknown environment while simultaneously carrying out the task it has been designed for. With this aim, the robot must be able to build a model or map of this previously unknown environment, to estimate its current position and orientation making use of this model, and to navigate to the target points. Mapping, localization, and navigation are three classical problems in mobile robotics that have received a great deal of attention in the literature and remain very active research areas at present. Finding a robust solution to these three key problems is crucial to increase the autonomy and adaptability of mobile robots to different circumstances and to definitively expand their range of applications.

To address the mapping, localization, and navigation problems, the robot needs some relevant information about the environment where it moves, which is a priori unknown in most applications. Robots can be equipped with diverse sensory systems that allow them to extract the necessary information from the environment so that they can carry out their tasks autonomously. This way, from the first works on mobile robotics [1], the control systems have made use of the information collected from the environment, to a greater or lesser extent. The methods used to process this information have evolved as new algorithms have come out and the capabilities of the perception and computation systems have increased. As a result, very different approaches have emerged, considering different kinds of sensory information and processing techniques. Among them, in recent years, the use of omnidirectional vision systems has become popular in many works on mobile robots, and it is worth studying their main properties and applications in mapping, localization, and navigation.



In the light of the above, the purpose of this review is to present the most relevant works carried out in the field of mapping and localization using computer vision, paying special attention to those developments that make use of omnidirectional visual information.

The remainder of the paper is structured as follows. First, Section 2 presents the kinds of sensors that can be used to extract information from the mobile robot and from the environment. Then, Section 3 outlines some preliminary concepts about omnidirectional vision, the possible geometries, and the transformations that can be made with the visual information. After that, Section 4 presents the necessity of describing the information of the scenes and the main approaches that can be used. Later, Section 5 focuses on the problems of map creation, localization, and navigation of mobile robots. To conclude, a final discussion is carried out in Section 6.

2. Sensors in Mobile Robotics

In this section, we first analyse the types of sensors that have traditionally been used in mobile robotics to solve the mapping, localization, and navigation problems (Section 2.1). After that, we focus on vision sensors: we study their advantages (Section 2.2), the possible configurations they offer (Section 2.3), and some current trends in the creation of visual models (Section 2.4).

2.1. Types of Sensors. The solution to the mapping, localization, and navigation of mobile robots depends strongly on the information which is available about the state of the robot and the environment. Many types of sensors, both proprioceptive and exteroceptive, have been used so far with this aim. On the one hand, proprioceptive sensors, such as odometry, measure the state of the robot. On the other hand, exteroceptive sensors, such as GPS, SONAR, laser, and infrared sensors, measure some external information from the environment. First, odometry is usually calculated from the measurements of encoders installed on the wheels. It can estimate the displacement of the robot, but the accumulation of errors makes it infeasible as the only source of information in real applications. This is the reason why it is usually used in combination with other kinds of sensors. Second, GPS (Global Positioning System) constitutes a good choice outdoors, but it loses reliability close to buildings, on narrow streets, and indoors. Third, SONAR (Sound Navigation and Ranging) sensors have a relatively low cost and permit measuring distances to the surrounding objects by emitting sound pulses and measuring the reception of the echoes of these pulses [2, 3]. However, their precision is relatively low, because they tend to present a high angular uncertainty and some noise introduced by the reflection of the sound signals [4]. Fourth, laser sensors determine distance by measuring the flight time of a laser pulse reflected by the neighbouring objects. Their precision is higher than SONAR's, and they can measure distances from centimeters to dozens of meters with a relatively good precision and angular resolution. Nevertheless, the cost, the weight, and the power consumption of such equipment are considerably high. Lasers have been used to create both 2D [5, 6] and 3D maps [7, 8], often combined with odometry [9]. Finally, it is also possible to find some works that use infrared sensors in navigation tasks; their detection range reaches several tens of centimeters [10].
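To make the drift problem of odometry concrete, the following sketch shows how differential-drive dead reckoning integrates encoder increments into a pose estimate; the encoder and wheel parameters are illustrative assumptions, not values from any particular robot. Since every step adds a small error (slippage, quantization), the estimated pose diverges from the real one over time, which is why odometry is usually fused with other sensors.

```python
import math

# Hypothetical encoder and wheel parameters (illustrative only)
TICKS_PER_REV = 2048
WHEEL_RADIUS = 0.05      # [m]
WHEEL_BASE = 0.30        # [m] distance between the two wheels

def integrate_odometry(pose, left_ticks, right_ticks):
    """Update pose (x, y, theta) from incremental encoder counts.

    Any error in the per-step displacements is integrated over time,
    which is why odometry alone drifts without bound.
    """
    x, y, theta = pose
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # First-order integration of the unicycle motion model
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return (x, y, theta)
```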

2.2. Vision Sensors. As an alternative to the perception systems presented in the previous subsection, vision sensors have gained popularity because they present some interesting advantages. Cameras provide a large quantity of information from the environment, and 3D data can be extracted from it. Also, they present a relatively low cost and power consumption compared to laser rangefinders, which is especially relevant to the design of autonomous robots, which need to work on batteries during long periods of time. Their behaviour is stable both outdoors and indoors, unlike GPS, whose signal tends to degrade in some areas. Finally, the availability of images permits carrying out additional high-level tasks apart from mapping and localization. These tasks include people detection and recognition and the identification of the state of some objects which are relevant in robot navigation, such as doors and traffic lights.

Vision systems can be used either as the only perception system of the robot or in conjunction with the information provided by other sensors. For example, Choi et al. [11] present a system that combines SONAR with visual information in a mobile robot navigation application, while Hara et al. [12] combine it with laser. Chang and Chuang [13] present a laser-vision system composed of a projector of laser lines and a camera, whose information is processed to avoid obstacles and to extract relations between the points identified by the visual sensor and the laser. Some authors have reflected the wide variety of solutions proposed by researchers, depending on the visual techniques used, the combination with other sensors, and the algorithms used to process the information. The evolution of the techniques in these fields is widely documented in [14], which shows the developments carried out in map building, localization, and navigation using computer vision until the mid-1990s, and in [15], which complements this work with a state of the art from the 1990s onwards.

Traditionally, the solutions based on the use of vision have been applied to Autonomous Ground Vehicles (AGV). However, more recently, these sensors have gained presence in the development of Unmanned Aerial Vehicles (UAV), which have great prospects of use in applications such as surveillance, search and rescue, inspection of areas or buildings, risky aerial missions, map creation, or fire detection. The larger number of degrees of freedom these kinds of vehicles tend to have makes a deeper study of how to analyse the sensory information necessary. The aim is to obtain robust data that permit localization with all these necessary degrees of freedom.

2.3. Configuration of Vision Systems. As far as vision sensors are concerned, different configurations have been proposed, depending on the number of cameras used and the field of view they offer. Among them, there are systems based on


monocular configurations [16–18], binocular systems (stereo cameras) [19–21], and even trinocular configurations [22, 23]. Binocular and trinocular configurations permit measuring depth from the images. However, the limited field of view of these systems makes it necessary to employ several images of the environment in order to acquire complete information from it, which is needed to create a complete model of an unknown environment. In contrast, more recently, the systems that provide omnidirectional visual information [24] have gained popularity, thanks mainly to the great quantity of information they provide the robot with, as they usually have a field of view of 360 deg around the robot [25]. They can be composed of several cameras pointing towards different directions [26] or of a unique camera and a reflective surface (catadioptric vision systems) [27].

Omnidirectional vision systems have further advantages compared to conventional vision systems. The features that appear in the images are more stable, since they remain longer in the field of view as the robot moves. Also, they provide information that permits estimating the position of the robot independently of its orientation, and they permit estimating this orientation too. In general, omnidirectional vision systems have the ability to capture a more complete description of the environment in only one scene, which permits creating exhaustive models of the environment with a reduced number of views. Finally, even if some objects or persons partially occlude the scene, omnidirectional images contain some environment information from other directions.

These systems are usually based on the combination of a conventional camera and a convex mirror, which can have various shapes. These structures are known as catadioptric vision systems, and the raw information they provide is known as an omnidirectional image. However, some transformations can be carried out to obtain other kinds of projections, which may be more useful depending on the task to develop and the degrees of freedom of the movement of the robot. They include the spherical, cylindrical, or orthographic projections [25, 28]. This issue will be addressed in Section 3.

2.4. Trends in the Creation of Visual Models. Using any of the configurations and projections presented in the previous subsections, a complete model of the environment can be built. Traditionally, three different approaches have been proposed with this aim: metric, topological, or hybrid. First, a metric map usually defines the position of some relevant landmarks, extracted from the scenes, with respect to a coordinate system, and it permits estimating the position of the robot with geometric accuracy. Second, a topological model generally consists of a graph where some representative localizations appear as nodes, along with the connectivity relations that permit navigating between consecutive nodes. Compared to metric models, they usually permit a rougher localization, but with a reasonable computational cost. Finally, hybrid maps arrange the information into several layers with different degrees of detail. They usually combine topological models in the top layers, which permit a rough localization, with metric models in the bottom layers to refine this localization. This way, they try to combine the advantages of metric and topological maps.
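As an illustration of the topological approach, the following minimal sketch models a map as a graph whose nodes each store one appearance descriptor; all names and structures are hypothetical, and localization reduces to a nearest-neighbour search in descriptor space.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Node:
    node_id: int
    descriptor: np.ndarray          # e.g., a global appearance descriptor
    neighbours: set = field(default_factory=set)

class TopologicalMap:
    def __init__(self):
        self.nodes = {}

    def add_node(self, node_id, descriptor):
        self.nodes[node_id] = Node(node_id, descriptor)

    def connect(self, a, b):
        # Links encode navigability between places, not metric transforms
        self.nodes[a].neighbours.add(b)
        self.nodes[b].neighbours.add(a)

    def localize(self, query):
        # Nearest node in descriptor space = rough topological localization
        return min(self.nodes.values(),
                   key=lambda n: np.linalg.norm(n.descriptor - query))
```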

The use of visual information to create models of the environment has an important disadvantage: the visual appearance of the environment changes not only when the robot moves but also under other circumstances, such as variations in lighting conditions, which are present in all real applications and may produce substantial changes in the appearance of the scenes. The environment may also undergo some changes after the model has been created, and sometimes the scenes may be partially occluded by the natural presence of people or other robots moving in the environment which is being modelled. Taking these facts into account, independently of the approach used to create the model, it is necessary to extract some relevant information from the scenes. This information must be useful to identify the environment and the position of the robot when it was captured, independently of any other phenomena that may happen. The extraction and description of this information can be carried out using two different approaches: based on local features or based on global appearance. On the one hand, the approaches based on local features try to extract some landmarks, points, or regions from the scenes; on the other hand, global methods create a unique descriptor per scene that contains information on its global appearance. This problem is analysed more deeply in Section 4.

3. Omnidirectional Vision Sensors

As stated in the previous section, the expansion of the field of view that can be achieved with vision sensors is one of the reasons that explains the extensive use of computer vision in mobile robotics applications. Omnidirectional vision sensors stand out because they present a complete field of view around the camera axis. The objective of this section is twofold. On the one hand, some of the configurations that permit capturing omnidirectional images are presented. On the other hand, the different formats that can be used to express this information and their application in robotic tasks are detailed.

There are several configurations that permit capturing omnidirectional visual information. First, an array of cameras can be used to capture information from a variety of directions. The Ladybug systems [29] constitute an example. They are composed of several vision sensors distributed around the camera and, depending on the model, they can capture a visual field covering more than 90% of a sphere whose centre is situated in the sensor.

Fisheye lenses can also be included among the systems that capture a wide field of view [30, 31]. These lenses can provide a field of view higher than 180 deg. The Gear 360 camera [32] contains two fisheye lenses, each one capturing a field of view of 180 deg both horizontally and vertically. Combining both images, the camera provides images with a complete 360 deg field of view. Some works have shown how several cameras with such lenses can be combined to obtain omnidirectional information. For example, Li et al. [33] present a vision system equipped with two cameras with fisheye lenses whose field of view is a complete sphere around the sensor. However, using different cameras to create a spherical image can be challenging, taking the different


Figure 1: Projection model of central catadioptric systems: (a) hyperbolic mirror and perspective projection lens; (b) parabolic mirror and orthographic lens.

lighting conditions of each camera into account. This is why Li developed a new system [34] to avoid this problem.

Catadioptric vision sensors are another example [35]. They make use of convex mirrors onto which the scene is projected. These mirrors may present different geometries, such as spherical, hyperbolic, parabolic, elliptic, or conic, and the camera takes the information from the environment through its reflection on the mirror. Due to their importance, the next subsection describes them in depth.

3.1. Catadioptric Vision Sensors. Catadioptric vision systems make use of convex reflective surfaces to expand the field of view of the camera. The vision sensors capture the information through these surfaces. The shape, position, and orientation of the mirror with respect to the camera define the geometry of the projection of the world onto the camera.

Nowadays, many kinds of catadioptric vision sensors can be found. One of the first developments was done by Rees in 1970 [36]. More recent examples of catadioptric systems can be found in [37–40]. Most of them permit omnidirectional vision, which means having a vision angle equal to 360 deg around the mirror axis. The lateral angle of view depends essentially on the geometry of the mirror and its relative position with respect to the camera.

In the related literature, several kinds of mirrors can be found as part of catadioptric vision systems, such as spherical [41], conic [42], parabolic [27], or hyperbolic [43]. Yoshida et al. [44] present a work about the possibility of creating a catadioptric system composed of two mirrors, focusing on the study of how changes in the geometry of the system are reflected in the final image captured by the camera. Also, Baker and Nayar [45] present a comparative analysis of the use of different mirror geometries in catadioptric vision systems. According to this work, it is not possible to state in general that a specific geometry outperforms the others. Each geometry presents characteristic reflection properties that may be advantageous under some specific circumstances. In any case, parabolic and hyperbolic mirrors present some particular properties that make them especially useful when perspective projections of the omnidirectional image are needed, as detailed below.

There are two main issues when using catadioptric vision systems for mapping and localization: the single effective viewpoint property and calibration. On the one hand, when using catadioptric vision sensors, it is interesting that the system has a unique projection centre (i.e., that it constitutes a central camera). In such systems, all the rays that arrive at the mirror surface converge into a unique point, which is the optical centre. This is advantageous because, thanks to it, it is possible to obtain undistorted perspective images from the scene captured by the catadioptric system [39]. This property appears in [45] referred to as the single effective viewpoint. According to both works, there are two ways of building a catadioptric vision system that meets this property: with the combination of a hyperbolic mirror and a camera with a perspective projection lens (conventional lens or pin-hole model), as shown in Figure 1(a), or with a system composed of a parabolic mirror and an orthographic projection lens (Figure 1(b)). In both figures, F and F′ are the foci of the camera and the mirror, respectively. In other cases, the rays that arrive at the camera converge into different focal points, depending on their vertical incidence angle (this is the case of noncentral cameras). Ohte et al. studied this problem with spherical mirrors [41]. In the case of objects that are relatively far from the axis of the mirror, the error produced by the existence of different focal points is relatively small. However, when the object is close to the mirror, the errors in the projection angles become significant, and the complete projection model of the camera must be used.
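The single effective viewpoint property is often formalized through the unified sphere model of Geyer and Daniilidis [52]. As a sketch, a 3D point X expressed in the viewpoint frame is first projected onto the unit sphere centred at the effective viewpoint and then reprojected onto the image plane from a point displaced a distance ξ along the mirror axis:

```latex
% Unified sphere model (sketch): two-step projection of a 3D point X
\mathbf{x}_s = \frac{\mathbf{X}}{\lVert \mathbf{X} \rVert} = (x_s, y_s, z_s),
\qquad
\mathbf{m} = \left( \frac{x_s}{z_s + \xi},\; \frac{y_s}{z_s + \xi} \right)
```

With ξ = 0 this reduces to a conventional perspective camera, 0 < ξ < 1 corresponds to a hyperbolic mirror, and ξ = 1 to a parabolic mirror, which is why these two geometries admit exact perspective reprojections.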

On the other hand, many mapping and localization applications work correctly only if all the parameters of the system are known. With this aim, the catadioptric system needs to undergo a calibration process [46]. The result of this process can be either (a) the intrinsic parameters of the camera, the coefficients of the mirror, and the relative position between them or (b) a list of correspondences between each pixel in the image plane and the ray of light of the world that projects onto it. Many works have been developed on the calibration of catadioptric vision systems. For example, Goncalves and Araujo [47] propose a method to calibrate both central and noncentral catadioptric cameras using an approach based on bundle adjustment. Ying and Hu [48] present a method that uses geometric invariants extracted from projections of lines or spheres in the world; this method can be used to calibrate central catadioptric cameras. Also, Marcato Jr. et al. [49] develop an approach to calibrate a catadioptric system composed of a wide-angle lens camera and a conic mirror that takes into account the possible misalignment between the camera and mirror axes. Finally, Scaramuzza et al. [50] develop a framework to calibrate central catadioptric cameras using a planar pattern shown at a number of different orientations.
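As a sketch of output (b) of such a calibration, the following code back-projects a pixel to its ray of light using a Scaramuzza-style polynomial model in the spirit of [50]; the polynomial coefficients and image centre are hypothetical values, not results of a real calibration. Normalizing each ray also yields the unit sphere projection described in Section 3.2.1.

```python
import numpy as np

# Hypothetical calibration results (illustrative values only)
A = [-180.0, 0.0, 2.2e-3, -1.5e-7, 4.0e-10]   # polynomial coefficients a0..a4
CENTER = (512.0, 512.0)                        # image centre [px], assumed

def cam2world(u, v):
    """Return the unit direction of the ray that projects onto pixel (u, v)."""
    x, y = u - CENTER[0], v - CENTER[1]
    rho = np.hypot(x, y)                       # radial distance in the image
    z = sum(a * rho**i for i, a in enumerate(A))
    ray = np.array([x, y, z])
    return ray / np.linalg.norm(ray)           # point on the unit sphere
```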

3.2. Projections of the Omnidirectional Information. The raw information captured by the camera in a catadioptric vision system is the omnidirectional image, which contains the information of the environment previously reflected onto the mirror. This image can be considered as a polar representation of the world, whose origin is the projection of the focus of the mirror onto the image plane. Figure 2 shows two sample catadioptric vision systems, composed of a camera and a hyperbolic mirror, and the omnidirectional images captured with each one. Mirror (a) is the model Wide 70 of the manufacturer Eizoh, and (b) is the model Super-Wide View Large, manufactured by Accowle.

Figure 2: Two sample omnidirectional images captured in the same room, from the same position, using two different catadioptric systems.

The omnidirectional scene can be used directly to obtain useful information in robot navigation tasks. For example, Scaramuzza et al. [51] present a description method based on the extraction of radial lines from the omnidirectional image to characterize omnidirectional scenes captured in real environments. They show how radial lines in omnidirectional images correspond to vertical lines of the environment, while circumferences whose centre is the origin of coordinates of the image correspond to horizontal lines in the world.

From the original omnidirectional image, it is possible to obtain different scene representations through the projection of the visual information onto different planes and surfaces. To make it possible, in general, it is necessary to calibrate the catadioptric system. Using this information, it is possible to project the visual information onto different planes or surfaces that show different perspectives of the original scene. Each projection presents specific properties that can be useful in different navigation tasks.

The next subsections present some of the most important representations and some of the works that have been developed with each of them in the field of mobile robotics. A complete mathematical description of these projections can be found in [39].

3.2.1. Unit Sphere Projection. An omnidirectional scene can be projected onto a unit sphere whose centre is the focus of the mirror. Every pixel of this sphere takes the value of the ray of light that has the same direction with respect to the focus of the mirror. To obtain this projection, the catadioptric vision system has to be previously calibrated. Figure 3 shows the projection model of the unit sphere projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto the mirror (to do it, the calibration of the catadioptric system must be available), and after that, each point of the mirror is projected onto a unit sphere.

Figure 3: Projection model of the unit sphere projection.

This projection has traditionally been useful in those applications that make use of the Spherical Fourier Transform, which permits studying 3D rotations in space, as Geyer and Daniilidis show [52]. Friedrich et al. [53, 54] present an algorithm for robot localization and navigation using spherical harmonics; they use images captured with a hemispheric camera. Makadia et al. address the estimation of 3D rotations from the Spherical Fourier Transform [55–57], using a catadioptric vision system to capture the images. Finally, Schairer et al. present several works about orientation estimation using this transform, such as [58–60], where they develop methods to improve the accuracy in orientation estimation, implement a particle filter to solve this task, and develop a navigation system that combines odometry and the Spherical Fourier Transform applied to low-resolution visual information.

3.2.2. Cylindrical Projection. This representation consists in projecting the omnidirectional information onto a cylinder whose axis is parallel to the mirror axis. Conceptually, it can be obtained by changing the polar coordinate system of the omnidirectional image into a rectangular coordinate system. This way, every circumference in the omnidirectional image is converted into a horizontal line in the panoramic scene, as the sketch below illustrates. This projection does not require the previous calibration of the catadioptric vision system.
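A minimal sketch of this polar-to-rectangular conversion follows; the image centre, inner and outer radii, and output width are assumed parameters, and nearest-neighbour sampling is used for brevity (a real implementation would interpolate).

```python
import numpy as np

def unwrap(omni, center, r_min, r_max, width=720):
    """Map concentric circles of the omnidirectional image to panorama rows."""
    height = r_max - r_min
    pano = np.zeros((height, width), dtype=omni.dtype)
    cx, cy = center
    for row in range(height):
        r = r_max - row                       # outer circles -> top rows
        for col in range(width):
            theta = 2 * np.pi * col / width   # azimuth angle
            u = int(round(cx + r * np.cos(theta)))
            v = int(round(cy + r * np.sin(theta)))
            if 0 <= v < omni.shape[0] and 0 <= u < omni.shape[1]:
                pano[row, col] = omni[v, u]   # nearest-neighbour sampling
    return pano
```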

Figure 4 shows the projection model of the cylindrical projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto a cylindrical surface.

The cylindrical projection is commonly known as a panoramic image, and it is one of the most used representations in mobile robotics works to solve some problems, such as map creation and localization [61–63], SLAM [64], and visual servoing and navigation [65, 66]. This is due to the fact that this representation is more easily understandable by human operators, and it permits using standard image processing algorithms, which are usually designed to be used with perspective images.

Figure 4: Projection model of the cylindrical projection.

3.2.3. Perspective Projection. From the omnidirectional image, it is possible to obtain projective images in any direction. They would be equivalent to the images captured by virtual conventional cameras situated in the focus of the mirror. The catadioptric system has to be previously calibrated to obtain such a projection. Figure 5 shows the projection model of the perspective projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected to the mirror (to do it, the calibration of the catadioptric system must be available), and after that, each point of the mirror is projected onto a plane. It is equivalent to capturing an image from a virtual conventional camera placed in the focus F′.

The orthographic projection, also known as bird's eye view, can be considered a specific case of the perspective projection. In this case, the projection plane is situated perpendicularly to the camera axis. If the world reference system is defined in such a way that the floor is the xy plane, the z-axis is vertical, and the camera axis is parallel to the z-axis, then the orthographic projection is equivalent to having a conventional camera situated in the focus of the mirror, pointing to the floor plane. Figure 6 shows the projection model of the orthographic view. In this case, the pixels are back-projected onto a horizontal plane.

In the literature, some algorithms which make use of the orthographic projection in map building, localization, and navigation applications can be found. For example, Gaspar et al. [25] make use of this projection to extract parallel lines from the floor of corridors and other rooms to perform navigation indoors. Also, Bonev et al. [67] propose a navigation system that combines the information contained in the omnidirectional image, the cylindrical projection, and the orthographic projection. Finally, Roebert et al. [68] show how a model of the environment can be created using perspective images and how this model can be used for localization and navigation purposes.

To conclude this section, Figure 7 shows (a) an omnidirectional image captured by the catadioptric system presented in Figure 2(a) and the visual appearance of the projections calculated from this image: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.


Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images or any of their projections, both in map building and in localization tasks. The images are highly dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in lighting conditions or in the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes to be able to solve the mapping and localization tasks robustly. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods, but more recently, some global appearance algorithms have been demonstrated to be a robust alternative too.

Many algorithms can be found in the literature, working both with local features and with the global appearance of images. All these algorithms imply many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local feature approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is described by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features have been extracted and described, the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Figure 7: (a) Sample omnidirectional image and some projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Many different philosophies can be found, depending on the type of features which are extracted and the procedure followed to carry out the tracking and matching. As an example, Pears and Liang [70] extract corners which are located on the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]; they carry out a segmentation process to extract predefined features, such as windows, and a Kalman filtering to carry out the tracking and matching of the features.

4.1.2. Local Features Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, changes in lighting conditions, and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed to be used in real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which is based on BRIEF and tries to improve its invariance against rotation and its resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to achieve the robustness of SURF but with an improved computational cost.
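As a brief illustration of how such descriptors are used in practice, the following sketch detects and matches ORB features between two views with OpenCV; the file names are placeholders, and brute-force matching with cross-check is one simple strategy among several.

```python
import cv2

img1 = cv2.imread("scene_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("scene_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the natural metric for binary descriptors such as ORB
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} tentative correspondences between the two views")
```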

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show: Angeli et al. [82] and Se et al. [83] make use of SIFT descriptors to solve these problems, and the works of Valgren and Lilienthal [84] and Murillo et al. [85] employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, it is necessary that the environment be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); moreover, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements into the model correctly; this way, small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. Finally, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with the images as a whole, without extracting any local information. Each image is represented by means of a unique descriptor, which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor. Map creation and localization can be achieved just by storing and comparing these descriptors pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with a relative computational efficiency. Occasionally, these descriptors present invariance neither to changes in the robot orientation, nor to changes in the lighting conditions, nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment. In this step, the robot captures a set of images and describes each of them by means of a unique descriptor, and from the information of these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or autolocalization phase.

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method is the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90]; they make a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors later started to build descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower-dimension space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane, and on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to the previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase in the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
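A minimal sketch of this idea follows: each panoramic image is flattened into a row vector, PCA compresses the set into low-dimensional descriptors, and localization becomes a nearest-neighbour search. The image set and sizes are placeholders; note that fit_transform needs the whole image set at once, which is precisely the batch limitation discussed above.

```python
import numpy as np
from sklearn.decomposition import PCA

images = np.random.rand(100, 64 * 256)   # placeholder for 100 flattened panoramas

pca = PCA(n_components=20)                # keep the 20 principal components
descriptors = pca.fit_transform(images)   # one 20-D descriptor per image

def localize(query_image):
    """Rough localization: nearest stored descriptor in the PCA subspace."""
    q = pca.transform(query_image.reshape(1, -1))
    return int(np.argmin(np.linalg.norm(descriptors - q, axis=1)))
```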

Other authors have proposed descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, either the two-dimensional DFT or the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. Finally, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at this moment, DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
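The Fourier Signature lends itself to a compact sketch: a row-wise 1D DFT of the panoramic image keeps only the first k coefficients per row. By the shift theorem, a rotation of the robot (a circular column shift of the panorama) changes only the phases, so the magnitude matrix is a rotation-invariant position descriptor, while the phases retain the orientation information. The sizes and the shift below are illustrative.

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise DFT truncated to k coefficients: (magnitudes, phases)."""
    spectrum = np.fft.fft(panorama, axis=1)[:, :k]
    return np.abs(spectrum), np.angle(spectrum)

pano = np.random.rand(64, 256)        # placeholder panoramic image
rotated = np.roll(pano, 40, axis=1)   # same place, robot rotated

mag1, _ = fourier_signature(pano)
mag2, _ = fourier_signature(rotated)
print(np.allclose(mag1, mag2))        # True: magnitudes are rotation-invariant
```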

Other authors have described the scenes globally through approaches based on the gradient, either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process; however, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose the use of a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information: they propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions. This way, the model would contain information on the changes that the appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the first component, which is the one that is more prone to change when the lighting conditions do.
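As a sketch of this second family of solutions, the following homomorphic filter attenuates the slowly varying luminance component in the log-image domain while preserving the reflectance detail; the cutoff frequency and gains are illustrative choices, not values taken from [108].

```python
import numpy as np

def homomorphic_filter(img, cutoff=0.1, gain_low=0.5, gain_high=1.5):
    """Attenuate low frequencies (luminance) of log(img), keep high ones."""
    log_img = np.log1p(img.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    v, u = np.mgrid[-rows // 2:rows - rows // 2, -cols // 2:cols - cols // 2]
    d = np.hypot(u / cols, v / rows)          # normalised frequency radius
    # High-pass-style transfer function: low gain near DC, high gain elsewhere
    h = gain_low + (gain_high - gain_low) * (1 - np.exp(-(d / cutoff) ** 2))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real
    return np.expm1(filtered)
```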

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined, and then some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer Assisted Design) models, these approaches gave way to simpler models that represent the environment through occupancy grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, the robot first goes through the environment to be mapped (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, by comparing this information with the map, estimates its current pose (position and orientation).

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment to be mapped autonomously and to build or update the model while it is moving. SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through previously recorded visual memories that contain sequences of images and associated actions, but no relations between the images. This kind of navigation is basically associated with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Very often, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating metrically the position of the robot, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons, which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived by an omnidirectional vision system. These beacons consist of four red landmarks situated at the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks as beacons situated on specific and previously known positions to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the RoboCup Middle Size League competition. In this setting, the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization, and once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

Maps composed of local visual features have been used extensively over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment. They are obtained based on the corner points of an extracted object and using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps of large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color-enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process, consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k cluster centres. Second, these centres constitute the visual vocabulary, which is subsequently used to solve the localization problem.
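This two-step vocabulary construction can be sketched as follows; the pooled descriptor matrix, the vocabulary size, and the use of MiniBatchKMeans are illustrative assumptions rather than the exact setup of [117].

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Placeholder for local features (e.g., SIFT) pooled from all training images
all_descriptors = np.random.rand(5000, 128)

k = 64                                           # vocabulary size (assumed)
kmeans = MiniBatchKMeans(n_clusters=k).fit(all_descriptors)

def bag_of_features(image_descriptors):
    """Histogram of visual-word occurrences: the global image signature."""
    words = kmeans.predict(image_descriptors)
    hist = np.bincount(words, minlength=k).astype(np.float64)
    return hist / hist.sum()                     # normalise for comparison
```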

Beyond these frameworks the use of omnidirectionalvision systems has contributed to the emergence of newparadigms to create visual maps using some different fun-damentals to those traditionally used with local featuresOne of these frameworks consists in creating topologicalmaps using a global description of the images captured bythe omnidirectional vision system In this area Menegattiet al [98] show how a visual memory can be built usingthe Fourier Signature which presents rotational invariancewhen used with panoramic images The map is built usingglobal appearance methods and a system of mechanicalforces calculated from the similitude between descriptorsLiu et al [118] propose a framework that includes a methodto describe the visual appearance of the environment Itconsists in an adaptive descriptor based on color featuresand geometric information Using this descriptor they createa topological map based on the fact that the vertical edgesdivide the rooms in regions with a uniform meaning in thecase of indoor environments These descriptors are used asa basis to create a topological map through an incrementalprocess in which similar descriptors are incorporated intoeach node A global threshold is considered to evaluatethe similitude between descriptors Also other authors havestudied the localization problem using color information as


Other authors have also studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting scheme.
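A sketch of such a histogram-based voting scheme is shown below (an illustration of the general idea with assumed toy images, not the implementation of [119]): each color band votes for the stored place whose histogram is closest to the corresponding band of the query image, and the place collecting the most votes wins.

```python
import numpy as np

def band_histograms(image, bins=16):
    """One normalized histogram per color band (H x W x 3 image)."""
    return [np.histogram(image[..., b], bins=bins, range=(0, 256),
                         density=True)[0] for b in range(3)]

def vote_for_place(query, places):
    """Each band votes for the place with the closest histogram;
    the index of the place collecting most votes is returned."""
    votes = np.zeros(len(places), dtype=int)
    q = band_histograms(query)
    for b in range(3):
        dists = [np.abs(q[b] - band_histograms(p)[b]).sum()
                 for p in places]
        votes[int(np.argmin(dists))] += 1
    return int(np.argmax(votes))

rng = np.random.default_rng(2)
places = [rng.integers(0, 256, size=(48, 128, 3)) for _ in range(5)]
query = np.clip(places[3] + rng.integers(-5, 6, size=(48, 128, 3)), 0, 255)
print(vote_for_place(query, places))   # most likely 3
```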

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourhood relations between nodes, so that when two nodes are connected, the environment represented by one node is close to the environment represented by the other. A process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometric relationships that situate the nodes close to the real positions where the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, together with a comparative evaluation of the performance of some global-appearance descriptors. The authors study the effect of some usual phenomena that may appear in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
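The following sketch illustrates the spirit of such a force-based layout (a simplified stand-in for the method of [63], under the assumption that the rest length of each link has already been derived from the similitude between the descriptors of the two connected nodes):

```python
import numpy as np

def spring_layout(n_nodes, links, rest, k=1.0, c=0.8, dt=0.05, steps=2000):
    """Damped spring simulation: `links` is a list of node pairs and
    `rest[i]` the rest length of link i (e.g., derived from descriptor
    similitude). Returns 2-D node positions after relaxation."""
    rng = np.random.default_rng(3)
    pos = rng.normal(size=(n_nodes, 2))
    vel = np.zeros_like(pos)
    for _ in range(steps):
        force = np.zeros_like(pos)
        for (a, b), r in zip(links, rest):
            d = pos[b] - pos[a]
            length = np.linalg.norm(d) + 1e-9
            f = k * (length - r) * d / length   # Hooke's law on the link
            force[a] += f
            force[b] -= f
        vel = (vel + dt * force) * c            # velocity damping
        pos += dt * vel
    return pos

links = [(0, 1), (1, 2), (2, 3), (3, 0)]
rest = [1.0, 1.0, 1.0, 1.0]                     # from image similitude
print(spring_layout(4, links, rest).round(2))
```

After convergence, nodes connected by short rest lengths (very similar views) end up close together, which is how the resulting topological map acquires an approximately metric arrangement.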

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global-appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene, and they estimate the similarity between two locations from the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current location, and a second step, or continuous localization, in which the image is assumed to be acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do so, they perform Bayesian inference over the space of all possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained in the ground plane (i.e., they contain information about 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classical sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors make use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow-field-of-view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks integrated into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser rangefinder, such as the work developed by Erturk et al. [126]. In this case, the authors have employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).
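Although the cited systems also estimate the landmark positions, the core of both [124, 125] is the EKF measurement update. The sketch below shows a minimal version of it for a planar robot pose and a single bearing observation of a landmark with an assumed known position, the kind of 360-degree bearing an omnidirectional camera can provide (an illustration of the filter equations, not the models used in those works):

```python
import numpy as np

def ekf_bearing_update(x, P, z, landmark, r_var):
    """One EKF measurement update for a planar robot pose
    x = (x, y, theta), given the bearing z (rad) to a landmark of
    known position."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx ** 2 + dy ** 2
    h = np.arctan2(dy, dx) - x[2]                  # predicted bearing
    H = np.array([dy / q, -dx / q, -1.0])          # Jacobian of h wrt x
    nu = np.arctan2(np.sin(z - h), np.cos(z - h))  # wrapped innovation
    S = H @ P @ H + r_var                          # innovation variance
    K = (P @ H) / S                                # Kalman gain, shape (3,)
    x_new = x + K * nu
    P_new = (np.eye(3) - np.outer(K, H)) @ P
    return x_new, P_new

x = np.array([0.0, 0.0, 0.1])        # pose estimate (m, m, rad)
P = np.diag([0.2, 0.2, 0.05])        # its covariance
z = np.deg2rad(44.0)                 # measured bearing to the landmark
x, P = ekf_bearing_update(x, P, z, landmark=(2.0, 2.0), r_var=0.01)
print(x.round(3))
```

The wide field of view matters here precisely because it keeps each landmark observable over long stretches of the trajectory, so many such updates accumulate for every landmark.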

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the pose of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM,


have been adapted to work with omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, in which the features provided by the omnidirectional vision system are exploited. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments, and at the same time they ease the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement exploits the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map depends directly on the sort of environment and its visual appearance. In this case, the localization problem is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure is evaluated in order to determine the candidate with the highest similarity to the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar constraint.
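A schematic version of this two-step observation model could look as follows (a sketch under assumed data structures, not the code of [130]): candidate views are first filtered by pose distance, and the best one is then chosen by image similarity.

```python
import numpy as np

def localize(current_desc, predicted_pose, views, radius=3.0):
    """Two-step localization over a view-based map.

    `views` is a list of (pose, descriptor) pairs, where pose is
    (x, y, theta) and descriptor is any fixed-length image descriptor.
    Step 1 keeps the views whose position lies within `radius` metres
    of the predicted robot position; step 2 returns the index of the
    candidate whose descriptor is most similar to the current one."""
    candidates = [
        (i, desc) for i, (pose, desc) in enumerate(views)
        if np.linalg.norm(np.asarray(pose[:2]) -
                          np.asarray(predicted_pose[:2])) < radius
    ]
    if not candidates:
        return None
    dists = [np.linalg.norm(desc - current_desc) for _, desc in candidates]
    return candidates[int(np.argmin(dists))][0]

rng = np.random.default_rng(4)
views = [((float(i), 0.0, 0.0), rng.normal(size=32)) for i in range(10)]
print(localize(views[2][1] + 0.01, (2.2, 0.3, 0.0), views))   # -> 2
```

In the actual approach, the selected view and the current image are then related through feature correspondences and the epipolar constraint to refine the metric pose, which this sketch omits.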

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization and Mapping) using panoramic scenes. The mapping is approached in a hybrid way, simultaneously constructing a model with two layers and checking whether a loop closure occurs with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, depending on whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes beyond this, since it takes into account that, on some occasions, the images stored in the database and the query image may have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them can be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information about a route, together with a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization based on matching the robot's currently captured view with previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate image-based localization. They exploit the properties that this transform presents when used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the captured omnidirectional image. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one fits in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process; the authors indicate that the method can work with any low-dimensional feature descriptor.
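The measurement step of such an image-based Monte Carlo localization can be sketched generically (this is the standard particle weighting and resampling step under an assumed image-similarity model, not the exact algorithm of [100]): each particle is weighted by how similar the view expected at its pose is to the view the robot actually captures.

```python
import numpy as np

def mcl_update(particles, weights, observed_desc, expected_desc, sigma=0.5):
    """One measurement update of Monte Carlo localization driven by
    global-appearance descriptors.

    `expected_desc(pose)` returns the descriptor the map predicts at a
    pose; particles whose prediction resembles `observed_desc` gain
    weight, and the particle set is then resampled."""
    for i, pose in enumerate(particles):
        d = np.linalg.norm(expected_desc(pose) - observed_desc)
        weights[i] *= np.exp(-0.5 * (d / sigma) ** 2)
    weights /= weights.sum()
    idx = np.random.default_rng(5).choice(len(particles),
                                          size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy 1-D map: the descriptor at pose (x,) is just [x]; the robot
# observes a descriptor corresponding to x = 2.0.
particles = np.linspace(0.0, 5.0, 50)[:, None]
weights = np.full(50, 1.0 / 50)
particles, weights = mcl_update(particles, weights,
                                observed_desc=np.array([2.0]),
                                expected_desc=lambda p: p)
print(particles.mean().round(2))   # the set concentrates near 2.0
```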


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

| Reference | Approach | Type of image | Type of map | Features | Description |
|-----------|----------|---------------|-------------|----------|-------------|
| [112] | 5.1 | Omnidirectional | Metric | Local | Colour |
| [113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP |
| [114] | 5.1 | Omnidirectional | Metric | Local | LKOF |
| [84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT |
| [116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments |
| [98] | 5.1 | Panoramic | Topological | Global | Fourier Signature |
| [118] | 5.1 | Panoramic | Topological | Global | FACT descriptor |
| [119] | 5.1 | Panoramic | Topological | Global | Color histograms |
| [63] | 5.1 | Panoramic | Topological | Global | Fourier Signature |
| [120] | 5.1 | Panoramic | Topological | Global | Various |
| [121] | 5.1 | Omnidirectional | Topological | Global | GIST |
| [122] | 5.1 | Panoramic | Topological | Global | Fourier Signature |
| [95] | 5.1 | Panoramic | Topological | Global | PCA features |
| [60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform |
| [124] | 5.2 | Omnidirectional | Metric | Local | SIFT |
| [125] | 5.2 | Panoramic | Metric | Local | Vertical edges |
| [126] | 5.2 | Panoramic | Metric | Local | Contours of objects |
| [127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines |
| [128] | 5.2 | Omnidirectional | Metric | Local | SURF |
| [129] | 5.2 | Omnidirectional | Metric | Global | Distance map |
| [130] | 5.2 | Omnidirectional | Metric | Local | SURF |
| [131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature |
| [132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test |
| [133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection |
| [134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved) |
| [135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image |
| [100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature |
| [136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform |
| [137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF |
| [138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform |

They propose the use of descriptors of the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is computed over all the pixels in the image, and all the images in the database whose difference exceeds a threshold are discarded. In the second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is then performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in the third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have expanded greatly in recent years, mainly owing to the large quantity of data they are able to capture in a single scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures used to obtain omnidirectional images. This led to the study of the different shapes of mirrors and of the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global-appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods, based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and to expand the range of applications and environments where they can work. Among these problems, the mapping and localization problems in considerably large and changing environments, and the estimation of position and orientation in space under realistic working conditions, are worth addressing to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global, and DPI 2016-78361-R, Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots, and by the Generalitat Valenciana (GVa) through the project GV/2015/031, Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot; a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence, Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research, Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni Vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of non-central catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.


[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnorr, and B. Jahne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Faculteit Elektrotechniek, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, A Wiley-Interscience publication, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.

Page 2: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

2 Journal of Sensors

In the light of the above the purpose of this review isto present the most relevant works carried out in the fieldof mapping and localization using computer vision payingspecial attention to those developments that make use ofomnidirectional visual information

The remainder of the paper is structured as followsFirst Section 2 presents the kinds of sensors that can beused to extract information from the mobile robot and fromthe environment Then Section 3 outlines some preliminaryconcepts about omnidirectional vision the possible geome-tries and the transformations that can be made with thevisual informationAfter that Section 4presents the necessityof describing the information of the scenes and the mainapproaches that can be used Later Section 5 focuses on theproblems of map creation localization and navigation ofmobile robots To conclude a final discussion is carried outin Section 6

2 Sensors in Mobile Robotics

In this section we analyse first the types of sensors thathave been traditionally used in mobile robotics to solve themapping localization and navigation problems (Section 21)After that we focus on vision sensors we study theiradvantages (Section 22) the possible configurations theyoffer (Section 23) and some current trends in the creationof visual models (Section 24)

21 Types of Sensors The solution to the mapping localiza-tion and navigation ofmobile robots depends strongly on theinformation which is available on the state of the robot andthe environmentMany types of sensors have been used so farwith this aim both proprioceptive and exteroceptive On theone hand proprioceptive sensors such as odometry measurethe state of the robot On the other hand exteroceptivesensors such as GPS SONAR laser and infrared sensorsmeasure some external information from the environmentFirst odometry is usually calculated from the measurementsof encoders installed in the wheels It can estimate the dis-placement of the robot but the accumulation of errors makesits application infeasible as a unique source of informationin real applications This is the reason why it is usuallyused in combination with other kinds of sensors SecondGPS (Global Positioning System) constitutes a good choiceoutdoors but it loses fiability close to buildings on narrowstreets and indoors Third SONAR (Sound Navigation andRanging) sensors have a relatively low cost and permitmeasuring distances to the objects situated around throughthe emission of sound pulses and measuring the receptionof echoes from these pulses [2 3] However their precisionis relatively low because they tend to present a high angularuncertainty and some noise introduced by the reflection ofthe sound signals [4] Fourth laser sensors determine thedistance by measuring the flight time of a laser pulse whenit is reflected onto the neighbour objects Their precisionis higher than SONARrsquos and they can measure distancesfrom centimeters to dozens of meters with a relatively goodprecision and angular resolution Nevertheless the cost theweight and the power consumption of such equipment are

considerably high Lasers have been used to create both 2D[5 6] and 3D maps [7 8] combined often with the use ofodometry [9] Finally it is also possible to find some worksthat use infrared sensors in navigation tasks whose range ofdetection arrives to several tens of centimeters [10]

22 Vision Sensors As an alternative to the perceptionsystems presented in the previous subsection vision sensorshave gained popularity because they present some inter-esting advantages The cameras provide a big quantity ofinformation from the environment and 3D data can beextracted from it Also they present a relatively low cost andpower consumption comparing to laser rangefinders whichis specially relevant to the design of autonomous robotswhichneed toworkwith batteries during long periods of timeTheirbehaviour is stable both outdoors and indoors unlike GPSwhose signal tends to degrade in some areas Finally theavailability of images permits carrying out additional highlevel tasks apart from mapping and localization These tasksinclude people detection and recognition and identificationof the state of some objects which are relevant in robotnavigation such as doors and traffic lights

Vision systems can be used either as the only perception system of the robot or in conjunction with the information provided by other sensors. For example, Choi et al. [11] present a system that combines SONAR with visual information in a mobile robot navigation application, while Hara et al. [12] combine it with laser. Chang and Chuang [13] present a laser-vision system composed of a projector of laser lines and a camera, whose information is processed to avoid obstacles and to extract relations between the points identified by the visual sensor and the laser. Some authors have reflected the wide variety of solutions proposed by researchers depending on the visual techniques used, the combination with other sensors, and the algorithms to process the information. The evolution of the techniques in these fields is widely documented in [14], which shows the developments carried out in map building, localization, and navigation using computer vision until the mid-1990s, and [15], which complements this work with a state of the art from the 1990s onwards.

Traditionally, the solutions based on the use of vision have been applied to Autonomous Ground Vehicles (AGV). However, more recently, these sensors have gained presence in the development of Unmanned Aerial Vehicles (UAV), which have great prospects of use in some applications such as surveillance, search and rescue, inspection of some areas or buildings, risky aerial missions, map creation, or fire detection. Since these kinds of vehicles tend to have a higher number of degrees of freedom, a deeper study about how to analyse the sensory information is necessary. The aim is to obtain robust data that permit localization with all these necessary degrees of freedom.

2.3. Configuration of Vision Systems. As far as vision sensors are concerned, different configurations have been proposed, depending on the number of cameras used and the field of view they offer. Among them, there are systems based on monocular configurations [16–18], binocular systems (stereo cameras) [19–21], and even trinocular configurations [22, 23]. Binocular and trinocular configurations permit measuring depth from the images. However, the limited field of view of these systems makes it necessary to employ several images of the environment in order to acquire complete information from it, which is necessary to create a complete model of an unknown environment. In contrast, more recently, the systems that provide omnidirectional visual information [24] have gained popularity, thanks mainly to the great quantity of information they provide the robot with, as they usually have a field of view of 360 deg around the robot [25]. They can be composed of several cameras pointing towards different directions [26] or a unique camera and a reflective surface (catadioptric vision systems) [27].

Omnidirectional vision systems have further advantages compared to conventional vision systems. The features that appear in the images are more stable, since they remain longer in the field of view as the robot moves. Also, they provide information that permits estimating the position of the robot independently of its orientation, and they permit estimating this orientation too. In general, omnidirectional vision systems have the ability to capture a more complete description of the environment in only one scene, which permits creating exhaustive models of the environment with a reduced number of views. Finally, even if some objects or persons partially occlude the scene, omnidirectional images contain some environment information from other directions.

These systems are usually based on the combination of a conventional camera and a convex mirror, which can have various shapes. These structures are known as catadioptric vision systems, and the raw information they provide is known as omnidirectional image. However, some transformations can be carried out to obtain other kinds of projections, which may be more useful depending on the task to be developed and the degrees of freedom of the movement of the robot. They include the spherical, cylindrical, or orthographic projections [25, 28]. This issue will be addressed in Section 3.

2.4. Trends in the Creation of Visual Models. Using any of the configurations and projections presented in the previous subsections, a complete model of the environment can be built. Traditionally, three different approaches have been proposed with this aim: metric, topological, or hybrid. First, a metric map usually defines the position of some relevant landmarks, extracted from the scenes, with respect to a coordinate system, and permits estimating the position of the robot with geometric accuracy. Second, a topological model consists generally of a graph where some representative localizations appear as nodes, along with the connectivity relations that permit navigating between consecutive nodes. Compared to metric models, they usually permit a rougher localization with a reasonable computational cost. Finally, hybrid maps arrange the information into several layers with different degrees of detail. They usually combine topological models in the top layers, which permit a rough localization, with metric models in the bottom layers to refine this localization. This way, they try to combine the advantages of metric and topological maps.

The use of visual information to create models of the environment has an important disadvantage: the visual appearance of the environment changes not only when the robot moves but also under other circumstances, such as variations in lighting conditions, which are present in all real applications and may produce substantial changes in the appearance of the scenes. The environment may also undergo some changes after the model has been created, and sometimes the scenes may be partially occluded by the natural presence of people or other robots moving in the environment which is being modelled. Taking these facts into account, independently of the approach used to create the model, it is necessary to extract some relevant information from the scenes. This information must be useful to identify the environment and the position of the robot when it was captured, independently of any other phenomena that may happen. The extraction and description of this information can be carried out using two different approaches: based on local features or based on global appearance. On the one hand, the approaches based on local features try to extract some landmarks, points, or regions from the scenes and, on the other hand, global methods create a unique descriptor per scene that contains information on its global appearance. This problem is analysed more deeply in Section 4.

3. Omnidirectional Vision Sensors

As stated in the previous section, the expansion of the field of view that can be achieved with vision sensors is one of the reasons that explains the extensive use of computer vision in mobile robotics applications. Omnidirectional vision sensors stand out because they present a complete field of view around the camera axis. The objective of this section is twofold. On the one hand, some of the configurations that permit capturing omnidirectional images are presented. On the other hand, the different formats that can be used to express this information and their application in robotic tasks are detailed.

There are several configurations that permit capturing omnidirectional visual information. First, an array of cameras can be used to capture information from a variety of directions. The Ladybug systems [29] constitute an example. They are composed of several vision sensors distributed around the camera and, depending on the model, they can capture a visual field covering more than 90% of a sphere whose centre is situated in the sensor.

Fisheye lenses can also be included within the systems that capture a wide field of view [30, 31]. These lenses can provide a field of view higher than 180 deg. The Gear 360 camera [32] contains two fisheye lenses, each one capturing a field of view of 180 deg both horizontally and vertically. Combining both images, the camera provides images with a complete 360 deg field of view. Some works have shown how several cameras with such lenses can be combined to obtain omnidirectional information. For example, Li et al. [33] present a vision system equipped with two cameras with fisheye lenses whose field of view is a complete sphere around the sensor. However, using different cameras to create a spherical image can be challenging, taking the different lighting conditions of each camera into account. This is why Li developed a new system [34] to avoid this problem.

Figure 1: Projection model of central catadioptric systems. (a) Hyperbolic mirror and perspective projection lens and (b) parabolic mirror and orthographic lens.

Catadioptric vision sensors are another example [35]. They make use of convex mirrors where the scene is projected. These mirrors may present different geometries, such as spherical, hyperbolic, parabolic, elliptic, or conic, and the camera takes the information from the environment through its reflection onto the mirrors. Due to their importance, the next subsection describes them in depth.

3.1. Catadioptric Vision Sensors. Catadioptric vision systems make use of convex reflective surfaces to expand the field of view of the camera. The vision sensors capture the information through these surfaces. The shape, position, and orientation of the mirror with respect to the camera define the geometry of the projection of the world onto the camera.

Nowadays, many kinds of catadioptric vision sensors can be found. One of the first developments was done by Rees in 1970 [36]. More recent examples of catadioptric systems can be found in [37–40]. Most of them permit omnidirectional vision, which means having a vision angle equal to 360 deg around the mirror axis. The lateral angle of view depends essentially on the geometry of the mirror and its relative position with respect to the camera.

In the related literature, several kinds of mirrors can be found as a part of catadioptric vision systems, such as spherical [41], conic [42], parabolic [27], or hyperbolic [43]. Yoshida et al. [44] present a work about the possibility of creating a catadioptric system composed of two mirrors, focusing on the study of how the changes in the geometry of the system are reflected in the final image captured by the camera. Also, Baker and Nayar [45] present a comparative analysis about the use of different mirror geometries in catadioptric vision systems. According to this work, it is not possible to state, in general, that a specific geometry outperforms the others. Each geometry presents characteristic reflection properties that may be advantageous under some specific circumstances. Anyway, parabolic and hyperbolic mirrors present some particular properties that make them specially useful when perspective projections of the omnidirectional image are needed, as the next paragraph details.

There are two main issues when using catadioptric vision systems for mapping and localization: the single effective viewpoint property and calibration. On the one hand, when using catadioptric vision sensors, it is interesting that the system has a unique projection centre (i.e., that it constitutes a central camera). In such systems, all the rays that arrive at the mirror surface converge into a unique point, which is the optical centre. This is advantageous because, thanks to it, it is possible to obtain undistorted perspective images from the scene captured by the catadioptric system [39]. This property appears in [45], referred to as single effective viewpoint. According to both works, there are two ways of building a catadioptric vision system that meets this property: with the combination of a hyperbolic mirror and a camera with perspective projection lens (conventional lens or pin-hole model), as shown in Figure 1(a), or with a system composed of a parabolic mirror and an orthographic projection lens (Figure 1(b)). In both figures, F and F′ are the foci of the camera and the mirror, respectively.
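For illustration purposes only, one common parametrization of the hyperbolic case (the exact coefficients depend on each particular design; this is a sketch, not the formulation of [45]) places the effective viewpoint and the camera pin-hole at the two foci of a hyperboloid of two sheets:

```latex
% One sheet of a two-sheeted hyperboloid used as mirror profile.
% a and b are the hyperboloid parameters; the two foci are separated by 2e.
\[
\frac{\left(z - e\right)^{2}}{a^{2}} - \frac{x^{2} + y^{2}}{b^{2}} = 1,
\qquad e = \sqrt{a^{2} + b^{2}} .
\]
% Placing the mirror focus F' at the origin and the camera pin-hole F at
% (0, 0, 2e), every ray aimed at F' is reflected towards F, which yields
% the single effective viewpoint property discussed above.
```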


Figure 2: Two sample omnidirectional images captured in the same room, from the same position, and using two different catadioptric systems.

In other cases, the rays that arrive at the camera converge into different focal points, depending on their vertical incidence angle (this is the case of noncentral cameras). Ohte et al. studied this problem with spherical mirrors [41]. In the case of objects that are relatively far from the axis of the mirror, the error produced by the existence of different focal points is relatively small. However, when the object is close to the mirror, the errors in the projection angles become significant, and the complete projection model of the camera must be used.

On the other hand, many mapping and localization applications work correctly only if all the parameters of the system are known. With this aim, the catadioptric system needs to undergo a calibration process [46]. The result of this process can be either (a) the intrinsic parameters of the camera, the coefficients of the mirror, and the relative position between them or (b) a list of correspondences between each pixel in the image plane and the ray of light of the world that has projected onto it. Many works have been developed on the calibration of catadioptric vision systems. For example, Goncalves and Araujo [47] propose a method to calibrate both central and noncentral catadioptric cameras using an approach based on bundle adjustment. Ying and Hu [48] present a method that uses geometric invariants extracted from projections of lines or spheres in the world. This method can be used to calibrate central catadioptric cameras. Also, Marcato Jr. et al. [49] develop an approach to calibrate a catadioptric system composed of a wide-angle lens camera and a conic mirror that takes into account the possible misalignment between the camera and mirror axes. Finally, Scaramuzza et al. [50] develop a framework to calibrate central catadioptric cameras using a planar pattern shown at a number of different orientations.

3.2. Projections of the Omnidirectional Information. The raw information captured by the camera in a catadioptric vision system is the omnidirectional image, which contains the information of the environment previously reflected onto the mirror. This image can be considered as a polar representation of the world, whose origin is the projection of the focus of the mirror onto the image plane. Figure 2 shows two sample catadioptric vision systems, composed of a camera and a hyperbolic mirror, and the omnidirectional images captured with each one. The mirror in (a) is the model Wide 70 of the manufacturer Eizoh, and the one in (b) is the model Super-Wide View Large, manufactured by Accowle.

The omnidirectional scene can be used directly to obtain useful information in robot navigation tasks. For example, Scaramuzza et al. [51] present a description method based on the extraction of radial lines from the omnidirectional image to characterize omnidirectional scenes captured in real environments. They show how radial lines in omnidirectional images correspond to vertical lines of the environment, while circumferences whose centre is the origin of coordinates of the image correspond to horizontal lines in the world.

From the original omnidirectional image, it is possible to obtain different scene representations through the projection of the visual information onto different planes and surfaces. To make it possible, in general, it is necessary to calibrate the catadioptric system. Using this information, it is possible to project the visual information onto different planes or surfaces that show different perspectives of the original scene. Each projection presents specific properties that can be useful in different navigation tasks.

The next subsections present some of the most important representations and some of the works that have been developed with each of them in the field of mobile robotics. A complete mathematical description of these projections can be found in [39].

3.2.1. Unit Sphere Projection. An omnidirectional scene can be projected onto a unit sphere whose centre is the focus of the mirror. Every pixel of this sphere takes the value of the ray of light that has the same direction with respect to the focus of the mirror. To obtain this projection, the catadioptric vision system has to be previously calibrated. Figure 3 shows the projection model of the unit sphere projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto the mirror (to do it, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a unit sphere.
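As a minimal sketch of this back-projection (the `pixel_to_ray` helper is hypothetical and stands for the pixel-to-ray mapping provided by the previous calibration, whatever its form), the sphere can be sampled as follows:

```python
import numpy as np

def to_unit_sphere(omni, pixel_to_ray):
    """Back-project an omnidirectional image onto the unit sphere.

    pixel_to_ray(u, v) -> 3D direction is a hypothetical helper obtained
    from the prior calibration of the catadioptric system: it maps each
    pixel to the ray of light of the world that projected onto it.
    """
    h, w = omni.shape[:2]
    points, values = [], []
    for v in range(h):
        for u in range(w):
            ray = np.asarray(pixel_to_ray(u, v), dtype=float)
            points.append(ray / np.linalg.norm(ray))  # point on the sphere
            values.append(omni[v, u])                 # pixel value carried over
    return np.array(points), np.array(values)
```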

This projection has been traditionally useful in those applications that make use of the Spherical Fourier Transform, which permits studying 3D rotations in the space, as Geyer and Daniilidis show [52].

Figure 3: Projection model of the unit sphere projection.

Friedrich et al. [53, 54] present an algorithm for robot localization and navigation using spherical harmonics. They use images captured with a hemispheric camera. Makadia et al. address the estimation of 3D rotations from the Spherical Fourier Transform [55–57], using a catadioptric vision system to capture the images. Finally, Schairer et al. present several works about orientation estimation using this transform, such as [58–60], where they develop methods to improve the accuracy in orientation estimation, implement a particle filter to solve this task, and develop a navigation system that combines odometry and the Spherical Fourier Transform applied to low resolution visual information.

3.2.2. Cylindrical Projection. This representation consists in projecting the omnidirectional information onto a cylinder whose axis is parallel to the mirror axis. Conceptually, it can be obtained by changing the polar coordinate system of the omnidirectional image into a rectangular coordinate system. This way, every circumference in the omnidirectional image is converted into a horizontal line in the panoramic scene. This projection does not require the previous calibration of the catadioptric vision system.

Figure 4 shows the projection model of the cylindrical projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto a cylindrical surface.

The cylindrical projection is commonly known as panoramic image, and it is one of the most used representations in mobile robotics works to solve some problems such as map creation and localization [61–63], SLAM [64], and visual servoing and navigation [65, 66]. This is due to the fact that this representation is more easily understandable by human operators, and it permits using standard image processing algorithms, which are usually designed to be used with perspective images.
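Since no calibration is required, this unwrapping can be implemented as a simple polar-to-rectangular remapping. The sketch below (in Python with OpenCV; the image centre and the radii of the useful mirror annulus are assumptions that depend on each catadioptric system) illustrates the idea:

```python
import cv2
import numpy as np

def unwrap_panoramic(omni, center, r_min, r_max, out_w=720):
    """Convert an omnidirectional image into a cylindrical (panoramic) one.

    center: (cx, cy) projection of the mirror axis onto the image plane;
    r_min, r_max: radii bounding the useful annulus of the mirror.
    Both are assumptions that must be measured for each specific system.
    """
    out_h = r_max - r_min
    cx, cy = center
    # For each pixel of the panoramic image, compute the polar coordinates
    # (angle, radius) of the source pixel in the omnidirectional image.
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_min, r_max, out_h)
    theta_grid, r_grid = np.meshgrid(theta, radius)
    map_x = (cx + r_grid * np.cos(theta_grid)).astype(np.float32)
    map_y = (cy + r_grid * np.sin(theta_grid)).astype(np.float32)
    # Bilinear sampling of the omnidirectional image at those positions.
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)
```

With this convention, each column of the output corresponds to one angle around the mirror axis, so a rotation of the robot on the ground plane becomes a circular horizontal shift of the panorama.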

3.2.3. Perspective Projection. From the omnidirectional image, it is possible to obtain projective images in any direction.

Figure 4: Projection model of the cylindrical projection.

They would be equivalent to the images captured by virtual conventional cameras situated in the focus of the mirror. The catadioptric system has to be previously calibrated to obtain such a projection. Figure 5 shows the projection model of the perspective projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected to the mirror (to do it, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a plane. It is equivalent to capturing an image from the virtual conventional camera placed in the focus F′.

The orthographic projection, also known as bird's eye view, can be considered as a specific case of the perspective projection. In this case, the projection plane is situated perpendicularly to the camera axis. If the world reference system is defined in such a way that the floor is the plane xy and the z-axis is vertical, and the camera axis is parallel to the z-axis, then the orthographic projection is equivalent to having a conventional camera situated in the focus of the mirror, pointing to the floor plane. Figure 6 shows the projection model of the orthographic view. In this case, the pixels are back-projected onto a horizontal plane.

In the literature, some algorithms which make use of the orthographic projection in map building, localization, and navigation applications can be found. For example, Gaspar et al. [25] make use of this projection to extract parallel lines from the floor of corridors and other rooms to perform navigation indoors. Also, Bonev et al. [67] propose a navigation system that combines the information contained in the omnidirectional image, the cylindrical projection, and the orthographic projection. Finally, Roebert et al. [68] show how a model of the environment can be created using perspective images and how this model can be used for localization and navigation purposes.

To conclude this section, Figure 7 shows (a) an omnidirectional image captured by the catadioptric system presented in Figure 2(a) and the visual appearance of the projections calculated from this image: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.


Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images, or any of their projections, both in map building and in localization tasks. Images are highly dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in lighting conditions or in the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes to be able to solve robustly the mapping and localization tasks. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods but, more recently, some global appearance algorithms have been demonstrated to be a robust alternative too.

Many algorithms can be found in the literature, working both with local features and with the global appearance of images. All these algorithms imply many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local features approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is described by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features are extracted and described, the solution to the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Many different philosophies can be found, depending on the type of features which are extracted and the procedure followed to carry out the tracking and matching.


Figure 7: (a) Sample omnidirectional image and some projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

As an example, Pears and Liang [70] extract corners which are located in the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]. They carry out a segmentation process to extract predefined features, such as windows, and a Kalman filtering to carry out the tracking and matching of features.

4.1.2. Local Features Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, and changes in lighting conditions and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired in SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed to be used in real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which is based on BRIEF and tries to improve its invariance against rotation and its resistance to noise, but it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to have the robustness of SURF but with an improved computational cost.
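All these methods follow the same detect-describe-match pattern. As a hedged illustration (ORB is chosen here only because it ships freely with OpenCV, not because it is the choice of the works cited in this section), two scenes could be matched as follows:

```python
import cv2

def match_orb(img_a, img_b, max_matches=50):
    """Detect and match ORB features between two grayscale scenes (sketch).

    SIFT or SURF would follow the same pattern, with a floating-point
    norm (cv2.NORM_L2) instead of the Hamming distance.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only mutual best matches, a cheap alternative to the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]
```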

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show, such as Angeli et al. [82] and Se et al. [83], who make use of SIFT descriptors to solve these problems, and the works of Valgren and Lilienthal [84] and Murillo et al. [85], who employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, it is necessary that the environment be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); also, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements in the model correctly. This way, small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. At last, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with the images as a whole, without extracting any local information. Each image is represented by means of a unique descriptor, which contains information on its global appearance. This kind of methods presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor. Map creation and localization can be achieved just by storing and comparing these descriptors pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases to store the necessary information and carrying out the comparisons between scenes with a relative computational efficiency. Occasionally, these descriptors present invariance neither to changes in the robot orientation or in the lighting conditions nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment. In this step, the robot captures a set of images and describes each of them by means of a unique descriptor and, from the information of these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as test or auto-localization phase.
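In its simplest form, the test phase reduces to a nearest-neighbour search among the descriptors stored during the learning phase. The following minimal sketch assumes that a global descriptor has already been computed per image and that the Euclidean distance is a meaningful similarity measure (both are assumptions; each framework uses its own descriptor and distance):

```python
import numpy as np

def localize(query_desc, map_descs, map_poses):
    """Return the stored pose whose descriptor is closest to the query.

    map_descs: (N, D) array of global descriptors built in the learning phase.
    map_poses: list of the N poses, e.g. (x, y, theta), where they were taken.
    """
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return map_poses[best], dists[best]
```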

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of methods is the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneer works on this approach were developed by Matsumoto et al. [90], and they make a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors started to build some descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower dimension space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane and, on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to the previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase of the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
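A minimal batch-PCA sketch (flattened grayscale images as rows and a plain SVD; this is only an illustration of the idea, and real systems would rely on the rotation-invariant or incremental variants discussed above) could be:

```python
import numpy as np

def pca_descriptors(images, n_components=20):
    """Project flattened images onto their first principal components.

    images: (N, H*W) array, one row per training scene. Returns the mean
    image, the projection basis, and the low-dimensional descriptors.
    All images must be available beforehand (the batch limitation).
    """
    mean = images.mean(axis=0)
    X = images - mean
    # The right singular vectors of the centred data are the principal axes.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:n_components]
    return mean, basis, X @ basis.T

# A new scene is described by projecting it onto the same basis:
#   desc = (new_image.ravel() - mean) @ basis.T
```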

Other authors have proposed the implementation of descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, both the two-dimensional DFT and the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all the cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. At last, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of descriptors. Taking these features into account, at this moment, DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
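A possible sketch of the Fourier Signature, and of the orientation estimation it permits, is the following (panoramas are assumed to be 2D arrays whose columns span 360 deg; the number of coefficients kept is an arbitrary assumption):

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise DFT of a panoramic image, keeping the first k coefficients.

    A rotation of the robot in the ground plane shifts the panorama
    horizontally, which only changes the phase of the coefficients, so
    the magnitude matrix is a rotation-invariant position descriptor,
    while the phases retain the orientation information.
    """
    coeffs = np.fft.fft(panorama.astype(float), axis=1)[:, :k]
    return np.abs(coeffs), np.angle(coeffs)

def estimate_rotation(panorama_a, panorama_b):
    """Estimate the relative orientation between two panoramas by phase
    correlation: the peak of the circular cross-correlation of the rows
    gives the column shift between both scenes."""
    fa = np.fft.fft(panorama_a.astype(float), axis=1)
    fb = np.fft.fft(panorama_b.astype(float), axis=1)
    corr = np.fft.ifft(fa * np.conj(fb), axis=1).real.sum(axis=0)
    shift = int(np.argmax(corr))                    # columns of displacement
    return 2 * np.pi * shift / panorama_a.shape[1]  # radians
```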

Other authors have described the scenes globally through approaches based on gradient, either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose to use a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to that of Fourier Transform methods [104].
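A gist-like orientation histogram can be sketched in a few lines (this is a generic illustration of the family of descriptors, not the exact formulation of [101–103]):

```python
import numpy as np

def orientation_histogram(gray, bins=36):
    """Global descriptor: histogram of gradient orientations.

    Each pixel votes for its gradient orientation, weighted by gradient
    magnitude, summarising the dominant structures of the scene.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                        # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)               # normalised for comparison
```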

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information. They propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by means of considering sets of images captured under different lighting conditions. This way, the model would contain information on the changes that appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the first one, which is the component more prone to change when the lighting conditions do.
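As an illustration of the second approach, a basic homomorphic filter for grayscale images could be sketched as follows (the cutoff and gain values are arbitrary assumptions that would need tuning):

```python
import numpy as np

def homomorphic_filter(gray, cutoff=0.1, gain_low=0.5, gain_high=1.5):
    """Attenuate illumination (low frequencies) and keep reflectance.

    The logarithm turns the product luminance * reflectance into a sum;
    a high-frequency emphasis filter in the Fourier domain then damps
    the slowly varying luminance, which changes with the lighting.
    """
    log_img = np.log1p(gray.astype(float))
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    h, w = gray.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    d2 = (x / w) ** 2 + (y / h) ** 2
    # Gaussian high-frequency emphasis: gain_low at DC, gain_high far away.
    filt = gain_low + (gain_high - gain_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    out = np.fft.ifft2(np.fft.ifftshift(spec * filt)).real
    return np.expm1(out)
```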

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined and, then, some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer Assisted Design) models, these approaches gave way to simpler models that represent the environment through occupation grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first, the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to autonomously explore the environment to map and to build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through visual memories previously recorded that contain sequences of images and associated actions, but no relation between images. This kind of navigation is associated basically with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Quite usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating metrically the position of the robot up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of the information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons, which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated in the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks as beacons situated on specific and previously known positions to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the competition RoboCup Middle Size League. Taking this competition into account, the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

The maps composed of local visual features have been used extensively along the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to be used with omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment. They are obtained based on the corner points of an extracted object and using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps in large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process, consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is used subsequently to solve the localization problem.
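A minimal sketch of this vocabulary construction (using scikit-learn's KMeans and assuming real-valued local descriptors, such as SIFT; the vocabulary size is an arbitrary assumption) could be:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=100):
    """Cluster the local descriptors of all map images into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(all_descriptors)

def bow_histogram(image_descriptors, vocab):
    """Represent one image by the normalised histogram of its visual words;
    localization then compares this histogram against those of the map."""
    words = vocab.predict(image_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()
```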

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, using some fundamentals different to those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment. It consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in the case of indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node. A global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting schema.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image, representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that, when two nodes are connected, the environment represented in one node is close to the environment represented by the other node. In this way, a process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, and a comparative evaluation of the performance of some global appearance descriptors is included. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is accomplished to estimate the most probable pose of the vehicle.
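The observation update of such a Monte Carlo localization scheme can be sketched as follows (a generic illustration, not the exact algorithm of [120]; `descriptor_at` is a hypothetical helper that predicts the global descriptor observable from a candidate pose, e.g., by interpolating the map):

```python
import numpy as np

def mcl_update(particles, weights, query_desc, descriptor_at, sigma=0.1):
    """One observation update of Monte Carlo localization (sketch).

    particles: (N, 3) array of candidate poses (x, y, theta);
    weights: (N,) float array of particle weights.
    """
    for i, pose in enumerate(particles):
        error = np.linalg.norm(descriptor_at(pose) - query_desc)
        # Particles whose predicted appearance matches the observation
        # keep a high weight; a Gaussian likelihood is assumed here.
        weights[i] *= np.exp(-0.5 * (error / sigma) ** 2)
    weights /= weights.sum()
    # Resample when the effective number of particles drops too low.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```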

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do it, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches suppose that the robot trajectory is contained on the ground plane (i.e., they contain information of 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process to build a map and continuously update it, while the robot simultaneously estimates its position with respect to the model, is one of the most complex tasks in mobile robotics. Along the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to know the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field of view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves. Thanks to it, the estimation of the pose of the robot and the features can be carried out more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors have employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable specially for indoor environments (since outdoor conditions have a negative influence on the visual data).

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to be used with omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map which is composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted using omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made. In these proposals, the features provided by the omnidirectional vision system are used. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, they ease the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So, a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topology. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking if a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, taking into account whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them with localization purposes. Also, they employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision based method for robot homing, using panoramic images too. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization, which goes beyond, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the introduction of nonlinear image distortion in such images, the difference of appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference view, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes. A rough localization is achieved carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method tries to build a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process. The authors indicate that the method can work with any low dimensional features descriptor.


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.1 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

The authors indicate that the method can work with any low-dimensional feature descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is then performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.
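
The coarse-to-fine idea behind this pyramidal matching can be sketched as follows; the global statistic, the per-line histograms, and the threshold below are illustrative placeholders rather than the actual descriptors of [137]:

```python
import numpy as np

def global_descriptor(img):
    # Step 1 descriptor: a very coarse whole-image statistic (placeholder).
    return np.array([img.mean(), img.std()])

def line_histograms(img, bins=8):
    # Step 2 descriptors: one intensity histogram per image row, standing
    # in for the per-line descriptors of the original method.
    return np.stack([np.histogram(row, bins=bins, range=(0, 1))[0]
                     for row in img])

def coarse_to_fine_match(query, database, t1=0.2):
    # Step 1: discard database images whose global descriptor differs
    # from the query by more than a threshold.
    q_g = global_descriptor(query)
    survivors = [i for i, img in enumerate(database)
                 if np.linalg.norm(global_descriptor(img) - q_g) < t1]
    # Step 2: rank the survivors by similarity of their line histograms.
    q_h = line_histograms(query)
    scores = [(i, -np.abs(line_histograms(database[i]) - q_h).sum())
              for i in survivors]
    # Step 3 (geometric verification between matched lines) would refine
    # this ranking; it is omitted in this sketch.
    return sorted(scores, key=lambda s: s[1], reverse=True)
```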

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.
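
Several of the global-appearance methods above ([98, 100, 136]) rely on row-wise transforms of panoramic images. As a minimal sketch of the Fourier Signature idea (the number of retained coefficients and the comparison metric are illustrative choices, not those of the cited works):

```python
import numpy as np

def fourier_signature(panoramic: np.ndarray, k: int = 16) -> np.ndarray:
    """Row-wise DFT of a panoramic image, keeping the magnitudes of the
    first k coefficients per row. A rotation of the robot around the
    camera axis circularly shifts the panorama columns, which by the
    shift theorem only changes the phase of the coefficients, so this
    magnitude matrix is invariant to the robot's orientation."""
    rows_fft = np.fft.fft(panoramic.astype(np.float64), axis=1)
    return np.abs(rows_fft[:, :k])

# Image-based localization then reduces to a pairwise comparison, e.g.:
# best = min(stored, key=lambda s: np.linalg.norm(fourier_signature(q) - s))
```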

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the big quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them specially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can be found currently to solve the mapping, localization,


and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, specially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is specially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods, based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitively help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in the space under realistic working conditions, are worth being addressed to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P "Navegación de Robots en Entornos Dinámicos Mediante Mapas Compactos con Información Visual de Apariencia Global" and DPI 2016-78361-R "Creación de Mapas Mediante Métodos de Apariencia Visual para la Navegación de Robots", and by the Generalitat Valenciana (GVa) through the project GV2015031 "Creación de Mapas Topológicos a Partir de la Apariencia Global de un Conjunto de Escenas".

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.
[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.
[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.
[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.
[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.
[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.
[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence—Volume 3, pp. 1330–1335, 2005.
[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.
[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.
[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.
[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.
[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.
[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.
[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.
[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.
[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.
[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.
[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.
[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.
[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.
[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.
[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.
[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.
[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.
[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.
[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.
[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.
[29] Point Grey Research, Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.
[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.
[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.
[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.
[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.
[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.
[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.
[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.
[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.
[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.
[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.
[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.
[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.
[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.
[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.
[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.
[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.
[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.
[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.


[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.
[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.
[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.
[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.
[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision—ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.
[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.
[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.
[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.
[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.
[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.
[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.
[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.
[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.
[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.
[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.
[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.
[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.
[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.
[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.
[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.
[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Enschede, The Netherlands, October 2008.
[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.
[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.
[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.
[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.
[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.
[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.
[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision—ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.
[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.
[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.
[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.
[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.
[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.
[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.
[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.
[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.
[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.
[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.
[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.
[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.
[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.
[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, A Wiley-Interscience Publication, John Wiley & Sons, New York, NY, USA, 2001.
[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.
[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.
[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.
[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.
[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.
[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.
[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling, and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.
[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.
[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.
[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.
[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.
[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.
[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.
[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.
[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, pp. 43–48, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.
[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.
[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.
[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.
[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.
[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.
[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.
[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.
[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.
[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.
[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.
[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.
[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.
[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.
[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.
[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.
[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.
[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008—Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.
[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.
[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.
[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.
[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.
[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.
[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.
[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.
[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.
[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.
[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.
[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.
[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.
[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress—Science and Technology Publications, January 2014.



monocular configurations [16–18], binocular systems (stereo cameras) [19–21], and even trinocular configurations [22, 23]. Binocular and trinocular configurations permit measuring depth from the images. However, the limited field of view of these systems makes it necessary to employ several images of the environment in order to acquire complete information from it, which is necessary to create a complete model of an unknown environment. In contrast, more recently, the systems that provide omnidirectional visual information [24] have gained popularity, thanks mainly to the great quantity of information they provide the robot with, as they usually have a field of view of 360 deg around the robot [25]. They can be composed of several cameras pointing towards different directions [26] or a unique camera and a reflective surface (catadioptric vision systems) [27].

Omnidirectional vision systems have further advantages compared to conventional vision systems. The features that appear in the images are more stable, since they remain longer in the field of view as the robot moves. Also, they provide information that permits estimating the position of the robot independently of its orientation, and they permit estimating this orientation too. In general, omnidirectional vision systems have the ability to capture a more complete description of the environment in only one scene, which permits creating exhaustive models of the environment with a reduced number of views. Finally, even if some objects or persons partially occlude the scene, omnidirectional images contain some environment information from other directions.

These systems are usually based on the combination of a conventional camera and a convex mirror, which can have various shapes. These structures are known as catadioptric vision systems, and the raw information they provide is known as an omnidirectional image. However, some transformations can be carried out to obtain other kinds of projections, which may be more useful depending on the task to develop and the degrees of freedom of the movement of the robot. They include the spherical, cylindrical, or orthographic projections [25, 28]. This issue will be addressed in Section 3.

2.4. Trends in the Creation of Visual Models. Using any of the configurations and projections presented in the previous subsections, a complete model of the environment can be built. Traditionally, three different approaches have been proposed with this aim: metric, topological, or hybrid. First, a metric map usually defines the position of some relevant landmarks extracted from the scenes with respect to a coordinate system and permits estimating the position of the robot with geometric accuracy. Second, a topological model consists generally of a graph where some representative localizations appear as nodes, along with the connectivity relations that permit navigating between consecutive nodes. Compared to metric models, they usually permit a rougher localization with a reasonable computational cost. Finally, hybrid maps arrange the information into several layers with different degrees of detail. They usually combine topological models in the top layers, which permit a rough localization, with metric models in the bottom layers, to refine this localization. This way, they try to combine the advantages of metric and topological maps.
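
As an illustration of the topological paradigm, a minimal sketch follows; it stores one global-appearance descriptor per node and solves a rough localization by nearest-descriptor search (the structure and names are assumptions for illustration, not taken from any cited framework):

```python
import numpy as np

class TopologicalMap:
    """Graph-based model: nodes hold a representative image descriptor,
    edges express navigability between neighbouring localizations."""
    def __init__(self):
        self.descriptors = []   # one global-appearance vector per node
        self.edges = {}         # node index -> set of connected nodes

    def add_node(self, descriptor, connected_to=()):
        idx = len(self.descriptors)
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.edges[idx] = set(connected_to)
        for n in connected_to:
            self.edges[n].add(idx)   # keep the graph undirected
        return idx

    def localize(self, query_descriptor):
        """Rough localization: return the node whose stored descriptor
        is closest to the descriptor of the current image."""
        q = np.asarray(query_descriptor, dtype=float)
        dists = [np.linalg.norm(d - q) for d in self.descriptors]
        return int(np.argmin(dists))
```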

The use of visual information to create models of the environment has an important disadvantage: the visual appearance of the environment changes not only when the robot moves but also under other circumstances, such as variations in lighting conditions, which are present in all real applications and may produce substantial changes in the appearance of the scenes. The environment may also undergo some changes after the model has been created, and sometimes the scenes may be partially occluded by the natural presence of people or other robots moving in the environment which is being modelled. Taking these facts into account, independently of the approach used to create the model, it is necessary to extract some relevant information from the scenes. This information must be useful to identify the environment and the position of the robot when it was captured, independently of any other phenomena that may happen. The extraction and description of this information can be carried out using two different approaches: based on local features or based on global appearance. On the one hand, the approaches based on local features try to extract some landmarks, points, or regions from the scenes; on the other, global methods create a unique descriptor per scene that contains information on its global appearance. This problem is analysed more deeply in Section 4.

3. Omnidirectional Vision Sensors

As stated in the previous section, the expansion of the field of view that can be achieved with vision sensors is one of the reasons that explains the extensive use of computer vision in mobile robotics applications. Omnidirectional vision sensors stand out because they present a complete field of view around the camera axis. The objective of this section is twofold. On the one hand, some of the configurations that permit capturing omnidirectional images are presented. On the other hand, the different formats that can be used to express this information and their application in robotic tasks are detailed.

There are several configurations that permit capturing omnidirectional visual information. First, an array of cameras can be used to capture information from a variety of directions. The Ladybug systems [29] constitute an example. They are composed of several vision sensors distributed around the camera and, depending on the model, they can capture a visual field covering more than 90% of a sphere whose centre is situated in the sensor.

Fisheye lenses can also be included within the systems that capture a wide field of view [30, 31]. These lenses can provide a field of view higher than 180 deg. The Gear 360 camera [32] contains two fisheye lenses, each one capturing a field of view of 180 deg both horizontally and vertically. Combining both images, the camera provides images with a complete 360 deg field of view. Some works have shown how several cameras with such lenses can be combined to obtain omnidirectional information. For example, Li et al. [33] present a vision system equipped with two cameras with fisheye lenses whose field of view is a complete sphere around the sensor. However, using different cameras to create a spherical image can be challenging, taking the different lighting conditions of each camera into account.



Figure 1: Projection model of central catadioptric systems: (a) hyperbolic mirror and perspective projection lens and (b) parabolic mirror and orthographic lens.

This is why Li developed a new system [34] to avoid this problem.

Catadioptric vision sensors are another example [35]. They make use of convex mirrors where the scene is projected. These mirrors may present different geometries, such as spherical, hyperbolic, parabolic, elliptic, or conic, and the camera takes the information from the environment through its reflection onto the mirrors. Due to their importance, the next subsection describes them in depth.

3.1. Catadioptric Vision Sensors. Catadioptric vision systems make use of convex reflective surfaces to expand the field of view of the camera. The vision sensors capture the information through these surfaces. The shape, position, and orientation of the mirror with respect to the camera define the geometry of the projection of the world onto the camera.

Nowadays, many kinds of catadioptric vision sensors can be found. One of the first developments was done by Rees in 1970 [36]. More recent examples of catadioptric systems can be found in [37–40]. Most of them permit omnidirectional vision, which means having a vision angle equal to 360 deg around the mirror axis. The lateral angle of view depends essentially on the geometry of the mirror and its relative position with respect to the camera.

In the related literature, several kinds of mirrors can be found as a part of catadioptric vision systems, such as spherical [41], conic [42], parabolic [27], or hyperbolic [43]. Yoshida et al. [44] present a work about the possibility of creating a catadioptric system composed of two mirrors, focusing on the study of how changes in the geometry of the system are reflected in the final image captured by the camera. Also, Baker and Nayar [45] present a comparative analysis about the use of different mirror geometries in catadioptric vision systems. According to this work, it is not possible to state in general that a specific geometry outperforms the others. Each geometry presents characteristic reflection properties that may be advantageous under some specific circumstances. Anyway, parabolic and hyperbolic mirrors present some particular properties that make them specially useful when perspective projections of the omnidirectional image are needed, as the next paragraph details.

There are two main issues when using catadioptric vision systems for mapping and localization: the single effective viewpoint property and calibration. On the one hand, when using catadioptric vision sensors, it is interesting that the system has a unique projection centre (i.e., that it constitutes a central camera). In such systems, all the rays that arrive at the mirror surface converge into a unique point, which is the optical centre. This is advantageous because, thanks to it, it is possible to obtain undistorted perspective images from the scene captured by the catadioptric system [39]. This property appears in [45] referred to as single effective viewpoint. According to both works, there are two ways of building a catadioptric vision system that meets this property: with the combination of a hyperbolic mirror and a camera with a perspective projection lens (conventional lens or pin-hole model), as shown in Figure 1(a), or with a system composed of a parabolic mirror and an orthographic projection lens (Figure 1(b)). In both figures, F and F′ are the foci of the camera and the mirror, respectively.



Figure 2: Two sample omnidirectional images captured in the same room, from the same position, and using two different catadioptric systems.

In other cases, the rays that arrive at the camera converge into different focal points, depending on their vertical incidence angle (this is the case of noncentral cameras). Ohte et al. studied this problem with spherical mirrors [41]. In the case of objects that are relatively far from the axis of the mirror, the error produced by the existence of different focal points is relatively small. However, when the object is close to the mirror, the errors in the projection angles become significant, and the complete projection model of the camera must be used.

On the other hand, many mapping and localization applications work correctly only if all the parameters of the system are known. With this aim, the catadioptric system needs to go through a calibration process [46]. The result of this process can be either (a) the intrinsic parameters of the camera, the coefficients of the mirror, and the relative position between them or (b) a list of correspondences between each pixel in the image plane and the ray of light of the world that is projected onto it. Many works have been developed on calibration of catadioptric vision systems. For example, Goncalves and Araujo [47] propose a method to calibrate both central and noncentral catadioptric cameras using an approach based on bundle adjustment. Ying and Hu [48] present a method that uses geometric invariants extracted from projections of lines or spheres in the world; this method can be used to calibrate central catadioptric cameras. Also, Marcato Jr. et al. [49] develop an approach to calibrate a catadioptric system composed of a wide-angle lens camera and a conic mirror that takes into account the possible misalignment between the camera and mirror axes. Finally, Scaramuzza et al. [50] develop a framework to calibrate central catadioptric cameras using a planar pattern shown at a number of different orientations.

3.2. Projections of the Omnidirectional Information. The raw information captured by the camera in a catadioptric vision system is the omnidirectional image, which contains the information of the environment previously reflected onto the mirror. This image can be considered as a polar representation of the world, whose origin is the projection of the focus of the mirror onto the image plane. Figure 2 shows two sample catadioptric vision systems, composed of a camera and a hyperbolic mirror, and the omnidirectional images captured with each one. The mirror in (a) is the model Wide 70 of the manufacturer Eizoh, and (b) is the model Super-Wide View Large, manufactured by Accowle.

The omnidirectional scene can be used directly to obtain useful information in robot navigation tasks. For example, Scaramuzza et al. [51] present a description method based on the extraction of radial lines from the omnidirectional image to characterize omnidirectional scenes captured in real environments. They show how radial lines in omnidirectional images correspond to vertical lines of the environment, while circumferences whose centre is the origin of coordinates of the image correspond to horizontal lines in the world.

From the original omnidirectional image, it is possible to obtain different scene representations through the projection of the visual information onto different planes and surfaces. To make it possible, in general, it is necessary to calibrate the catadioptric system. Using this information, it is possible to project the visual information onto different planes or surfaces that show different perspectives of the original scene. Each projection presents specific properties that can be useful in different navigation tasks.

The next subsections present some of the most important representations and some of the works that have been developed with each of them in the field of mobile robotics. A complete mathematical description of these projections can be found in [39].

3.2.1. Unit Sphere Projection. An omnidirectional scene can be projected onto a unit sphere whose centre is the focus of the mirror. Every pixel of this sphere takes the value of the ray of light that has the same direction with respect to the focus of the mirror. To obtain this projection, the catadioptric vision system has to be previously calibrated. Figure 3 shows the projection model of the unit sphere projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto the mirror (to do it, the calibration of the catadioptric system must be available), and after that, each point of the mirror is projected onto a unit sphere.
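
Under the unified model for central catadioptric systems introduced by Geyer and Daniilidis [52], this back-projection admits a closed form. The sketch below assumes an already calibrated system; the intrinsic matrix K and the mirror parameter xi are illustrative placeholders, not values from a real sensor:

```python
import numpy as np

# Illustrative calibration values (placeholders, not from a real sensor)
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])     # camera intrinsic matrix
xi = 0.9                            # mirror parameter of the unified model

def pixel_to_sphere(u, v):
    """Back-project pixel (u, v) onto the unit sphere centred at the
    single effective viewpoint (unified central catadioptric model).
    Sign conventions depend on the chosen camera frame."""
    x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    r2 = x * x + y * y
    # Scale factor chosen so that the back-projected point has unit norm
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

print(np.linalg.norm(pixel_to_sphere(100, 150)))  # -> 1.0 (unit sphere)
```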

This projection has been traditionally useful in those applications that make use of the Spherical Fourier Transform, which permits studying 3D rotations in the space, as Geyer and Daniilidis show [52].



Figure 3: Projection model of the unit sphere projection.

Friedrich et al. [53, 54] present an algorithm for robot localization and navigation using spherical harmonics; they use images captured with a hemispheric camera. Makadia et al. address the estimation of 3D rotations from the Spherical Fourier Transform [55–57], using a catadioptric vision system to capture the images. Finally, Schairer et al. present several works about orientation estimation using this transform, such as [58–60], where they develop methods to improve the accuracy in orientation estimation, implement a particle filter to solve this task, and develop a navigation system that combines odometry and the Spherical Fourier Transform applied to low resolution visual information.

3.2.2. Cylindrical Projection. This representation consists in projecting the omnidirectional information onto a cylinder whose axis is parallel to the mirror axis. Conceptually, it can be obtained by changing the polar coordinate system of the omnidirectional image into a rectangular coordinate system. This way, every circumference in the omnidirectional image is converted into a horizontal line in the panoramic scene. This projection does not require the previous calibration of the catadioptric vision system.

Figure 4 shows the projection model of the cylindrical projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto a cylindrical surface.
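A minimal nearest-neighbour implementation of this polar-to-rectangular unwrapping is sketched below; it assumes the image centre and the radial limits of the mirror's projection are known (in practice they can be located by detecting the mirror border), and all names are illustrative:

```python
import numpy as np

def unwrap_panoramic(omni, center, r_min, r_max, width=720):
    """Convert an omnidirectional (polar) grayscale image into a panoramic
    image by resampling along radial lines (nearest-neighbour sampling).
    Each column of the output corresponds to one azimuth angle; the row
    order is chosen so that increasing radius maps upwards."""
    cx, cy = center
    height = r_max - r_min
    pano = np.zeros((height, width), dtype=omni.dtype)
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    for j, th in enumerate(thetas):
        for i, r in enumerate(range(r_min, r_max)):
            x = int(round(cx + r * np.cos(th)))
            y = int(round(cy + r * np.sin(th)))
            if 0 <= y < omni.shape[0] and 0 <= x < omni.shape[1]:
                pano[height - 1 - i, j] = omni[y, x]
    return pano

# Usage (illustrative values): pano = unwrap_panoramic(img, (320, 240), 40, 200)
```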

The cylindrical projection is commonly known as a panoramic image, and it is one of the most used representations in mobile robotics works to solve problems such as map creation and localization [61–63], SLAM [64], and visual servoing and navigation [65, 66]. This is due to the fact that this representation is more easily understandable by human operators, and it permits using standard image processing algorithms, which are usually designed to be used with perspective images.

Figure 4: Projection model of the cylindrical projection.

3.2.3. Perspective Projection. From the omnidirectional image, it is possible to obtain projective images in any direction. They are equivalent to the images captured by virtual conventional cameras situated at the focus of the mirror. The catadioptric system has to be previously calibrated to obtain such a projection. Figure 5 shows the projection model of the perspective projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue; F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto the mirror (for which the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a plane. This is equivalent to capturing an image with a virtual conventional camera placed at the focus F′.
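Continuing the unit-sphere sketch from Section 3.2.1, such a virtual view can be simulated by rotating the back-projected rays and projecting them through a virtual pinhole camera placed at the mirror focus; R and K_virt are assumed parameters of this hypothetical virtual camera:

```python
import numpy as np

def virtual_perspective(sphere_pts, R, K_virt):
    """Project unit-sphere rays through a virtual pinhole camera at F'.

    Choosing R so that the optical axis points at the floor yields the
    orthographic (bird's-eye) view described below.
    """
    rays = (R @ sphere_pts.T).T
    rays = rays[rays[:, 2] > 0]                 # keep rays in front of camera
    pix = (K_virt @ (rays / rays[:, 2:3]).T).T  # pinhole projection
    return pix[:, :2]
```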

The orthographic projection, also known as bird's eye view, can be considered a specific case of the perspective projection. In this case, the projection plane is situated perpendicularly to the camera axis. If the world reference system is defined in such a way that the floor is the xy plane, the z-axis is vertical, and the camera axis is parallel to the z-axis, then the orthographic projection is equivalent to having a conventional camera situated at the focus of the mirror, pointing to the floor plane. Figure 6 shows the projection model of the orthographic view. In this case, the pixels are back-projected onto a horizontal plane.

In the literature, some algorithms that make use of the orthographic projection in map building, localization, and navigation applications can be found. For example, Gaspar et al. [25] make use of this projection to extract parallel lines from the floor of corridors and other rooms to perform indoor navigation. Also, Bonev et al. [67] propose a navigation system that combines the information contained in the omnidirectional image, the cylindrical projection, and the orthographic projection. Finally, Roebert et al. [68] show how a model of the environment can be created using perspective images and how this model can be used for localization and navigation purposes.

To conclude this section, Figure 7 shows (a) an omnidirectional image captured by the catadioptric system presented in Figure 2(a), together with the visual appearance of the projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images, or any of their projections, both in map building and in localization tasks. Images are high-dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in lighting conditions or in the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes to solve the mapping and localization tasks robustly. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods, but more recently some global appearance algorithms have also been demonstrated to be a robust alternative.

Many algorithms can be found in the literature working both with local features and with the global appearance of images. All these algorithms involve many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local feature approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is characterised by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features have been extracted and described, the solution to the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Figure 7: (a) Sample omnidirectional image and some projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Many different philosophies can be found, depending on the type of features extracted and the procedure followed to carry out the tracking and matching. As an example, Pears and Liang [70] extract corners located on the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features with the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]: they carry out a segmentation process to extract predefined features, such as windows, and Kalman filtering to carry out the tracking and matching of the features.

4.1.2. Local Feature Descriptors. Among the methods for feature extraction and description, SIFT and SURF stand out. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, and changes in lighting conditions and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed for real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which builds on BRIEF and tries to improve its invariance against rotation and its resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to reach the robustness of SURF with an improved computational cost.
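As a brief illustration of how such local features are extracted and matched in practice, the following sketch uses the ORB implementation available in OpenCV together with the usual ratio test; the feature count and the ratio threshold are illustrative assumptions:

```python
import cv2

def match_local_features(img1, img2, n_features=1000, ratio=0.75):
    """Detect ORB features in two scenes and keep unambiguous matches."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Ratio test: reject matches whose two best candidates are too similar
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```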

These descriptors have become popular in mapping and localization tasks with mobile robots, as many researchers show: Angeli et al. [82] and Se et al. [83] make use of SIFT descriptors to solve these problems, and the works of Valgren and Lilienthal [84] and Murillo et al. [85] employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, the environment needs to be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); furthermore, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial to incorporate new measurements into the model correctly: small deviations in either the intrinsic or the extrinsic parameters add error to the measurements. Finally, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with each image as a whole, without extracting any local information. Each image is represented by means of a single descriptor which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor: map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with relative computational efficiency. Occasionally, these descriptors are invariant neither to changes in the robot orientation, nor to changes in the lighting conditions, nor to other changes in the environment (position of objects, doors, etc.). They also experience some problems in environments where visual aliasing is present, a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment: the robot captures a set of images, describes each of them by means of a unique descriptor and, from the information of these descriptors, creates a map in which some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or auto-localization phase.
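In its simplest form, the auto-localization phase reduces to a nearest-neighbour search in descriptor space, as in the following sketch; the descriptor and pose arrays are assumed to have been built during the learning phase:

```python
import numpy as np

def localize(query_desc, map_descs, map_poses):
    """Return the stored pose whose descriptor is closest to the query.

    map_descs: (N, d) array of global-appearance descriptors.
    map_poses: (N, 3) array of poses (x, y, theta) of the map images.
    """
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return map_poses[best], dists[best]
```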

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method lies in the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90], who make a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors later built descriptors that stored global information from the scenes using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower-dimensional space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane; on the other hand, all the images had to be available in advance to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to a previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase in computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
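A minimal batch-PCA sketch of this idea follows: the training images are flattened into vectors, the principal directions are obtained from the centred data, and each scene is then described by its low-dimensional projection (which also illustrates why all the images must be available beforehand); function names and the number of components are illustrative:

```python
import numpy as np

def build_pca_model(images, n_components=20):
    """Compute the mean image and the main appearance directions."""
    X = np.stack([im.ravel().astype(np.float64) for im in images])
    mean = X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_descriptor(image, mean, basis):
    """Project one scene onto the PCA basis (its global descriptor)."""
    return basis @ (image.ravel().astype(np.float64) - mean)
```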

Other authors have proposed descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, either the two-dimensional DFT or the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. Finally, the computational cost of describing each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, DFT currently outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
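The Fourier Signature admits a particularly compact sketch: each row of the panoramic image is expanded with the 1D DFT and truncated to its first k coefficients. Since a rotation of the robot in the ground plane shifts the panorama columns, it only changes the phase of the coefficients; the magnitudes therefore form a rotation-invariant position descriptor, while the phases retain the orientation information (k is an illustrative choice):

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise truncated DFT of a panoramic image.

    Returns the magnitude matrix (rotation-invariant position descriptor)
    and the phase matrix (usable to estimate the relative orientation).
    """
    F = np.fft.fft(panorama.astype(np.float64), axis=1)[:, :k]
    return np.abs(F), np.angle(F)
```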

Other authors have described the scenes globally through approaches based on the gradient, either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientations to describe each scene, with the goal of creating a map of the environment and carrying out the localization process; however, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose the use of a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].
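A gradient-based global descriptor can be sketched in a few lines as a magnitude-weighted histogram of gradient orientations; the number of bins is an illustrative assumption:

```python
import numpy as np

def orientation_histogram(image, bins=36):
    """Global histogram of gradient orientations, weighted by magnitude."""
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)             # normalise to unit sum
```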

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information: they propose building histograms that contain information on color gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions; this way, the model contains information on the changes that the appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the former, which is the one that is more prone to change when the lighting conditions do.
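The homomorphic filter mentioned above can be sketched as follows: the logarithm of the image turns the luminance-reflectance product into a sum, a high-emphasis Gaussian filter attenuates the low-frequency luminance term, and the exponential recovers the normalised image. The cutoff and gain values below are illustrative assumptions, not taken from the reviewed works:

```python
import numpy as np

def homomorphic_filter(image, cutoff=0.05, gain_low=0.5, gain_high=1.5):
    """Attenuate the (low-frequency) luminance component of an image."""
    log_img = np.log1p(image.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = image.shape
    u = (np.arange(rows)[:, None] - rows / 2) / rows
    v = (np.arange(cols)[None, :] - cols / 2) / cols
    d2 = u ** 2 + v ** 2
    # Gaussian high-emphasis filter: gain_low at DC, gain_high far from it
    H = gain_low + (gain_high - gain_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    out = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.expm1(out)
```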

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined, and then some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer-Aided Design) models, these approaches gave way to simpler models that represent the environment through occupancy grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, by comparing this information with the map, estimates its current pose (position and orientation).

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment to map autonomously and to build or update a model while it is moving. SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through previously recorded visual memories that contain sequences of images and associated actions but no relation between the images. This kind of navigation is basically associated with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Quite usually, solving the localization problem requires a previously built model of the environment, so that the robot can estimate its pose by comparing its current sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to retain the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived with an omnidirectional vision system. These beacons consist of four red landmarks situated at the vertices of the environment where the vehicles may move. The omnidirectional vision system makes use of these four landmarks, situated on specific and previously known positions, to estimate its pose. On other occasions, a more complete description of the environment is available beforehand, as in the work of Lu et al. [113], who use the model provided by the RoboCup Middle Size League competition. In this setting, the objective is to carry out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm to perform accurate and efficient localization tracking.

Maps composed of local visual features have been used extensively over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose a map-building algorithm based on an object extraction method that uses Lucas-Kanade optical flow motion detection on the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment; they are obtained from the corner points of an extracted object using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features which are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account the feature matchings between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps of large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color-enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images and are used to carry out k-means clustering, obtaining the k cluster centres. Second, these centres constitute the visual vocabulary, which is subsequently used to solve the localization problem.
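A compact sketch of this two-step vocabulary construction is given below, using the k-means implementation of scikit-learn (an assumed dependency); each image is finally represented by a normalised histogram of visual-word occurrences:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_bag_of_features(descriptor_sets, k=64):
    """Cluster all local descriptors into k visual words and build
    one word-occurrence histogram per image."""
    kmeans = KMeans(n_clusters=k, n_init=10)
    kmeans.fit(np.vstack(descriptor_sets))          # step 1: vocabulary
    histograms = []
    for des in descriptor_sets:                     # step 2: image signatures
        words = kmeans.predict(des)
        h = np.bincount(words, minlength=k).astype(np.float64)
        histograms.append(h / h.sum())
    return kmeans, np.stack(histograms)
```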

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, with fundamentals different from those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment: an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node; a global threshold is considered to evaluate the similitude between descriptors. Other authors have also studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting scheme.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, the links in the graph are neighbourhood relations between nodes, so that when two nodes are connected, the environment represented in one node is close to the environment represented by the other. A process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, including a comparative evaluation of the performance of some global appearance descriptors. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is applied to estimate the most probable pose of the vehicle.

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which the image is assumed to be acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements; to do so, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained in the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it, while the robot simultaneously estimates its position with respect to the model, is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field-of-view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the localization accuracy taking some parameters into account, such as the number of landmarks integrated into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information with some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images are integrated with horizontal lines extracted from a range sensor. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, making use of the specific features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_{ln} = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the localization process is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, taking into account whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. Also, they employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy; long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes further, since it takes into account that on some occasions the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them can be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, together with a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization that matches the robot's currently captured view with previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization; they exploit the properties that this transform presents when used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method tries to build a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process. The authors indicate that the method can work with any low-dimensional feature descriptor: they propose the use of descriptors of the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform


Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the large quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures used to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among these problems, the mapping and localization tasks in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth addressing to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P (Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global) and DPI 2016-78361-R (Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots), and by the Generalitat Valenciana (GVa) through the project GV/2015/031 (Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas).

References

[1] G Giralt R Sobek and R Chatila ldquoA multi-level planningand navigation system for a mobile robot a first approach toHILARErdquo in Proceedings of the 6th International Joint Confer-ence on Artificial Intellligence pp 335ndash337 Morgan KaufmannTokyo Japon 1979

[2] O Wijk and H Christensen ldquoLocalization and navigation ofa mobile robot using natural point landmarks extracted fromsonar datardquo Robotics and Autonomous Systems vol 31 no 1-2pp 31ndash42 2000

[3] J D Tardos J Neira P M Newman and J J Leonard ldquoRobustmapping and localization in indoor environments using sonardatardquo International Journal of Robotics Research vol 21 no 4pp 311ndash330 2002

[4] Y Cha and D Kim ldquoOmni-directional image matching forhoming navigation based on optical flow algorithmrdquo inProceed-ings of the 12th International Conference on Control Automationand Systems (ICCAS rsquo12) pp 1446ndash1451 October 2012

[5] J J Leonard and H F Durrant-Whyte ldquoMobile robot local-ization by tracking geometric beaconsrdquo IEEE Transactions onRobotics and Automation vol 7 no 3 pp 376ndash382 1991

[6] L Zhang and B K Ghosh ldquoLine segment based map buildingand localization using 2D laser rangefinderrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automation(ICRA rsquo00) vol 3 pp 2538ndash2543 April 2000

[7] R Triebel and W Burgard ldquoImproving simultaneous local-ization and mapping in 3D using global constraintsrdquo inProceedings of the 20th National Conference on ArtificialIntelligencemdashVolume 3 pp 1330ndash1335 2005

[8] C Yu and D Zhang ldquoA new 3D map reconstruction basedmobile robot navigationrdquo in Proceedings of the 8th InternationalConference on Signal Processing vol 4 IEEE Beijing ChinaNovember 2006

[9] A Y Hata and D F Wolf ldquoOutdoor mapping using mobilerobots and laser range findersrdquo in Proceedings of the ElectronicsRobotics and Automotive Mechanics Conference (CERMA rsquo09)pp 209ndash214 September 2009

[10] C-H Chen andK-T Song ldquoComplete coveragemotion controlof a cleaning robot using infrared sensorsrdquo in Proceedings of theIEEE International Conference on Mechatronics (ICM rsquo05) pp543ndash548 July 2005

[11] J Choi S Ahn M Choi and W K Chung ldquoMetric SLAMin home environment with visual objects and sonar featuresrdquoin Proceedings of the IEEERSJ International Conference onIntelligent Robots and Systems (IROS rsquo06) pp 4048ndash4053October 2006

[12] K Hara S Maeyama and A Gofuku ldquoNavigation path scan-ning system for mobile robot by laser beamrdquo in Proceedingsof the SICE Annual Conference of International Conference onInstrumentation Control and Information Technology pp 2817ndash2821 August 2008

[13] W-C Chang andC-Y Chuang ldquoVision-based robot navigationand map building using active laser projectionrdquo in Proceedingsof the IEEESICE International Symposiumon System Integration(SII rsquo11) pp 24ndash29 IEEE December 2011

[14] G N DeSouza and A C Kak ldquoVision for mobile robotnavigation a surveyrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 24 no 2 pp 237ndash267 2002

[15] F Bonin-Font A Ortiz and G Oliver ldquoVisual navigationfor mobile robots a surveyrdquo Journal of Intelligent and Robotic

16 Journal of Sensors

Systems Theory and Applications vol 53 no 3 pp 263ndash2962008

[16] R Sim and G Dudek ldquoEffective exploration strategies for theconstruction of visual mapsrdquo in Proceedings of the IEEERSJInternational Conference on Intelligent Robots and Systems vol3 pp 3224ndash3231 Las Vegas Nev USA October 2003

[17] Q Zhu S Avidan M-C Yeh and K-T Cheng ldquoFast humandetection using a cascade of histograms of oriented gradientsrdquoin Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition (CVPR rsquo06) vol 2 pp1491ndash1498 IEEE June 2006

[18] K Okuyama T Kawasaki and V Kroumov ldquoLocalizationand position correction for mobile robot using artificial visuallandmarksrdquo in Proceedings of the International Conference onAdvanced Mechatronic Systems (ICAMechS rsquo11) pp 414ndash418Zhengzhou China August 2011

[19] M Z Brown D Burschka and G D Hager ldquoAdvances incomputational stereordquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 25 no 8 pp 993ndash1008 2003

[20] R Sim and J J Little ldquoAutonomous vision-based explorationandmapping using hybridmaps andRao-Blackwellised particlefiltersrdquo in Proceedings of the IEEERSJ International Conferenceon Intelligent Robots and Systems (IROS rsquo06) pp 2082ndash2089Beijing China October 2006

[21] Z Yong-Guo C Wei and L Guang-Liang ldquoThe navigation ofmobile robot based on stereo visionrdquo in Proceedings of the 5thInternational Conference on Intelligent Computation Technologyand Automation (ICICTA rsquo12) pp 670ndash673 January 2012

[22] Y Jia M Li L An and X Zhang ldquoAutonomous navigationof a miniature mobile robot using real-time trinocular stereomachinerdquo in Proceedings of the IEEE International Conferenceon Robotics Intelligent Systems and Signal Processing vol 1 pp417ndash421 Changsha China October 2003

[23] N Ayache and F Lustman ldquoTrinocular stereo vision forroboticsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 13 no 1 pp 73ndash85 1991

[24] N Winters J Gaspar G Lacey and J Santos-Victor ldquoOmni-directional vision for robot navigationrdquo in Proceedings of theIEEE Workshop on Omnidirectional Vision (OMNIVIS rsquo00) pp21ndash28 Hilton Head Island SC USA June 2000

[25] J Gaspar N Winters and J Santos-Victor ldquoVision-based nav-igation and environmental representations with an omnidirec-tional camerardquo IEEE Transactions on Robotics and Automationvol 16 no 6 pp 890ndash898 2000

[26] D Bradley A Brunton M Fiala and G Roth ldquoImage-based navigation in real environments using panoramasrdquo inProceedings of the IEEE InternationalWorkshop onHaptic AudioVisual Environments and their Applications (HAVE rsquo05) pp 57ndash59 October 2005

[27] S Nayar ldquoOmnidirectional video camerardquo in Proceedings of theDARPA Image Understanding Workshop pp 235ndash241 1997

[28] L Paya L Fernandez O Reinoso A Gil and D UbedaldquoAppearance-based dense maps creation Comparison of com-pression techniques with panoramic imagesrdquo in Proceedings ofthe International Conference on Informatics in Control Automa-tion and Robotics pp 238ndash246 INSTICC Milan Italy 2009

[29] Point Grey Research Inc ldquoLadybug Camerasrdquo httpswwwptgreycom360-degree-spherical-camera-systems

[30] A Gurtner D G Greer R Glassock L Mejias R A Walkerand W W Boles ldquoInvestigation of fish-eye lenses for small-UAV aerial photographyrdquo IEEE Transactions on Geoscience andRemote Sensing vol 47 no 3 pp 709ndash721 2009

[31] R Wood ldquoXXIII Fish-eye views and vision under waterrdquoPhilosophicalMagazine Series 6 vol 12 no 68 pp 159ndash162 1906

[32] Samsung Gear 360 camera httpwwwsamsungcomglobalgalaxygear-360

[33] S Li M Nakano and N Chiba ldquoAcquisition of spherical imageby fish-eye conversion lensrdquo in Proceedings of the IEEE VirtualReality Conference pp 235ndash236 IEEE Chicago Ill USAMarch2004

[34] S Li ldquoFull-view spherical image camerardquo in Proceedings of the18th International Conference on Pattern Recognition (ICPR rsquo06)vol 4 pp 386ndash390 August 2006

[35] V Peri and S Nayar ldquoGeneration of perspective and panoramicvideo fromomnidirectional videordquo inProceedings of theDARPAImage Understanding Workshop (IUW rsquo97) pp 243ndash246 1997

[36] D W Rees ldquoPanoramic television viewing systemrdquo US PatentApplication 3505465 1970

[37] S Bogner ldquoAn introduction to panospheric imagingrdquo in Pro-ceedings of the IEEE International Conference on Systems Manand Cybernetics 1995 Intelligent Systems for the 21st Centuryvol 4 pp 3099ndash3106 October 1995

[38] K Yamazawa Y Yagi and M Yachida ldquoObstacle detectionwith omnidirectional image sensor HyperOmni visionrdquo inProceedings of the IEEE International Conference onRobotics andAutomation vol 1 pp 1062ndash1067 May 1995

[39] V Grassi Junior and J Okamoto Junior ldquoDevelopment of anomnidirectional vision systemrdquo Journal of the Brazilian Societyof Mechanical Sciences and Engineering vol 28 no 1 pp 58ndash682006

[40] G Krishnan and S K Nayar ldquoCata-fisheye camera forpanoramic imagingrdquo in Proceedings of the IEEE Workshop onApplications of Computer Vision (WACV 08) pp 1ndash8 January2008

[41] A Ohte O Tsuzuki and K Mori ldquoA practical sphericalmirror omnidirectional camerardquo in Proceedings of the IEEEInternational Workshop on Robotic Sensors Robotic and SensorEnvironments (ROSE rsquo05) pp 8ndash13 IEEE Ottawa CanadaOctober 2005

[42] Y Yagi and S Kawato ldquoPanorama scene analysis with conicprojectionrdquo in Proceedings of the IEEE International Workshopon Intelligent Robots and Systems (IROS rsquo90) Towards a NewFrontier of Applications vol 1 pp 181ndash187 July 1990

[43] E Cabral J de Souza andM Hunold ldquoOmnidirectional stereovision with a hyperbolic double lobed mirrorrdquo in Proceedings ofthe 17th International Conference on Pattern Recognition (ICPRrsquo04) vol 1 pp 1ndash9 August 2004

[44] K Yoshida H Nagahara and M Yachida ldquoAn omnidirectionalvision sensor with single viewpoint and constant resolutionrdquoin Proceedings of the IEEERSJ International Conference onIntelligent Robots and Systems (IROS rsquo06) pp 4792ndash4797 IEEEOctober 2006

[45] S Baker and S Nayar ldquoA theory of catadioptric image for-mationrdquo in Proceedings of the 6th International Conference onComputer Vision pp 35ndash42 January 1998

[46] P Sturm S Ramalingam J-P Tardif S Gasparini and JBarreto ldquoCamera models and fundamental concepts used ingeometric computer visionrdquo Foundations and Trends in Com-puter Graphics and Vision vol 6 no 1-2 pp 1ndash183 2011

[47] N Goncalves and H Araujo ldquoEstimating parameters of non-central catadioptric systems using bundle adjustmentrdquo Com-puter Vision and Image Understanding vol 113 no 1 pp 11ndash282009

Journal of Sensors 17

[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnorr, and B. Jahne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), Universiteit Twente, Faculteit Elektrotechniek, pp. 233–240, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.



Figure 1: Projection model of central catadioptric systems: (a) hyperbolic mirror and perspective projection lens; (b) parabolic mirror and orthographic lens.

lighting conditions of each camera into account. This is why Li developed a new system [34] to avoid this problem.

Catadioptric vision sensors are another example [35]. They make use of convex mirrors onto which the scene is projected. These mirrors may present different geometries, such as spherical, hyperbolic, parabolic, elliptic, or conic, and the camera takes the information from the environment through its reflection onto the mirrors. Due to their importance, the next subsection describes them in depth.

3.1. Catadioptric Vision Sensors

Catadioptric vision systems make use of convex reflective surfaces to expand the field of view of the camera. The vision sensors capture the information through these surfaces. The shape, position, and orientation of the mirror with respect to the camera define the geometry of the projection of the world onto the camera.

Nowadays, many kinds of catadioptric vision sensors can be found. One of the first developments was done by Rees in 1970 [36]. More recent examples of catadioptric systems can be found in [37–40]. Most of them permit omnidirectional vision, which means having a vision angle equal to 360 deg around the mirror axis. The lateral angle of view depends essentially on the geometry of the mirror and its relative position with respect to the camera.

In the related literature, several kinds of mirrors can be found as a part of catadioptric vision systems, such as spherical [41], conic [42], parabolic [27], or hyperbolic [43]. Yoshida et al. [44] present a work about the possibility of creating a catadioptric system composed of two mirrors, focusing on the study of how changes in the geometry of the system are reflected in the final image captured by the camera. Also, Baker and Nayar [45] present a comparative analysis of the use of different mirror geometries in catadioptric vision systems. According to this work, it is not possible to state in general that a specific geometry outperforms the others. Each geometry presents characteristic reflection properties that may be advantageous under some specific circumstances. Anyway, parabolic and hyperbolic mirrors present some particular properties that make them specially useful when perspective projections of the omnidirectional image are needed, as the next paragraph details.

There are two main issues when using catadioptric vision systems for mapping and localization: the single effective viewpoint property and calibration. On the one hand, when using catadioptric vision sensors, it is interesting that the system has a unique projection centre (i.e., that it constitutes a central camera). In such systems, all the rays that arrive at the mirror surface converge into a unique point, which is the optical centre. This is advantageous because, thanks to it, it is possible to obtain undistorted perspective images from the scene captured by the catadioptric system [39]. This property appears in [45] referred to as single effective viewpoint. According to both works, there are two ways of building a catadioptric vision system that meets this property: with the combination of a hyperbolic mirror and a camera with a perspective projection lens (conventional lens or pin-hole model), as shown in Figure 1(a), or with a system composed of a parabolic mirror and an orthographic projection lens (Figure 1(b)). In both figures, F and F′ are the foci of the camera and the mirror, respectively. In other cases, the rays that arrive at the camera converge into different focal points, depending on their vertical incidence angle (this is the case of noncentral cameras). Ohte et al. studied this problem with spherical mirrors [41]. In the case of objects that are relatively far from the axis of the mirror, the error produced by the existence of different focal points is relatively small. However, when the object is close to the mirror, the errors in the projection angles become significant and the complete projection model of the camera must be used.

Figure 2: Two sample omnidirectional images captured in the same room, from the same position, using two different catadioptric systems.

On the other hand, many mapping and localization applications work correctly only if all the parameters of the system are known. With this aim, the catadioptric system needs to undergo a calibration process [46]. The result of this process can be either (a) the intrinsic parameters of the camera, the coefficients of the mirror, and the relative position between them or (b) a list of correspondences between each pixel in the image plane and the ray of light of the world that has projected onto it. Many works have been developed on calibration of catadioptric vision systems. For example, Goncalves and Araujo [47] propose a method to calibrate both central and noncentral catadioptric cameras using an approach based on bundle adjustment. Ying and Hu [48] present a method that uses geometric invariants extracted from projections of lines or spheres in the world; this method can be used to calibrate central catadioptric cameras. Also, Marcato Jr. et al. [49] develop an approach to calibrate a catadioptric system composed of a wide-angle lens camera and a conic mirror that takes into account the possible misalignment between the camera and mirror axes. Finally, Scaramuzza et al. [50] develop a framework to calibrate central catadioptric cameras using a planar pattern shown at a number of different orientations.

3.2. Projections of the Omnidirectional Information

The raw information captured by the camera in a catadioptric vision system is the omnidirectional image, which contains the information of the environment previously reflected onto the mirror. This image can be considered as a polar representation of the world whose origin is the projection of the focus of the mirror onto the image plane. Figure 2 shows two sample catadioptric vision systems, composed of a camera and a hyperbolic mirror, and the omnidirectional images captured with each one. The mirror in (a) is the model Wide 70 of the manufacturer Eizoh, and that in (b) is the model Super-Wide View Large, manufactured by Accowle.

The omnidirectional scene can be used directly to obtain useful information in robot navigation tasks. For example, Scaramuzza et al. [51] present a description method based on the extraction of radial lines from the omnidirectional image to characterize omnidirectional scenes captured in real environments. They show how radial lines in omnidirectional images correspond to vertical lines of the environment, while circumferences whose centre is the origin of coordinates of the image correspond to horizontal lines in the world.

From the original omnidirectional image, it is possible to obtain different scene representations through the projection of the visual information onto different planes and surfaces. To make it possible, in general, it is necessary to calibrate the catadioptric system. Using this information, the visual information can be projected onto different planes or surfaces that show different perspectives of the original scene. Each projection presents specific properties that can be useful in different navigation tasks.

The next subsections present some of the most important representations and some of the works that have been developed with each of them in the field of mobile robotics. A complete mathematical description of these projections can be found in [39].

3.2.1. Unit Sphere Projection. An omnidirectional scene can be projected onto a unit sphere whose centre is the focus of the mirror. Every pixel of this sphere takes the value of the ray of light that has the same direction with respect to the focus of the mirror. To obtain this projection, the catadioptric vision system has to be previously calibrated. Figure 3 shows the projection model of the unit sphere projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto the mirror (to do it, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a unit sphere.

Figure 3: Projection model of the unit sphere projection.
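As an illustration, this back-projection can be written in closed form under the unified model for central catadioptric cameras [52]. The following minimal Python sketch assumes a calibrated system described by an intrinsic matrix K and a mirror parameter xi; the function name and interface are illustrative, not part of any specific library:

```python
import numpy as np

def lift_to_unit_sphere(pixels, K, xi):
    """Back-project image pixels onto the unit sphere using the
    unified central catadioptric model (Geyer-Daniilidis).
    pixels: (N, 2) array of (u, v) coordinates.
    K: 3x3 intrinsic matrix of the virtual perspective camera.
    xi: mirror parameter (0 = pin-hole, 1 = parabolic mirror).
    Returns (N, 3) unit vectors expressed in the mirror frame."""
    # Normalise pixel coordinates with the inverse intrinsics
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    m = (np.linalg.inv(K) @ uv1.T).T          # rows (x, y, 1)
    x, y = m[:, 0], m[:, 1]
    r2 = x**2 + y**2
    # Invert the sphere-to-plane projection in closed form
    factor = (xi + np.sqrt(1 + (1 - xi**2) * r2)) / (r2 + 1)
    X = np.stack([factor * x, factor * y, factor - xi], axis=1)
    return X / np.linalg.norm(X, axis=1, keepdims=True)
```

Each returned unit vector indicates the direction of the ray of light that produced the corresponding pixel, which is precisely the representation required by the Spherical Fourier Transform methods reviewed below.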

This projection has been traditionally useful in those applications that make use of the Spherical Fourier Transform, which permits studying 3D rotations in the space, as Geyer and Daniilidis show [52]. Friedrich et al. [53, 54] present an algorithm for robot localization and navigation using spherical harmonics. They use images captured with a hemispheric camera. Makadia et al. address the estimation of 3D rotations from the Spherical Fourier Transform [55–57], using a catadioptric vision system to capture the images. Finally, Schairer et al. present several works about orientation estimation using this transform, such as [58–60], where they develop methods to improve the accuracy in orientation estimation, implement a particle filter to solve this task, and develop a navigation system that combines odometry and the Spherical Fourier Transform applied to low resolution visual information.

3.2.2. Cylindrical Projection. This representation consists in projecting the omnidirectional information onto a cylinder whose axis is parallel to the mirror axis. Conceptually, it can be obtained by changing the polar coordinate system of the omnidirectional image into a rectangular coordinate system. This way, every circumference in the omnidirectional image is converted into a horizontal line in the panoramic scene. This projection does not require the previous calibration of the catadioptric vision system.
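The unwrapping can be implemented directly as a change of coordinates. The following sketch, with illustrative parameter names, assumes that the image centre (the projection of the mirror focus) and the useful radial range of the mirror are known:

```python
import cv2
import numpy as np

def unwrap_to_panorama(omni, center, r_min, r_max, width=720):
    """Convert an omnidirectional image into a panoramic (cylindrical)
    image by mapping polar coordinates to a rectangular grid.
    omni: omnidirectional image; center: (cx, cy) projection of the
    mirror focus; r_min, r_max: useful radial range of the mirror."""
    height = int(r_max - r_min)
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    r = np.linspace(r_max, r_min, height)        # top row = outer ring
    theta_g, r_g = np.meshgrid(theta, r)
    # For every panorama pixel, compute its source position in the
    # omnidirectional image and sample it with bilinear interpolation
    map_x = (center[0] + r_g * np.cos(theta_g)).astype(np.float32)
    map_y = (center[1] + r_g * np.sin(theta_g)).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

# Example (illustrative values): unwrap_to_panorama(img, (512, 384), 60, 350)
```

Note that a rotation of the robot around the mirror axis becomes a simple horizontal (circular) shift of the resulting panorama, a property exploited by several descriptors discussed in Section 4.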

Figure 4 shows the projection model of the cylindrical projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto a cylindrical surface.

The cylindrical projection is commonly known as panoramic image, and it is one of the most used representations in mobile robotics works to solve problems such as map creation and localization [61–63], SLAM [64], and visual servoing and navigation [65, 66]. This is due to the fact that this representation is more easily understandable by human operators and it permits using standard image processing algorithms, which are usually designed to be used with perspective images.

Figure 4: Projection model of the cylindrical projection.

3.2.3. Perspective Projection. From the omnidirectional image, it is possible to obtain projective images in any direction. They would be equivalent to the images captured by virtual conventional cameras situated in the focus of the mirror. The catadioptric system has to be previously calibrated to obtain such a projection. Figure 5 shows the projection model of the perspective projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue color. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected to the mirror (to do it, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a plane. It is equivalent to capturing an image from a virtual conventional camera placed in the focus F′.

The orthographic projection, also known as bird's eye view, can be considered as a specific case of the perspective projection. In this case, the projection plane is situated perpendicularly to the camera axis. If the world reference system is defined in such a way that the floor is the xy plane, the z-axis is vertical, and the camera axis is parallel to the z-axis, then the orthographic projection is equivalent to having a conventional camera situated in the focus of the mirror, pointing to the floor plane. Figure 6 shows the projection model of the orthographic view. In this case, the pixels are back-projected onto a horizontal plane.
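Under the same unified model used in the sketch of Section 3.2.1, a bird's eye view can be synthesised by forward-projecting a grid of floor points and sampling the omnidirectional image at the resulting pixels. This is only a schematic construction: it assumes a central system with calibration (K, xi) and a z-axis pointing towards the floor, and all names and parameter values are illustrative:

```python
import cv2
import numpy as np

def birds_eye_view(omni, K, xi, h, extent=4.0, size=400):
    """Synthesise an orthographic (bird's eye) view of the floor plane
    from a calibrated central catadioptric image.
    K, xi: unified-model calibration; h: height of the mirror focus
    over the floor; extent: half-size of the mapped floor area."""
    # Grid of floor points (x, y, h) in the mirror frame
    coords = np.linspace(-extent, extent, size)
    xg, yg = np.meshgrid(coords, coords)
    P = np.stack([xg, yg, np.full_like(xg, h)], axis=-1).reshape(-1, 3)
    # Forward projection through the unit sphere (Geyer-Daniilidis)
    Xs = P / np.linalg.norm(P, axis=1, keepdims=True)
    m = np.stack([Xs[:, 0] / (Xs[:, 2] + xi),
                  Xs[:, 1] / (Xs[:, 2] + xi),
                  np.ones(len(Xs))], axis=1)
    uv = (K @ m.T).T
    # Sample the omnidirectional image at the projected pixels
    map_x = uv[:, 0].reshape(size, size).astype(np.float32)
    map_y = uv[:, 1].reshape(size, size).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)
```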

In the literature, some algorithms which make use of the orthographic projection in map building, localization, and navigation applications can be found. For example, Gaspar et al. [25] make use of this projection to extract parallel lines from the floor of corridors and other rooms to perform navigation indoors. Also, Bonev et al. [67] propose a navigation system that combines the information contained in the omnidirectional image, the cylindrical projection, and the orthographic projection. Finally, Roebert et al. [68] show how a model of the environment can be created using perspective images and how this model can be used for localization and navigation purposes.

To conclude this section, Figure 7 shows (a) an omnidirectional image captured by the catadioptric system presented in Figure 2(a) and the visual appearance of the projections calculated from this image: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.


Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images, or any of their projections, both in map building and in localization tasks. Images are highly dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in lighting conditions or in the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes in order to solve the mapping and localization tasks robustly. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods but, more recently, some global appearance algorithms have also been demonstrated to be a robust alternative.

Many algorithms can be found in the literature, working both with local features and with the global appearance of images. All these algorithms imply many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local feature approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is described by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features have been extracted and described, the solution to the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Figure 7: (a) Sample omnidirectional image and some projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Many different philosophies can be found, depending on the type of features which are extracted and the procedure followed to carry out the tracking and matching. As an example, Pears and Liang [70] extract corners which are located in the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]. They carry out a segmentation process to extract predefined features, such as windows, and a Kalman filtering to carry out the tracking and matching of features.

4.1.2. Local Feature Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, and changes in lighting conditions and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed to be used in real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which is based on BRIEF and tries to improve its invariance against rotation and resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to offer the robustness of SURF at an improved computational cost.
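As a brief illustration of how these descriptors are used in practice, the following sketch extracts and matches ORB features between two scenes with OpenCV; the matched pairs would then feed the tracking and map building stages described above:

```python
import cv2

def match_local_features(img1, img2, max_matches=50):
    """Extract and match ORB features between two scenes; such
    pairings are the basis of feature-based localization and
    map building."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually best matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]
```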

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show, such as Angeli et al. [82] and Se et al. [83], who make use of SIFT descriptors to solve these problems, and the works of Valgren and Lilienthal [84] and Murillo et al. [85], who employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. First, it is necessary that the environment be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible). Second, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements into the model correctly; small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. At last, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with the images as a whole, without extracting any local information. Each image is represented by means of a unique descriptor, which contains information on its global appearance. This kind of methods presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor. Map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with a relative computational efficiency. Occasionally, these descriptors present invariance neither to changes in the robot orientation, nor to changes in the lighting conditions, nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment. In this step, the robot captures a set of images and describes each of them by means of a unique descriptor; from the information of these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or auto-localization phase.

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of methods is the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneer works on this approach were developed by Matsumoto et al. [90] and make a direct comparison between the pixels of a central region of the scenes, using a correlation process. However, considering the high computational cost of this method, the authors started to build descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower dimension space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane; on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to the previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase of the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.

Other authors have proposed the implementation of descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, both the two-dimensional DFT and the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all the cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. At last, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at this moment, DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
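A minimal sketch of the Fourier Signature helps to see where the rotational invariance comes from: each row of the panoramic image is described by its first k DFT coefficients and, since a rotation of the robot is a circular column shift of the panorama, only the phases change while the magnitudes remain constant. The function names are illustrative:

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise DFT of a panoramic image, keeping the first k
    coefficients of each row. Returns (magnitudes, phases)."""
    F = np.fft.fft(panorama.astype(float), axis=1)[:, :k]
    return np.abs(F), np.angle(F)

def signature_distance(mag_a, mag_b):
    """Rotation-invariant distance between two Fourier Signatures:
    a column shift of the panorama (a rotation of the robot) only
    changes the phases, not the magnitudes."""
    return np.linalg.norm(mag_a - mag_b)

# mag, phase = fourier_signature(pano)
# Shift property: np.roll(pano, s, axis=1) yields the same magnitudes
```

The phase components can additionally be compared to estimate the relative orientation between two scenes captured from the same position.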

Other authors have described the scenes globally through approaches based on gradients, either their magnitude or their orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose to use a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].
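A gradient-based global descriptor can be as simple as a magnitude-weighted histogram of gradient orientations, in the spirit of [101]; the following sketch is a schematic version, not the exact descriptor of any specific work:

```python
import numpy as np

def orientation_histogram(image, bins=36):
    """Global appearance descriptor: histogram of gradient
    orientations, weighted by gradient magnitude and normalised."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)          # range [-pi, pi]
    hist, _ = np.histogram(orientation, bins=bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    return hist / (np.linalg.norm(hist) + 1e-12)
```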

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information. They propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions; this way, the model contains information on the changes that the appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the luminance, which is the component that is more prone to change when the lighting conditions do.
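The homomorphic approach can be sketched as follows: the image is taken to the logarithmic domain, where luminance and reflectance become additive, and the low spatial frequencies (dominated by luminance) are attenuated before returning to the intensity domain. Parameter values are illustrative:

```python
import numpy as np

def homomorphic_filter(image, cutoff=0.05, low_gain=0.4, high_gain=1.2):
    """Attenuate the luminance component of an image (low spatial
    frequencies in the log domain) to reduce the effect of changes
    in lighting conditions."""
    rows, cols = image.shape
    log_img = np.log1p(image.astype(float))
    # Gaussian high-emphasis filter in the frequency domain
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    D2 = u**2 + v**2
    H = low_gain + (high_gain - low_gain) * (1 - np.exp(-D2 / (2 * cutoff**2)))
    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * H))
    return np.expm1(filtered)
```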

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined and, then, some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer Assisted Design) models, these approaches gave way to simpler models that represent the environment through occupation grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first, the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment to map autonomously and to build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through visual memories previously recorded that contain sequences of images and associated actions, but no relations between images. This kind of navigation is associated basically with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization

Quite usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers with topological information, which permits carrying out an initial rough estimation, and metric information to refine this estimation when necessary [111].

The use of the information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated in the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks, as beacons situated on specific and previously known positions, to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the competition RoboCup Middle Size League. In this competition, the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

Maps composed of local visual features have been extensively used over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of motion vectors as feature points of the environment. They are obtained based on the corner points of an extracted object and using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account the feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps of large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is subsequently used to solve the localization problem.
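These two steps can be summarised in a short sketch; it assumes that local descriptors (e.g., SIFT vectors) have already been extracted from the training images, and the function names are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, k=100):
    """First step: cluster all local descriptors from the training
    images into k visual words (the clusters' centres)."""
    stacked = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10).fit(stacked)

def bof_histogram(vocabulary, descriptors):
    """Second step: quantise the descriptors of one image against the
    vocabulary and build a normalised word-frequency histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()
```

Localization then reduces to comparing the histogram of the current image with the histograms stored for the map locations.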

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, using fundamentals different from those traditionally employed with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment. It consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node; a global threshold is considered to evaluate the similitude between descriptors.
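A minimal version of the Fourier Signature illustrates why it suits panoramic images: each row is expanded with a 1D discrete Fourier transform and only the magnitudes of the first coefficients are kept. Since a rotation of the robot around its vertical axis circularly shifts the panoramic columns, the magnitudes are invariant to that rotation, as the final assertion checks. This is a generic sketch of the descriptor, not the full pipeline of [98].

```python
import numpy as np

def fourier_signature(panoramic, n_coeffs=16):
    """Row-wise DFT magnitudes; panoramic is a 2D (rows x columns) array."""
    spectrum = np.fft.fft(panoramic.astype(float), axis=1)
    return np.abs(spectrum[:, :n_coeffs])

img = np.random.rand(64, 256)              # stand-in panoramic image
rotated = np.roll(img, 40, axis=1)         # pure rotation = circular shift
assert np.allclose(fourier_signature(img), fourier_signature(rotated))
```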


Other authors have also studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting scheme.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourhood relations between nodes, so that when two nodes are connected, the environment represented in one node is close to the environment represented by the other. A process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, together with a comparative evaluation of the performance of some global appearance descriptors. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
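The following force-based relaxation is a rough sketch of how such a mechanical model can arrange the nodes: each link behaves as a damped spring whose rest length would be derived from the similitude between descriptors. The constants and the example rest lengths are illustrative assumptions, not the formulation of [63].

```python
import numpy as np

def relax(positions, edges, rest_lengths, k=1.0, damping=0.8,
          dt=0.05, iterations=500):
    """Damped spring relaxation of node positions along the given links."""
    vel = np.zeros_like(positions)
    for _ in range(iterations):
        force = np.zeros_like(positions)
        for (i, j), rest in zip(edges, rest_lengths):
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d) + 1e-9
            f = k * (dist - rest) * d / dist    # Hooke spring along the edge
            force[i] += f
            force[j] -= f
        vel = damping * (vel + dt * force)      # damping settles the system
        positions = positions + dt * vel
    return positions

pos = np.random.default_rng(1).random((4, 2))
edges = [(0, 1), (1, 2), (2, 3)]
rest = [1.0, 1.5, 1.0]      # e.g., lengths scaled by descriptor dissimilarity
layout = relax(pos, edges, rest)
```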

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements by performing Bayesian inference over the space of all possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained in the ground plane (i.e., they contain information of 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow-field-of-view cameras. They also made use of the EKF, but instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks integrated into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors have employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).
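As a reminder of the machinery these works share, the following is a minimal EKF measurement update for a single bearing observation of a known landmark; the Jacobian H plays the role that the spherical-camera Jacobian plays in [124], which is replaced here by a plain planar bearing model for brevity. All values are illustrative.

```python
import numpy as np

def ekf_bearing_update(x, P, landmark, z, R=np.array([[0.01]])):
    """x = (x, y, theta) robot pose, P its 3x3 covariance,
    z = bearing to the landmark measured in the robot frame (radians)."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.arctan2(dy, dx) - x[2]              # predicted bearing
    H = np.array([[dy / q, -dx / q, -1.0]])        # Jacobian of the model
    innov = np.array([np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))])
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + (K @ innov)
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

x, P = np.array([1.0, 2.0, 0.3]), np.eye(3) * 0.1
x, P = ekf_bearing_update(x, P, landmark=(4.0, 6.0), z=0.95)
```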

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information with some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct- (LSD-) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, making use of the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
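A compact sketch of this two-stage observation model follows; the candidate radius and the plain Euclidean descriptor distance are placeholders for the similarity evaluation of [130].

```python
import numpy as np

def localize(current_pose, current_desc, views, radius=2.0):
    """views: list of dicts with 'pose' = (x, y) and 'desc' = 1D descriptor.
    Step 1 filters candidates by pose distance; step 2 ranks by similarity."""
    candidates = [v for v in views
                  if np.linalg.norm(np.asarray(v["pose"]) -
                                    np.asarray(current_pose)) < radius]
    if not candidates:
        return None
    scores = [np.linalg.norm(v["desc"] - current_desc) for v in candidates]
    return candidates[int(np.argmin(scores))]      # most similar stored view
```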

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization and Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, depending on whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy; long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes further, since it takes into account that on some occasions the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the introduction of nonlinear image distortion in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization that matches the robot's currently captured view with previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They exploit the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process; the authors indicate that the method can work with any low-dimensional feature descriptor.
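The shift theorem that underlies these Fourier-based methods also yields the robot's relative orientation: a rotation shifts the panoramic columns, and the peak of the circular cross-correlation, computed via the FFT, recovers that shift. The snippet below is a generic illustration of this property, not the exact estimator of [100] or [136].

```python
import numpy as np

def relative_orientation(pan_a, pan_b):
    """Column shift (in pixels) such that rolling pan_a by it yields pan_b."""
    a = pan_a.mean(axis=0)                 # collapse rows to a 1D signature
    b = pan_b.mean(axis=0)
    corr = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    return int(np.argmax(corr))

img = np.random.rand(64, 256)
shift = relative_orientation(img, np.roll(img, 30, axis=1))
assert shift == 30      # 30 of 256 columns = 30/256 * 360 deg of rotation
```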


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

Murillo et al. proposed the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database whose difference exceeds a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.
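A rough sketch of such a coarse-to-fine cascade is shown below: a cheap global descriptor first discards most database images, and only the survivors undergo the more expensive histogram comparison (the final geometric verification step is omitted). Descriptors and thresholds here are placeholders, not those of [137].

```python
import numpy as np

def hist_intersection(a, b):
    """Similarity between two normalised histograms."""
    return np.minimum(a, b).sum()

def pyramidal_match(query_global, query_hists, database, global_thr=0.5):
    """database: list of dicts with 'global' (1D array) and 'hists'
    (2D array, one histogram per set of image lines)."""
    # Step 1: filter with a descriptor computed over all image pixels.
    survivors = [d for d in database
                 if np.linalg.norm(d["global"] - query_global) < global_thr]
    if not survivors:
        return None
    # Step 2: rank the survivors by line-histogram similarity.
    scores = [sum(hist_intersection(q, h)
                  for q, h in zip(query_hists, d["hists"]))
              for d in survivors]
    return survivors[int(np.argmax(scores))]   # step 3 (geometry) omitted
```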

Finally, it is not only possible to determine the 2D location of the robot based on these approaches; 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Section 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the large quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand the range of applications and environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth addressing to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P: Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global and DPI 2016-78361-R: Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots, and by the Generalitat Valenciana (GVa) through the project GV2015031: Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence—Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omnidirectional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research, Inc., "Ladybug cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.

[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision—ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnorr, and B. Jahne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), Universiteit Twente, Faculteit Elektrotechniek, pp. 233–240, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision—ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, pp. 43–48, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008—Towards Autonomous Robotic Systems, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress—Science and Technology Publications, January 2014.


Page 5: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

Journal of Sensors 5

(a) (b)

Figure 2 Two sample omnidirectional images captured in the same room from the same position and using two different catadioptricsystems

foci of the camera and themirror respectively In other casesthe rays that arrive to the camera converge into different focalpoints depending on their vertical incidence angle (this is thecase of noncentral cameras) Ohte et al studied this problemwith spherical mirrors [41] In the case of objects that arerelatively far from the axis of the mirror the error producedby the existence of different focal points is relatively smallHowever when the object is close to the mirror the errorsin the projection angles become significant and the completeprojection model of the camera must be used

On the other hand many mapping and localizationapplications work correctly only if all the parameters of thesystem are known With this aim the catadioptric systemneeds to go under a calibration process [46] The result ofthis process can be either (a) the intrinsic parameters ofthe camera the coefficients of the mirror and the relativeposition between them or (b) a list of correspondencesbetween each pixel in the image plane and the ray of light ofthe world that has projected onto it Many works have beendeveloped on calibration of catadioptric vision systems Forexample Goncalves and Araujo [47] propose a method tocalibrate both central and noncentral catadioptric camerasusing an approach based on bundle adjustment Ying andHu [48] present a method that uses geometric invariantsextracted from projections of lines or spheres in the worldThis method can be used to calibrate central catadioptriccameras Also Marcato Jr et al [49] develop an approach tocalibrate a catadioptric system composed of a wide-angle lenscamera and a conicmirror that takes into account the possiblemisalignment between the camera and mirror axes FinallyScaramuzza et al [50] develop a framework to calibratecentral catadioptric cameras using a planar pattern shown ata number of different orientations

3.2. Projections of the Omnidirectional Information. The raw information captured by the camera in a catadioptric vision system is the omnidirectional image, which contains the information of the environment previously reflected onto the mirror. This image can be considered as a polar representation of the world whose origin is the projection of the focus of the mirror onto the image plane. Figure 2 shows two sample catadioptric vision systems, composed of a camera and a hyperbolic mirror, and the omnidirectional images captured with each one. Mirror (a) is the model Wide 70 of the manufacturer Eizoh, and (b) is the model Super-Wide View Large, manufactured by Accowle.

The omnidirectional scene can be used directly to obtain useful information in robot navigation tasks. For example, Scaramuzza et al. [51] present a description method based on the extraction of radial lines from the omnidirectional image to characterize omnidirectional scenes captured in real environments. They show how radial lines in omnidirectional images correspond to vertical lines of the environment, while circumferences whose centre is the origin of coordinates of the image correspond to horizontal lines in the world.
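For illustration, a minimal sketch of this idea follows. It assumes that the image centre coincides with the projection of the mirror focus, and it votes for radial lines using the fact that the intensity gradient on a radial edge is roughly tangential to the radius; all thresholds are illustrative values, not taken from [51].

```python
import cv2
import numpy as np

def detect_radial_lines(omni_img, center, n_bins=360, mag_thr=50.0, align_thr=0.95):
    """Vote for radial lines in an omnidirectional image (sketch).

    `center` is assumed to be the projection of the mirror focus. A pixel on a
    radial line has an intensity gradient roughly perpendicular to the line,
    i.e., aligned with the tangential direction.
    """
    gray = cv2.cvtColor(omni_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)

    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - center[0], ys - center[1]       # radial direction per pixel
    r = np.hypot(rx, ry) + 1e-9

    # |cos| between the gradient and the tangential direction (-ry, rx)
    tang_align = np.abs(gx * (-ry) + gy * rx) / (r * (mag + 1e-9))

    edge = (mag > mag_thr) & (tang_align > align_thr)
    theta = np.arctan2(ry[edge], rx[edge])        # bearing of each voting pixel
    hist, _ = np.histogram(theta, bins=n_bins, range=(-np.pi, np.pi))
    return hist                                   # peaks correspond to radial lines
```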

From the original omnidirectional image, it is possible to obtain different scene representations through the projection of the visual information onto different planes and surfaces. To make it possible, in general, it is necessary to calibrate the catadioptric system. Using this information, it is possible to project the visual information onto different planes or surfaces that show different perspectives of the original scene. Each projection presents specific properties that can be useful in different navigation tasks.

The next subsections present some of the most important representations and some of the works that have been developed with each of them in the field of mobile robotics. A complete mathematical description of these projections can be found in [39].

3.2.1. Unit Sphere Projection. An omnidirectional scene can be projected onto a unit sphere whose centre is the focus of the mirror. Every pixel of this sphere takes the value of the ray of light that has the same direction with respect to the focus of the mirror. To obtain this projection, the catadioptric vision system has to be previously calibrated. Figure 3 shows the projection model of the unit sphere projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto the mirror (to do it, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a unit sphere.

This projection has been traditionally useful in those applications that make use of the Spherical Fourier Transform, which permits studying 3D rotations in the space, as Geyer and Daniilidis show [52].

Figure 3: Projection model of the unit sphere projection.

Friedrich et al. [53, 54] present an algorithm for robot localization and navigation using spherical harmonics. They use images captured with a hemispheric camera. Makadia et al. address the estimation of 3D rotations from the Spherical Fourier Transform [55–57], using a catadioptric vision system to capture the images. Finally, Schairer et al. present several works about orientation estimation using this transform, such as [58–60], where they develop methods to improve the accuracy of orientation estimation, implement a particle filter to solve this task, and develop a navigation system that combines odometry and the Spherical Fourier Transform applied to low-resolution visual information.
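For illustration, the following minimal sketch back-projects pixels onto the unit sphere, assuming a central system calibrated with a polynomial model in the style of Scaramuzza et al. [50]. The coefficients and image centre are illustrative placeholders, not real calibration values.

```python
import numpy as np

# Example calibration of a central catadioptric camera following a polynomial
# model in the style of [50]: a ray through pixel (u, v) is (u', v', f(rho)),
# with rho the distance of the centred pixel to the image centre. These
# coefficients and the centre are placeholders; real values come from the
# calibration process.
POLY = np.array([-180.0, 0.0, 1.5e-3, -2.0e-7, 5.0e-10])  # a0..a4
CENTER = np.array([512.0, 384.0])                          # (u0, v0)

def pixel_to_unit_sphere(uv):
    """Back-project an (N, 2) array of pixel coordinates onto the unit sphere."""
    p = uv - CENTER                        # centre the pixel coordinates
    rho = np.linalg.norm(p, axis=1)
    z = np.polyval(POLY[::-1], rho)        # f(rho) = a0 + a1*rho + ... + a4*rho^4
    rays = np.column_stack((p, z))
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)
```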

3.2.2. Cylindrical Projection. This representation consists in projecting the omnidirectional information onto a cylinder whose axis is parallel to the mirror axis. Conceptually, it can be obtained by changing the polar coordinate system of the omnidirectional image into a rectangular coordinate system. This way, every circumference in the omnidirectional image is converted into a horizontal line in the panoramic scene. This projection does not require the previous calibration of the catadioptric vision system.

Figure 4 shows the projection model of the cylindrical projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto a cylindrical surface.

The cylindrical projection is commonly known as panoramic image, and it is one of the most used representations in mobile robotics works to solve problems such as map creation and localization [61–63], SLAM [64], and visual servoing and navigation [65, 66]. This is due to the fact that this representation is more easily understandable by human operators, and it permits using standard image processing algorithms, which are usually designed to be used with perspective images.
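For illustration, a minimal sketch of this polar-to-rectangular unwrapping follows. It only assumes that the image centre (projection of the mirror focus) and the useful minimum and maximum radii are known; no calibration is needed.

```python
import cv2
import numpy as np

def unwrap_to_panorama(omni_img, center, r_min, r_max, width=720, height=180):
    """Unwrap an omnidirectional image into a panoramic (cylindrical) image.

    Each panoramic column corresponds to a bearing angle theta, and each row
    to a radius between r_min and r_max in the omnidirectional image.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radius = np.linspace(r_max, r_min, height)      # top row = outer ring
    tt, rr = np.meshgrid(theta, radius)
    map_x = (center[0] + rr * np.cos(tt)).astype(np.float32)
    map_y = (center[1] + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(omni_img, map_x, map_y, cv2.INTER_LINEAR)
```

Whether the top panoramic row corresponds to the outer or the inner ring of the omnidirectional image depends on the mirror orientation, so the radius ordering may need to be flipped for a particular system.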

Figure 4: Projection model of the cylindrical projection.

3.2.3. Perspective Projection. From the omnidirectional image, it is possible to obtain projective images in any direction. They would be equivalent to the images captured by virtual conventional cameras situated in the focus of the mirror. The catadioptric system has to be previously calibrated to obtain such a projection. Figure 5 shows the projection model of the perspective projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected to the mirror (to do it, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a plane. It is equivalent to capturing an image from a virtual conventional camera placed in the focus F′.

The orthographic projection, also known as bird's eye view, can be considered as a specific case of the perspective projection. In this case, the projection plane is situated perpendicularly to the camera axis. If the world reference system is defined in such a way that the floor is the xy plane, the z-axis is vertical, and the camera axis is parallel to the z-axis, then the orthographic projection is equivalent to having a conventional camera situated in the focus of the mirror, pointing to the floor plane. Figure 6 shows the projection model of the orthographic view. In this case, the pixels are back-projected onto a horizontal plane.
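As an illustration, the following minimal sketch builds a bird's eye view by reusing pixel_to_unit_sphere from the sketch in Section 3.2.1 and intersecting each downward-pointing ray with the floor plane. The camera height, grid extent, and sign conventions are assumptions; a real implementation would use the inverse mapping to avoid holes.

```python
import numpy as np

def birds_eye_view(omni_img, cam_height=1.0, extent=5.0, out_size=400):
    """Scatter omnidirectional pixels onto the floor plane (sketch).

    Assumes a central system calibrated as in the previous sketch, with the
    mirror focus at `cam_height` above the floor and the z-axis pointing
    upwards, so rays with a negative z component hit the plane z = -cam_height.
    Nearest-neighbour scattering onto an extent x extent metre grid.
    """
    h, w = omni_img.shape[:2]
    uv = np.stack(np.meshgrid(np.arange(w), np.arange(h)), -1).reshape(-1, 2)
    rays = pixel_to_unit_sphere(uv.astype(np.float64))   # from the earlier sketch
    down = rays[:, 2] < -1e-3                            # rays hitting the floor
    t = -cam_height / rays[down, 2]
    X, Y = t * rays[down, 0], t * rays[down, 1]

    scale = out_size / (2.0 * extent)
    ix = np.round(X * scale + out_size / 2).astype(int)
    iy = np.round(Y * scale + out_size / 2).astype(int)
    ok = (ix >= 0) & (ix < out_size) & (iy >= 0) & (iy < out_size)

    view = np.zeros((out_size, out_size, 3), dtype=omni_img.dtype)
    view[iy[ok], ix[ok]] = omni_img.reshape(-1, 3)[down][ok]
    return view
```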

In the literature, some algorithms which make use of the orthographic projection in map building, localization, and navigation applications can be found. For example, Gaspar et al. [25] make use of this projection to extract parallel lines from the floor of corridors and other rooms to perform navigation indoors. Also, Bonev et al. [67] propose a navigation system that combines the information contained in the omnidirectional image, the cylindrical projection, and the orthographic projection. Finally, Roebert et al. [68] show how a model of the environment can be created using perspective images and how this model can be used for localization and navigation purposes.

To conclude this section, Figure 7 shows (a) an omnidirectional image captured by the catadioptric system presented in Figure 2(a) and the visual appearance of the projections calculated from this image: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images, or any of their projections, both in map building and in localization tasks. Images are highly dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in lighting conditions or in the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes to be able to solve the mapping and localization tasks robustly. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods but, more recently, some global appearance algorithms have been demonstrated to be a robust alternative as well.

Many algorithms can be found in the literature, working both with local features and with the global appearance of images. All these algorithms imply many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local features approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is described by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features have been extracted and described, the solution to the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Many different philosophies can be found, depending on the type of features which are extracted and the procedure followed to carry out the tracking and matching.

Figure 7: (a) Sample omnidirectional image and some projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

As an example, Pears and Liang [70] extract corners which are located in the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]. They carry out a segmentation process to extract predefined features, such as windows, and a Kalman filtering to carry out the tracking and matching of features.

4.1.2. Local Features Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, and changes in lighting conditions and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed to be used in real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which is based on BRIEF and tries to improve its invariance against rotation and its resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to achieve the robustness of SURF with an improved computational cost.

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show: Angeli et al. [82] and Se et al. [83] make use of SIFT descriptors to solve these problems, and the works of Valgren and Lilienthal [84] and Murillo et al. [85] employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.
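As an illustration of this detect-describe-match pipeline, the following minimal sketch uses ORB, since it ships freely with OpenCV; SIFT, SURF, or BRISK would follow the same pattern with a different detector and matching norm.

```python
import cv2

def match_local_features(img_a, img_b, max_matches=50):
    """Extract and match ORB features between two scenes (illustrative)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Hamming distance suits binary descriptors; cross-check removes
    # matches that are not mutually best.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]
```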

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, it is necessary that the environment be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); also, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements into the model correctly; small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. Finally, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with each image as a whole, without extracting any local information. Each image is represented by means of a unique descriptor, which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor; map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with a relative computational efficiency. Occasionally, these descriptors present invariance neither to changes in the robot orientation, nor to changes in the lighting conditions, nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment: the robot captures a set of images, describes each of them by means of a unique descriptor and, from the information of these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or auto-localization phase.

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method lies in the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90] and make a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors went on to build descriptors that stored global information from the scenes using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower-dimension space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane and, on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to a previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase of the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
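As an illustration of this last point, the following minimal sketch uses an incremental PCA, in the spirit of [97], so that new scenes can be folded into the eigenspace without rebuilding it from scratch; the image size and number of components are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Illustrative dimensions: grayscale panoramic images of H x W pixels,
# compressed into N_COMPONENTS-dimensional global appearance descriptors.
H, W, N_COMPONENTS = 64, 256, 20
ipca = IncrementalPCA(n_components=N_COMPONENTS)

def learn_batch(images):
    """Update the PCA model with a batch of images, shape (N, H, W).

    Each batch must contain at least N_COMPONENTS images.
    """
    ipca.partial_fit(images.reshape(len(images), H * W))

def describe(image):
    """Project one image onto the current eigenspace (its global descriptor)."""
    return ipca.transform(image.reshape(1, H * W))[0]
```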

Other authors have proposed the implementation of descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, both the two-dimensional DFT and the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. Finally, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at this moment, DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
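A minimal sketch of the Fourier Signature idea for panoramic images follows: each row is expanded into its 1D DFT and only the first coefficients are kept. Since a ground-plane rotation of the robot is a circular shift of the panoramic columns, which only changes the phases of the coefficients, the magnitudes are rotation-invariant; the number of coefficients kept is an illustrative choice.

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """First k complex DFT coefficients of each row of a (H, W) panorama.

    The magnitudes are invariant to ground-plane rotations; the phases can
    be used afterwards to estimate the relative orientation.
    """
    return np.fft.fft(panorama.astype(np.float64), axis=1)[:, :k]

def localize(query_panorama, memory):
    """Index of the most similar stored scene, comparing magnitudes only.

    `memory` is a list of signatures computed with the same k.
    """
    q = np.abs(fourier_signature(query_panorama))
    dists = [np.linalg.norm(q - np.abs(s)) for s in memory]
    return int(np.argmin(dists))
```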

Other authors have described the scenes globally through approaches based on the gradient, either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose to use a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] are an example of the use of this information. They propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions; this way, the model contains information on the changes that the appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the former, which is the one that is more prone to change when the lighting conditions do.
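The following minimal sketch illustrates the homomorphic filtering idea: the image is modelled as the product of luminance and reflectance, the logarithm turns this product into a sum, and a frequency-domain gain attenuates the slowly varying luminance while keeping the reflectance. The filter shape, gains, and cutoff are illustrative assumptions.

```python
import numpy as np

def homomorphic_filter(img, cutoff=0.1, low_gain=0.4, high_gain=1.2):
    """Attenuate the illumination component of a grayscale image (sketch)."""
    log_img = np.log1p(img.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    dist = np.hypot(xx / w, yy / h)                # normalised frequency radius
    # Gaussian high-emphasis gain: low_gain near DC, high_gain at high frequencies
    gain = low_gain + (high_gain - low_gain) * (1 - np.exp(-(dist / cutoff) ** 2))

    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * gain)).real
    return np.expm1(filtered)
```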

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined and, then, some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer Assisted Design) models, these approaches gave way to simpler models that represent the environment through occupation grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first, the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, by comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to autonomously explore the environment to map and build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through previously recorded visual memories that contain sequences of images and associated actions, but no relation between images. This kind of navigation is associated basically with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Quite usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated in the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks as beacons situated on specific and previously known positions to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the RoboCup Middle Size League competition. In this competition, the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

Maps composed of local visual features have been extensively used over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of motion vectors as feature points of the environment. They are obtained based on the corner points of an extracted object, using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps in large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color-enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process, consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is used subsequently to solve the localization problem.
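A minimal sketch of this two-step process follows, assuming float local descriptors (such as SURF) have already been extracted; a binary descriptor such as ORB would need a Hamming-based clustering instead. The vocabulary size is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=64):
    """Cluster the stacked local descriptors of the training images into
    k visual words (the clusters' centres form the vocabulary)."""
    return KMeans(n_clusters=k, n_init=10).fit(all_descriptors)

def bag_of_features(descriptors, vocabulary):
    """Represent one image as a normalised histogram of visual words."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float64)
    return hist / hist.sum()
```

Localization then reduces to comparing the query image's histogram with the histograms stored in the map, for example with a chi-squared or Euclidean distance.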

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, with fundamentals that differ from those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment. It consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in the case of indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node. A global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed, using panoramic color images and color histograms in a voting schema.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image, representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that, when two nodes are connected, the environment represented in one node is close to the environment represented by the other node. In this way, a process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, and a comparative evaluation of the performance of some global appearance descriptors is included. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
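The following minimal sketch illustrates this kind of Monte Carlo localization over a topological map of global appearance descriptors. The toy motion model, the Gaussian likelihood, and all parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, map_descriptors, query_descriptor, sigma=1.0):
    """One predict-update-resample iteration; particles are node indices.

    `map_descriptors` is an (N, D) array with one global descriptor per node;
    initialise with, e.g., particles = rng.integers(0, N, size=200).
    """
    n_nodes = len(map_descriptors)
    # Predict: the robot may stay in its node or move to an adjacent one
    # (toy motion model over a ring of nodes).
    particles = (particles + rng.integers(-1, 2, size=len(particles))) % n_nodes
    # Update: weight each particle by the similarity between the current
    # descriptor and the descriptor stored in its node.
    dists = np.linalg.norm(map_descriptors[particles] - query_descriptor, axis=1)
    weights = np.exp(-0.5 * (dists / sigma) ** 2)
    weights /= weights.sum()
    # Resample proportionally to the weights.
    return rng.choice(particles, size=len(particles), p=weights)
```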

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do it, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches suppose that the robot trajectory is contained on the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field-of-view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves. Thanks to this, the estimation of the pose of the robot and the features can be carried out more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is especially suitable for indoor environments (since outdoor conditions have a negative influence on the visual data).

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to be used with omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map which is composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made. In these proposals, the features provided by the omnidirectional vision system are used. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_{l,n} = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So, a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking if a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, taking into account whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. Also, they employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, using panoramic images too. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes beyond, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the introduction of nonlinear image distortion in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization based on matching the robot's currently captured view with previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes. A rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method tries to build a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process.

Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

| Reference | Approach | Type of image | Type of map | Features | Description |
|---|---|---|---|---|---|
| [112] | 5.1 | Omnidirectional | Metric | Local | Colour |
| [113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP |
| [114] | 5.1 | Omnidirectional | Metric | Local | LKOF |
| [84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT |
| [116] | 5.1 | Omnidirectional | Topological | Local | SIFT; invariant column segments |
| [98] | 5.1 | Panoramic | Topological | Global | Fourier Signature |
| [118] | 5.1 | Panoramic | Topological | Global | FACT descriptor |
| [119] | 5.1 | Panoramic | Topological | Global | Color histograms |
| [63] | 5.1 | Panoramic | Topological | Global | Fourier Signature |
| [120] | 5.1 | Panoramic | Topological | Global | Various |
| [121] | 5.1 | Omnidirectional | Topological | Global | GIST |
| [122] | 5.1 | Panoramic | Topological | Global | Fourier Signature |
| [95] | 5.1 | Panoramic | Topological | Global | PCA features |
| [60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform |
| [124] | 5.2 | Omnidirectional | Metric | Local | SIFT |
| [125] | 5.2 | Panoramic | Metric | Local | Vertical edges |
| [126] | 5.2 | Panoramic | Metric | Local | Contours of objects |
| [127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines |
| [128] | 5.2 | Omnidirectional | Metric | Local | SURF |
| [129] | 5.2 | Omnidirectional | Metric | Global | Distance map |
| [130] | 5.2 | Omnidirectional | Metric | Local | SURF |
| [131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature |
| [132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test |
| [133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection |
| [134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved) |
| [135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image |
| [100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature |
| [136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform |
| [137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF |
| [138] | 5.3 | Unit sphere proj. / panoramic / orthographic | Sequence of images | Global | Spherical Fourier Transform; Discrete Fourier Transform |

The authors indicate that the method can work with any low-dimensional features descriptor. They proposed the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images; all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the big quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods, based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in the space under realistic working conditions, are worth being addressed to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global, and DPI 2016-78361-R, Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots, and by the Generalitat Valenciana (GVa) through the project GV/2015/031, Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence—Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R Sim and G Dudek ldquoEffective exploration strategies for theconstruction of visual mapsrdquo in Proceedings of the IEEERSJInternational Conference on Intelligent Robots and Systems vol3 pp 3224ndash3231 Las Vegas Nev USA October 2003

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omnidirectional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.

[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision—ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schlüter, M. Krzykawski, and R. Möller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejías, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision—ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.

[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automática e Informática Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Möller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Kröse, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.

[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Straßer, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008—Towards Autonomous Robotic Systems, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress—Science and Technology Publications, January 2014.


Figure 3: Projection model of the unit sphere projection.

Friedrich et al. [53, 54] present an algorithm for robot localization and navigation using spherical harmonics. They use images captured with a hemispheric camera. Makadia et al. address the estimation of 3D rotations from the Spherical Fourier Transform [55–57], using a catadioptric vision system to capture the images. Finally, Schairer et al. present several works about orientation estimation using this transform, such as [58–60], where they develop methods to improve the accuracy in orientation estimation, implement a particle filter to solve this task, and develop a navigation system that combines odometry and the Spherical Fourier Transform applied to low-resolution visual information.

3.2.2. Cylindrical Projection. This representation consists of projecting the omnidirectional information onto a cylinder whose axis is parallel to the mirror axis. Conceptually, it can be obtained by changing the polar coordinate system of the omnidirectional image into a rectangular coordinate system. This way, every circumference in the omnidirectional image is converted into a horizontal line in the panoramic scene. This projection does not require the previous calibration of the catadioptric vision system.
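As an illustration, the following sketch unwraps an omnidirectional image into a cylindrical (panoramic) image by sampling the original scene along concentric circumferences. It is a minimal example assuming OpenCV and NumPy are available and that the image centre (cx, cy) and the radii rmin and rmax of the useful annulus are known; these parameters are hypothetical and depend on the actual catadioptric system.

```python
import cv2
import numpy as np

def unwrap_panoramic(omni, cx, cy, rmin, rmax, width=720):
    """Convert an omnidirectional image into a panoramic one.

    Each column of the output corresponds to one angle around the
    mirror axis; each row corresponds to one radius (one
    circumference of the omnidirectional image).
    """
    height = rmax - rmin
    # Angles (one per output column) and radii (one per output row)
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    r = np.linspace(rmin, rmax, height)
    # Build the sampling maps required by cv2.remap
    map_x = (cx + np.outer(r, np.cos(theta))).astype(np.float32)
    map_y = (cy + np.outer(r, np.sin(theta))).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Usage (hypothetical file name and calibration values):
# omni = cv2.imread('omni.png')
# pano = unwrap_panoramic(omni, cx=512, cy=512, rmin=100, rmax=400)
```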

Figure 4 shows the projection model of the cylindrical projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected onto a cylindrical surface.

The cylindrical projection is commonly known as a panoramic image, and it is one of the representations most frequently used in mobile robotics to solve problems such as map creation and localization [61–63], SLAM [64], and visual servoing and navigation [65, 66]. This is due to the fact that this representation is more easily interpretable by human operators and it permits using standard image processing algorithms, which are usually designed to work with perspective images.

Figure 4: Projection model of the cylindrical projection.

3.2.3. Perspective Projection. From the omnidirectional image, it is possible to obtain perspective images in any direction. They would be equivalent to the images captured by virtual conventional cameras situated in the focus of the mirror. The catadioptric system has to be previously calibrated to obtain such a projection. Figure 5 shows the projection model of the perspective projection when a hyperbolic mirror and a perspective lens are used. The mirror and the image plane are shown in blue. F and F′ are the foci of the camera and the mirror, respectively. The pixels in the image plane are back-projected to the mirror (to do this, the calibration of the catadioptric system must be available) and, after that, each point of the mirror is projected onto a plane. This is equivalent to capturing an image from a virtual conventional camera placed in the focus F′.

The orthographic projection, also known as bird's eye view, can be considered as a specific case of the perspective projection. In this case, the projection plane is situated perpendicularly to the camera axis. If the world reference system is defined in such a way that the floor is the xy plane, the z-axis is vertical, and the camera axis is parallel to the z-axis, then the orthographic projection is equivalent to having a conventional camera situated in the focus of the mirror, pointing to the floor plane. Figure 6 shows the projection model of the orthographic projection. In this case, the pixels are back-projected onto a horizontal plane.
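A minimal sketch of this idea follows. It assumes a calibrated model is available through a hypothetical function back_project(u, v) that returns the unit ray (x, y, z) leaving the effective viewpoint for pixel (u, v); with it, each ray pointing towards the floor can be intersected with the plane z = -h (the floor, h being the height of the viewpoint) to obtain the bird's eye view coordinates.

```python
import numpy as np

def bird_eye_point(ray, h):
    """Intersect a unit ray from the effective viewpoint with the
    floor plane z = -h and return the (x, y) ground coordinates.

    ray: (x, y, z) direction; only rays with z < 0 hit the floor.
    """
    x, y, z = ray
    if z >= 0:
        return None  # the ray never reaches the floor
    t = -h / z  # scale factor so that t * z == -h
    return (t * x, t * y)

# Example with a hypothetical downward-looking ray:
# bird_eye_point((0.3, 0.1, -0.95), h=1.2) -> ground point in metres
```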

In the literature, some algorithms which make use of the orthographic projection in map building, localization, and navigation applications can be found. For example, Gaspar et al. [25] make use of this projection to extract parallel lines from the floor of corridors and other rooms to perform navigation indoors. Also, Bonev et al. [67] propose a navigation system that combines the information contained in the omnidirectional image, the cylindrical projection, and the orthographic projection. Finally, Roebert et al. [68] show how a model of the environment can be created using perspective images and how this model can be used for localization and navigation purposes.

To conclude this section, Figure 7 shows (a) an omnidirectional image captured by the catadioptric system presented in Figure 2(a) and the visual appearance of the projections calculated from this image: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images, or any of their projections, both in map building and in localization tasks. Images are highly dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in the lighting conditions or in the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes to be able to solve the mapping and localization tasks robustly. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods but, more recently, some global appearance algorithms have also been demonstrated to be a robust alternative.

Many algorithms can be found in the literature, working both with local features and with the global appearance of the images. All these algorithms involve many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local features approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is described by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features have been extracted and described, the solution to the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Figure 7: (a) Sample omnidirectional image and some projections calculated from it: (b) unit sphere projection, (c) cylindrical projection, (d) perspective projection onto a vertical plane, and (e) orthographic projection.

Many different philosophies can be found, depending on the type of features which are extracted and the procedure followed to carry out the tracking and matching. As an example, Pears and Liang [70] extract corners which are located in the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]. They carry out a segmentation process to extract predefined features, such as windows, and a Kalman filtering to carry out the tracking and matching of features.

4.1.2. Local Features Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, and changes in lighting conditions and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed to be used in real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which is based on BRIEF and tries to improve its invariance against rotation and its resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to reach the robustness of SURF with an improved computational cost.
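As a brief illustration of how such descriptors are typically used, the following sketch detects and matches ORB features between two images with OpenCV; the file names are placeholders. It is only a minimal example of the extraction, description, and matching pipeline, not the procedure of any particular work cited here.

```python
import cv2

# Load two scenes to compare (hypothetical file names)
img1 = cv2.imread('scene1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('scene2.png', cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary ORB descriptors
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match with Hamming distance (suitable for binary descriptors)
# and keep only the closest correspondences
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f'{len(matches)} matches; best distance: {matches[0].distance}')
```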

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show: Angeli et al. [82] and Se et al. [83] make use of SIFT descriptors to solve these problems, and Valgren and Lilienthal [84] and Murillo et al. [85] employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, the environment must be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible). Also, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements into the model correctly; small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. Finally, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time, as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with each image as a whole, without extracting any local information. Each image is represented by means of a unique descriptor which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms since each scene is described by means of only one descriptor: map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with relative computational efficiency. Occasionally, these descriptors are invariant neither to changes in the robot orientation nor to changes in the lighting conditions or other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment: the robot captures a set of images, describes each of them by means of a unique descriptor and, from the information of these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or auto-localization phase.
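The test phase can thus be reduced to a nearest-neighbour search in descriptor space. The sketch below illustrates this idea under the assumption that a function describe(image), computing some global appearance descriptor, is available (it is a placeholder for any of the descriptors reviewed in the next subsection); the robot pose associated with the most similar stored descriptor is returned as the localization estimate.

```python
import numpy as np

def localize(query_image, map_descriptors, map_poses, describe):
    """Return the stored pose whose descriptor is closest (in
    Euclidean distance) to the descriptor of the query image.

    map_descriptors: (N, D) array built during the learning phase.
    map_poses: list of N poses where the map images were captured.
    describe: function mapping an image to a D-dimensional vector.
    """
    d = describe(query_image)
    distances = np.linalg.norm(map_descriptors - d, axis=1)
    best = int(np.argmin(distances))
    return map_poses[best], distances[best]
```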

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method is the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90]; they make a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors then started to build descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower-dimension space, retaining most of the original information. Some authors, like Kröse et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane and, on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to a previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase of the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
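A batch PCA model of this kind can be sketched in a few lines: each image is flattened into a vector, the principal components of the training set are obtained through an eigendecomposition (here via SVD), and every image is then represented by its projection onto the first k components. This is only a generic outline of the technique, with an arbitrary k, not the exact procedure of the works cited above.

```python
import numpy as np

def pca_descriptors(images, k):
    """images: (N, H, W) array of grayscale scenes.

    Returns the mean image, the first k principal directions, and
    the k-dimensional descriptor of every image.
    """
    X = images.reshape(len(images), -1).astype(np.float64)  # one row per image
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centred data
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:k]
    descriptors = (X - mean) @ components.T  # (N, k) projections
    return mean, components, descriptors
```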

Other authors have proposed descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, either the two-dimensional DFT or the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98], respectively, show. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all the cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. Finally, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at this moment the DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.

Other authors have described the scenes globally through approaches based on gradients, either their magnitude or their orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose to use a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].
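A gradient-orientation histogram of this family can be computed as follows; the sketch weights each orientation by the gradient magnitude, a common choice, and the number of bins is arbitrary. It is a generic outline, not the exact descriptor of [101].

```python
import numpy as np

def orientation_histogram(image, bins=36):
    """Histogram of gradient orientations, weighted by magnitude.

    image: (H, W) grayscale array. Returns a normalised
    'bins'-dimensional global descriptor of the scene.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)  # in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-12)  # normalise to compare scenes
```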

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information: they propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions; this way, the model contains information on the changes that the appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the former, which is the one that is more prone to change when the lighting conditions do.
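A homomorphic filter of the kind mentioned in [108] can be sketched as follows: the logarithm of the image turns the product of illumination and reflectance into a sum, a high-pass filter in the frequency domain attenuates the slowly varying illumination, and the exponential restores the image. The cutoff and gain values used here are arbitrary illustrative choices.

```python
import numpy as np

def homomorphic_filter(image, cutoff=30.0, low_gain=0.5, high_gain=1.5):
    """Attenuate illumination (low frequencies) and keep reflectance.

    image: (H, W) array with positive values.
    """
    log_img = np.log1p(image.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    # Gaussian high-emphasis filter: low_gain at DC, high_gain far away
    h, w = image.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    d2 = x * x + y * y
    filt = low_gain + (high_gain - low_gain) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * filt)).real
    return np.expm1(filtered)
```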

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined and, then, some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer-Aided Design) models, these approaches gave way to simpler models that represent the environment through occupation grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first, the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to autonomously explore the environment to map and to build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through previously recorded visual memories that contain sequences of images and associated actions but no relations between images. This kind of navigation is associated basically with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Quite usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers with topological information, which permits carrying out an initial rough estimation, and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated in the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks, situated on specific and previously known positions, to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the RoboCup Middle Size League competition, where the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

Maps composed of local visual features have been used extensively over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose a map creation algorithm based on an object extraction method that applies Lucas-Kanade optical flow motion detection to the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment; they are obtained from the corner points of an extracted object, using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps of large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color-enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is subsequently used to solve the localization problem.
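The two steps of such a vocabulary construction can be outlined as follows, here with scikit-learn's KMeans and a hypothetical extract_descriptors(image) function that returns the local descriptors of one image (e.g., SIFT vectors); the vocabulary size k is an arbitrary choice. An image is then represented by the histogram of the vocabulary words its descriptors are assigned to.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(images, extract_descriptors, k=100):
    """Cluster all local descriptors into k visual words."""
    all_desc = np.vstack([extract_descriptors(img) for img in images])
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def bag_of_features(image, vocabulary, extract_descriptors):
    """Histogram of visual-word occurrences for one image."""
    words = vocabulary.predict(extract_descriptors(image))
    hist = np.bincount(words, minlength=vocabulary.n_clusters)
    return hist.astype(np.float64) / hist.sum()
```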

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, with foundations different from those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment: an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node; a global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting schema.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that, when two nodes are connected, the environment represented in one node is close to the environment represented by the other node. In this way, a process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, and a comparative evaluation of the performance of some global appearance descriptors is included. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
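In such Monte Carlo (particle filter) localization schemes, the image descriptor typically enters through the measurement update: every particle carries a pose hypothesis, and its weight grows when the descriptor expected at that pose resembles the one just captured. The following minimal sketch shows only this update step; descriptor_at(pose) (interpolating the map descriptors) and the Gaussian weighting with parameter sigma are illustrative assumptions, not the exact formulation of [63, 120].

```python
import numpy as np

def measurement_update(particles, weights, observed_desc,
                       descriptor_at, sigma=1.0):
    """Reweight pose hypotheses by descriptor similarity.

    particles: list of pose hypotheses (e.g., (x, y, theta) tuples).
    weights: (N,) array of current particle weights.
    observed_desc: global appearance descriptor of the current image.
    descriptor_at: function returning the map descriptor expected
                   at a given pose (assumed available from the map).
    """
    for i, pose in enumerate(particles):
        error = np.linalg.norm(descriptor_at(pose) - observed_desc)
        weights[i] *= np.exp(-error ** 2 / (2 * sigma ** 2))
    return weights / weights.sum()  # normalise
```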

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene, and they estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do so, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained in the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.
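
As a flavour of how the Spherical Fourier Transform can yield descriptors for 3D localization, the sketch below numerically projects an image defined on the unit sphere onto the spherical harmonics and keeps the energy per degree l; because the energy within each degree is preserved under any 3D rotation of the sphere, the resulting vector is rotation invariant. This is only an illustrative implementation, not the one used in [60, 123]:

import numpy as np
from scipy.special import sph_harm

def sft_energy_descriptor(sphere_img, l_max=8):
    # sphere_img: grey-level image resampled on an equiangular grid of the
    # unit sphere; rows = polar angle in (0, pi), cols = azimuth in [0, 2*pi).
    n_t, n_p = sphere_img.shape
    theta = (np.arange(n_t) + 0.5) * np.pi / n_t        # polar angle
    phi = np.arange(n_p) * 2.0 * np.pi / n_p            # azimuth
    PHI, THETA = np.meshgrid(phi, theta)
    d_area = (np.pi / n_t) * (2.0 * np.pi / n_p) * np.sin(THETA)
    energies = []
    for l in range(l_max + 1):
        e_l = 0.0
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, PHI, THETA)   # scipy convention: (m, l, azimuth, polar)
            f_lm = np.sum(sphere_img * np.conj(Y) * d_area)
            e_l += np.abs(f_lm) ** 2
        energies.append(e_l)
    # The energy per degree l is preserved under any 3D rotation of the sphere.
    return np.array(energies)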

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field-of-view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors have employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).
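
The EKF machinery shared by these comparative studies reduces to the standard update step sketched below; the observation function h (e.g., the projection of a landmark through the spherical camera model used in [124]) and its Jacobian are passed in as assumptions rather than implemented here:

import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    # x: state vector (robot pose + landmarks); P: state covariance.
    # z: measurement (e.g., image coordinates of a tracked feature).
    H = H_jac(x)                        # Jacobian of h at the current state
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

The practical difference between sensors lies in h and its Jacobian, and in how long each feature remains observable; the wide field of view of omnidirectional cameras keeps features in view longer, which tightens the estimate.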

In the literature we can also find some approaches which are based on the classical visual SLAM schema, adapted to be used with omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map which is composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, making use of the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_{l,n} = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
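
A schematic version of this observation model could look as follows; the View container, the similarity function (e.g., based on feature matching between images), and the search radius are illustrative assumptions, not the exact formulation of [130]:

import numpy as np
from dataclasses import dataclass

@dataclass
class View:
    pose: np.ndarray      # (x_l, y_l, theta_l), pose where the view was acquired
    image: np.ndarray     # the omnidirectional image itself
    features: list        # interest points extracted from the image

def best_matching_view(views, current_view, similarity, radius=2.0):
    # Step 1: candidate views whose stored position is close to the
    # (predicted) position of the current view.
    candidates = [v for v in views
                  if np.linalg.norm(v.pose[:2] - current_view.pose[:2]) < radius]
    # Step 2: keep the candidate with the highest image similarity; the
    # relative pose is then refined using the epipolar restriction.
    return max(candidates, key=lambda v: similarity(v, current_view))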

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topology. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization and Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, taking into account whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. Also, they employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes one step further, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the introduction of nonlinear image distortion in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization that matches the robot's currently captured view with the previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process. The authors indicate that the method can work with any low-dimensional features descriptor.
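
The template-matching idea of [135] can be sketched with OpenCV: a window of the stored panoramic view is searched for in the current view, and the horizontal displacement of the best match estimates the heading correction. This is a minimal sketch of the principle; the original system works on whole sequences of views along a route:

import cv2

def steering_offset(current, memorized, win_w=64):
    # Take a template from the centre of the memorized panoramic view.
    h, w = memorized.shape
    x0 = w // 2 - win_w // 2
    template = memorized[:, x0:x0 + win_w]
    res = cv2.matchTemplate(current, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    # In a panoramic image, a horizontal shift of the best match is
    # proportional to the heading error of the robot.
    return max_loc[0] - x0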


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological + sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj. / panoramic / orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images; all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.
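
This coarse-to-fine process amounts to a filter cascade over the image database. The sketch below assumes hypothetical scoring functions global_distance, histogram_similarity, and geometric_score, as well as illustrative thresholds; it shows the structure of the cascade rather than the exact method of [137]:

def hierarchical_localization(query, database,
                              global_distance, histogram_similarity,
                              geometric_score, t1=0.5, n2=10):
    # Step 1: a cheap global descriptor over all pixels prunes the database.
    c1 = [img for img in database if global_distance(query, img) < t1]
    # Step 2: line-based histograms rank the survivors; keep the best n2.
    c2 = sorted(c1, key=lambda img: -histogram_similarity(query, img))[:n2]
    # Step 3: detailed matching with geometric restrictions between lines.
    return max(c2, key=lambda img: geometric_score(query, img))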

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Section 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have expanded greatly in recent years, mainly owing to the large quantity of data they are able to capture in a single scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue over the next few years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand the range of applications and environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth being addressed to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global, and DPI 2016-78361-R, Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots, and by the Generalitat Valenciana (GVa) through the project GV/2015/031, Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence, vol. 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research, Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of non-central catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.


[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnorr, and B. Jahne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.


Figure 5: Projection model of the perspective projection.

Figure 6: Projection model of the orthographic projection.

4. Description of the Visual Information

As described in the previous section, numerous authors have studied the use of omnidirectional images, or any of their projections, both in map building and in localization tasks. The images are high dimensional data that change not only when the robot moves but also when there is any change in the environment, such as changes in lighting conditions or in

the position of some objects. Taking these facts into account, it is necessary to extract relevant information from the scenes to be able to solve the mapping and localization tasks robustly. Depending on the method followed to extract this information, the different solutions can be classified into two groups: solutions based on the extraction and description of local features and solutions based on global appearance. Traditionally, researchers have focused on the first family of methods but, more recently, some global appearance algorithms have also been demonstrated to be a robust alternative.

Many algorithms can be found in the literature, working both with local features and with the global appearance of images. All these algorithms involve many parameters that have to be correctly tuned so that the mapping and localization processes are correct. In the next subsections, some of these algorithms and their applications are detailed.

4.1. Methods Based on the Extraction and Description of Local Features

4.1.1. General Issues. Local features approaches are based on the extraction of a set of outstanding points, objects, or regions from each scene. Every feature is described by means of a descriptor, which is usually invariant against changes in the position and orientation of the robot. Once the features have been extracted and described, the solution to the localization problem is usually addressed in two steps [69]. First, the extracted features are tracked along a set of scenes to identify the zones where these features are likely to be in the most recent images. Second, a feature matching process is carried out to identify the features.

Figure 7: (a) Sample omnidirectional image and some projections calculated from it; (b) unit sphere projection; (c) cylindrical projection; (d) perspective projection onto a vertical plane; (e) orthographic projection.

Many different philosophies can be found, depending on the type of features which are extracted and the procedure followed to carry out the tracking and matching. As an example, Pears and Liang [70] extract corners which are located on the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]. They carry out a segmentation process to extract predefined features, such as windows, and a Kalman filtering to carry out the tracking and matching of features.
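
As an illustration of this kind of pipeline, the following OpenCV sketch detects Harris-type corners in one frame, tracks them into the next with sparse optical flow, and estimates the homography that relates the two views of the ground plane. It is a generic sketch of corner-plus-homography tracking, not the exact method of [70-72]:

import cv2

def track_ground_plane(prev_gray, curr_gray):
    # Corner detection with the Harris response [72].
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7,
                                  useHarrisDetector=True)
    # Sparse optical flow tracks the corners into the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    # A homography relates two views of the same plane (e.g., the floor);
    # RANSAC rejects points that do not lie on that plane.
    H, mask = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
    return H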

4.1.2. Local Features Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant against scaling, rotation, changes in lighting conditions, and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed to be used in real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which is based on BRIEF and tries to improve its invariance against rotation and its resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to achieve the robustness of SURF with an improved computational cost.

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show: Angeli et al. [82] and Se et al. [83] make use of SIFT descriptors to solve these problems, while Valgren and Lilienthal [84] and Murillo et al. [85] employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.
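
As a hedged example of how such descriptors are typically used for localization, the following OpenCV snippet extracts ORB features [79] from a query image and matches them against one stored image; in a complete system, the map image that accumulates the most good matches would be retained as the estimated location:

import cv2

def count_matches(query_img, map_img, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(query_img, None)
    kp2, des2 = orb.detectAndCompute(map_img, None)
    # Hamming distance suits binary descriptors such as ORB or BRISK.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test discards ambiguous correspondences.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)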

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, it is necessary that the environment be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); also, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements into the model correctly; small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. Finally, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with each image as a whole, without extracting any local information. Each image is represented by means of a unique descriptor which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor: map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with relative computational efficiency. Occasionally, these descriptors are invariant neither to changes in the robot orientation or in the lighting conditions nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment: the robot captures a set of images, describes each of them by means of a unique descriptor and, from the information in these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or autolocalization phase.
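The two phases can be summarised in a short sketch. The following Python fragment is only a schematic illustration of the learning and test phases; describe() stands for any of the global appearance descriptors reviewed below and is a placeholder, not a method from the cited works.

```python
import numpy as np

def describe(image):
    # Placeholder global-appearance descriptor: PCA, the Fourier
    # Signature, gist, etc. could be plugged in here. This toy
    # version just downsamples the image and flattens it.
    return image[::8, ::8].astype(float).ravel()

def build_map(images, poses):
    # Learning phase: one descriptor per captured image
    return [(describe(img), pose) for img, pose in zip(images, poses)]

def localize(visual_map, query_image):
    # Test phase: pairwise comparison against the stored descriptors
    d = describe(query_image)
    dists = [np.linalg.norm(d - dm) for dm, _ in visual_map]
    best = int(np.argmin(dists))
    return visual_map[best][1]  # pose of the most similar stored image
```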

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method lies in the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90], who made a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors started to build descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower-dimensional space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane and, on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to a previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase in the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
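The batch nature of the original formulation can be appreciated in the following sketch, which compresses a set of images into k-dimensional descriptors. It is a minimal illustration, not the implementation used in the cited works; note that all images must be available before the eigenspace is computed.

```python
import numpy as np

def build_pca_model(images, k=20):
    # Rows of X: flattened images
    X = np.stack([img.astype(float).ravel() for img in images])
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions of the image set
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                    # k main eigenvectors
    codes = (X - mean) @ basis.T      # low-dimensional descriptors
    return mean, basis, codes

def project(image, mean, basis):
    # Describing a new image does not require rebuilding the basis,
    # but adding it to the model properly would (batch PCA)
    return (image.astype(float).ravel() - mean) @ basis.T
```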

Other authors have proposed descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field,


some alternatives can be found. On the one hand, in the case of panoramic images, either the two-dimensional DFT or the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all these cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. Finally, the computational cost of describing each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at present, DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
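The rotational invariance of the Fourier Signature follows from the shift theorem: a rotation of the robot in the ground plane shifts the columns of the panoramic image circularly, which changes only the phase of the row-wise DFT coefficients. A minimal sketch follows; the parameter values are illustrative.

```python
import numpy as np

def fourier_signature(panorama, k=16):
    # Row-wise DFT, keeping the first k coefficients of each row
    F = np.fft.fft(panorama.astype(float), axis=1)[:, :k]
    return np.abs(F), np.angle(F)  # magnitudes: position; phases: orientation

def rotation_invariant_distance(mag_a, mag_b):
    return np.linalg.norm(mag_a - mag_b)

def relative_rotation(phase_a, phase_b, width):
    # Rough orientation estimate from the phase shift of the first
    # harmonic, averaged over rows; the sign depends on the
    # direction in which the columns wrap
    dphi = np.angle(np.exp(1j * (phase_b[:, 1] - phase_a[:, 1]))).mean()
    return dphi * width / (2.0 * np.pi)
```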

Other authors have described the scenes globally through approaches based on the gradient, either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose the use of a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to that of Fourier Transform methods [104].
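As an illustration of this gradient-based family, the following sketch computes a single magnitude-weighted histogram of gradient orientations per image; it captures only the spirit of these descriptors, and the actual descriptors in [101-103] are more elaborate.

```python
import numpy as np

def orientation_histogram(image, bins=36):
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)  # normalise so images are comparable
```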

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] are an example of the use of this information: they propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions; this way, the model contains information on the changes that the appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the former, which is the component more prone to change when the lighting conditions do.
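A homomorphic filter can be sketched as follows: the logarithm turns the product of illumination and reflectance into a sum, and a gain function in the frequency domain attenuates the slowly varying luminance component. The gains and cutoff below are illustrative values, not taken from [108].

```python
import numpy as np

def homomorphic_filter(image, cutoff=0.1, low_gain=0.5, high_gain=1.5):
    log_img = np.log1p(image.astype(float))
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = image.shape
    u = (np.arange(rows) - rows / 2) / rows
    v = (np.arange(cols) - cols / 2) / cols
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Low gain near the origin (luminance), high gain elsewhere (reflectance)
    H = low_gain + (high_gain - low_gain) * (1 - np.exp(-D2 / (2 * cutoff ** 2)))
    filtered = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.expm1(filtered)
```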

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined, and then some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer-Aided Design) models, these approaches gave way to simpler models that represent the environment through occupancy grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first, the robot goes through the environment to map it (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, by comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment to map autonomously and to build or update the model while it is moving. SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through previously recorded visual memories that contain sequences of images and associated actions, but no relation between images. This kind of navigation is basically associated with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Quite usually, solving the localization problem requires a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to


create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived by an omnidirectional vision system. These beacons are constituted by four red landmarks situated at the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks as beacons situated at specific and previously known positions to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113], who use the model provided by the RoboCup Middle Size League competition. In this competition, the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

Maps composed of local visual features have been extensively used over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose a map building algorithm based on an object extraction method that uses Lucas-Kanade optical flow motion detection on the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment; they are obtained from the corner points of an extracted object, using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps of large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color-enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is used subsequently to solve the localization problem.
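The two steps can be sketched as follows; scikit-learn's k-means is used here for brevity, and the clustering details in [117] may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=200):
    # Step 1: cluster the local descriptors of all training images;
    # the k cluster centres form the visual vocabulary
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_descriptors))

def bof_histogram(vocabulary, descriptors):
    # Step 2: represent an image by the histogram of the visual
    # words its local features are assigned to
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)
```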

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, with fundamentals that differ from those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment. It consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node. A global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting schema.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that, when two nodes are connected, the environment represented in one node is close to the environment represented by the other node. In this way, a process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, including a comparative evaluation of the performance of some global appearance descriptors. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do so, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained in the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it, while the robot simultaneously estimates its position with respect to the model, is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging with respect to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow-field-of-view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks integrated into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the estimation of the pose of the robot and of the features can be carried out more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proved to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information with some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond these classical approaches have been made, exploiting the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes into account the similarity between the omnidirectional images. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
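The two-step observation model can be outlined as follows; the similarity measure (cosine similarity here) and the search radius are placeholders for the measures actually used in [130].

```python
import numpy as np

def localize_from_views(views, estimated_pose, current_descriptor, radius=2.0):
    # views: list of dicts {'pose': (x, y, theta), 'desc': global descriptor}
    # Step 1: keep the candidate views close to the current pose estimate
    candidates = [v for v in views
                  if np.linalg.norm(np.array(v['pose'][:2]) -
                                    np.array(estimated_pose[:2])) < radius]
    if not candidates:
        candidates = views  # fall back to the whole map
    # Step 2: rank the candidates by image similarity
    sims = [np.dot(v['desc'], current_descriptor) /
            (np.linalg.norm(v['desc']) * np.linalg.norm(current_descriptor))
            for v in candidates]
    return candidates[int(np.argmax(sims))]
```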

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, depending on whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. Also, they employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes beyond, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the introduction of nonlinear image distortion in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes. A rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method tries to build a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process.


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

The authors indicate that the method can work with any low-dimensional features descriptor. They propose the use of descriptors of the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Section 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have expanded greatly in recent years, mainly owing to the large quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods, based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among these problems, the mapping and localization in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth being addressed to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegación de Robots en Entornos Dinámicos Mediante Mapas Compactos con Información Visual de Apariencia Global, and DPI 2016-78361-R, Creación de Mapas Mediante Métodos de Apariencia Visual para la Navegación de Robots, and by the Generalitat Valenciana (GVa) through the project GV2015031, Creación de Mapas Topológicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence, Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1995: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90), Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of non-central catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.

[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), Universiteit Twente, Faculteit Elektrotechniek, pp. 233–240, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.

18 Journal of Sensors

[77] H Bay A Ess T Tuytelaars and L Van Gool ldquoSpeeded-Up Robust Features (SURF)rdquo Computer Vision and ImageUnderstanding vol 110 no 3 pp 346ndash359 2008

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling, and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research, Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, World Scientific and Engineering Academy and Society (WSEAS '06), pp. 43–48, Stevens Point, Wis, USA, 2006.

[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.


Figure 7: (a) Sample omnidirectional image and some projections calculated from it; (b) unit sphere projection; (c) cylindrical projection; (d) perspective projection onto a vertical plane; and (e) orthographic projection.

For example, Pears and Liang [70] extract corners which are located in the floor plane in indoor environments and use homographies to carry out the tracking. Zhou and Li [71] also use homographies, but they extract features through the Harris corner detector [72]. Sometimes, geometrically more complex features are used, as Saripalli et al. do [73]: they carry out a segmentation process to extract predefined features, such as windows, and use Kalman filtering to track and match the features.
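As an illustration of this kind of feature-plus-homography pipeline, the following minimal sketch detects Harris-type corners and estimates the ground-plane homography between two consecutive frames with OpenCV. It is only an outline of the idea in [70–72]: the file names, parameter values, and the assumption that most tracked corners lie on the floor are illustrative, not taken from the cited works.

import cv2
import numpy as np

# Load two consecutive frames (placeholder file names)
prev_img = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr_img = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect Harris-type corners, candidate ground-plane features
corners = cv2.goodFeaturesToTrack(prev_img, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True, k=0.04)

# Track them into the next frame with pyramidal Lucas-Kanade
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, corners, None)
ok = status.flatten() == 1
src, dst = corners[ok], next_pts[ok]

# Corners lying on the floor plane are related by a homography; the RANSAC
# inlier mask therefore acts as a rough ground/obstacle segmentation cue
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("Estimated ground-plane homography:\n", H)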

4.1.2. Local Feature Descriptors. Among the methods for feature extraction and description, SIFT and SURF can be highlighted. On the one hand, SIFT (Scale Invariant Feature Transform) was developed by Lowe [74, 75] and provides features which are invariant to scaling, rotation, and changes in lighting conditions and camera viewpoint. On the other hand, SURF (Speeded-Up Robust Features), whose standard version was developed by Bay et al. [76, 77], is inspired by SIFT but presents a lower computational cost and a higher robustness against image transformations. More recent developments include BRIEF [78], which is designed for real-time applications at the expense of a lower tolerance to image distortions and transformations; ORB [79], which builds on BRIEF and tries to improve its invariance to rotation and its resistance to noise, although it is not robust against changes of scale; and BRISK [80] and FREAK [81], which try to reach the robustness of SURF with an improved computational cost.
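The following short example shows how such local features are typically extracted and matched in practice, using ORB as a representative binary descriptor (any of the detectors above could be swapped in). It is a generic OpenCV sketch, not the specific procedure of any cited work; the image file names are placeholders.

import cv2

map_img = cv2.imread("omni_map_view.png", cv2.IMREAD_GRAYSCALE)      # stored view
query_img = cv2.imread("omni_query_view.png", cv2.IMREAD_GRAYSCALE)  # current view

# Detect keypoints and compute binary descriptors
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(map_img, None)
kp2, des2 = orb.detectAndCompute(query_img, None)

# Binary descriptors are compared with the Hamming distance; cross-checking
# keeps only mutually consistent matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches between the stored and the current view")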

These descriptors have become popular in mapping and localization tasks using mobile robots, as many researchers show: Angeli et al. [82] and Se et al. [83] make use of SIFT descriptors to solve these problems, and the works of Valgren and Lilienthal [84] and Murillo et al. [85] employ SURF features extracted from omnidirectional scenes to estimate the position of the robot in a previously built map. Also, Pan et al. [86] present a method based on the use of BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks. For example, the environment must be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial to incorporate new measurements into the model correctly: small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. Finally, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88, 89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with each image as a whole, without extracting any local information. Each image is represented by means of a unique descriptor which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor: map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases that store the necessary information and carrying out the comparisons between scenes with relative computational efficiency. Occasionally, these descriptors are invariant neither to changes in the robot orientation or in the lighting conditions nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments with visual aliasing, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment: the robot captures a set of images and describes each of them by means of a unique descriptor, and from the information of these descriptors a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or autolocalization phase.
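The following sketch illustrates this two-phase scheme under simplifying assumptions: describe() is a toy global descriptor (a coarse grid of mean intensities) standing in for any of the descriptors reviewed in Section 4.2.2, and localization reduces to a nearest-neighbour search among the stored descriptors.

import numpy as np

def describe(image):
    # Toy global descriptor: 16x16 grid of mean intensities (256 components)
    h, w = image.shape
    grid = image[:h - h % 16, :w - w % 16].reshape(16, h // 16, 16, w // 16)
    return grid.mean(axis=(1, 3)).flatten()

def build_map(images, poses):
    # Learning phase: one global descriptor per captured pose
    return [(describe(img), pose) for img, pose in zip(images, poses)]

def localize(visual_map, query_image):
    # Test phase: return the pose whose stored descriptor is most similar
    d = describe(query_image)
    distances = [np.linalg.norm(d - desc) for desc, _ in visual_map]
    return visual_map[int(np.argmin(distances))][1]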

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method lies in the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90] and make a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors started to build descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower-dimension space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane, and on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to a previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase of the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
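A minimal PCA compression along these lines can be written with a singular value decomposition. This is a batch formulation, so it shares the second limitation mentioned above (all images must be available beforehand); the matrix layout, with one flattened image per row, is an assumption of the sketch.

import numpy as np

def pca_model(images, k=10):
    # images: (N, H*W) matrix with one flattened image per row (assumption)
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centred data
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(image_vec, mean, basis):
    # k-dimensional global descriptor of one image
    return basis @ (image_vec - mean)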

Other authors have proposed descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, both the two-dimensional DFT and the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. Finally, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at present DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
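The rotational invariance that makes the Fourier Signature attractive follows from the shift theorem of the DFT: a rotation of the robot in the ground plane shifts the panoramic image columns circularly, which changes only the phase of the row-wise DFT coefficients, not their magnitude. The following sketch illustrates this property; the number of coefficients kept per row is an arbitrary choice.

import numpy as np

def fourier_signature(panorama, k=16):
    # Row-wise DFT of the panoramic image, keeping the first k coefficients
    return np.fft.fft(panorama, axis=1)[:, :k]

def rotation_invariant_descriptor(panorama, k=16):
    # The magnitude discards the phase, which is where the rotation is encoded
    return np.abs(fourier_signature(panorama, k))

# A circular column shift (the robot rotating in place) leaves the
# magnitudes unchanged:
pano = np.random.rand(64, 256)
rotated = np.roll(pano, 40, axis=1)
assert np.allclose(rotation_invariant_descriptor(pano),
                   rotation_invariant_descriptor(rotated))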

Other authors have described the scenes globally through approaches based on gradient, using either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process; however, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose the use of a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].
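A toy version of such a gradient-orientation histogram can be computed in a few lines; this is a generic illustration of the idea, not the exact descriptor of [101], and the number of bins is an arbitrary choice.

import numpy as np

def orientation_histogram(image, bins=36):
    # Whole-image distribution of edge orientations, weighted by magnitude
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)  # orientations in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-9)  # normalised global descriptor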

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information: they propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. The first approach consists in considering sets of images captured under different lighting conditions; this way, the model contains information on the changes that appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the former, which is the one that is more prone to change when the lighting conditions do.
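A possible sketch of such a homomorphic filter is shown below: the logarithm of the image turns the multiplicative illumination-reflectance model into an additive one, and a high-pass weighting in the frequency domain attenuates the slowly varying luminance. The gain and cutoff values are illustrative, not taken from [108].

import numpy as np

def homomorphic_filter(image, cutoff=0.1, low_gain=0.5, high_gain=1.5):
    # log turns illumination * reflectance into illumination + reflectance
    log_img = np.log1p(image.astype(float))
    F = np.fft.fftshift(np.fft.fft2(log_img))
    # Radial Gaussian high-pass weighting: attenuate low frequencies
    # (luminance), keep or amplify high frequencies (reflectance)
    h, w = image.shape
    y = (np.arange(h) - h // 2).reshape(-1, 1) / h
    x = (np.arange(w) - w // 2).reshape(1, -1) / w
    weight = low_gain + (high_gain - low_gain) * \
        (1.0 - np.exp(-(x ** 2 + y ** 2) / cutoff ** 2))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * weight)))
    return np.expm1(filtered)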

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined, and then some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer-Aided Design) models, these approaches gave way to simpler models that represent the environment through occupancy grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, by comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment to map autonomously and to build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through previously recorded visual memories that contain sequences of images and associated actions, but no relation between images. This kind of navigation is basically associated with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods: the information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors to solve the mapping, localization, and navigation problems has expanded in recent years. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated at the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks as beacons situated on specific and previously known positions to estimate its pose. On other occasions, a more complete description of the environment is available beforehand, as in the work of Lu et al. [113]. They use the model provided by the RoboCup Middle Size League competition, where the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm in order to perform accurate and efficient localization tracking.
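The rough localization stage of such schemes is often a particle filter over the map. The following sketch shows one Monte Carlo localization step under strong simplifications: poses are propagated with Gaussian noise and weighted by an image-similarity score; similarity() and the map layout are placeholders, not the formulation of [113].

import numpy as np

def mcl_step(particles, motion, map_entries, query_desc, similarity, noise=0.05):
    # particles: (P, 2) array of x-y hypotheses; map_entries: list of
    # (pose, descriptor) pairs; similarity() returns a positive score
    particles = particles + motion + np.random.normal(0.0, noise, particles.shape)
    weights = np.empty(len(particles))
    for i, p in enumerate(particles):
        # Weight each particle by the similarity between the current image
        # descriptor and the descriptor stored closest to the hypothesis
        nearest = min(map_entries, key=lambda e: np.linalg.norm(p - e[0]))
        weights[i] = similarity(query_desc, nearest[1])
    weights /= weights.sum()
    # Resample particles proportionally to their weights
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]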

The maps composed of local visual features have been extensively used over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of motion vectors as feature points of the environment; they are obtained based on the corner points of an extracted object and using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps in large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color-enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process consisting mainly in clustering the visual information captured by an omnidirectional vision system, which is performed in two steps. First, local features are extracted from all the images and used to carry out a K-means clustering, obtaining the k cluster centres. Second, these centres constitute the visual vocabulary, which is used subsequently to solve the localization problem.
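A compact sketch of this two-step vocabulary construction is shown below, using OpenCV's k-means; the number of words and the hard-assignment strategy are illustrative choices rather than those of [117], and the local descriptors are assumed to be real-valued vectors (e.g., SIFT or SURF).

import cv2
import numpy as np

def build_vocabulary(all_descriptors, k=100):
    # Step 1: cluster the local descriptors of all map images into k words
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centres = cv2.kmeans(all_descriptors.astype(np.float32), k, None,
                               criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centres  # the visual vocabulary

def bag_of_words(descriptors, centres):
    # Step 2: represent one image as a normalised histogram of word counts
    d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centres)).astype(float)
    return hist / hist.sum()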

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, with fundamentals different to those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment: an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node; a global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting schema.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that when two nodes are connected, the environment represented in one node is close to the environment represented by the other. A process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, including a comparative evaluation of the performance of some global appearance descriptors. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements; to do it, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches suppose that the robot trajectory is contained in the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classical sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow-field-of-view cameras. They also made use of the EKF, but instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the localization accuracy taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).
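At the core of these EKF-based systems is the standard measurement update, which is independent of the specific camera. The sketch below shows the generic correction step; the observation function h() and its Jacobian H would come from the omnidirectional camera model and are assumptions of the sketch, not a specific model from the cited works.

import numpy as np

def ekf_update(x, P, z, h, H, R):
    # x: state mean, P: state covariance, z: measurement vector,
    # h: observation function, H: its Jacobian at x, R: measurement noise
    y = z - h(x)                    # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new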

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 degrees.

More recently, some proposals that go beyond the classical approaches have been made, using the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, simultaneously constructing a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, depending on whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes further, since it takes into account that on some occasions the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization; they exploit the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process. The authors indicate that the method can work with any low-dimensional feature descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image, and a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.

Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological/sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks described and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the big quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among these problems, the mapping and localization tasks in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth addressing to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegación de Robots en Entornos Dinámicos Mediante Mapas Compactos con Información Visual de Apariencia Global, and DPI 2016-78361-R, Creación de Mapas Mediante Métodos de Apariencia Visual para la Navegación de Robots, and by the Generalitat Valenciana (GVa) through the project GV/2015/031, Creación de Mapas Topológicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence – Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A Y Hata and D F Wolf ldquoOutdoor mapping using mobilerobots and laser range findersrdquo in Proceedings of the ElectronicsRobotics and Automotive Mechanics Conference (CERMA rsquo09)pp 209ndash214 September 2009

[10] C-H Chen andK-T Song ldquoComplete coveragemotion controlof a cleaning robot using infrared sensorsrdquo in Proceedings of theIEEE International Conference on Mechatronics (ICM rsquo05) pp543ndash548 July 2005

[11] J Choi S Ahn M Choi and W K Chung ldquoMetric SLAMin home environment with visual objects and sonar featuresrdquoin Proceedings of the IEEERSJ International Conference onIntelligent Robots and Systems (IROS rsquo06) pp 4048ndash4053October 2006

[12] K Hara S Maeyama and A Gofuku ldquoNavigation path scan-ning system for mobile robot by laser beamrdquo in Proceedingsof the SICE Annual Conference of International Conference onInstrumentation Control and Information Technology pp 2817ndash2821 August 2008

[13] W-C Chang andC-Y Chuang ldquoVision-based robot navigationand map building using active laser projectionrdquo in Proceedingsof the IEEESICE International Symposiumon System Integration(SII rsquo11) pp 24ndash29 IEEE December 2011

[14] G N DeSouza and A C Kak ldquoVision for mobile robotnavigation a surveyrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 24 no 2 pp 237ndash267 2002

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research, Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.


[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnorr, and B. Jahne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Faculteit Elektrotechniek, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, A Wiley Interscience publication, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, pp. 43–48, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.


Finally, Pan et al. [86] use BRISK descriptors to estimate the position of an unmanned aerial vehicle.

4.1.3. Using Methods Based on Local Features to Build Visual Maps. Using feature-based approaches in combination with probabilistic techniques, it is possible to build metric maps [87]. However, these methods present some drawbacks; for example, it is necessary that the environment be rich in prominent details (otherwise, artificial landmarks can be inserted in the environment, but this is not always possible); also, the detection of such points is sometimes not robust against changes in the environment, and their description is not always fully invariant to changes in robot position and orientation. Besides, camera calibration is crucial in order to incorporate new measurements in the model correctly. This way, small deviations in either the intrinsic or the extrinsic parameters add some error to the measurements. At last, extracting, describing, and comparing landmarks are computationally complex processes that often make it infeasible to build the model in real time as the robot explores the environment.

Feature-based approaches have reached a relative maturity, and some comparative evaluations of their performance have been carried out, such as [88] and [89]. These evaluations are useful to choose the most suitable extractor and descriptor for a specific application.

4.2. Methods Based on the Global Appearance of Scenes

4.2.1. General Issues. The approaches based on the global appearance of scenes work with the images as a whole, without extracting any local information. Each image is represented by means of a unique descriptor which contains information on its global appearance. This kind of method presents some advantages in dynamic and/or poorly structured environments, where it is difficult to extract stable characteristic features or regions from a set of scenes. These approaches lead to conceptually simpler algorithms, since each scene is described by means of only one descriptor. Map creation and localization can be achieved just by storing these descriptors and comparing them pairwise. As a drawback, extracting metric relationships from this information is difficult; thus, this family of techniques is usually employed to build topological maps (unless the visual information is combined with other sensory data, such as odometry). Despite their simplicity, several difficulties must be faced when using these techniques. Since no local information is extracted from the scenes, it is necessary to use some compression and description method that makes the process computationally feasible. Nevertheless, the current image description and compression methods permit optimising the size of the databases to store the necessary information and carrying out the comparisons between scenes with relative computational efficiency. Occasionally, these descriptors are invariant neither to changes in the robot orientation or in the lighting conditions nor to other changes in the environment (position of objects, doors, etc.). They will also experience some problems in environments where visual aliasing is present, which is a common phenomenon in indoor environments with repetitive visual structures.

In spite of these disadvantages, techniques based on global appearance constitute a systematic and intuitive alternative to solve the mapping and localization problems. In the works that make use of this approach, these tasks are usually addressed in two steps. The first step consists in creating a model of the environment. In this step, the robot captures a set of images and describes each of them by means of a unique descriptor; from the information of these descriptors, a map is created and some relationships are established between images or robot poses. This step is known as the learning phase. Once the environment has been modelled, in the second step, the robot carries out the localization by capturing an image, describing it, and comparing this descriptor with the descriptors previously stored in the model. This step is also known as the test or auto-localization phase.
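
In essence, the auto-localization phase reduces to a pairwise comparison between the current descriptor and the descriptors stored during the learning phase. The following minimal sketch (in Python with NumPy; the function name and the choice of Euclidean distance are illustrative assumptions, since each framework defines its own similarity measure) shows the idea:

    import numpy as np

    def localize(query_descriptor, map_descriptors):
        # map_descriptors: one row per stored pose, built in the learning phase.
        # The estimated location is the pose whose descriptor is most similar
        # to the descriptor of the image captured at the current position.
        distances = np.linalg.norm(map_descriptors - query_descriptor, axis=1)
        best = int(np.argmin(distances))
        return best, float(distances[best])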

4.2.2. Global Appearance Descriptors. The key to the proper functioning of this kind of method is the global description algorithm used. Different alternatives can be found in the related literature. Some of the pioneering works on this approach were developed by Matsumoto et al. [90], who made a direct comparison between the pixels of a central region of the scenes using a correlation process. However, considering the high computational cost of this method, the authors started to build descriptors that stored global information from the scenes, using depth information estimated through stereo divergence [91].

Among the techniques that try to compress the visual information globally, Principal Components Analysis (PCA) [92] can be highlighted as one of the first robust alternatives used. PCA considers the images as multidimensional data that can be projected onto a lower dimension space, retaining most of the original information. Some authors, like Krose et al. [93, 94] and Stimec et al. [95], make use of this method to create robust models of the environment using mobile robots. The first PCA developments presented two main problems. On the one hand, the descriptors were variant against changes in the orientation of the robot in the ground plane, and, on the other hand, all the images had to be available to build the model, which means that the map cannot be built online as the robot explores the environment. If a new image has to be added to the previously built PCA model (e.g., because the robot must update this representation as new relevant images are captured), it is necessary to start the process from scratch, using again all the images captured so far. Some authors have tried to overcome these drawbacks. First, Jogan and Leonardis [96] proposed a version of PCA that provides descriptors which are invariant against rotations of the robot in the ground plane when omnidirectional images are used, at the expense of a substantial increase of the computational cost. Second, Artac et al. [97] used an incremental version of PCA that permits adding new images to a previously built model.
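
As a rough illustration of the batch formulation, the following sketch (Python/NumPy; the function names and the use of the SVD are assumptions made here for compactness) builds a PCA subspace from a set of flattened images and projects a new image onto it. Note that, as discussed above, adding a new image to the model would require repeating the whole computation unless an incremental variant [97] is used:

    import numpy as np

    def build_pca_model(images, k):
        # images: (n_images, n_pixels) matrix; each row is a flattened scene.
        mean = images.mean(axis=0)
        _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
        return mean, vt[:k]          # the k principal directions of the set

    def describe(image, mean, basis):
        # k-dimensional global appearance descriptor of a flattened image.
        return basis @ (image - mean)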

Other authors have proposed the implementation of descriptors based on the application of the Discrete Fourier Transform (DFT) to the images, with the aim of extracting the most relevant information from the scenes. In this field, some alternatives can be found. On the one hand, in the case of panoramic images, both the two-dimensional DFT and the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all the cases, the resulting descriptor is able to compress most of the information contained in the original scene in a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot, but also its relative orientation. At last, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of the descriptors. Taking these features into account, at this moment, DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
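
A minimal sketch of the idea behind the Fourier Signature follows (Python/NumPy; the parameter names are illustrative). Each row of a panoramic image is transformed with a one-dimensional DFT; by the shift theorem, a rotation of the robot on the ground plane, which appears as a circular shift of the columns, changes only the phases of the components, so the magnitudes can be used as a rotation-invariant descriptor of position while the phases encode the relative orientation:

    import numpy as np

    def fourier_signature(panorama, k):
        # panorama: 2D array (rows, columns) of a panoramic scene.
        rows_fft = np.fft.fft(panorama, axis=1)
        # Keep only the first k low-frequency components of each row, which
        # concentrate most of the information in typical scenes.
        magnitudes = np.abs(rows_fft[:, :k])    # invariant to column shifts
        phases = np.angle(rows_fft[:, :k])      # encode relative orientation
        return magnitudes, phases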

Other authors have described the scenes globally through approaches based on gradients, either their magnitude or their orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientations to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose using a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].
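
A global gradient-based descriptor can be as simple as a magnitude-weighted histogram of gradient orientations computed over the whole scene. The following sketch (Python/NumPy, with an illustrative number of bins) is a simplified member of this family of descriptors, not the exact formulation of [101] or [102]:

    import numpy as np

    def gradient_orientation_histogram(image, bins=16):
        gy, gx = np.gradient(image.astype(float))
        magnitude = np.hypot(gx, gy)
        orientation = np.arctan2(gy, gx)        # in [-pi, pi]
        hist, _ = np.histogram(orientation, bins=bins,
                               range=(-np.pi, np.pi), weights=magnitude)
        return hist / (hist.sum() + 1e-9)       # normalised descriptor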

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information. They propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps. Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions. This way, the model would contain information on the changes that appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the former, which is the one that is more prone to change when the lighting conditions do.
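
As a sketch of this second approach, a homomorphic filter can be implemented by taking the logarithm of the image (which turns the product of luminance and reflectance into a sum) and attenuating the low frequencies, where the luminance component concentrates. The following code (Python/NumPy, for a grayscale image; the filter shape and gain values are illustrative assumptions) outlines the process:

    import numpy as np

    def homomorphic_filter(image, cutoff=0.1, low_gain=0.5, high_gain=1.5):
        # log turns luminance * reflectance into luminance + reflectance.
        log_img = np.log1p(image.astype(float))
        spectrum = np.fft.fftshift(np.fft.fft2(log_img))
        rows, cols = image.shape
        y, x = np.ogrid[-rows // 2:rows - rows // 2,
                        -cols // 2:cols - cols // 2]
        radius = np.sqrt((y / rows) ** 2 + (x / cols) ** 2)
        # Gaussian-shaped high-pass gain: low_gain at DC (illumination),
        # high_gain at high frequencies (reflectance details).
        h = (high_gain - low_gain) * (1 - np.exp(-(radius ** 2) /
                                                 (2 * cutoff ** 2))) + low_gain
        filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
        return np.expm1(filtered)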

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined, and then some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer Assisted Design) models, these approaches gave way to simpler models that represent the environment through occupation grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first, the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, by comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment to map autonomously and to build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through visual memories previously recorded that contain sequences of images and associated actions, but no relation between images. This kind of navigation is associated basically with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization. Quite usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating the position of the robot metrically, up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors has expanded in recent years to solve the mapping, localization, and navigation problems. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated in the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks, situated on specific and previously known positions, as beacons to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the RoboCup Middle Size League competition. Taking this competition into account, the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization, and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.

Maps composed of local visual features have been used extensively over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to be used with omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment. They are obtained based on the corner points of an extracted object and using Lucas-Kanade optical flow.
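
As an illustration of this kind of processing, the following sketch tracks corner points between two consecutive images with the pyramidal Lucas-Kanade implementation available in OpenCV (the file names and parameter values are assumptions; [114] applies the idea to omnidirectional images and adds its own object extraction step):

    import cv2
    import numpy as np

    prev = cv2.imread("omni_frame_0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("omni_frame_1.png", cv2.IMREAD_GRAYSCALE)

    # Detect corner points in the first frame.
    corners = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)

    # Track them into the next frame; the resulting motion vectors can be
    # filtered to keep the points used as features of the environment.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None)
    motion_vectors = (next_pts - corners)[status.flatten() == 1]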

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps in large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process, consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is used subsequently to solve the localization problem.
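
These two steps can be sketched as follows (Python, using SciPy's k-means as an illustrative clustering backend; local descriptor extraction is assumed to be done elsewhere, e.g., with any of the detectors and descriptors discussed in Section 4). The vocabulary is the set of k cluster centres, and each image is then represented as a histogram of visual word occurrences:

    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    def build_vocabulary(all_descriptors, k):
        # all_descriptors: (n, d) array stacking the local descriptors of
        # every training image; the k centroids form the visual vocabulary.
        centroids, _ = kmeans2(all_descriptors.astype(float), k, minit="++")
        return centroids

    def bag_of_features(image_descriptors, vocabulary):
        # Assign each descriptor to its nearest visual word and build a
        # normalised histogram of word occurrences for the image.
        words, _ = vq(image_descriptors.astype(float), vocabulary)
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / (hist.sum() + 1e-9)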

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, using fundamentals that differ from those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment. It consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in the case of indoor environments, the vertical edges divide the rooms in regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node. A global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed, using panoramic color images and color histograms in a voting schema.

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image, representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that, when two nodes are connected, the environment represented in one node is close to the environment represented by the other node. In this way, a process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, and a comparative evaluation of the performance of some global appearance descriptors is included. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
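
The intuition behind such force-based layouts can be sketched as follows (Python/NumPy; this is a generic mass-spring-damper relaxation written under assumptions made here, not the exact formulation of [63]): each pair of nodes is joined by a virtual spring whose rest length grows with the dissimilarity between their global appearance descriptors, so that, after relaxation, visually similar nodes end up close together:

    import numpy as np

    def spring_layout(positions, dissimilarity, k=0.1, damping=0.9, steps=500):
        # positions: (n, 2) initial node coordinates;
        # dissimilarity[i, j]: rest length of the spring joining i and j.
        velocity = np.zeros_like(positions)
        n = len(positions)
        for _ in range(steps):
            force = np.zeros_like(positions)
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    delta = positions[j] - positions[i]
                    dist = np.linalg.norm(delta) + 1e-9
                    # Hooke's law toward the rest length.
                    force[i] += k * (dist - dissimilarity[i, j]) * delta / dist
            velocity = damping * (velocity + force)
            positions = positions + velocity
        return positions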

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do so, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches suppose that the robot trajectory is contained on the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it, while the robot simultaneously estimates its position with respect to the model, is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to know the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field of view cameras. They also made use of the EKF, but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves. Thanks to this, the estimation of the pose of the robot and the features can be carried out more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors have employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to be used with omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map which is composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct (LSD) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, using the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_l,n = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, thanks to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
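As an illustration of this two-stage observation model, the sketch below first selects candidate views by Euclidean distance to a pose prior and then ranks them by descriptor similarity. The helper names, the fallback to a global search, and the normalised-correlation similarity are assumptions made here for clarity, not details taken from [130]:

```python
import numpy as np

def localize_against_views(pose_prior, img_desc, views, d_max=2.0):
    """Two-stage localization against a view-based map. views is a
    list of dicts {'pose': (x, y, theta), 'desc': np.ndarray}.
    Stage 1 keeps the views whose stored pose lies within d_max of the
    pose prior (e.g., an odometry prediction); stage 2 returns the
    candidate whose descriptor is most similar to the current one."""
    candidates = [v for v in views
                  if np.hypot(v['pose'][0] - pose_prior[0],
                              v['pose'][1] - pose_prior[1]) < d_max]
    if not candidates:        # no view nearby: fall back to a global search
        candidates = views
    def similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda v: similarity(img_desc, v['desc']))
```

In [130], the correspondences between the current image and the selected view would then be combined with the epipolar restriction to refine the estimated pose.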

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems

In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, depending on whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons, to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes beyond this, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the introduction of nonlinear image distortion in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.
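A common skeleton behind these local-feature approaches is image retrieval by feature matching: extract SIFT features from the query image and from every database image, then rank the database by the number of reliable matches. The sketch below uses OpenCV and Lowe's ratio test; it is a generic baseline operating on assumed file-path inputs, not the distortion-aware matching proposed in [134]:

```python
import cv2

def best_matching_image(query_path, db_paths, ratio=0.75):
    """Rank the database images by the number of SIFT matches that
    survive Lowe's ratio test, and return the best one."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_des = sift.detectAndCompute(query, None)
    best, best_score = None, -1
    for path in db_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = sift.detectAndCompute(img, None)
        score = 0
        if des is not None and q_des is not None:
            pairs = matcher.knnMatch(q_des, des, k=2)
            score = sum(1 for p in pairs
                        if len(p) == 2 and p[0].distance < ratio * p[1].distance)
        if score > best_score:
            best, best_score = path, score
    return best, best_score
```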

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, together with a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference view, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out a pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process.


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

The authors indicate that the method can work with any low-dimensional feature descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels of the images, and all the images in the database whose difference exceeds a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image, and a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.
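All these global-appearance methods share a simple computational core: describe each scene with a single compact descriptor and rank the stored scenes by descriptor distance. The following sketch illustrates this with a Radon-transform descriptor, in the spirit of the first method of [136]; the number of projection angles, the normalisation, and the plain Euclidean ranking are simplifying assumptions of this example (all map images are assumed to share the same size):

```python
import numpy as np
from skimage.transform import radon

def radon_descriptor(img, n_angles=90):
    """Global-appearance descriptor: the Radon transform of an
    omnidirectional image over n_angles projection directions,
    normalised so that scenes can be compared with a plain
    Euclidean distance."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img.astype(float), theta=theta, circle=False)
    return sinogram / np.linalg.norm(sinogram)

def rough_localization(query_desc, map_descs):
    """Pairwise comparison against the stored sequence of scenes:
    return the index of the most similar one."""
    dists = [np.linalg.norm(query_desc - d) for d in map_descs]
    return int(np.argmin(dists))
```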

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the big quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them specially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can be found currently to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, specially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is specially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods, based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows how omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitively help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among these problems, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth being addressed to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegación de Robots en Entornos Dinámicos Mediante Mapas Compactos con Información Visual de Apariencia Global, and DPI 2016-78361-R, Creación de Mapas Mediante Métodos de Apariencia Visual para la Navegación de Robots, and by the Generalitat Valenciana (GVa) through the project GV/2015/031, Creación de Mapas Topológicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, “A multi-level planning and navigation system for a mobile robot: a first approach to HILARE,” in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, “Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data,” Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, “Robust mapping and localization in indoor environments using sonar data,” International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, “Omni-directional image matching for homing navigation based on optical flow algorithm,” in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS ’12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, “Mobile robot localization by tracking geometric beacons,” IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, “Line segment based map building and localization using 2D laser rangefinder,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, “Improving simultaneous localization and mapping in 3D using global constraints,” in Proceedings of the 20th National Conference on Artificial Intelligence, Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, “A new 3D map reconstruction based mobile robot navigation,” in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, “Outdoor mapping using mobile robots and laser range finders,” in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA ’09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, “Complete coverage motion control of a cleaning robot using infrared sensors,” in Proceedings of the IEEE International Conference on Mechatronics (ICM ’05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, “Metric SLAM in home environment with visual objects and sonar features,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, “Navigation path scanning system for mobile robot by laser beam,” in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, “Vision-based robot navigation and map building using active laser projection,” in Proceedings of the IEEE/SICE International Symposium on System Integration (SII ’11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, “Vision for mobile robot navigation: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, “Visual navigation for mobile robots: a survey,” Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, “Effective exploration strategies for the construction of visual maps,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, “Fast human detection using a cascade of histograms of oriented gradients,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, “Localization and position correction for mobile robot using artificial visual landmarks,” in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS ’11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, “Advances in computational stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, “Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, “The navigation of mobile robot based on stereo vision,” in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA ’12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, “Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine,” in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, “Trinocular stereo vision for robotics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, “Omni-directional vision for robot navigation,” in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS ’00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, “Vision-based navigation and environmental representations with an omnidirectional camera,” IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, “Image-based navigation in real environments using panoramas,” in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE ’05), pp. 57–59, October 2005.

[27] S. Nayar, “Omnidirectional video camera,” in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, “Appearance-based dense maps creation: comparison of compression techniques with panoramic images,” in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research, Inc., “Ladybug Cameras,” https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, “Investigation of fish-eye lenses for small-UAV aerial photography,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, “XXIII. Fish-eye views, and vision under water,” Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, “Acquisition of spherical image by fish-eye conversion lens,” in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, “Full-view spherical image camera,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR ’06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, “Generation of perspective and panoramic video from omnidirectional video,” in Proceedings of the DARPA Image Understanding Workshop (IUW ’97), pp. 243–246, 1997.

[36] D. W. Rees, “Panoramic television viewing system,” US Patent Application 3505465, 1970.

[37] S. Bogner, “An introduction to panospheric imaging,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor HyperOmni vision,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, “Development of an omnidirectional vision system,” Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, “Cata-fisheye camera for panoramic imaging,” in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV ’08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, “A practical spherical mirror omnidirectional camera,” in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE ’05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, “Panorama scene analysis with conic projection,” in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS ’90), Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, “Omnidirectional stereo vision with a hyperbolic double lobed mirror,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR ’04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, “An omnidirectional vision sensor with single viewpoint and constant resolution,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, “A theory of catadioptric image formation,” in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, “Camera models and fundamental concepts used in geometric computer vision,” Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, “Estimating parameters of noncentral catadioptric systems using bundle adjustment,” Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.


[48] X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, “Calibration of a catadioptric omnidirectional vision system with conic mirror,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, “A flexible technique for accurate omnidirectional camera calibration and structure from motion,” in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS ’06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, “A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics,” The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, “A unifying theory for central panoramic systems and practical implications,” in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, “View-based robot localization using spherical harmonics: concept and first experimental results,” in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, “View-based robot localization using illumination-invariant spherical harmonics descriptors,” in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP ’08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, “Direct 3D-rotation estimation from spherical images via a generalized shift theorem,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, “Rotation estimation from spherical images,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR ’04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, “Rotation recovery from spherical images without correspondences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, “Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform,” in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, “Application of particle filters to vision-based orientation estimation using harmonic analysis,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, “Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, “Cylindrical panoramic image-based model for robot localization,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schlüter, M. Krzykawski, and R. Möller, “Parsimonious loop-closure detection based on global image-descriptors of panoramic images,” in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR ’11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, “Map building and Monte Carlo localization using global appearance of omnidirectional images,” Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, “Persistent navigation and mapping using a biologically inspired SLAM system,” The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, “Visual homing from scale with an uncalibrated omnidirectional camera,” IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, “Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, “Robot navigation behaviors based on omnidirectional vision and information theory,” Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, “Creating a bird-eye view map using an omnidirectional camera,” in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC ’08), pp. 233–240, Universiteit Twente, Faculteit Elektrotechniek, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, “Video tracking: a concise survey,” IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, “Ground plane segmentation for mobile robot visual navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, “Homography-based ground detection for a mobile robot platform using a single camera,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, “Detection and tracking of external features in an urban environment using an autonomous helicopter,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV ’99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.


[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: binary robust independent elementary features,” in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to SIFT or SURF,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV ’11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: binary robust invariant scalable keypoints,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV ’11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, “FREAK: fast retina keypoint,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, “Visual topological SLAM and global localization,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, “Vision-based global localization and mapping for mobile robots,” IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, “SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments,” Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, “SURF features for efficient robot localization with omnidirectional images,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, “BRISK based target localization for fixed-wing UAV’s vision-based autonomous landing,” in Proceedings of the IEEE International Conference on Information and Automation (ICIA ’15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, “Estimation of visual maps with a robot network equipped with vision sensors,” Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, “A comparative evaluation of interest point detectors and local descriptors for visual SLAM,” Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, “Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments,” Revista Iberoamericana de Automática e Informática Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, “Visual navigation using omnidirectional view sequence,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, “Vision-based approach to robot navigation,” in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Möller, A. Vardy, S. Kreft, and S. Ruwisch, “Visual homing in environments with anisotropic landmark distribution,” Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Kröse, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, “Household robots look and learn: environment modeling and localization from an omnidirectional vision system,” IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, “Unsupervised learning of a hierarchy of topological maps using omnidirectional images,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, “Robust localization using eigenspace of spinning-images,” in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS ’00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, “Mobile robot localization using an incremental eigenspace model,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, “Image-based memory for robot navigation using properties of omnidirectional images,” Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, “Toward topological localization with spherical Fourier transform and uncalibrated camera,” in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, “Image-based Monte Carlo localisation with omnidirectional images,” Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, “Qualitative image based localization in indoors environments,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, “Localization in urban environments using a panoramic gist descriptor,” IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, “Building the gist of a scene: the role of global image features in recognition,” in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, “Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors,” Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, “Mobile robot self-localization based on global visual appearance features,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, “Visual learning and recognition of 3-D objects from appearance,” International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D’Angelo, and G. N. R. D’Angelo, “Automatic plate detection using genetic algorithm,” in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS ’06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.


[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, “Robotic mapping: a survey,” in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, “Vision-based topological mapping and localization methods: a survey,” Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, “Navigation in hybrid metric-topological maps,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, “Localization of agricultural vehicle using landmarks based on omni-directional vision,” in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA ’09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, “Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots,” Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, “Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images,” ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, “Incremental topological mapping using omnidirectional vision,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, “Omnidirectional vision based topological navigation,” International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, “Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots,” Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, “Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, “Appearance-based place recognition for topological localization,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, “Performance of global-appearance descriptors in map building and localization using omnidirectional vision,” Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, “Semantic labeling for indoor topological mapping using a wearable catadioptric system,” Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, “Bayesian inference in the space of topological maps,” IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Straßer, “Learning to localize with Gaussian process regression on omnidirectional image data,” in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, “Comparison of omnidirectional and conventional monocular systems for visual SLAM,” in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS ’10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, “Monocular omnidirectional vision based robot localisation and mapping,” in TAROS 2008 – Towards Autonomous Robotic Systems, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, “A performance analysis of omnidirectional vision based simultaneous localization and mapping,” in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, “SLAM in indoor environments using omnidirectional vertical and horizontal line features,” Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, “The SLAM algorithm of mobile robot with omnidirectional vision based on EKF,” in Proceedings of the IEEE International Conference on Information and Automation (ICIA ’12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, “Large-scale direct SLAM for omnidirectional cameras,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, “A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images,” Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, “Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping,” IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, “Localisation using automatically selected landmarks from panoramic images,” in Proceedings of the Australian Conference on Robotics and Automation (ARAA ’00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, “Robot homing by exploiting panoramic vision,” Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, “Localization in indoor environments by querying omnidirectional visual maps using perspective images,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, “Memory-based navigation using omni-view sequence,” in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, “Position estimation and local mapping using omnidirectional images and global appearance descriptors,” Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, “SURF features for efficient robot localization with omnidirectional images,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, “Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information,” in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP ’14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.




some alternatives can be found. On the one hand, in the case of panoramic images, both the two-dimensional DFT and the Fourier Signature can be implemented, as Paya et al. [28] and Menegatti et al. [98] show, respectively. On the other hand, in the case of omnidirectional images, the Spherical Fourier Transform (SFT) can be used, as Rossi et al. [99] show. In all cases, the resulting descriptor is able to compress most of the information contained in the original scene into a lower number of components. Also, these methods permit building descriptors which are invariant against rotations of the robot in the ground plane. Apart from this, they contain enough information to estimate not only the position of the robot but also its relative orientation. At last, the computational cost to describe each scene is relatively low, and each descriptor can be calculated independently of the rest of descriptors. Taking these features into account, at this moment DFT outperforms PCA as far as mapping and localization are concerned. As an example, Menegatti et al. [100] show how a probabilistic localization process can be carried out within a visual memory previously created by means of the Fourier Signature of a set of panoramic scenes as the only source of information.
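To make these properties concrete, the sketch below computes a simple Fourier Signature (row-wise DFT of a panoramic image, truncated to k components, following the general idea of [98, 100]) and estimates a relative orientation from the phase shift of the first harmonic. Taking the median across rows and the particular sign convention are assumptions of this sketch:

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise DFT of a panoramic image, truncated to the first k
    coefficients per row. A rotation of the robot in the ground plane
    is a circular column shift of the panorama, so the magnitudes are
    rotation-invariant while the phases encode the orientation."""
    F = np.fft.fft(panorama.astype(float), axis=1)[:, :k]
    return np.abs(F), np.angle(F)

def relative_orientation(phases_a, phases_b, width):
    """Estimate the relative rotation (in columns) between two scenes
    captured at nearby poses, from the per-row phase shift of the
    first harmonic (sign follows the np.roll convention below)."""
    dphi = np.angle(np.exp(1j * (phases_b[:, 1] - phases_a[:, 1])))
    return -np.median(dphi) * width / (2 * np.pi)

# synthetic check: shifting the columns leaves the magnitudes intact
pan_a = np.random.rand(64, 256)
pan_b = np.roll(pan_a, 40, axis=1)
mag_a, ph_a = fourier_signature(pan_a)
mag_b, ph_b = fourier_signature(pan_b)
assert np.allclose(mag_a, mag_b)
shift = relative_orientation(ph_a, ph_b, width=256)  # close to 40 columns
```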

Other authors have described the scenes globally through approaches based on the gradient, either its magnitude or its orientation. As an example, Kosecka et al. [101] make use of a histogram of gradient orientation to describe each scene, with the goal of creating a map of the environment and carrying out the localization process. However, some comparisons between local areas of the candidate zones are performed to refine these processes. Murillo et al. [102] propose the use of a panoramic gist descriptor [103] and try to optimise its size while keeping most of the information of the environment. The approaches based on gradients and gist tend to present a performance similar to Fourier Transform methods [104].
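A minimal example of this family of descriptors is a magnitude-weighted histogram of gradient orientations computed over the whole image. This is a simplified stand-in for the descriptors used in [101-103], not their actual formulation:

```python
import numpy as np

def gradient_orientation_histogram(img, bins=36):
    """Global descriptor: histogram of gradient orientations over the
    whole image, weighted by the gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # orientations in [-pi, pi]
    h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return h / (np.linalg.norm(h) + 1e-9)
```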

The information stored in the color channels also constitutes a useful alternative to build global appearance descriptors. The works of Zhou et al. [105] show an example of the use of this information: they propose building histograms that contain information on color, gradient, edge density, and texture to represent the images.

4.2.3. Using Methods Based on Global Appearance to Build Visual Maps

Finally, when working with global appearance, it is necessary to consider that the appearance of an image strongly depends on the lighting conditions of the environment represented. This way, the global appearance descriptor should be robust against changes in these conditions. Some researchers have focused on this topic and have proposed different solutions. First, the problem can be addressed by considering sets of images captured under different lighting conditions; this way, the model would contain information on the changes that appearance can undergo. As an example, the works developed by Murase and Nayar [106] solved the problem of visual recognition of objects under changing lighting conditions using this approach. The second approach consists in trying to remove or minimise the effects produced by these changes during the creation of the descriptor, to obtain a normalised model. This approach is mainly based on the use of filters: for example, filters to detect and extract edges [107], since this information tends to be more insensitive to changes in lighting conditions than the intensity information, or homomorphic filters [108], which try to separate the luminance and reflectance components and minimise the first component, which is the one that is more prone to change when the lighting conditions do.
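As an illustration of this second approach, the sketch below applies a basic homomorphic filter: the image is taken to the log domain, where illumination (slowly varying) and reflectance (rapidly varying) become additive, and a high-frequency emphasis attenuates the illumination component [108]. The Gaussian transfer function and the gain values are arbitrary choices for the example:

```python
import numpy as np

def homomorphic_filter(img, cutoff=0.1, gain_low=0.5, gain_high=1.5):
    """Basic homomorphic filter: log -> FFT -> attenuate low / boost
    high frequencies -> inverse FFT -> exp. Reduces the effect of
    lighting changes on a grayscale image."""
    log_img = np.log1p(img.astype(float))
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = ((x - cols / 2) / cols) ** 2 + ((y - rows / 2) / rows) ** 2
    H = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-d2 / cutoff ** 2))
    out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.expm1(out)
```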

5. Mapping, Localization, and Navigation Using Omnidirectional Vision Sensors

This section addresses the problems of mapping, localization, and navigation using the information provided by omnidirectional vision sensors. First, the main approaches to solve these tasks are outlined, and then some relevant works developed within these approaches are described.

While the initial works tried to model the geometry of the environment with metric precision from visual information, arranging the information through CAD (Computer Assisted Design) models, these approaches gave way to simpler models that represent the environment through occupation grids, topological maps, or even sequences of images. Traditionally, the problems of map building and localization have been addressed using three different approaches:

(i) Map building and subsequent localization: in these approaches, first the robot goes through the environment to map (usually in a teleoperated way) and collects some useful information from a variety of positions. This information is then processed to build the map. Once the map is available, the robot captures information from its current position and, comparing this information with the map, its current pose (position and orientation) is estimated.

(ii) Continuous map building and updating: in this approach, the robot is able to explore the environment autonomously and to build or update a model while it is moving. The SLAM (Simultaneous Localization And Mapping) algorithms fit within this approach.

(iii) Mapless navigation systems: the robot navigates through the environment by applying some analysis techniques to the last captured scenes, such as optical flow, or through visual memories previously recorded that contain sequences of images and associated actions, but no relation between images. This kind of navigation is associated basically with reactive behaviours.

The next subsections present some of the most relevant contributions developed in each of these three frameworks.

5.1. Map Building and Subsequent Localization

Quite usually, solving the localization problem requires having a previously built model of the environment, in such a way that the robot can estimate its pose by comparing its sensory information with the information captured in the model. In general, depending on how this information is arranged, these models can be classified into three categories: metric, topological, or hybrid [109]. First, metric approaches try to create a model with geometric precision, including some features of the environment with respect to a reference system. These models are usually created using local features extracted from images [87], and they permit estimating metrically the position of the robot up to a specific error. Second, topological approaches try to create compact models that include information from several characteristic localizations and the connectivity relations between them. Both local features and global appearance can be used to create such models [104, 110]. They usually permit a rough localization of the robot with a reasonable computational cost. Finally, hybrid maps combine metric and topological information to try to have the advantages of both methods. The information is arranged into several layers, with topological information that permits carrying out an initial rough estimation, and metric information to refine this estimation when necessary [111].

The use of information captured by omnidirectional vision sensors has expanded in recent years to solve the mapping, localization, and navigation problems. The next paragraphs outline some of these works.

Initially, a simple way to create a metric model consists in using some visual beacons which can be seen from a variety of positions and used to estimate the pose of the robot. Following this philosophy, Li et al. [112] present a system for the localization of agricultural vehicles using some visual beacons which are perceived using an omnidirectional vision system. These beacons are constituted by four red landmarks situated at the vertices of the environment where these vehicles may move. The omnidirectional vision system makes use of these four landmarks as beacons, situated on specific and previously known positions, to estimate its pose. On other occasions, a more complete description of the environment is previously available, as in the work of Lu et al. [113]. They use the model provided by the RoboCup Middle Size League competition, and the objective consists in carrying out an accurate localization of the robot. With this aim, a Monte Carlo localization method is employed to provide a rough initial localization and, once the algorithm has converged, this result is used as the initial value of a matching optimisation algorithm, in order to perform accurate and efficient localization tracking.
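A rough-then-refine scheme of this kind can be built on a standard particle filter. The sketch below shows one Monte Carlo localization step in which particles are weighted by the similarity between the current global descriptor and the descriptor stored at the nearest map pose. The Gaussian weighting, the noise levels, and the world-frame odometry update are simplifications of this example, not the formulation of [113]:

```python
import numpy as np

def mcl_step(particles, odom, map_poses, map_descs, img_desc, sigma=0.4):
    """One Monte Carlo localization step. particles: (N, 3) array of
    (x, y, theta) hypotheses; map_poses: (M, 3); map_descs: (M, D)
    global-appearance descriptors; img_desc: (D,) current descriptor."""
    # motion update (odometry applied in the world frame for brevity)
    particles = particles + odom + np.random.normal(0.0, 0.05, particles.shape)
    # measurement update: compare the current descriptor with the one
    # stored at the map pose closest to each particle
    w = np.empty(len(particles))
    for i, p in enumerate(particles):
        j = np.argmin(np.linalg.norm(map_poses[:, :2] - p[:2], axis=1))
        d = np.linalg.norm(map_descs[j] - img_desc)
        w[i] = np.exp(-d ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    # systematic resampling
    u = (np.arange(len(w)) + np.random.rand()) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
    return particles[idx]
```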

Maps composed of local visual features have been extensively used over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment; they are obtained based on the corner points of an extracted object, using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps in large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process which consists mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary which is used subsequently to solve the localization problem.
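Both steps can be condensed into a short sketch (Python with SciPy); the vocabulary size and the source of the local descriptors (e.g., SIFT) are placeholders rather than the settings of [117].

    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    def build_vocabulary(all_descriptors, k=100):
        # Step 1: cluster the pooled local descriptors; the k centroids
        # are the visual words of the vocabulary.
        centroids, _labels = kmeans2(all_descriptors.astype(float), k, minit='points')
        return centroids

    def bof_histogram(image_descriptors, vocabulary):
        # Step 2: quantize each descriptor to its nearest word and build
        # a normalised histogram, used later to compare locations.
        words, _dist = vq(image_descriptors.astype(float), vocabulary)
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / max(hist.sum(), 1.0)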

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, based on fundamentals different to those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment; it consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node; a global threshold is considered to evaluate the similitude between descriptors. Other authors have also studied the localization problem using color information, as


Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting schema.
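For reference, the Fourier Signature used in several of these works reduces to a row-wise discrete Fourier transform of the panoramic image: since a rotation of the robot around the vertical axis is a circular shift of the panorama columns, it changes only the phases of the coefficients, so the magnitudes form a heading-invariant descriptor. A minimal sketch, with the number of retained coefficients as an illustrative parameter:

    import numpy as np

    def fourier_signature(panorama, n_coeffs=16):
        # Row-wise DFT; keep the magnitudes of the first coefficients of
        # each row. Circular column shifts (robot rotations) only alter
        # the phases, so this descriptor is invariant to the heading.
        rows = np.fft.fft(panorama.astype(float), axis=1)
        return np.abs(rows[:, :n_coeffs])

    def signature_distance(sig_a, sig_b):
        # Euclidean distance between two signatures, for image retrieval.
        return np.linalg.norm(sig_a - sig_b)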

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourhood relations between nodes, so that when two nodes are connected, the environment represented in one node is close to the environment represented by the other. A process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, and a comparative evaluation of the performance of some global appearance descriptors is included. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
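A hedged sketch of the mass-spring-damper idea follows: connected nodes are relaxed in the plane so that the distance between them approaches a rest length derived from the similitude of their descriptors (the more similar the views, the shorter the spring). The mapping from similitude to rest length and all constants are illustrative assumptions, not the exact formulation of [63].

    import numpy as np

    def relax_map(positions, edges, rest_lengths, k=1.0, damping=0.8,
                  dt=0.05, iters=500):
        # positions: (N, 2) node coordinates; edges: list of (i, j) pairs.
        vel = np.zeros_like(positions)
        for _ in range(iters):
            forces = np.zeros_like(positions)
            for (i, j), rest in zip(edges, rest_lengths):
                d = positions[j] - positions[i]
                dist = np.linalg.norm(d) + 1e-9
                f = k * (dist - rest) * d / dist   # Hooke spring along the edge
                forces[i] += f
                forces[j] -= f
            vel = damping * (vel + dt * forces)    # damped integration step
            positions = positions + dt * vel
        return positions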

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do so, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained on the ground plane (i.e., they handle information with 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors make use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field of view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors have employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is especially suitable for indoor environments (since outdoor conditions have a negative influence on the visual data).
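Several of the systems discussed in this section ([124], [125], and later [127] and [128]) share the same EKF backbone. The generic predict-update sketch below shows where the motion model and the omnidirectional observation model enter the filter; f, F, h, and H are placeholders for the models and Jacobians of each particular work.

    import numpy as np

    def ekf_step(x, P, u, z, f, F, h, H, Q, R):
        # Predict the state and covariance with the motion model.
        x_pred = f(x, u)
        Fx = F(x, u)
        P_pred = Fx @ P @ Fx.T + Q
        # Update with the measurement (e.g., bearings of image features).
        Hx = H(x_pred)
        y = z - h(x_pred)                       # innovation
        S = Hx @ P_pred @ Hx.T + R              # innovation covariance
        K = P_pred @ Hx.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
        return x_new, P_new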

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information with some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct- (LSD-) SLAM,


have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, making use of the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is supported by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_l_n = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
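The observation process just described amounts to a two-step search over the stored views. In this sketch, the match function is a placeholder for the interest point comparison of [130], and the pruning radius is an illustrative assumption.

    import numpy as np

    def localize_against_views(view_poses, view_descriptors, prior_pose,
                               current_descriptor, match, radius=3.0):
        # Step 1: keep only views whose stored pose lies near the prior pose.
        d = np.linalg.norm(view_poses[:, :2] - prior_pose[:2], axis=1)
        candidates = np.flatnonzero(d < radius)
        if candidates.size == 0:
            candidates = np.argsort(d)[:1]      # fall back to the nearest view
        # Step 2: evaluate the similarity and keep the best-matching view;
        # its correspondences then feed the epipolar pose computation.
        scores = [match(current_descriptor, view_descriptors[i]) for i in candidates]
        return candidates[int(np.argmax(scores))]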

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topology. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization And Mapping) using panoramic scenes. The mapping is approached in a hybrid way, constructing simultaneously a model with two layers and checking if a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, taking into account whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy; long range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes a step further, since it takes into account that, on some occasions, the images stored in the database and the query image may have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them can be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.
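Bearing-only homing of the general kind used in [133] can be illustrated with the classic average-landmark-vector rule, used here as a stand-in for the specific control strategy of that paper; bearings are assumed to come from matched features in the current and target panoramas, and the sign convention depends on how they are defined.

    import numpy as np

    def homing_direction(current_bearings, target_bearings):
        # Average landmark vector (ALV) in the current and target views;
        # only angular information of the matched features is used.
        alv_cur = np.stack([np.cos(current_bearings),
                            np.sin(current_bearings)], axis=1).mean(axis=0)
        alv_tgt = np.stack([np.cos(target_bearings),
                            np.sin(target_bearings)], axis=1).mean(axis=0)
        # The difference of the two ALVs points approximately towards the
        # goal position under an equal-distance assumption.
        v = alv_cur - alv_tgt
        return np.arctan2(v[1], v[0])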

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization that matches the robot's currently captured view with previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the captured omnidirectional image. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process. The authors indicate that the method can work with any low-dimensional


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST + CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

features descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database whose difference is over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.
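The three steps can be condensed into a coarse-to-fine retrieval loop; the three scoring functions below are placeholders for the pixel-level descriptor, the line-based histograms, and the geometric verification of [137].

    def hierarchical_localization(database, query, global_dist, hist_score,
                                  geom_score, prune_thresh=0.5, shortlist=5):
        # Step 1: discard images whose global descriptor is too different
        # (global_dist: lower is more similar).
        survivors = [im for im in database if global_dist(query, im) < prune_thresh]
        # Step 2: rank the survivors with line-based histogram matching
        # (hist_score: higher is more similar) and keep a short list.
        survivors.sort(key=lambda im: hist_score(query, im), reverse=True)
        # Step 3: detailed matching with geometric restrictions on the lines.
        return max(survivors[:shortlist],
                   key=lambda im: geom_score(query, im), default=None)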

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the large quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization,


and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows how omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitively help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth being addressed to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P (Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global) and DPI 2016-78361-R (Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots), and by the Generalitat Valenciana (GVa) through the project GV/2015/031 (Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas).

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence, Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference, International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research Inc., "Ladybug cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni Vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.

[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schlüter, M. Krzykawski, and R. Möller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.

[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Möller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Kröse, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, pp. 43–48, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wis, USA, 2006.

[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Straßer, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal of

Volume 201

Submit your manuscripts athttpswwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 11: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

Journal of Sensors 11

create a model with geometric precision including somefeatures of the environment with respect to a referencesystem These models are created usually using local featuresextracted from images [87] and they permit estimating met-rically the position of the robot up to a specific error Secondtopological approaches try to create compact models thatinclude information from several characteristic localizationsand the connectivity relations between them Both localfeatures and global appearance can be used to create suchmodels [104 110] They usually permit a rough localizationof the robot with a reasonable computational cost Finallyhybrid maps combine metric and topological information totry to have the advantages of both methods The informationis arranged into several layers with topological informationthat permits carrying out an initial rough estimation andmetric information to refine this estimation when necessary[111]

The use of information captured by omnidirectionalvision sensors has expanded in recent years to solve themapping localization and navigation problems The nextparagraphs outline some of these works

Initially a simple way to create a metric model consistsin using some visual beacons which can be seen from avariety of positions and used to estimate the pose of therobot Following this philosophy Li et al [112] present asystem for the localization of agricultural vehicles using somevisual beacons which are perceived using an omnidirectionalvision system These beacons are constituted by four redlandmarks situated in the vertices of the environment wherethese vehicles may move The omnidirectional vision systemmakes use of these four landmarks as beacons situated onspecific and previously known positions to estimate its poseIn other occasions a more complete description of theenvironment is previously available as in the work of Lu etal [113] They use the model provided by the competitionRoboCup Middle Size League Taking this competition intoaccount the objective consists in carrying out an accuratelocalization of the robot With this aim a Monte Carlolocalization method is employed to provide a rough initiallocalization and once the algorithmhas converged this resultis used as the initial value of a matching optimisation algo-rithm in order to perform accurate and efficient localizationtracking

Maps composed of local visual features have been used extensively over the last few years. Classical approaches have made use of monocular or stereo vision to extract and track these local features or landmarks. More recently, some researchers have shown that it is feasible to extend these classical approaches to omnidirectional vision. In this line, Choi et al. [114] propose an algorithm to create maps which is based on an object extraction method using Lucas-Kanade optical flow motion detection from the images obtained by an omnidirectional vision system. The algorithm uses the outer points of the motion vectors as feature points of the environment. They are obtained based on the corner points of an extracted object and using Lucas-Kanade optical flow.

Valgren et al. [115] propose the use of local features that are extracted from images captured in a sequence. These features are used both to cluster the images into nodes and to detect links between the nodes. They employ a variant of SIFT features that are extracted and matched from different viewpoints. Each node of the topological map contains a collection of images which are considered similar enough (i.e., a sufficient number of feature matches must exist among the images contained in the node). Once the nodes are constructed, they are connected taking into account feature matching between the images in the nodes. This way, they build a topological map incrementally, creating new nodes as the environment is explored. In a later work [84], the same authors study how to build topological maps in large environments, both indoors and outdoors, using the local features extracted from a set of omnidirectional views, including the epipolar restriction and a clustering method to carry out localization in an efficient way.

Goedeme et al. [116] present an algorithm to build a topological model of complex environments. They also propose algorithms to solve the problems of localization (both global, when the robot has no information about its previous position, and local, when this information is available) and navigation. In this approach, the authors propose two local descriptors to extract the most relevant information from the images: a color enhanced version of SIFT and invariant column segments.

The use of bag-of-features approaches has also been considered by some researchers in mobile robotics. As an example, Lu et al. [117] propose the use of a bag-of-features approach to carry out topological localization. The map is built in a previous process consisting mainly in clustering the visual information captured by an omnidirectional vision system. It is performed in two steps. First, local features are extracted from all the images, and they are used to carry out a K-means clustering, obtaining the k clusters' centres. Second, these centres constitute the visual vocabulary, which is used subsequently to solve the localization problem.
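
To make the two steps above concrete, the following minimal sketch builds a visual vocabulary by K-means clustering of local descriptors and then describes an image as a histogram of visual words. It illustrates the generic bag-of-features technique, not the exact pipeline of [117], and it assumes OpenCV and scikit-learn are available.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_descriptors(images):
    """Extract local SIFT descriptors from every training image."""
    sift = cv2.SIFT_create()
    all_desc = []
    for img in images:
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            all_desc.append(desc)
    return np.vstack(all_desc)

def build_vocabulary(images, k=64):
    """Step 1: k-means over all local descriptors; the k centres form the vocabulary."""
    return KMeans(n_clusters=k, n_init=10).fit(extract_descriptors(images))

def bof_histogram(img, vocab):
    """Step 2: describe an image as a normalised histogram of visual-word counts."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:
        return np.zeros(vocab.n_clusters)
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()
```

Localization then reduces to comparing the query histogram with the histograms stored for each node of the map.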

Beyond these frameworks, the use of omnidirectional vision systems has contributed to the emergence of new paradigms to create visual maps, using fundamentals different from those traditionally used with local features. One of these frameworks consists in creating topological maps using a global description of the images captured by the omnidirectional vision system. In this area, Menegatti et al. [98] show how a visual memory can be built using the Fourier Signature, which presents rotational invariance when used with panoramic images. The map is built using global appearance methods and a system of mechanical forces calculated from the similitude between descriptors. Liu et al. [118] propose a framework that includes a method to describe the visual appearance of the environment. It consists in an adaptive descriptor based on color features and geometric information. Using this descriptor, they create a topological map based on the fact that, in the case of indoor environments, the vertical edges divide the rooms into regions with a uniform meaning. These descriptors are used as a basis to create a topological map through an incremental process in which similar descriptors are incorporated into each node. A global threshold is considered to evaluate the similitude between descriptors. Also, other authors have studied the localization problem using color information, as Ulrich and Nourbakhsh do in [119], where a topological localization framework is proposed using panoramic color images and color histograms in a voting schema.
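
The rotational invariance of the Fourier Signature mentioned above can be illustrated in a few lines of NumPy. This is a minimal sketch of the general idea (row-wise DFT magnitudes of a panoramic image), not the exact descriptor configuration used in [98].

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise DFT of a panoramic image, keeping the magnitudes of the first
    k components per row. A rotation of the robot around its vertical axis
    circularly shifts the panorama columns, which only changes the phase of
    each row's DFT, so the magnitude signature is rotationally invariant."""
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :k])

# Invariance check: a column-shifted panorama yields the same signature.
pano = np.random.rand(64, 256)        # stand-in for a real panoramic image
shifted = np.roll(pano, 73, axis=1)   # simulated rotation of the robot
assert np.allclose(fourier_signature(pano), fourier_signature(shifted))
```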

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image representing the visual appearance of the place where the image was captured. On the other hand, links in the graph are neighbourly relations between nodes, so that when two nodes are connected, the environment represented in one node is close to the environment represented by the other node. In this way, a process based on a mass-spring-damper model is developed, and the resulting topological map incorporates geometrical relationships that situate the nodes close to the real positions in which the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, and a comparative evaluation of the performance of some global appearance descriptors is included. The authors study the effect of some usual phenomena that may happen in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is applied to estimate the most probable pose of the vehicle.
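
The following sketch shows the simplest form such a force-based arrangement of nodes can take: linked nodes are connected by damped springs whose rest lengths are derived from descriptor dissimilarity, and an iterative relaxation settles the layout. It is an illustration of the general mass-spring-damper idea with assumed parameter names, not the authors' exact formulation.

```python
import numpy as np

def relax(positions, links, rest_lengths, k=0.5, damping=0.85, steps=200):
    """positions: (n, 2) initial node coordinates; links: list of (i, j) pairs;
    rest_lengths: dict mapping (i, j) to the desired spring length, e.g. a
    monotonic function of the distance between the nodes' image descriptors."""
    vel = np.zeros_like(positions)
    for _ in range(steps):
        force = np.zeros_like(positions)
        for i, j in links:
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d) + 1e-9
            # Hooke's law: pull/push the pair along the link towards the rest length
            f = k * (dist - rest_lengths[(i, j)]) * d / dist
            force[i] += f
            force[j] -= f
        vel = damping * (vel + force)   # damped explicit integration step
        positions = positions + vel
    return positions
```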

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors propose the use of a gist-based descriptor to obtain a compact representation of the scene and estimate the similarity between two locations based on the Euclidean distance between descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current localization, and a second step, or continuous localization, in which they assume that the image is acquired from the same topological region.

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements. To do so, they perform Bayesian inference over the space of all the possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches suppose that the robot trajectory is contained on the ground plane (i.e., they contain information of 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating

The process to build a map and continuously update it, while the robot simultaneously estimates its position with respect to the model, is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared with other classical sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors made use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field of view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks they integrate into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves. Thanks to this, the estimation of the pose of the robot and the features can be carried out more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to omnidirectional visual information with some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map which is composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct- (LSD-) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 degrees.

More recently, some proposals that go beyond the classical approaches have been made, making use of the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement permits exploiting the capability of an omnidirectional image to gather a large amount of information in a single snapshot, due to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
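
A minimal sketch of this kind of two-step observation model is shown below: candidate views are first filtered by the Euclidean distance between their stored poses and the current pose estimate, and the most similar image among them is then selected. The data layout, the radius parameter, and the similarity function are assumptions made for illustration, not the exact model of [130].

```python
import numpy as np

def localize(robot_pose, current_image, views, similarity, radius=2.0):
    """views: list of dicts {'pose': (x, y, theta), 'image': ...}.
    Step 1: keep candidate views whose stored position lies within `radius`
    of the current pose estimate. Step 2: return the candidate whose image
    is most similar to the current one."""
    candidates = [
        v for v in views
        if np.linalg.norm(np.array(v['pose'][:2]) - np.array(robot_pose[:2])) < radius
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda v: similarity(current_image, v['image']))
```

In the full scheme, the correspondences between the selected view and the current image, together with the epipolar restriction, would then refine the pose estimate.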

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization and Mapping) using panoramic scenes. The mapping is approached in a hybrid way, simultaneously constructing a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems

In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, taking into account whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes one step further, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.
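
To illustrate how purely angular information can drive homing, the following sketch implements the classical average landmark vector idea under the assumption of a common compass frame. It is a minimal example in the spirit of the bearing-only strategies above, not the exact control law of [133].

```python
import numpy as np

def average_landmark_vector(bearings):
    """Average of the unit vectors pointing towards the landmarks,
    where each bearing is an angle in radians in a common compass frame."""
    return np.mean([[np.cos(b), np.sin(b)] for b in bearings], axis=0)

def homing_direction(bearings_home, bearings_current):
    """The difference between the average landmark vector at the current
    position and at the home position points roughly towards home."""
    v = average_landmark_vector(bearings_current) - average_landmark_vector(bearings_home)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```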

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route and a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference view, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes. A rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method tries to build a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process.


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.2 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological/sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj./panoramic/orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

The authors indicate that the method can work with any low-dimensional features descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images; all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image, and a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the large quantity of data they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue over the next few years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitely help to improve the autonomy of mobile robots and expand their range of applications and the environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth addressing to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P (Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global) and DPI 2016-78361-R (Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots), and by the Generalitat Valenciana (GVa) through the project GV/2015/031 (Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas).

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.
[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.
[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.
[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.
[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.
[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.
[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence – Volume 3, pp. 1330–1335, 2005.
[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.
[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.
[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.
[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.
[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.
[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.
[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.
[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.
[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.
[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.
[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.
[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.
[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.
[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.
[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.
[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.
[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.
[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.
[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.
[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.
[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.
[29] Point Grey Research, Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.
[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.
[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.
[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.
[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.
[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.
[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.
[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.
[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.
[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni Vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.
[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.
[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.
[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.
[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.
[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.
[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.
[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.
[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.
[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.
[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.
[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.
[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.
[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.
[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.
[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.
[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.
[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.
[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.
[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.
[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.
[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.
[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.
[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.
[62] L. Gerstmayr-Hillen, O. Schlüter, M. Krzykawski, and R. Möller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.
[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.
[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.
[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.
[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.
[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.
[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Enschede, The Netherlands, October 2008.
[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.
[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.
[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.
[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.
[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.
[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.
[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.
[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.
[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.
[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.
[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.
[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.
[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.
[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.
[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.
[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.
[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.
[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.
[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.
[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.
[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.
[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.
[93] R. Möller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.
[94] B. Kröse, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.
[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.
[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.
[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.
[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.
[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling, and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.
[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.
[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.
[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.
[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.
[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.
[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.
[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.
[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.
[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.
[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.
[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.
[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.
[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.
[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.
[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.
[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.
[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.
[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.
[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.
[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.
[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.
[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.
[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.
[123] B. Huhle, T. Schairer, A. Schilling, and W. Straßer, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.
[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.
[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.
[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.
[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.
[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.
[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.
[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.
[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.
[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.
[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.
[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.
[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.
[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.
[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.
[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.


Ulrich and Nourbakhsh [119] propose a topological localization framework that uses panoramic color images and color histograms in a voting scheme.
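A minimal sketch of such a per-band voting scheme follows; a simple L1 histogram distance stands in for the divergence measure of the original work, and all names and parameter values are illustrative:

```python
import numpy as np

def histogram_vote(query_img, place_histograms, bins=32):
    """Each colour band of the query image builds a normalized histogram
    and votes for the place whose stored histogram (place[band]) is the
    closest in L1 distance; the place with most votes wins."""
    votes = np.zeros(len(place_histograms), dtype=int)
    for band in range(query_img.shape[2]):          # e.g. the three bands
        h, _ = np.histogram(query_img[..., band], bins=bins, range=(0, 256))
        h = h / max(h.sum(), 1)                     # normalize the histogram
        dists = [np.abs(h - place[band]).sum() for place in place_histograms]
        votes[int(np.argmin(dists))] += 1           # this band casts a vote
    return int(np.argmax(votes))                    # most-voted place index
```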

Paya et al. [63] propose a method to build topological maps from the global appearance of panoramic scenes captured from different points of the environment. These descriptors are analysed to create a topological map of the environment. On the one hand, each node in the map contains an omnidirectional image that represents the visual appearance of the place where it was captured. On the other hand, the links in the graph are neighbourhood relations between nodes, so that when two nodes are connected, the environment represented in one node is close to the environment represented by the other. A process based on a mass-spring-damper model is then developed, and the resulting topological map incorporates geometric relationships that situate the nodes close to the real positions where the omnidirectional images were captured. The authors develop algorithms to build this model either in a batch or in an incremental way. Subsequently, Paya et al. [120] also propose a topological map building algorithm, together with a comparative evaluation of the performance of several global appearance descriptors. The authors study the effect of some usual phenomena that may appear in real applications: changes in lighting conditions, noise, and occlusions. Once the map is built, a Monte Carlo localization approach is used to estimate the most probable pose of the vehicle.
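The mass-spring-damper embedding can be pictured as a physical simulation in which every link behaves as a damped spring whose rest length grows with the visual dissimilarity between the two connected images. The following minimal sketch illustrates the idea; it is not the exact formulation of [63], and all names and parameters are illustrative:

```python
import numpy as np

def relax_map(positions, edges, rest_lengths, k=1.0, damping=0.9,
              dt=0.05, iters=1000):
    """Refine the 2D node positions of a topological map with a
    mass-spring-damper simulation: each edge (i, j) is a spring whose
    rest length encodes the visual distance between both images."""
    pos = np.asarray(positions, dtype=float).copy()
    vel = np.zeros_like(pos)
    for _ in range(iters):
        force = np.zeros_like(pos)
        for (i, j), L0 in zip(edges, rest_lengths):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            f = k * (dist - L0) * d / dist      # Hooke's law along the edge
            force[i] += f                        # spring pulls/pushes node i
            force[j] -= f                        # opposite force on node j
        vel = damping * (vel + dt * force)       # damped velocity update
        pos += dt * vel
    return pos

# Example: three nodes whose pairwise image distances act as rest lengths.
nodes = relax_map([(0, 0), (1, 0), (0, 1)],
                  edges=[(0, 1), (1, 2), (0, 2)],
                  rest_lengths=[1.0, 1.5, 1.0])
```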

Also, Rituerto et al. [121] propose the use of a semantic topological map created from the global appearance description of a set of images captured by an omnidirectional vision system. These authors use a gist-based descriptor to obtain a compact representation of each scene and estimate the similarity between two locations as the Euclidean distance between their descriptors. The localization task is divided into two steps: a first step, or global localization, in which the system does not consider any a priori information about the current location, and a second step, or continuous localization, in which the image is assumed to be acquired from the same topological region as the previous one.
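A minimal sketch of this two-step scheme follows, assuming every node of the map already stores a gist-like descriptor as a vector; the function and variable names are illustrative, not taken from [121]:

```python
import numpy as np

def global_localization(query, node_descriptors):
    """Step 1: no prior information, so the query descriptor is compared
    (Euclidean distance) against every node of the topological map."""
    dists = np.linalg.norm(node_descriptors - query, axis=1)
    return int(np.argmin(dists))

def continuous_localization(query, node_descriptors, neighbours, prev):
    """Step 2: the robot is assumed to stay in the same topological
    region, so only the previous node and its neighbours are candidates."""
    candidates = [prev] + list(neighbours[prev])
    return min(candidates,
               key=lambda c: np.linalg.norm(node_descriptors[c] - query))
```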

Ranganathan et al. [122] introduce a probabilistic method to perform inference over the space of topological maps. They approximate the posterior distribution over topologies from the available sensor measurements; to do so, they perform Bayesian inference over the space of all possible topologies. The application of this method is illustrated using Fourier Signatures of panoramic images obtained from an array of cameras mounted on the robot.

Finally, Stimec et al. [95] present a method based on global appearance to build a trajectory-based map by means of a clustering process with PCA features obtained from a set of panoramic images.

While the previous approaches assume that the robot trajectory is contained in the ground plane (i.e., they contain information on 3 degrees of freedom), some works have also shown that it is possible to create models that contain information with a higher number of degrees of freedom. As an example, Huhle et al. [123] and Schairer et al. [60] present algorithms for map building and 3D localization based on the use of the Spherical Fourier Transform and the unit sphere projection of the omnidirectional information.

5.2. Continuous Map Building and Updating. The process of building a map and continuously updating it while the robot simultaneously estimates its position with respect to the model is one of the most complex tasks in mobile robotics. Over the last few years, some approaches have been developed to solve it using omnidirectional vision systems, and some authors have proposed comparative evaluations to assess the performance of omnidirectional imaging compared to other classic sensors. In this field, Rituerto et al. [124] carry out a comparative analysis between visual SLAM systems using omnidirectional and conventional monocular vision. This approach is based on the use of the Extended Kalman Filter (EKF) and SIFT points extracted from both kinds of scenes. To develop the comparison between both systems, the authors make use of a spherical camera model, whose Jacobian is obtained and used in the EKF algorithm. The results provided show the superiority of the omnidirectional systems in the experiments carried out. Also, Burbridge et al. [125] performed several experiments in order to quantify the advantage of using omnidirectional vision compared to narrow field of view cameras. They also made use of the EKF but, instead of characteristic points, their landmarks are the vertical edges of the environment that appear in the scenes. In this case, the experiments were conducted using simulated data, and they offer results about the accuracy in localization taking some parameters into account, such as the number of landmarks integrated into the filter, the field of view of the camera, and the distribution of the landmarks. One of the main advantages of the large field of view of omnidirectional vision systems is that the extracted features remain in the image longer as the robot moves; thanks to this, the pose of the robot and the features can be estimated more precisely. Some other evaluations have compared the efficacy of a SLAM system using an omnidirectional vision sensor with respect to a SLAM system using a laser range finder, such as the work developed by Erturk et al. [126]. In this case, the authors employed multiple simulated environments to provide the necessary visual data to both systems. Omnidirectional vision proves to be a cost-effective solution that is suitable especially for indoor environments (since outdoor conditions have a negative influence on the visual data).
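Both [124] and [125] rely on EKF machinery. As a point of reference, the sketch below shows a single EKF correction step for one bearing-only observation, the kind of measurement an omnidirectional camera naturally provides over its full 360-degree field of view. It is a generic localization update with a known landmark, not the exact formulation of either work, and the noise value is illustrative:

```python
import numpy as np

def ekf_bearing_update(mu, Sigma, z, landmark, R=0.01):
    """EKF correction with one bearing measurement z (rad) to a known
    landmark. State mu = (x, y, theta); Sigma is its 3x3 covariance."""
    mu = np.asarray(mu, dtype=float)
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx ** 2 + dy ** 2
    z_hat = np.arctan2(dy, dx) - mu[2]               # predicted bearing
    H = np.array([[dy / q, -dx / q, -1.0]])          # Jacobian of the model
    S = H @ Sigma @ H.T + R                          # innovation covariance
    K = Sigma @ H.T / S                              # Kalman gain (3x1)
    innovation = np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))  # wrap
    mu = mu + (K * innovation).ravel()               # corrected state
    Sigma = (np.eye(3) - K @ H) @ Sigma              # corrected covariance
    return mu, Sigma
```

With a wide-angle but conventional camera, many landmarks would leave the image and no such correction could be applied, which is one way to read the accuracy advantage reported in these comparisons.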

In the literature, we can also find some approaches which are based on the classical visual SLAM schema, adapted to be used with omnidirectional visual information and introducing some slight variations. For example, Kim and Oh [127] propose a visual SLAM system in which vertical lines extracted from omnidirectional images and horizontal lines extracted from a range sensor are integrated. Another alternative is presented by Wang et al. [128], who propose a map composed of many submaps, each one consisting of all the feature points extracted from an image and the position of the robot with respect to the global coordinate system when this image was captured. Furthermore, some other proposals that have provided good results in the SLAM area, such as Large-Scale Direct- (LSD-) SLAM, have been adapted to omnidirectional cameras, computing dense or semidense depth maps in an incremental fashion and tracking the camera using direct image alignment. Caruso et al. [129] propose an extension of LSD-SLAM to a generic omnidirectional camera model, along with an approach to perform stereo operations directly on such omnidirectional images. In this case, the proposed method is evaluated on images captured with a fisheye lens with a field of view of 185 deg.

More recently, some proposals that go beyond the classical approaches have been made, exploiting the features provided by the omnidirectional vision system. In this area, Valiente et al. [130] suggest a representation of the environment that tries to optimise the computational cost of the mapping process and to provide a more compact representation. The map is sustained by a reduced set of omnidirectional images, denoted as views, which are acquired from certain poses of the environment. The information gathered by these views permits modelling large environments and, at the same time, eases the observation process by which the pose of the robot is retrieved. In this work, a view n consists of a single omnidirectional image captured from a certain pose of the robot, x_ln = (x_l, y_l, θ_l)_n, and a set of interest points extracted from that image. Such an arrangement exploits the capability of an omnidirectional image to gather a large amount of information in a single snapshot, thanks to its large field of view. So a view n is constituted by the pose where it was acquired, with n ∈ [1, N], N being the total number of views constituting the final map. The number of views initialized in the map directly depends on the sort of environment and its visual appearance. In this case, the process of localization is solved by means of an observation model that takes the similarity between the omnidirectional images into account. In a first step, a subset of candidate views from the map is selected, based on the Euclidean distance between the pose of the current view acquired by the robot and the position of each candidate. Once the set of candidates has been extracted, a similarity measure can be evaluated in order to determine the highest similarity with the current image. Finally, the localization of the robot can be accomplished taking into account the correspondences between both views and the epipolar restriction.
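A compact sketch of this view-based localization pipeline is given below; the data structure and both steps are simplified reconstructions of the description above, not code from [130], and the matching threshold is illustrative:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class View:
    pose: np.ndarray         # (x, y, theta) where the image was acquired
    descriptors: np.ndarray  # interest-point descriptors of that image

def candidate_views(views, pose_estimate, radius=2.0):
    """Step 1: keep the views whose stored pose lies close (Euclidean
    distance on x, y) to the current pose estimate of the robot."""
    return [v for v in views
            if np.linalg.norm(v.pose[:2] - pose_estimate[:2]) < radius]

def best_view(candidates, query_desc, thr=0.25):
    """Step 2: rate each candidate by the fraction of query descriptors
    that find a close match among its own, and keep the most similar."""
    def similarity(view):
        d = np.linalg.norm(query_desc[:, None, :]
                           - view.descriptors[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) < thr))
    return max(candidates, key=similarity)
```

The pose of the robot would then be refined from the point correspondences between the query image and the selected view, for instance through the epipolar constraint mentioned above.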

An additional step in building topological maps with omnidirectional images is to create a hierarchical map structure that combines metric and topological information. In this area, Fernandez et al. [131] present a framework to carry out SLAM (Simultaneous Localization and Mapping) using panoramic scenes. The mapping is approached in a hybrid way, simultaneously constructing a model with two layers and checking whether a loop closure is produced with a previous node.

5.3. Mapless Navigation Systems. In this case, the model of the environment can be represented as a visual memory that typically contains sequences or sets of images with no other underlying structure among them. In this area, several possibilities can be considered, depending on whether the visual information is described through local features or global appearance.

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy; long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes one step further, since it takes into account that, on some occasions, the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.
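To illustrate how purely angular information can drive homing, the sketch below implements the classical average-landmark-vector idea: each feature matched between the current panorama and the goal snapshot contributes only through its bearing. This is a textbook strategy given for intuition, under the assumption that bearings are expressed in a common compass frame; it is not the specific control law of [133]:

```python
import numpy as np

def homing_direction(current_bearings, snapshot_bearings):
    """Bearing-only homing: sum the unit vectors toward each matched
    feature at the current location and subtract those stored in the
    goal snapshot; the resulting vector points roughly toward the goal."""
    v = np.zeros(2)
    for b_now, b_goal in zip(current_bearings, snapshot_bearings):
        v += np.array([np.cos(b_now), np.sin(b_now)])    # current view
        v -= np.array([np.cos(b_goal), np.sin(b_goal)])  # goal snapshot
    return np.arctan2(v[1], v[0])  # heading the robot should steer to
```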

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route, together with a template matching approach to carry out localization, steering angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference views, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization. They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out the pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method builds a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process.
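The Fourier Signature that underlies several of these works admits a very short implementation: each row of the panoramic image is expanded with the Discrete Fourier Transform and only the magnitudes of the first coefficients are kept. Since a rotation of the robot around its vertical axis only shifts the columns of the panorama, it changes the phases but not the magnitudes, which makes the descriptor invariant to the robot's orientation. A minimal sketch follows (the number of coefficients kept is an illustrative choice):

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Row-wise DFT of a grayscale panoramic image, keeping the magnitude
    of the first k coefficients of each row. Column shifts (robot
    rotations) leave this signature unchanged."""
    F = np.fft.fft(np.asarray(panorama, dtype=float), axis=1)
    return np.abs(F[:, :k])

def image_distance(sig_a, sig_b):
    """Similarity measure for image-based localization: the smaller the
    distance between signatures, the closer the two capture points."""
    return float(np.linalg.norm(sig_a - sig_b))
```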


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.1 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.2 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj. / panor. / orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

The authors indicate that the method can work with any low-dimensional feature descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines.
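The coarse-to-fine structure of this process can be captured in a few lines. In the sketch below the three scoring functions are passed in as parameters, since [137] defines its own descriptors and geometric checks; the thresholds are illustrative:

```python
def hierarchical_match(query, database, global_dist, hist_dist, geom_score,
                       thr=0.5, top_k=5):
    """Three-step pyramidal localization: (1) discard images whose global
    descriptor is too different from the query, (2) rank the survivors by
    line-histogram similarity and keep the top_k, (3) verify those with a
    geometric consistency score and return the best match (or None)."""
    stage1 = [img for img in database if global_dist(query, img) < thr]
    stage2 = sorted(stage1, key=lambda img: hist_dist(query, img))[:top_k]
    return max(stage2, key=lambda img: geom_score(query, img), default=None)
```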

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks described above and their main features: the kind of approach (Section 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

Over the last few decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the large quantity of information they are able to capture in just one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods, based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows that omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding relatively robust and computationally feasible solutions to some current problems would definitely help to improve the autonomy of mobile robots and expand the range of applications and environments where they can work. Among them, the mapping and localization problems in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth addressing to try to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P, Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global, and DPI 2016-78361-R, Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots, and by the Generalitat Valenciana (GVa) through the project GV/2015/031, Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas.

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence, Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research, Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1995: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90), Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.

[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision – ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schlüter, M. Krzykawski, and R. Möller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), pp. 233–240, Universiteit Twente, Faculteit Elektrotechniek, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejías, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.

[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, A Wiley-Interscience publication, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Möller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Kröse, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.

[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Strasser, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008 – Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress – Science and Technology Publications, January 2014.

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal of

Volume 201

Submit your manuscripts athttpswwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 13: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

Journal of Sensors 13

have been proposed and adapted using omnidirectionalcameras computing dense or semidense depth maps in anincremental fashion and tracking the camera using directimage alignment Caruso et al [129] propose an extensionof LSD-SLAM to a generic omnidirectional camera modelalong with an approach to perform stereo operations directlyon such omnidirectional images In this case the proposedmethod is evaluated on images captured with a fisheye lenswith a field of view of 185 deg

More recently some proposals that go beyond the clas-sical approaches have been made In these proposals thefeatures provided by the omnidirectional vision system areused In this area Valiente et al [130] suggest a representationof the environment that tries to optimise the computa-tional cost of the mapping process and to provide a morecompact representation The map is sustained by a reducedset of omnidirectional images denoted as views whichare acquired from certain poses of the environment Theinformation gathered by these views permits modelling largeenvironments and at the same time they ease the observationprocess by which the pose of the robot is retrieved Inthis work a view 119899 consists of a single omnidirectionalimage captured from a certain pose of the robot 119909119897119899 =(119909119897 119910119897 120579119897)119899 and a set of interest points extracted from thatimage Such arrangement permits exploiting the capabilityof an omnidirectional image to gather a large amount ofinformation in a simple snapshot due to its large field of viewSo a view 119899 is constituted by the pose where it was acquiredwith 119899 isin [1 119873] with119873 being the number total of viewsconstituting the final map The number of views initializedin the map directly depends on the sort of environment andits visual appearance In this case the process of localizationis solved by means of an observation model that takes thesimilarity between the omnidirectional images into accountIn a first step a subset of candidate views from the map isselected based on the Euclidean distance between the poseof the current view acquired by the robot and the positionof each candidate Once the set of candidates have beenextracted a similarity measure can be evaluated in orderto determine the highest similarity with the current imageFinally the localization of the robot can be accomplishedtaking into account the correspondences between both viewsand the epipolar restriction

An additional step in building topological maps withomnidirectional images is to create a hierarchical mapstructure that combines metric and topology In this areaFernandez et al [131] present a framework to carry outSLAM (Simultaneous Localization And Mapping) usingpanoramic scenes The mapping is approached in a hybridway constructing simultaneously amodelwith two layers andchecking if a loop closure is produced with a previous node

53 Mapless Navigation Systems In this case the model ofthe environment can be represented as a visual memorythat typically contains sequences or sets of images with noother underlying structure among them In this area severalpossibilities can be considered taking into account if thevisual information is described through local features ofglobal appearance

First, some approaches that use local features can be found. Thompson et al. [132] propose a system that learns places by automatically selecting reliable landmarks from panoramic images and uses them for localization purposes. They also employ normalised correlation during the comparisons to minimise the effect of changing lighting conditions. Argyros et al. [133] develop a vision-based method for robot homing, also using panoramic images. With this method, the robot can calculate a route that leads it to the position where an image was captured. The method tracks features in panoramic views and exploits only the angular information of these features to calculate a control strategy; a bearing-only control law in this spirit is sketched below. Long-range homing is achieved by organizing the features' trajectories in a visual memory. Furthermore, Lourenco et al. [134] propose an alternative approach to image-based localization which goes beyond this, since it takes into account that on some occasions the images stored in the database and the query image could have been acquired using different omnidirectional imaging systems. Due to the nonlinear image distortion introduced in such images, the difference in appearance between both of them could be significant. In this sense, the authors propose a method that employs SIFT features extracted from the database images and the query image in order to determine the localization of the robot.
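As a rough illustration of how angular information alone can drive homing, the sketch below implements one plausible bearing-only control law: each matched feature pushes the robot perpendicular to its current bearing, in the direction that reduces that feature's angular error. This is our own assumption in the spirit of [133], not the published method.

```python
import numpy as np

def homing_direction(current_bearings, goal_bearings):
    """Bearing-only homing sketch.

    current_bearings, goal_bearings: arrays of feature angles (rad)
    measured in the current and goal panoramas, matched by index.
    Returns a unit motion vector in the robot frame."""
    # Wrapped angular error of each feature, in (-pi, pi].
    error = np.arctan2(np.sin(goal_bearings - current_bearings),
                       np.cos(goal_bearings - current_bearings))
    # Each feature contributes a unit vector perpendicular to its current
    # bearing, signed so that moving along it reduces the feature's error.
    directions = np.stack([np.cos(current_bearings + np.sign(error) * np.pi / 2),
                           np.sin(current_bearings + np.sign(error) * np.pi / 2)])
    v = directions.mean(axis=1)          # average over all features
    return v / (np.linalg.norm(v) + 1e-9)
```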

Second, as far as the use of global appearance is concerned, Matsumoto et al. [135] propose a navigation method that makes use of a sequence of panoramic images to store information from a route and a template-matching approach to carry out localization, steering-angle determination, and obstacle detection. This way, the robot can navigate between consecutive views by using global information from the panoramic scenes. Also, Menegatti et al. [100] present a method of image-based localization from the matching of the robot's currently captured view with the previously stored reference view, trying to avoid perceptual aliasing. Their method makes use of the Fourier Signature to represent the image and to facilitate the image-based localization (a minimal sketch of this descriptor is given below). They make use of the properties that this transform presents when it is used with panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot by means of the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes; a rough localization is achieved by carrying out a pairwise comparison between the current scene and this sequence of scenes. On the other hand, the second method tries to build a local topological map of the area where the robot is located, and this map is used to refine the localization of the robot. This way, while the first method is a mapless localization, the second one would fit in the methods presented in Section 5.1. Both methods are tested under different lighting conditions and occlusions, and the results show their effectiveness and robustness. Murillo et al. [137] propose an alternative approach to solve the localization problem through a pyramidal matching process.
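For reference, here is a minimal sketch of a Fourier Signature as it is commonly defined for panoramic images: each row is described by the magnitudes of its first k DFT coefficients, which are invariant to the robot's orientation because a pure rotation of the robot only shifts the panorama circularly along its columns. The function names are ours.

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Describe a panoramic image by the magnitudes of the first k DFT
    coefficients of each row. A circular shift of the columns (a robot
    rotation) changes only the phase of the coefficients, so the
    magnitudes are rotation-invariant."""
    rows = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(rows[:, :k])

def image_distance(sig_a, sig_b):
    """Image-based localization then reduces to comparing the current
    signature with the stored ones and picking the closest reference view."""
    return np.linalg.norm(sig_a - sig_b)
```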


Table 1: Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems.

Reference | Approach | Type of image | Type of map | Features | Description
[112] | 5.1 | Omnidirectional | Metric | Local | Colour
[113] | 5.1 | Omnidirectional | Metric | Local | FAST-CSLBP
[114] | 5.1 | Omnidirectional | Metric | Local | LKOF
[84, 115] | 5.1 | Omnidirectional | Topological | Local | MSIFT
[116] | 5.1 | Omnidirectional | Topological | Local | SIFT, invariant column segments
[98] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[118] | 5.1 | Panoramic | Topological | Global | FACT descriptor
[119] | 5.1 | Panoramic | Topological | Global | Color histograms
[63] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[120] | 5.1 | Panoramic | Topological | Global | Various
[121] | 5.1 | Omnidirectional | Topological | Global | GIST
[122] | 5.1 | Panoramic | Topological | Global | Fourier Signature
[95] | 5.1 | Panoramic | Topological | Global | PCA features
[60, 123] | 5.1 | Omnidirectional | Topological | Global | Spherical Fourier Transform
[124] | 5.1 | Omnidirectional | Metric | Local | SIFT
[125] | 5.2 | Panoramic | Metric | Local | Vertical edges
[126] | 5.2 | Panoramic | Metric | Local | Contours of objects
[127] | 5.2 | Omnidirectional | Metric | Local | Vertical + horizontal lines
[128] | 5.2 | Omnidirectional | Metric | Local | SURF
[129] | 5.2 | Omnidirectional | Metric | Global | Distance map
[130] | 5.2 | Omnidirectional | Metric | Local | SURF
[131] | 5.2 | Panoramic | Hybrid | Global | Fourier Signature
[132] | 5.3 | Panoramic | Sequence of images | Local | Landmarks after reliability test
[133] | 5.3 | Panoramic | Sequence of images | Local | KLT corner detection
[134] | 5.3 | Omnidirectional | Sequence of images | Local | SIFT (improved)
[135] | 5.3 | Panoramic | Sequence of images | Global | Portion of the image
[100] | 5.3 | Panoramic | Sequence of images | Global | Fourier Signature
[136] | 5.1, 5.3 | Omnidirectional | Topological / sequence of images | Global | Radon transform
[137] | 5.3 | Omnidirectional | Sequence of images | Global | SURF
[138] | 5.3 | Unit sphere proj., panoramic, orthographic | Sequence of images | Global | Spherical Fourier Transform, Discrete Fourier Transform

The authors indicate that the method can work with any low-dimensional features descriptor. They propose the use of descriptors for the features detected in the images, combining topological and metric information in a hierarchical process performed in three steps. In the first step, a descriptor is calculated over all the pixels in the images, and all the images in the database with a difference over a threshold are discarded. In a second step, a set of descriptors of each line in the image is used to build several histograms per image; a matching process is performed, obtaining a similarity measurement between the query image and the images contained in the database. Finally, in a third step, a more detailed matching algorithm is used, taking into account geometric restrictions between the lines. The sketch below summarises this coarse-to-fine scheme.
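A compact hypothetical sketch of this coarse-to-fine scheme, with all stage functions passed in as parameters (they are placeholders for the descriptor, histogram, and geometric-verification stages, not the authors' implementation):

```python
def hierarchical_localization(query, database, global_dist, global_thresh,
                              hist_similarity, geometric_score, top_k=5):
    """Coarse-to-fine matching: each stage is more selective and more
    expensive, but runs on fewer images than the previous one.
    Assumes at least one image survives the first stage."""
    # Step 1: a descriptor over all pixels discards clearly different images.
    stage1 = [img for img in database
              if global_dist(query, img) <= global_thresh]
    # Step 2: line-based histograms give a similarity ranking of the survivors.
    stage2 = sorted(stage1,
                    key=lambda img: hist_similarity(query, img),
                    reverse=True)
    # Step 3: a detailed match with geometric restrictions between lines
    # decides among the best-ranked candidates.
    return max(stage2[:top_k], key=lambda img: geometric_score(query, img))
```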

Finally, not only is it possible to determine the 2D location of the robot based on these approaches, but 3D estimations can also be derived from them, considering the description of the omnidirectional images. In this regard, Amoros et al. [138] present a collection of different techniques that provide the relative height between the real pose of the robot and a reference image. In this case, the environment is represented through sequences of images that store the effect of changes in the altitude of the robot, and the images are described using their global appearance.

To conclude this section, Table 1 presents an outline of the frameworks presented in this section and their main features: the kind of approach (Sections 5.1, 5.2, or 5.3), the type of image, the type of map, the kind of features (local or global), and the specific description method employed to extract information from the images.

6. Conclusion

During the last decades, vision sensors have become a robust alternative in the field of mobile robotics to capture the necessary information from the environment where the robot moves. Among them, omnidirectional vision systems have experienced a great expansion more recently, mainly owing to the large quantity of information they are able to capture in only one scene. This wide field of view leads to some interesting properties that make them especially useful as the only source of information in mobile robotics applications, which leads to improvements in power consumption and cost.

Consequently, the number of works that make use of such sensors has increased substantially, and many frameworks can currently be found to solve the mapping, localization, and navigation tasks. This review has revolved around these three problems and some of the approaches that researchers have proposed to solve them using omnidirectional visual information.

To this end, this work has started by focusing on the geometry of omnidirectional vision systems, especially on catadioptric vision systems, which are the most common structures to obtain omnidirectional images. This has led to the study of the different shapes of mirrors and the projections of the omnidirectional information. This is especially important because the majority of works have made use of perspective projections of the information to build models of the environment, as they can be interpreted more easily by a human operator. After that, the review has homed in on how visual information can be described and handled. Two options are available: methods based on the extraction, description, and tracking of local features, and methods based on global-appearance description. While local features were the reference option in most initial works, global methods have gained presence more recently, as they are able to build intuitive models of the environment in which the localization problem can be solved using more straightforward methods based on the pairwise comparison between descriptors. Finally, the work has concentrated on the study of the mapping, localization, and navigation of mobile robots using omnidirectional information. The different frameworks have been classified into three approaches: (a) map building and subsequent localization, (b) continuous map building and updating, and (c) mapless navigation systems.

The great quantity of works on these topics shows how omnidirectional vision and mobile robotics are two very active areas, where this quick evolution is expected to continue during the next years. In this regard, finding a relatively robust and computationally feasible solution to some current problems would definitively help to improve the autonomy of mobile robots and expand the range of applications and environments where they can work. Among these problems, the mapping and localization tasks in considerably large and changing environments, and the estimation of the position and orientation in space under realistic working conditions, are worth addressing in order to arrive at closed solutions.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by the Spanish Government through the projects DPI 2013-41557-P (Navegacion de Robots en Entornos Dinamicos Mediante Mapas Compactos con Informacion Visual de Apariencia Global) and DPI 2016-78361-R (Creacion de Mapas Mediante Metodos de Apariencia Visual para la Navegacion de Robots), and by the Generalitat Valenciana (GVa) through the project GV2015031 (Creacion de Mapas Topologicos a Partir de la Apariencia Global de un Conjunto de Escenas).

References

[1] G. Giralt, R. Sobek, and R. Chatila, "A multi-level planning and navigation system for a mobile robot: a first approach to HILARE," in Proceedings of the 6th International Joint Conference on Artificial Intelligence, pp. 335–337, Morgan Kaufmann, Tokyo, Japan, 1979.

[2] O. Wijk and H. Christensen, "Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data," Robotics and Autonomous Systems, vol. 31, no. 1-2, pp. 31–42, 2000.

[3] J. D. Tardos, J. Neira, P. M. Newman, and J. J. Leonard, "Robust mapping and localization in indoor environments using sonar data," International Journal of Robotics Research, vol. 21, no. 4, pp. 311–330, 2002.

[4] Y. Cha and D. Kim, "Omni-directional image matching for homing navigation based on optical flow algorithm," in Proceedings of the 12th International Conference on Control, Automation and Systems (ICCAS '12), pp. 1446–1451, October 2012.

[5] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.

[6] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser rangefinder," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2538–2543, April 2000.

[7] R. Triebel and W. Burgard, "Improving simultaneous localization and mapping in 3D using global constraints," in Proceedings of the 20th National Conference on Artificial Intelligence—Volume 3, pp. 1330–1335, 2005.

[8] C. Yu and D. Zhang, "A new 3D map reconstruction based mobile robot navigation," in Proceedings of the 8th International Conference on Signal Processing, vol. 4, IEEE, Beijing, China, November 2006.

[9] A. Y. Hata and D. F. Wolf, "Outdoor mapping using mobile robots and laser range finders," in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA '09), pp. 209–214, September 2009.

[10] C.-H. Chen and K.-T. Song, "Complete coverage motion control of a cleaning robot using infrared sensors," in Proceedings of the IEEE International Conference on Mechatronics (ICM '05), pp. 543–548, July 2005.

[11] J. Choi, S. Ahn, M. Choi, and W. K. Chung, "Metric SLAM in home environment with visual objects and sonar features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4048–4053, October 2006.

[12] K. Hara, S. Maeyama, and A. Gofuku, "Navigation path scanning system for mobile robot by laser beam," in Proceedings of the SICE Annual Conference of International Conference on Instrumentation, Control and Information Technology, pp. 2817–2821, August 2008.

[13] W.-C. Chang and C.-Y. Chuang, "Vision-based robot navigation and map building using active laser projection," in Proceedings of the IEEE/SICE International Symposium on System Integration (SII '11), pp. 24–29, IEEE, December 2011.

[14] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.

[15] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 53, no. 3, pp. 263–296, 2008.

[16] R. Sim and G. Dudek, "Effective exploration strategies for the construction of visual maps," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, "Fast human detection using a cascade of histograms of oriented gradients," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and position correction for mobile robot using artificial visual landmarks," in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS '11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, "Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, "The navigation of mobile robot based on stereo vision," in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, "Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine," in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, "Trinocular stereo vision for robotics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, "Omni-directional vision for robot navigation," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, "Image-based navigation in real environments using panoramas," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE '05), pp. 57–59, October 2005.

[27] S. Nayar, "Omnidirectional video camera," in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, "Appearance-based dense maps creation: comparison of compression techniques with panoramic images," in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research Inc., "Ladybug Cameras," https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, "Investigation of fish-eye lenses for small-UAV aerial photography," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, "XXIII. Fish-eye views, and vision under water," Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, "Acquisition of spherical image by fish-eye conversion lens," in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, "Full-view spherical image camera," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, "Generation of perspective and panoramic video from omnidirectional video," in Proceedings of the DARPA Image Understanding Workshop (IUW '97), pp. 243–246, 1997.

[36] D. W. Rees, "Panoramic television viewing system," US Patent Application 3505465, 1970.

[37] S. Bogner, "An introduction to panospheric imaging," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, "Obstacle detection with omnidirectional image sensor HyperOmni vision," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, "Development of an omnidirectional vision system," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, "Cata-fisheye camera for panoramic imaging," in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, "A practical spherical mirror omnidirectional camera," in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE '05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, "Panorama scene analysis with conic projection," in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS '90): Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, "Omnidirectional stereo vision with a hyperbolic double lobed mirror," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, "An omnidirectional vision sensor with single viewpoint and constant resolution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, "A theory of catadioptric image formation," in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, "Camera models and fundamental concepts used in geometric computer vision," Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, "Estimating parameters of noncentral catadioptric systems using bundle adjustment," Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.


[48] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, "Calibration of a catadioptric omnidirectional vision system with conic mirror," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, "A flexible technique for accurate omnidirectional camera calibration and structure from motion," in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS '06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, "A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics," The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical implications," in Computer Vision—ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, "View-based robot localization using spherical harmonics: concept and first experimental results," in Pattern Recognition, F. Hamprecht, C. Schnörr, and B. Jähne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, "View-based robot localization using illumination-invariant spherical harmonics descriptors," in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP '08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, "Direct 3D-rotation estimation from spherical images via a generalized shift theorem," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, "Rotation estimation from spherical images," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, "Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform," in Proceedings of the 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, "Application of particle filters to vision-based orientation estimation using harmonic analysis," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, "Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, "Cylindrical panoramic image-based model for robot localization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schlüter, M. Krzykawski, and R. Möller, "Parsimonious loop-closure detection based on global image-descriptors of panoramic images," in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR '11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, "Map building and Monte Carlo localization using global appearance of omnidirectional images," Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, "Persistent navigation and mapping using a biologically inspired SLAM system," The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, "Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, "Robot navigation behaviors based on omnidirectional vision and information theory," Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, "Creating a bird-eye view map using an omnidirectional camera," in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence (BNAIC '08), Universiteit Twente, Faculteit Elektrotechniek, pp. 233–240, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, "Video tracking: a concise survey," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, "Ground plane segmentation for mobile robot visual navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejias, and P. C. Cervera, "Detection and tracking of external features in an urban environment using an autonomous helicopter," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision—ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.

[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: binary robust independent elementary features," in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: binary robust invariant scalable keypoints," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, "FREAK: fast retina keypoint," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, "Visual topological SLAM and global localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, "SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments," Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, "BRISK based target localization for fixed-wing UAV's vision-based autonomous landing," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, "Estimation of visual maps with a robot network equipped with vision sensors," Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, "A comparative evaluation of interest point detectors and local descriptors for visual SLAM," Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, "Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments," Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, "Visual navigation using omnidirectional view sequence," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, "Vision-based approach to robot navigation," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Möller, A. Vardy, S. Kreft, and S. Ruwisch, "Visual homing in environments with anisotropic landmark distribution," Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Kröse, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, "Household robots look and learn: environment modeling and localization from an omnidirectional vision system," IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, "Unsupervised learning of a hierarchy of topological maps using omnidirectional images," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, "Robust localization using eigenspace of spinning-images," in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS '00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, "Mobile robot localization using an incremental eigenspace model," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images," Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, "Toward topological localization with spherical Fourier transform and uncalibrated camera," in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, "Image-based Monte Carlo localisation with omnidirectional images," Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, "Qualitative image based localization in indoors environments," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, "Localization in urban environments using a panoramic gist descriptor," IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, "Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors," Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, "Mobile robot self-localization based on global visual appearance features," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, "Automatic plate detection using genetic algorithm," in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing (WSEAS '06), pp. 43–48, World Scientific and Engineering Academy and Society, Stevens Point, Wis, USA, 2006.

[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, "Robotic mapping: a survey," in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, "Vision-based topological mapping and localization methods: a survey," Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, "Localization of agricultural vehicle using landmarks based on omni-directional vision," in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA '09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, "Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images," ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, "Incremental topological mapping using omnidirectional vision," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, "Omnidirectional vision based topological navigation," International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, "Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots," Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, "Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, "Appearance-based place recognition for topological localization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, "Performance of global-appearance descriptors in map building and localization using omnidirectional vision," Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, "Semantic labeling for indoor topological mapping using a wearable catadioptric system," Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, "Bayesian inference in the space of topological maps," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Straßer, "Learning to localize with Gaussian process regression on omnidirectional image data," in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, "Comparison of omnidirectional and conventional monocular systems for visual SLAM," in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS '10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, "Monocular omnidirectional vision based robot localisation and mapping," in TAROS 2008—Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, "A performance analysis of omnidirectional vision based simultaneous localization and mapping," in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, "SLAM in indoor environments using omnidirectional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, "The SLAM algorithm of mobile robot with omnidirectional vision based on EKF," in Proceedings of the IEEE International Conference on Information and Automation (ICIA '12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, "A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images," Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, "Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, "Localisation using automatically selected landmarks from panoramic images," in Proceedings of the Australian Conference on Robotics and Automation (ARAA '00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, "Localization in indoor environments by querying omnidirectional visual maps using perspective images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, "Memory-based navigation using omni-view sequence," in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, "Position estimation and local mapping using omnidirectional images and global appearance descriptors," Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), vol. 1, pp. 194–201, SciTePress—Science and Technology Publications, January 2014.


Page 14: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

14 Journal of Sensors

Table 1 Main features of the frameworks that use omnidirectional vision systems to solve the mapping and localization problems

Reference Approach Type of image Type of map Features Description[112] 51 Omnidirectional Metric Local Colour[113] 51 Omnidirectional Metric Local FAST-CSLBP[114] 51 Omnidirectional Metric Local LKOF[84 115] 51 Omnidirectional Topological Local MSIFT

[116] 51 Omnidirectional Topological Local SIFTInvariant column segments

[98] 51 Panoramic Topological Global Fourier Signature[118] 51 Panoramic Topological Global FACT descriptor[119] 51 Panoramic Topological Global Color histograms[63] 51 Panoramic Topological Global Fourier Signature[120] 51 Panoramic Topological Global Various[121] 51 Omnidirectional Topological Global GIST[122] 51 Panoramic Topological Global Fourier Signature[95] 51 Panoramic Topological Global PCA features[60 123] 51 Omnidirectional Topological Global Spherical Fourier Transform[124] 51 Omnidirectional Metric Local SIFT[125] 52 Panoramic Metric Local Vertical edges[126] 52 Panoramic Metric Local Contours of objects[127] 52 Omnidirectional Metric Local Vertical + horizontal lines[128] 52 Omnidirectional Metric Local SURF[129] 52 Omnidirectional Metric Global Distance map[130] 52 Omnidirectional Metric Local SURF[131] 52 Panoramic Hybrid Global Fourier Signature[132] 52 Panoramic Sequence images Local Landmarks after reliability test[133] 53 Panoramic Sequence images Local KLT corner detection[134] 53 Omnidirectional Sequence images Local SIFT (improved)[135] 53 Panoramic Sequence images Global Portion of the image[100] 53 Panoramic Sequence images Global Fourier Signature

[136] 51 53 Omnidirectional Topologicalsequence images Global Radon transform

[137] 53 Omnidirectional Sequence images Global SURF

[138] 53 Unit sphere projpanor orthographic Sequence images Global Spherical Fourier Transform

Discrete Fourier Transform

features descriptor They proposed the use of descriptors forthe features detected in the images combining topological andmetric information in a hierarchical process performed inthree steps In the first step a descriptor is calculated overall the pixels in the images All the images in the databasewith a difference over a threshold are discarded In a secondstep a set of descriptors of each line in the image is usedto build several histograms per image A matching processis performed obtaining a similarity measurement betweenthe query image and the images contained in the databaseFinally in a third step a more detailed matching algorithm isused taking into account geometric restrictions between thelines

Finally not only is it possible to determine the 2Dlocation of the robot based on these approaches but also3D estimations can be derived from them considering thedescription of the omnidirectional images In this regardAmoros et al [138] present a collection of different techniquesthat provide the relative height between the real pose of therobot and a reference image In this case the environment isrepresented through sequences of images that store the effectof changes in the altitude of the robot and the images aredescribed using their global appearance

To conclude this section Table 1 presents an outline ofthe frameworks presented in this section and their mainfeatures the kind of approach (Sections 51 52 or 53) thetype of image the type of map the kind of features (localor global) and the specific description method employed toextract information from the images

6 Conclusion

During the last decades vision sensors have become arobust alternative in the field of mobile robotics to capturethe necessary information from the environment where therobot moves Among them omnidirectional vision systemshave experienced a great expansion more recently mainlyowing to the big quantity of data they are able to capturein only one scene This wide field of view leads to someinteresting properties that make them specially useful as theonly source of information in mobile robotics applicationswhich leads to improvements in power consumption andcost

Consequently the number of works thatmake use of suchsensors has increased substantially and many frameworkscan be found currently to solve the mapping localization

Journal of Sensors 15

and navigation tasks This review has revolved around thesethree problems and some of the approaches that researcheshave proposed to solve them using omnidirectional visualinformation

To this end this work has started focusing on thegeometry of omnidirectional vision systems specially oncatadioptric vision systems which are the most commonstructures to obtain omnidirectional images It has led tothe study of the different shapes of mirrors and the projec-tions of the omnidirectional information This is speciallyimportant because the majority of works have made use ofperspective projections of the information to build modelsof the environment as they can be interpreted more easilyby a human operator After that the review has homed inon how visual information can be described and handledTwo options are available methods based on the extractiondescription and tracking of local features andmethods basedon global appearance description While local features werethe reference option in most initial works global methodshave gained presence more recently as they are able to buildintuitive models of the environment in which the localizationproblem can be solved using more straightforward methodsbased on the pairwise comparison between descriptorsFinally the work has concentrated on the study of themapping localization and navigation of mobile robots usingomnidirectional information The different frameworks havebeen classified into three approaches (a) map building andsubsequent localization (b) continuous map building andupdating and (c) mapless navigation systems

The great quantity of works on these topics shows howomnidirectional vision and mobile robotics are two veryactive areas where this quick evolution is expected to con-tinue during the next years In this regard finding a relativelyrobust and computationally feasible solution to some currentproblems would definitively help to improve the autonomyof mobile robots and expand their range of applicationsand environments where they can work Among them themapping and localization problems in considerably large andchanging environments and the estimation of the positionand orientation in the space and under realistic workingconditions are worth being addressed to try to arrive at closedsolutions

Conflicts of Interest

The authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

This work has been supported by the Spanish Governmentthrough the projects DPI 2013-41557-PNavegacion de Robotsen Entornos Dinamicos Mediante Mapas Compactos conInformacion Visual de Apariencia Global and DPI 2016-78361-R Creacion de Mapas Mediante Metodos de AparienciaVisual para la Navegacion de Robots and by the GeneralitatValenciana (GVa) through the project GV2015031 Creacionde Mapas Topologicos a Partir de la Apariencia Global de unConjunto de Escenas

References

[1] G Giralt R Sobek and R Chatila ldquoA multi-level planningand navigation system for a mobile robot a first approach toHILARErdquo in Proceedings of the 6th International Joint Confer-ence on Artificial Intellligence pp 335ndash337 Morgan KaufmannTokyo Japon 1979

[2] O Wijk and H Christensen ldquoLocalization and navigation ofa mobile robot using natural point landmarks extracted fromsonar datardquo Robotics and Autonomous Systems vol 31 no 1-2pp 31ndash42 2000

[3] J D Tardos J Neira P M Newman and J J Leonard ldquoRobustmapping and localization in indoor environments using sonardatardquo International Journal of Robotics Research vol 21 no 4pp 311ndash330 2002

[4] Y Cha and D Kim ldquoOmni-directional image matching forhoming navigation based on optical flow algorithmrdquo inProceed-ings of the 12th International Conference on Control Automationand Systems (ICCAS rsquo12) pp 1446ndash1451 October 2012

[5] J J Leonard and H F Durrant-Whyte ldquoMobile robot local-ization by tracking geometric beaconsrdquo IEEE Transactions onRobotics and Automation vol 7 no 3 pp 376ndash382 1991

[6] L Zhang and B K Ghosh ldquoLine segment based map buildingand localization using 2D laser rangefinderrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automation(ICRA rsquo00) vol 3 pp 2538ndash2543 April 2000

[7] R Triebel and W Burgard ldquoImproving simultaneous local-ization and mapping in 3D using global constraintsrdquo inProceedings of the 20th National Conference on ArtificialIntelligencemdashVolume 3 pp 1330ndash1335 2005

[8] C Yu and D Zhang ldquoA new 3D map reconstruction basedmobile robot navigationrdquo in Proceedings of the 8th InternationalConference on Signal Processing vol 4 IEEE Beijing ChinaNovember 2006

[9] A Y Hata and D F Wolf ldquoOutdoor mapping using mobilerobots and laser range findersrdquo in Proceedings of the ElectronicsRobotics and Automotive Mechanics Conference (CERMA rsquo09)pp 209ndash214 September 2009

[10] C-H Chen andK-T Song ldquoComplete coveragemotion controlof a cleaning robot using infrared sensorsrdquo in Proceedings of theIEEE International Conference on Mechatronics (ICM rsquo05) pp543ndash548 July 2005

[11] J Choi S Ahn M Choi and W K Chung ldquoMetric SLAMin home environment with visual objects and sonar featuresrdquoin Proceedings of the IEEERSJ International Conference onIntelligent Robots and Systems (IROS rsquo06) pp 4048ndash4053October 2006

[12] K Hara S Maeyama and A Gofuku ldquoNavigation path scan-ning system for mobile robot by laser beamrdquo in Proceedingsof the SICE Annual Conference of International Conference onInstrumentation Control and Information Technology pp 2817ndash2821 August 2008

[13] W-C Chang andC-Y Chuang ldquoVision-based robot navigationand map building using active laser projectionrdquo in Proceedingsof the IEEESICE International Symposiumon System Integration(SII rsquo11) pp 24ndash29 IEEE December 2011

[14] G N DeSouza and A C Kak ldquoVision for mobile robotnavigation a surveyrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 24 no 2 pp 237ndash267 2002

[15] F Bonin-Font A Ortiz and G Oliver ldquoVisual navigationfor mobile robots a surveyrdquo Journal of Intelligent and Robotic

16 Journal of Sensors

Systems Theory and Applications vol 53 no 3 pp 263ndash2962008

[16] R. Sim and G. Dudek, “Effective exploration strategies for the construction of visual maps,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 3224–3231, Las Vegas, Nev, USA, October 2003.

[17] Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng, “Fast human detection using a cascade of histograms of oriented gradients,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’06), vol. 2, pp. 1491–1498, IEEE, June 2006.

[18] K. Okuyama, T. Kawasaki, and V. Kroumov, “Localization and position correction for mobile robot using artificial visual landmarks,” in Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS ’11), pp. 414–418, Zhengzhou, China, August 2011.

[19] M. Z. Brown, D. Burschka, and G. D. Hager, “Advances in computational stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 993–1008, 2003.

[20] R. Sim and J. J. Little, “Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 2082–2089, Beijing, China, October 2006.

[21] Z. Yong-Guo, C. Wei, and L. Guang-Liang, “The navigation of mobile robot based on stereo vision,” in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA ’12), pp. 670–673, January 2012.

[22] Y. Jia, M. Li, L. An, and X. Zhang, “Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine,” in Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, vol. 1, pp. 417–421, Changsha, China, October 2003.

[23] N. Ayache and F. Lustman, “Trinocular stereo vision for robotics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 1, pp. 73–85, 1991.

[24] N. Winters, J. Gaspar, G. Lacey, and J. Santos-Victor, “Omni-directional vision for robot navigation,” in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS ’00), pp. 21–28, Hilton Head Island, SC, USA, June 2000.

[25] J. Gaspar, N. Winters, and J. Santos-Victor, “Vision-based navigation and environmental representations with an omnidirectional camera,” IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890–898, 2000.

[26] D. Bradley, A. Brunton, M. Fiala, and G. Roth, “Image-based navigation in real environments using panoramas,” in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE ’05), pp. 57–59, October 2005.

[27] S. Nayar, “Omnidirectional video camera,” in Proceedings of the DARPA Image Understanding Workshop, pp. 235–241, 1997.

[28] L. Paya, L. Fernandez, O. Reinoso, A. Gil, and D. Ubeda, “Appearance-based dense maps creation: comparison of compression techniques with panoramic images,” in Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pp. 238–246, INSTICC, Milan, Italy, 2009.

[29] Point Grey Research Inc., “Ladybug Cameras,” https://www.ptgrey.com/360-degree-spherical-camera-systems.

[30] A. Gurtner, D. G. Greer, R. Glassock, L. Mejias, R. A. Walker, and W. W. Boles, “Investigation of fish-eye lenses for small-UAV aerial photography,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 709–721, 2009.

[31] R. Wood, “XXIII. Fish-eye views, and vision under water,” Philosophical Magazine Series 6, vol. 12, no. 68, pp. 159–162, 1906.

[32] Samsung Gear 360 camera, http://www.samsung.com/global/galaxy/gear-360.

[33] S. Li, M. Nakano, and N. Chiba, “Acquisition of spherical image by fish-eye conversion lens,” in Proceedings of the IEEE Virtual Reality Conference, pp. 235–236, IEEE, Chicago, Ill, USA, March 2004.

[34] S. Li, “Full-view spherical image camera,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR ’06), vol. 4, pp. 386–390, August 2006.

[35] V. Peri and S. Nayar, “Generation of perspective and panoramic video from omnidirectional video,” in Proceedings of the DARPA Image Understanding Workshop (IUW ’97), pp. 243–246, 1997.

[36] D. W. Rees, “Panoramic television viewing system,” US Patent Application 3505465, 1970.

[37] S. Bogner, “An introduction to panospheric imaging,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 4, pp. 3099–3106, October 1995.

[38] K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor HyperOmni vision,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1062–1067, May 1995.

[39] V. Grassi Junior and J. Okamoto Junior, “Development of an omnidirectional vision system,” Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 28, no. 1, pp. 58–68, 2006.

[40] G. Krishnan and S. K. Nayar, “Cata-fisheye camera for panoramic imaging,” in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV ’08), pp. 1–8, January 2008.

[41] A. Ohte, O. Tsuzuki, and K. Mori, “A practical spherical mirror omnidirectional camera,” in Proceedings of the IEEE International Workshop on Robotic Sensors: Robotic and Sensor Environments (ROSE ’05), pp. 8–13, IEEE, Ottawa, Canada, October 2005.

[42] Y. Yagi and S. Kawato, “Panorama scene analysis with conic projection,” in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS ’90), Towards a New Frontier of Applications, vol. 1, pp. 181–187, July 1990.

[43] E. Cabral, J. de Souza, and M. Hunold, “Omnidirectional stereo vision with a hyperbolic double lobed mirror,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR ’04), vol. 1, pp. 1–9, August 2004.

[44] K. Yoshida, H. Nagahara, and M. Yachida, “An omnidirectional vision sensor with single viewpoint and constant resolution,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 4792–4797, IEEE, October 2006.

[45] S. Baker and S. Nayar, “A theory of catadioptric image formation,” in Proceedings of the 6th International Conference on Computer Vision, pp. 35–42, January 1998.

[46] P. Sturm, S. Ramalingam, J.-P. Tardif, S. Gasparini, and J. Barreto, “Camera models and fundamental concepts used in geometric computer vision,” Foundations and Trends in Computer Graphics and Vision, vol. 6, no. 1-2, pp. 1–183, 2011.

[47] N. Goncalves and H. Araujo, “Estimating parameters of noncentral catadioptric systems using bundle adjustment,” Computer Vision and Image Understanding, vol. 113, no. 1, pp. 11–28, 2009.

[48] X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1260–1271, 2004.

[49] J. Marcato Jr., A. M. G. Tommaselli, and M. V. A. Moraes, “Calibration of a catadioptric omnidirectional vision system with conic mirror,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 97–105, 2016.

[50] D. Scaramuzza, A. Martinelli, and R. Siegwart, “A flexible technique for accurate omnidirectional camera calibration and structure from motion,” in Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS ’06), p. 45, IEEE, January 2006.

[51] D. Scaramuzza, R. Siegwart, and A. Martinelli, “A robust descriptor for tracking vertical lines in omnidirectional images and its use in mobile robotics,” The International Journal of Robotics Research, vol. 28, no. 2, pp. 149–171, 2009.

[52] C. Geyer and K. Daniilidis, “A unifying theory for central panoramic systems and practical implications,” in Computer Vision—ECCV 2000, D. Vernon, Ed., vol. 1843 of Lecture Notes in Computer Science, pp. 445–461, Springer, Berlin, Germany, 2000.

[53] H. Friedrich, D. Dederscheck, K. Krajsek, and R. Mester, “View-based robot localization using spherical harmonics: concept and first experimental results,” in Pattern Recognition, F. Hamprecht, C. Schnarr, and B. Jahne, Eds., vol. 4713 of Lecture Notes in Computer Science, pp. 21–31, Springer, Berlin, Germany, 2007.

[54] H. Friedrich, D. Dederscheck, M. Mutz, and R. Mester, “View-based robot localization using illumination-invariant spherical harmonics descriptors,” in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISIGRAPP ’08), pp. 543–550, 2008.

[55] A. Makadia and K. Daniilidis, “Direct 3D-rotation estimation from spherical images via a generalized shift theorem,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-217–II-224, IEEE, June 2003.

[56] A. Makadia, L. Sorgi, and K. Daniilidis, “Rotation estimation from spherical images,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR ’04), vol. 3, pp. 590–593, IEEE, August 2004.

[57] A. Makadia and K. Daniilidis, “Rotation recovery from spherical images without correspondences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.

[58] T. Schairer, B. Huhle, and W. Strasser, “Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform,” in Proceedings of the 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video, pp. 1–4, IEEE, Potsdam, Germany, May 2009.

[59] T. Schairer, B. Huhle, and W. Strasser, “Application of particle filters to vision-based orientation estimation using harmonic analysis,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’10), pp. 2556–2561, May 2010.

[60] T. Schairer, B. Huhle, P. Vorst, A. Schilling, and W. Strasser, “Visual mapping with uncertainty for correspondence-free localization using Gaussian process regression,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’11), pp. 4229–4235, IEEE, San Francisco, Calif, USA, 2011.

[61] D. Cobzas and H. Zhang, “Cylindrical panoramic image-based model for robot localization,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 1924–1930, IEEE, November 2001.

[62] L. Gerstmayr-Hillen, O. Schluter, M. Krzykawski, and R. Moller, “Parsimonious loop-closure detection based on global image-descriptors of panoramic images,” in Proceedings of the IEEE 15th International Conference on Advanced Robotics (ICAR ’11), pp. 576–581, IEEE, June 2011.

[63] L. Paya, L. Fernandez, A. Gil, and O. Reinoso, “Map building and Monte Carlo localization using global appearance of omnidirectional images,” Sensors, vol. 10, no. 12, pp. 11468–11497, 2010.

[64] M. Milford and G. Wyeth, “Persistent navigation and mapping using a biologically inspired SLAM system,” The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.

[65] M. Liu, C. Pradalier, and R. Siegwart, “Visual homing from scale with an uncalibrated omnidirectional camera,” IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.

[66] Y. Yagi, S. Fujimura, and M. Yachida, “Route representation for mobile robot navigation by omnidirectional route panorama Fourier transformation,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1250–1255, May 1998.

[67] B. Bonev, M. Cazorla, and F. Escolano, “Robot navigation behaviors based on omnidirectional vision and information theory,” Journal of Physical Agents, vol. 1, no. 1, pp. 27–35, 2007.

[68] S. Roebert, T. Schmits, and A. Visser, “Creating a bird-eye view map using an omnidirectional camera,” in Proceedings of the 20th Belgian-Dutch Conference on Artificial Intelligence, Universiteit Twente, Faculteit Elektrotechniek (BNAIC ’08), pp. 233–240, Enschede, The Netherlands, October 2008.

[69] E. Trucco and K. Plakas, “Video tracking: a concise survey,” IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 520–529, 2006.

[70] N. Pears and B. Liang, “Ground plane segmentation for mobile robot visual navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518, November 2001.

[71] J. Zhou and B. Li, “Homography-based ground detection for a mobile robot platform using a single camera,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’06), pp. 4100–4105, IEEE, May 2006.

[72] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference, pp. 147–151, September 1988.

[73] S. Saripalli, G. S. Sukhatme, L. O. Mejías, and P. C. Cervera, “Detection and tracking of external features in an urban environment using an autonomous helicopter,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3972–3977, IEEE, April 2005.

[74] D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV ’99), vol. 2, pp. 1150–1157, IEEE, September 1999.

[75] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[76] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in Computer Vision—ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds., vol. 3951 of Lecture Notes in Computer Science, pp. 404–417, Springer, Berlin, Germany, 2006.

[77] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.

[78] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: binary robust independent elementary features,” in Proceedings of the European Conference on Computer Vision, pp. 778–792, Springer, Crete, Greece, September 2010.

[79] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to SIFT or SURF,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV ’11), pp. 2564–2571, IEEE, Barcelona, Spain, November 2011.

[80] S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: binary robust invariant scalable keypoints,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV ’11), pp. 2548–2555, Barcelona, Spain, November 2011.

[81] A. Alahi, R. Ortiz, and P. Vandergheynst, “FREAK: fast retina keypoint,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’12), pp. 510–517, Providence, RI, USA, June 2012.

[82] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, “Visual topological SLAM and global localization,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’09), pp. 4300–4305, IEEE, Kobe, Japan, May 2009.

[83] S. Se, D. G. Lowe, and J. J. Little, “Vision-based global localization and mapping for mobile robots,” IEEE Transactions on Robotics, vol. 21, no. 3, pp. 364–375, 2005.

[84] C. Valgren and A. J. Lilienthal, “SIFT, SURF & seasons: appearance-based long-term localization in outdoor environments,” Robotics and Autonomous Systems, vol. 58, no. 2, pp. 149–156, 2010.

[85] A. C. Murillo, J. J. Guerrero, and C. Sagues, “SURF features for efficient robot localization with omnidirectional images,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’07), pp. 3901–3907, April 2007.

[86] C. Pan, T. Hu, and L. Shen, “BRISK based target localization for fixed-wing UAV’s vision-based autonomous landing,” in Proceedings of the IEEE International Conference on Information and Automation (ICIA ’15), pp. 2499–2503, IEEE, Lijiang, China, August 2015.

[87] A. Gil, O. Reinoso, M. Ballesta, M. Julia, and L. Paya, “Estimation of visual maps with a robot network equipped with vision sensors,” Sensors, vol. 10, no. 5, pp. 5209–5232, 2010.

[88] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, “A comparative evaluation of interest point detectors and local descriptors for visual SLAM,” Machine Vision and Applications, vol. 21, no. 6, pp. 905–920, 2010.

[89] M. Ballesta, A. Gil, O. Reinoso, and D. Ubeda, “Analysis of visual landmark detectors and descriptors in SLAM in indoor and outdoor environments,” Revista Iberoamericana de Automatica e Informatica Industrial, vol. 7, no. 2, pp. 68–80, 2010.

[90] Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, “Visual navigation using omnidirectional view sequence,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’99), pp. 317–322, October 1999.

[91] Y. Matsumoto, K. Sakai, M. Inaba, and H. Inoue, “Vision-based approach to robot navigation,” in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1702–1708, Takamatsu, Japan, November 2000.

[92] M. Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns, A Wiley-Interscience Publication, John Wiley & Sons, New York, NY, USA, 2001.

[93] R. Moller, A. Vardy, S. Kreft, and S. Ruwisch, “Visual homing in environments with anisotropic landmark distribution,” Autonomous Robots, vol. 23, no. 3, pp. 231–245, 2007.

[94] B. Krose, R. Bunschoten, S. ten Hagen, B. Terwijn, and N. Vlassis, “Household robots look and learn: environment modeling and localization from an omnidirectional vision system,” IEEE Robotics and Automation Magazine, vol. 11, no. 4, pp. 45–52, 2004.

[95] A. Stimec, M. Jogan, and A. Leonardis, “Unsupervised learning of a hierarchy of topological maps using omnidirectional images,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 4, pp. 639–665, 2008.

[96] M. Jogan and A. Leonardis, “Robust localization using eigenspace of spinning-images,” in Proceedings of the IEEE Workshop on Omnidirectional Vision (OMNIVIS ’00), pp. 37–44, Hilton Head Island, SC, USA, June 2000.

[97] M. Artac, M. Jogan, and A. Leonardis, “Mobile robot localization using an incremental eigenspace model,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 1025–1030, IEEE, May 2002.

[98] E. Menegatti, T. Maeda, and H. Ishiguro, “Image-based memory for robot navigation using properties of omnidirectional images,” Robotics and Autonomous Systems, vol. 47, no. 4, pp. 251–267, 2004.

[99] F. Rossi, A. Ranganathan, F. Dellaert, and E. Menegatti, “Toward topological localization with spherical Fourier transform and uncalibrated camera,” in Proceedings of the International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 319–350, Springer, Venice, Italy, November 2008.

[100] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, “Image-based Monte Carlo localisation with omnidirectional images,” Robotics and Autonomous Systems, vol. 48, no. 1, pp. 17–30, 2004.

[101] J. Kosecka, L. Zhou, P. Barber, and Z. Duric, “Qualitative image based localization in indoors environments,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-3–II-8, June 2003.

[102] A. C. Murillo, G. Singh, J. Kosecka, and J. J. Guerrero, “Localization in urban environments using a panoramic gist descriptor,” IEEE Transactions on Robotics, vol. 29, no. 1, pp. 146–160, 2013.

[103] A. Oliva and A. Torralba, “Building the gist of a scene: the role of global image features in recognition,” in Progress in Brain Research: Special Issue on Visual Perception, vol. 155, pp. 23–36, 2006.

[104] L. Paya, O. Reinoso, Y. Berenguer, and D. Ubeda, “Using omnidirectional vision to create a model of the environment: a comparative evaluation of global-appearance descriptors,” Journal of Sensors, vol. 2016, Article ID 1209507, 21 pages, 2016.

[105] C. Zhou, Y. Wei, and T. Tan, “Mobile robot self-localization based on global visual appearance features,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1271–1276, Taipei, Taiwan, September 2003.

[106] H. Murase and S. K. Nayar, “Visual learning and recognition of 3-D objects from appearance,” International Journal of Computer Vision, vol. 14, no. 1, pp. 5–24, 1995.

[107] V. P. de Araujo, R. D. Maia, M. F. S. V. D'Angelo, and G. N. R. D'Angelo, “Automatic plate detection using genetic algorithm,” in Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, World Scientific and Engineering Academy and Society (WSEAS ’06), pp. 43–48, Stevens Point, Wis, USA, 2006.

[108] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1992.

[109] S. Thrun, “Robotic mapping: a survey,” in Exploring Artificial Intelligence in the New Millennium, pp. 1–35, Morgan Kaufmann, San Francisco, Calif, USA, 2003.

[110] E. Garcia-Fidalgo and A. Ortiz, “Vision-based topological mapping and localization methods: a survey,” Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.

[111] K. Konolige, E. Marder-Eppstein, and B. Marthi, “Navigation in hybrid metric-topological maps,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’11), pp. 3041–3047, May 2011.

[112] M. Li, K. Imou, S. Tani, and S. Yokoyama, “Localization of agricultural vehicle using landmarks based on omni-directional vision,” in Proceedings of the International Conference on Computer Engineering and Applications (ICCEA ’09), vol. 2, pp. 129–133, IACSIT Press, Singapore, 2009.

[113] H. Lu, X. Li, H. Zhang, and Z. Zheng, “Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots,” Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[114] Y.-W. Choi, K.-K. Kwon, S.-I. Lee, J.-W. Choi, and S.-G. Lee, “Multi-robot mapping using omnidirectional-vision SLAM based on fisheye images,” ETRI Journal, vol. 36, no. 6, pp. 913–923, 2014.

[115] C. Valgren, A. Lilienthal, and T. Duckett, “Incremental topological mapping using omnidirectional vision,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’06), pp. 3441–3447, Beijing, China, October 2006.

[116] T. Goedeme, M. Nuttin, T. Tuytelaars, and L. Van Gool, “Omnidirectional vision based topological navigation,” International Journal of Computer Vision, vol. 74, no. 3, pp. 219–236, 2007.

[117] H. Lu, X. Li, H. Zhang, and Z. Zheng, “Robust place recognition based on omnidirectional vision and real-time local visual features for mobile robots,” Advanced Robotics, vol. 27, no. 18, pp. 1439–1453, 2013.

[118] M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart, and Q. Chen, “Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’09), pp. 116–121, IEEE, St. Louis, Mo, USA, October 2009.

[119] I. Ulrich and I. Nourbakhsh, “Appearance-based place recognition for topological localization,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’00), pp. 1023–1029, April 2000.

[120] L. Paya, F. Amoros, L. Fernandez, and O. Reinoso, “Performance of global-appearance descriptors in map building and localization using omnidirectional vision,” Sensors, vol. 14, no. 2, pp. 3033–3064, 2014.

[121] A. Rituerto, A. C. Murillo, and J. J. Guerrero, “Semantic labeling for indoor topological mapping using a wearable catadioptric system,” Robotics and Autonomous Systems, vol. 62, no. 5, pp. 685–695, 2014.

[122] A. Ranganathan, E. Menegatti, and F. Dellaert, “Bayesian inference in the space of topological maps,” IEEE Transactions on Robotics, vol. 22, no. 1, pp. 92–107, 2006.

[123] B. Huhle, T. Schairer, A. Schilling, and W. Straßer, “Learning to localize with Gaussian process regression on omnidirectional image data,” in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’10), pp. 5208–5213, October 2010.

[124] A. Rituerto, L. Puig, and J. Guerrero, “Comparison of omnidirectional and conventional monocular systems for visual SLAM,” in Proceedings of the Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS ’10), Zaragoza, Spain, 2010.

[125] C. Burbridge, L. Spacek, J. Condell, and U. Nehmzow, “Monocular omnidirectional vision based robot localisation and mapping,” in TAROS 2008—Towards Autonomous Robotic Systems 2008, The University of Edinburgh, United Kingdom, pp. 135–141, IEEE Computer Society Press, 2008, http://uir.ulster.ac.uk/8898.

[126] H. Erturk, G. Tuna, T. V. Mumcu, and K. Gulez, “A performance analysis of omnidirectional vision based simultaneous localization and mapping,” in Intelligent Computing Technology: 8th International Conference, ICIC 2012, Huangshan, China, July 25–29, 2012, Proceedings, vol. 7389 of Lecture Notes in Computer Science, pp. 407–414, Springer, Berlin, Germany, 2012.

[127] S. Kim and S. Oh, “SLAM in indoor environments using omnidirectional vertical and horizontal line features,” Journal of Intelligent and Robotic Systems, vol. 51, no. 1, pp. 31–43, 2008.

[128] K. Wang, G. Xia, Q. Zhu, Y. Yu, Y. Wu, and Y. Wang, “The SLAM algorithm of mobile robot with omnidirectional vision based on EKF,” in Proceedings of the IEEE International Conference on Information and Automation (ICIA ’12), pp. 13–18, IEEE, Shenyang, China, June 2012.

[129] D. Caruso, J. Engel, and D. Cremers, “Large-scale direct SLAM for omnidirectional cameras,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’15), pp. 141–148, Hamburg, Germany, October 2015.

[130] D. Valiente, A. Gil, L. Fernandez, and O. Reinoso, “A comparison of EKF and SGD applied to a view-based SLAM approach with omnidirectional images,” Robotics and Autonomous Systems, vol. 62, no. 2, pp. 108–119, 2014.

[131] L. Fernandez, L. Paya, O. Reinoso, and L. M. Jimenez, “Appearance-based approach to hybrid metric-topological simultaneous localisation and mapping,” IET Intelligent Transport Systems, vol. 8, no. 8, pp. 688–699, 2014.

[132] S. Thompson, T. Matsui, and A. Zelinsky, “Localisation using automatically selected landmarks from panoramic images,” in Proceedings of the Australian Conference on Robotics and Automation (ARAA ’00), pp. 167–172, Melbourne, Australia, 2000.

[133] A. A. Argyros, K. E. Bekris, S. C. Orphanoudakis, and L. E. Kavraki, “Robot homing by exploiting panoramic vision,” Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.

[134] M. Lourenco, V. Pedro, and J. P. Barreto, “Localization in indoor environments by querying omnidirectional visual maps using perspective images,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2189–2195, St. Paul, Minn, USA, May 2012.

[135] Y. Matsumoto, M. Inaba, and H. Inoue, “Memory-based navigation using omni-view sequence,” in Field and Service Robotics, pp. 171–178, Springer, London, UK, 1998.

[136] Y. Berenguer, L. Paya, M. Ballesta, and O. Reinoso, “Position estimation and local mapping using omnidirectional images and global appearance descriptors,” Sensors, vol. 15, no. 10, pp. 26368–26395, 2015.

[137] A. Murillo, J. Guerrero, and C. Sagues, “SURF features for efficient robot localization with omnidirectional images,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3901–3907, Roma, Italy, April 2007.

[138] F. Amoros, L. Paya, O. Reinoso, D. Valiente, and L. Fernandez, “Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information,” in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP ’14), vol. 1, pp. 194–201, SciTePress—Science and Technology Publications, January 2014.




Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal of

Volume 201

Submit your manuscripts athttpswwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 16: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

16 Journal of Sensors

Systems Theory and Applications vol 53 no 3 pp 263ndash2962008

[16] R Sim and G Dudek ldquoEffective exploration strategies for theconstruction of visual mapsrdquo in Proceedings of the IEEERSJInternational Conference on Intelligent Robots and Systems vol3 pp 3224ndash3231 Las Vegas Nev USA October 2003

[17] Q Zhu S Avidan M-C Yeh and K-T Cheng ldquoFast humandetection using a cascade of histograms of oriented gradientsrdquoin Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition (CVPR rsquo06) vol 2 pp1491ndash1498 IEEE June 2006

[18] K Okuyama T Kawasaki and V Kroumov ldquoLocalizationand position correction for mobile robot using artificial visuallandmarksrdquo in Proceedings of the International Conference onAdvanced Mechatronic Systems (ICAMechS rsquo11) pp 414ndash418Zhengzhou China August 2011

[19] M Z Brown D Burschka and G D Hager ldquoAdvances incomputational stereordquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 25 no 8 pp 993ndash1008 2003

[20] R Sim and J J Little ldquoAutonomous vision-based explorationandmapping using hybridmaps andRao-Blackwellised particlefiltersrdquo in Proceedings of the IEEERSJ International Conferenceon Intelligent Robots and Systems (IROS rsquo06) pp 2082ndash2089Beijing China October 2006

[21] Z Yong-Guo C Wei and L Guang-Liang ldquoThe navigation ofmobile robot based on stereo visionrdquo in Proceedings of the 5thInternational Conference on Intelligent Computation Technologyand Automation (ICICTA rsquo12) pp 670ndash673 January 2012

[22] Y Jia M Li L An and X Zhang ldquoAutonomous navigationof a miniature mobile robot using real-time trinocular stereomachinerdquo in Proceedings of the IEEE International Conferenceon Robotics Intelligent Systems and Signal Processing vol 1 pp417ndash421 Changsha China October 2003

[23] N Ayache and F Lustman ldquoTrinocular stereo vision forroboticsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 13 no 1 pp 73ndash85 1991

[24] N Winters J Gaspar G Lacey and J Santos-Victor ldquoOmni-directional vision for robot navigationrdquo in Proceedings of theIEEE Workshop on Omnidirectional Vision (OMNIVIS rsquo00) pp21ndash28 Hilton Head Island SC USA June 2000

[25] J Gaspar N Winters and J Santos-Victor ldquoVision-based nav-igation and environmental representations with an omnidirec-tional camerardquo IEEE Transactions on Robotics and Automationvol 16 no 6 pp 890ndash898 2000

[26] D Bradley A Brunton M Fiala and G Roth ldquoImage-based navigation in real environments using panoramasrdquo inProceedings of the IEEE InternationalWorkshop onHaptic AudioVisual Environments and their Applications (HAVE rsquo05) pp 57ndash59 October 2005

[27] S Nayar ldquoOmnidirectional video camerardquo in Proceedings of theDARPA Image Understanding Workshop pp 235ndash241 1997

[28] L Paya L Fernandez O Reinoso A Gil and D UbedaldquoAppearance-based dense maps creation Comparison of com-pression techniques with panoramic imagesrdquo in Proceedings ofthe International Conference on Informatics in Control Automa-tion and Robotics pp 238ndash246 INSTICC Milan Italy 2009

[29] Point Grey Research Inc ldquoLadybug Camerasrdquo httpswwwptgreycom360-degree-spherical-camera-systems

[30] A Gurtner D G Greer R Glassock L Mejias R A Walkerand W W Boles ldquoInvestigation of fish-eye lenses for small-UAV aerial photographyrdquo IEEE Transactions on Geoscience andRemote Sensing vol 47 no 3 pp 709ndash721 2009

[31] R Wood ldquoXXIII Fish-eye views and vision under waterrdquoPhilosophicalMagazine Series 6 vol 12 no 68 pp 159ndash162 1906

[32] Samsung Gear 360 camera httpwwwsamsungcomglobalgalaxygear-360

[33] S Li M Nakano and N Chiba ldquoAcquisition of spherical imageby fish-eye conversion lensrdquo in Proceedings of the IEEE VirtualReality Conference pp 235ndash236 IEEE Chicago Ill USAMarch2004

[34] S Li ldquoFull-view spherical image camerardquo in Proceedings of the18th International Conference on Pattern Recognition (ICPR rsquo06)vol 4 pp 386ndash390 August 2006

[35] V Peri and S Nayar ldquoGeneration of perspective and panoramicvideo fromomnidirectional videordquo inProceedings of theDARPAImage Understanding Workshop (IUW rsquo97) pp 243ndash246 1997

[36] D W Rees ldquoPanoramic television viewing systemrdquo US PatentApplication 3505465 1970

[37] S Bogner ldquoAn introduction to panospheric imagingrdquo in Pro-ceedings of the IEEE International Conference on Systems Manand Cybernetics 1995 Intelligent Systems for the 21st Centuryvol 4 pp 3099ndash3106 October 1995

[38] K Yamazawa Y Yagi and M Yachida ldquoObstacle detectionwith omnidirectional image sensor HyperOmni visionrdquo inProceedings of the IEEE International Conference onRobotics andAutomation vol 1 pp 1062ndash1067 May 1995

[39] V Grassi Junior and J Okamoto Junior ldquoDevelopment of anomnidirectional vision systemrdquo Journal of the Brazilian Societyof Mechanical Sciences and Engineering vol 28 no 1 pp 58ndash682006

[40] G Krishnan and S K Nayar ldquoCata-fisheye camera forpanoramic imagingrdquo in Proceedings of the IEEE Workshop onApplications of Computer Vision (WACV 08) pp 1ndash8 January2008

[41] A Ohte O Tsuzuki and K Mori ldquoA practical sphericalmirror omnidirectional camerardquo in Proceedings of the IEEEInternational Workshop on Robotic Sensors Robotic and SensorEnvironments (ROSE rsquo05) pp 8ndash13 IEEE Ottawa CanadaOctober 2005

[42] Y Yagi and S Kawato ldquoPanorama scene analysis with conicprojectionrdquo in Proceedings of the IEEE International Workshopon Intelligent Robots and Systems (IROS rsquo90) Towards a NewFrontier of Applications vol 1 pp 181ndash187 July 1990

[43] E Cabral J de Souza andM Hunold ldquoOmnidirectional stereovision with a hyperbolic double lobed mirrorrdquo in Proceedings ofthe 17th International Conference on Pattern Recognition (ICPRrsquo04) vol 1 pp 1ndash9 August 2004

[44] K Yoshida H Nagahara and M Yachida ldquoAn omnidirectionalvision sensor with single viewpoint and constant resolutionrdquoin Proceedings of the IEEERSJ International Conference onIntelligent Robots and Systems (IROS rsquo06) pp 4792ndash4797 IEEEOctober 2006

[45] S Baker and S Nayar ldquoA theory of catadioptric image for-mationrdquo in Proceedings of the 6th International Conference onComputer Vision pp 35ndash42 January 1998

[46] P Sturm S Ramalingam J-P Tardif S Gasparini and JBarreto ldquoCamera models and fundamental concepts used ingeometric computer visionrdquo Foundations and Trends in Com-puter Graphics and Vision vol 6 no 1-2 pp 1ndash183 2011

[47] N Goncalves and H Araujo ldquoEstimating parameters of non-central catadioptric systems using bundle adjustmentrdquo Com-puter Vision and Image Understanding vol 113 no 1 pp 11ndash282009

Journal of Sensors 17

[48] X Ying and Z Hu ldquoCatadioptric camera calibration usinggeometric invariantsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 26 no 10 pp 1260ndash1271 2004

[49] J Marcato Jr A M G Tommaselli and M V A MoraesldquoCalibration of a catadioptric omnidirectional vision systemwith conic mirrorrdquo ISPRS Journal of Photogrammetry andRemote Sensing vol 113 pp 97ndash105 2016

[50] D Scaramuzza A Martinelli and R Siegwart ldquoA flexibletechnique for accurate omnidirectional camera calibration andstructure from motionrdquo in Proceedings of the 4th IEEE Interna-tional Conference on Computer Vision Systems (ICVS rsquo06) p 45IEEE January 2006

[51] D Scaramuzza R Siegwart and A Martinelli ldquoA robustdescriptor for tracking vertical lines in omnidirectional imagesand its use in mobile roboticsrdquo The International Journal ofRobotics Research vol 28 no 2 pp 149ndash171 2009

[52] C Geyer and K Daniilidis ldquoA unifying theory for centralpanoramic systems and practical implicationsrdquo in ComputerVisionmdashECCV 2000 D Vernon Ed vol 1843 of Lecture Notesin Computer Science pp 445ndash461 Springer Berlin Germany2000

[53] H Friedrich D Dederscheck K Krajsek and RMester ldquoView-based robot localization using spherical harmonics conceptand first experimental resultsrdquo in Pattern Recognition F Ham-precht C Schnarr and B Jahne Eds vol 4713 of Lecture Notesin Computer Science pp 21ndash31 Springer Berlin Germany 2007

[54] H Friedrich D Dederscheck M Mutz and R Mester ldquoView-based robot localization using illumination-invarant sphericalharmonics descriptorsrdquo in Proceedings of the 3rd InternationalConference on Computer Vision Theory and Applications (VISI-GRAPP rsquo08) pp 543ndash550 2008

[55] A Makadia and K Daniilidis ldquoDirect 3D-rotation estimationfrom spherical images via a generalized shift theoremrdquo in Pro-ceedings of the IEEE Computer Society Conference on ComputerVision and Pattern Recognition vol 2 pp II-217ndashII-224 IEEEJune 2003

[56] A Makadia L Sorgi and K Daniilidis ldquoRotation estimationfrom spherical imagesrdquo in Proceedings of the 17th InternationalConference on Pattern Recognition (ICPR rsquo04) vol 3 pp 590ndash593 IEEE August 2004

[57] A Makadia and K Daniilidis ldquoRotation recovery from spher-ical images without correspondencesrdquo IEEE Transactions onPatternAnalysis andMachine Intelligence vol 28 no 7 pp 1170ndash1175 2006

[58] T Schairer B Huhle and W Strasser ldquoIncreased accuracyorientation estimation from omnidirectional images using thespherical Fourier transformrdquo in Proceedings of the 3DTV Con-ference The True VisionmdashCapture Transmission and Display of3D Video pp 1ndash4 IEEE Potsdam Germany May 2009

[59] T Schairer B Huhle and W Strasser ldquoApplication of particlefilters to vision-based orientation estimation using harmonicanalysisrdquo in Proceedings of the IEEE International Conference onRobotics and Automation (ICRA rsquo10) pp 2556ndash2561 May 2010

[60] T Schairer B Huhle P Vorst A Schilling and W StrasserldquoVisual mapping with uncertainty for correspondence-freelocalization using gaussian process regressionrdquo inProceedings ofthe IEEERSJ International Conference on Intelligent Robots andSystems (IROS rsquo11) pp 4229ndash4235 IEEE San Francisco CalifUSA 2011

[61] D Cobzas and H Zhang ldquoCylindrical panoramic image-basedmodel for robot localizationrdquo in Proceedings of the IEEERSJ

International Conference on Intelligent Robots and Systems vol4 pp 1924ndash1930 IEEE November 2001

[62] LGerstmayr-HillenO SchluterMKrzykawski andRMollerldquoParsimonious loop-closure detection based on global image-descriptors of panoramic imagesrdquo in Proceedings of the IEEE15th International Conference on Advanced Robotics (ICAR rsquo11)pp 576ndash581 IEEE June 2011

[63] L Paya L Fernandez A Gil and O Reinoso ldquoMap buildingand monte carlo localization using global appearance of omni-directional imagesrdquo Sensors vol 10 no 12 pp 11468ndash11497 2010

[64] M Milford and G Wyeth ldquoPersistent navigation and mappingusing a biologically inspired SLAM systemrdquo The InternationalJournal of Robotics Research vol 29 no 9 pp 1131ndash1153 2010

[65] M Liu C Pradalier and R Siegwart ldquoVisual homing fromscale with an uncalibrated omnidirectional camerardquo IEEETransactions on Robotics vol 29 no 6 pp 1353ndash1365 2013

[66] Y Yagi S Fujimura and M Yachida ldquoRoute representation formobile robot navigation by omnidirectional route panoramaFourier transformationrdquo in Proceedings of the IEEE Interna-tional Conference on Robotics and Automation vol 2 pp 1250ndash1255 May 1998

[67] B Bonev M Cazorla and F Escolano ldquoRobot navigationbehaviors based on omnidirectional vision and informationtheoryrdquo Journal of Physical Agents vol 1 no 1 pp 27ndash35 2007

[68] S Roebert T Schmits and A Visser ldquoCreating a bird-eyeview map using an omnidirectional camerardquo in Proceedingsof the 20th Belgian-Dutch Conference on Artificial IntelligenceUniversiteit Twente Faculteit Elektrotechniek (BNAIC rsquo08) pp233ndash240 Enschede The Netherlands October 2008

[69] E Trucco and K Plakas ldquoVideo tracking a concise surveyrdquoIEEE Journal of Oceanic Engineering vol 31 no 2 pp 520ndash5292006

[70] N Pears and B Liang ldquoGround plane segmentation for mobilerobot visual navigationrdquo in Proceedings of the IEEERSJ Inter-national Conference on Intelligent Robots and Systems pp 1513ndash1518 November 2001

[71] J Zhou and B Li ldquoHomography-based ground detection for amobile robot platform using a single camerardquo in Proceedings ofthe IEEE International Conference on Robotics and Automation(ICRA rsquo06) pp 4100ndash4105 IEEE May 2006

[72] C Harris and M Stephens ldquoA combined corner and edgedetectorrdquo in Proceedings of the 4th Alvey Vision Conference pp147ndash151 September 1988

[73] S Saripalli G S Sukhatme L O Mejıas and P C CerveraldquoDetection and tracking of external features in an urbanenvironment using an autonomous helicopterrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automationpp 3972ndash3977 IEEE April 2005

[74] D G Lowe ldquoObject recognition from local scale-invariantfeaturesrdquo inProceedings of the 7th IEEE International Conferenceon Computer Vision (ICCV rsquo99) vol 2 pp 1150ndash1157 IEEESeptember 1999

[75] D G Lowe ldquoDistinctive image features from scale-invariantkeypointsrdquo International Journal of Computer Vision vol 60 no2 pp 91ndash110 2004

[76] H Bay T Tuytelaars and L Van Gool ldquoSURF speededup robust featuresrdquo in Computer VisionmdashECCV 2006 ALeonardis H Bischof and A Pinz Eds vol 3951 of LectureNotes in Computer Science pp 404ndash417 Springer Berlin Ger-many 2006

18 Journal of Sensors

[77] H Bay A Ess T Tuytelaars and L Van Gool ldquoSpeeded-Up Robust Features (SURF)rdquo Computer Vision and ImageUnderstanding vol 110 no 3 pp 346ndash359 2008

[78] M Calonder V Lepetit C Strecha and P Fua ldquoBRIEF binaryrobust independent elementary features inrdquo in Proceedings ofthe European Conference on Computer Vision pp 778ndash792Springer Crete Greece September 2010

[79] E Rublee V Rabaud K Konolige and G Bradski ldquoORB anefficient alternative to SIFT or SURFrdquo in Proceedings of the IEEEInternational Conference on Computer Vision (ICCV rsquo11) pp2564ndash2571 IEEE Barcelona Spain November 2011

[80] S Leutenegger M Chli and R Y Siegwart ldquoBRISK binaryrobust invariant scalable keypointsrdquo in Proceedings of the IEEEInternational Conference on Computer Vision (ICCV rsquo11) pp2548ndash2555 Barcelona Spain November 2011

[81] A Alahi R Ortiz and P Vandergheynst ldquoFREAK fast retinakeypointrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo12) pp 510ndash517 Provi-dence RI USA June 2012

[82] A Angeli S Doncieux J-A Meyer and D Filliat ldquoVisualtopological SLAM and global localizationrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automation(ICRA rsquo09) pp 4300ndash4305 IEEE Kobe Japan May 2009

[83] S Se D G Lowe and J J Little ldquoVision-based global local-ization and mapping for mobile robotsrdquo IEEE Transactions onRobotics vol 21 no 3 pp 364ndash375 2005

[84] C Valgren and A J Lilienthal ldquoSIFT SURF amp seasonsAppearance-based long-term localization in outdoor environ-mentsrdquo Robotics and Autonomous Systems vol 58 no 2 pp149ndash156 2010

[85] A C Murillo J J Guerrero and C Sagues ldquoSURF featuresfor efficient robot localization with omnidirectional imagesrdquo inProceedings of the IEEE International Conference onRobotics andAutomation (ICRA rsquo07) pp 3901ndash3907 April 2007

[86] C Pan T Hu and L Shen ldquoBRISK based target localizationfor fixed-wing UAVrsquos vision-based autonomous landingrdquo inProceedings of the IEEE International Conference on Informationand Automation (ICIA rsquo15) pp 2499ndash2503 IEEE LijiangChina August 2015

[87] A Gil O Reinoso M Ballesta M Julia and L Paya ldquoEstima-tion of visual maps with a robot network equipped with visionsensorsrdquo Sensors vol 10 no 5 pp 5209ndash5232 2010

[88] A Gil O M Mozos M Ballesta and O Reinoso ldquoA compara-tive evaluation of interest point detectors and local descriptorsfor visual SLAMrdquo Machine Vision and Applications vol 21 no6 pp 905ndash920 2010

[89] M Ballesta AGil O Reinoso andD Ubeda ldquoAnalysis of visuallandmark detectors and descriptors in SLAM in indoor andoutdoor environmentsrdquo Revista Iberoamericana de Automaticae Informatica Industrial vol 7 no 2 pp 68ndash80 2010

[90] Y Matsumoto K Ikeda M Inaba and H Inoue ldquoVisual navi-gation using omnidirectional view sequencerdquo in Proceedings ofthe IEEERSJ International Conference on Intelligent Robots andSystems (IROS rsquo99) pp 317ndash322 October 1999

[91] Y Matsumoto K Sakai M Inaba and H Inoue ldquoVision-based approach to robot navigationrdquo in Proceedings of the IEEEInternational Conference on Intelligent Robots and Systems pp1702ndash1708 Takamatsu Japan Novemver 2000

[92] M Kirby Geometric Data Analysis An Empirical Approach toDimensionality Reduction and the Study of Patterns A WileyInterscience publication John Wiley amp Sons New York NYUSA 2001

[93] R Moller A Vardy S Kreft and S Ruwisch ldquoVisual hom-ing in environments with anisotropic landmark distributionrdquoAutonomous Robots vol 23 no 3 pp 231ndash245 2007

[94] B Krose R Bunschoten S ten Hagen B Terwijn and N Vlas-sis ldquoHousehold robots look and learn environment modelingand localization from an omnidirectional vision systemrdquo IEEERobotics and Automation Magazine vol 11 no 4 pp 45ndash522004

[95] A Stimec M Jogan and A Leonardis ldquoUnsupervised learningof a hierarchy of topological maps using omnidirectionalimagesrdquo International Journal of Pattern Recognition and Arti-ficial Intelligence vol 22 no 4 pp 639ndash665 2008

[96] M Jogan and A Leonardis ldquoRobust localization usingeigenspace of spinning-imagesrdquo in Proceedings of the IEEEWorkshop onOmnidirectional Vision (OMNIVIS rsquo00) pp 37ndash44Hilton Head Island SC USA June 2000

[97] M Artac M Jogan and A Leonardis ldquoMobile robot localiza-tion using an incremental eigenspace modelrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automationvol 1 pp 1025ndash1030 IEEE May 2002

[98] E Menegatti T Maeda and H Ishiguro ldquoImage-based mem-ory for robot navigation using properties of omnidirectionalimagesrdquo Robotics and Autonomous Systems vol 47 no 4 pp251ndash267 2004

[99] F Rossi A Ranganathan F Dellaert and E MenegattildquoToward topological localization with spherical Fourier trans-form and uncalibrated camera inrdquo in Proceedings of the Inter-national Conference on Simulation Modeling and Programmingfor Autonomous Robots pp 319ndash350 Springer Venice ItalyNovember 2008

[100] EMenegattiM Zoccarato E Pagello andH Ishiguro ldquoImage-based Monte Carlo localisation with omnidirectional imagesrdquoRobotics andAutonomous Systems vol 48 no 1 pp 17ndash30 2004

[101] J Kosecka L Zhou P Barber and Z Duric ldquoQualitative imagebased localization in indoors environmentsrdquo in Proceedings ofthe IEEE Computer Society Conference on Computer Vision andPattern Recognition vol 2 pp II-3ndashII-8 June 2003

[102] A CMurillo G Singh J Kosecka and J J Guerrero ldquoLocaliza-tion in urban environments using a panoramic gist descriptorrdquoIEEE Transactions on Robotics vol 29 no 1 pp 146ndash160 2013

[103] A Oliva and A Torralba ldquoBuilding the gist of ascene the roleof global image features in recognitionrdquo in Progress in BrainReasearch Special Issue on Visual Perception vol 155 pp 23ndash362006

[104] L Paya O Reinoso Y Berenguer and D Ubeda ldquoUsingomnidirectional vision to create a model of the environmenta comparative evaluation of global-appearance descriptorsrdquoJournal of Sensors vol 2016 Article ID 1209507 21 pages 2016

[105] C Zhou Y Wei and T Tan ldquoMobile robot self-localizationbased on global visual appearance featuresrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automationpp 1271ndash1276 Taipei Taiwan September 2003

[106] H Murase and S K Nayar ldquoVisual learning and recognition of3-d objects from appearancerdquo International Journal of ComputerVision vol 14 no 1 pp 5ndash24 1995

[107] V P de Araujo R D Maia M F S V DAngelo and G N RDAngelo ldquoAutomatic plate detection using genetic algorithmrdquoin Proceedings of the 6th WSEAS International Conferenceon Signal Speech and Image Processing World Scientific andEngineering Academy and Society (WSEAS rsquo06) pp 43ndash48Stevens Point Wis USA 2006

Journal of Sensors 19

[108] R C Gonzalez and R E Woods Digital Image ProcessingAddison-Wesley Boston Mass USA 3rd edition 1992

[109] S Thrun ldquoRobotic mapping a survey in exploring artificialintelligence inrdquo in The New Milenium pp 1ndash35 Morgan Kauf-mann San Francisco Calif USA 2003

[110] E Garcia-Fidalgo and A Ortiz ldquoVision-based topologicalmapping and localization methods a surveyrdquo Robotics andAutonomous Systems vol 64 pp 1ndash20 2015

[111] K Konolige E Marder-Eppstein and B Marthi ldquoNavigationin hybrid metric-topological mapsrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo11) pp 3041ndash3047 May 2011

[112] M Li K Imou S Tani and S Yokoyama ldquoLocalization ofagricultural vehicle using landmarks based on omni-directionalvisionrdquo in Proceedings of the International Conference on Com-puter Engineering and Applications (ICCEA rsquo09) vol 2 pp 129ndash133 IACSIT Press Singapore 2009

[113] H Lu X Li H Zhang and Z Zheng ldquoRobust place recognitionbased on omnidirectional vision and real-time local visualfeatures for mobile robotsrdquo Advanced Robotics vol 27 no 18pp 1439ndash1453 2013

[114] Y-W Choi K-K Kwon S-I Lee J-W Choi and S-G LeeldquoMulti-robot mapping using omnidirectional-vision SLAMbased on fisheye imagesrdquo ETRI Journal vol 36 no 6 pp 913ndash923 2014

[115] C Valgren A Lilienthal and T Duckett ldquoIncremental topo-logical mapping using omnidirectional visionrdquo in Proceedingsof the IEEERSJ International Conference on Intelligent Robotsand Systems (IROS rsquo06) pp 3441ndash3447 Beijing China October2006

[116] T Goedeme M Nuttin T Tuytelaars and L Van Gool ldquoOmni-directional vision based topological navigationrdquo InternationalJournal of Computer Vision vol 74 no 3 pp 219ndash236 2007

[117] H Lu X Li H Zhang and Z Zheng ldquoRobust place recognitionbased on omnidirectional vision and real-time local visualfeatures for mobile robotsrdquo Advanced Robotics vol 27 no 18pp 1439ndash1453 2013

[118] M Liu D Scaramuzza C Pradalier R Siegwart and Q ChenldquoScene recognition with omnidirectional vision for topologicalmap using lightweight adaptive descriptorsrdquo in Proceedings ofthe IEEERSJ International Conference on Intelligent Robots andSystems (IROS rsquo09) pp 116ndash121 IEEE St Louis Mo USAOctober 2009

[119] I Ulrich and I Nourbakhsh ldquoAppearance-based place recog-nition for topological localizationrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo00) pp 1023ndash1029 April 2000

[120] L Paya F Amoros L Fernandez andOReinoso ldquoPerformanceof global-appearance descriptors in map building and localiza-tion using omnidirectional visionrdquo Sensors vol 14 no 2 pp3033ndash3064 2014

[121] A Rituerto A CMurillo and J J Guerrero ldquoSemantic labelingfor indoor topological mapping using a wearable catadioptricsystemrdquo Robotics and Autonomous Systems vol 62 no 5 pp685ndash695 2014

[122] A Ranganathan E Menegatti and F Dellaert ldquoBayesianinference in the space of topological mapsrdquo IEEE Transactionson Robotics vol 22 no 1 pp 92ndash107 2006

[123] B Huhle T Schairer A Schilling andW Straszliger ldquoLearning tolocalize with Gaussian process regression on omnidirectionalimage datardquo in Proceedings of the 23rd IEEERSJ International

Conference on Intelligent Robots and Systems (IROS rsquo10) pp5208ndash5213 October 2010

[124] A Rituerto L Puig and J Guerrero ldquoComparison of omni-directional and conventional monocular systems for visualSLAMrdquo in Proceedings of the Workshop on OmnidirectionalVision CameraNetworks andNon-classical Cameras (OMNIVISrsquo10) Zaragoza Spain 2010

[125] C Burbridge L Spacek J Condell and U NehmzowldquoMonocular omnidirectional vision based robot localisa-tion and mappingrdquo in TAROS 2008mdashTowards AutonomousRobotic Systems 2008 The University of Edinburgh UnitedKingdom pp 135ndash141 IEEE Computer Society Press 2008httpuirulsteracuk8898

[126] H Erturk G Tuna T V Mumcu and K Gulez ldquoA perfor-mance analysis of omnidirectional vision based simultaneouslocalization andmappingrdquo in Intelligent Computing Technology8th International Conference ICIC 2012 Huangshan China July25ndash29 2012 Proceedings vol 7389 of Lecture Notes in ComputerScience pp 407ndash414 Springer Berlin Germany 2012

[127] S Kim and S Oh ldquoSLAM in indoor environments using omni-directional vertical and horizontal line featuresrdquo Journal ofIntelligent and Robotic Systems vol 51 no 1 pp 31ndash43 2008

[128] KWang G Xia Q Zhu Y Yu YWu and YWang ldquoThe SLAMalgorithm of mobile robot with omnidirectional vision basedon EKFrdquo in Proceedings of the IEEE International Conferenceon Information and Automation (ICIA rsquo12) pp 13ndash18 IEEEShenyang China June 2012

[129] D Caruso J Engel and D Cremers ldquoLarge-scale direct SLAMfor omnidirectional camerasrdquo in Proceedings of the IEEERSJInternational Conference on Intelligent Robots and Systems(IROS rsquo15) pp 141ndash148 Hamburg Germany October 2015

[130] D Valiente A Gil L Fernandez and O Reinoso ldquoA compari-son of EKF and SGD applied to a view-based SLAM approachwith omnidirectional imagesrdquo Robotics and Autonomous Sys-tems vol 62 no 2 pp 108ndash119 2014

[131] L Fernandez L Paya O Reinoso and L M JimenezldquoAppearance-based approach to hybrid metric-topologicalsimultaneous localisation and mappingrdquo IET Intelligent Trans-port Systems vol 8 no 8 pp 688ndash699 2014

[132] S Thompson T Matsui and A Zelinsky ldquoLocalisation usingautomatically selected landmarks from panoramic imagesrdquoin Proceedings of the Australian Conference on Robotics andAutomation (ARAA rsquo00) pp 167ndash172 Melbourne Australia2000

[133] A A Argyros K E Bekris S C Orphanoudakis and LE Kavraki ldquoRobot homing by exploiting panoramic visionrdquoAutonomous Robots vol 19 no 1 pp 7ndash25 2005

[134] M Lourenco V Pedro and J P Barreto ldquoLocalization in indoorenvironments by querying omnidirectional visual maps usingperspective imagesrdquo in Proceedings of the IEEE InternationalConference on Robotics and Automation pp 2189ndash2195 St PaulMinn USA May 2012

[135] YMatsumotoM Inaba andH Inoue ldquoMemory-based naviga-tion using omni-view sequencerdquo in Field and Service Roboticspp 171ndash178 Springer London UK 1998

[136] Y Berenguer L Paya M Ballesta and O Reinoso ldquoPositionestimation and local mapping using omnidirectional imagesand global appearance descriptorsrdquo Sensors vol 15 no 10 pp26368ndash26395 2015

[137] A Murillo J Guerrero and C Sagues ldquoSURF features forefficient robot localization with omnidirectional imagesrdquo in

20 Journal of Sensors

Proceedings of the IEEE International Conference onRobotics andAutomation pp 3901ndash3907 Roma Italy April 2007

[138] F Amoros L Paya O Reinoso D Valiente and L FernandezldquoTowards relative altitude estimation in topological navigationtasks using the global appearance of visual informationrdquo inProceedings of the 9th International Conference on ComputerVision Theory and Applications (VISAPP rsquo14) vol 1 pp 194ndash201 SciTePressmdashScience and Technology Publications January2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal of

Volume 201

Submit your manuscripts athttpswwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 17: A State-of-the-Art Review on Mapping and Localization of ...2016/10/19  · asmapcreationandlocalization[61–63],SLAM[64],and visualservoingandnavigation[65,66].Thisisduetothe fact

Journal of Sensors 17

[48] X Ying and Z Hu ldquoCatadioptric camera calibration usinggeometric invariantsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 26 no 10 pp 1260ndash1271 2004

[49] J Marcato Jr A M G Tommaselli and M V A MoraesldquoCalibration of a catadioptric omnidirectional vision systemwith conic mirrorrdquo ISPRS Journal of Photogrammetry andRemote Sensing vol 113 pp 97ndash105 2016

[50] D Scaramuzza A Martinelli and R Siegwart ldquoA flexibletechnique for accurate omnidirectional camera calibration andstructure from motionrdquo in Proceedings of the 4th IEEE Interna-tional Conference on Computer Vision Systems (ICVS rsquo06) p 45IEEE January 2006

[51] D Scaramuzza R Siegwart and A Martinelli ldquoA robustdescriptor for tracking vertical lines in omnidirectional imagesand its use in mobile roboticsrdquo The International Journal ofRobotics Research vol 28 no 2 pp 149ndash171 2009

[52] C Geyer and K Daniilidis ldquoA unifying theory for centralpanoramic systems and practical implicationsrdquo in ComputerVisionmdashECCV 2000 D Vernon Ed vol 1843 of Lecture Notesin Computer Science pp 445ndash461 Springer Berlin Germany2000

[53] H Friedrich D Dederscheck K Krajsek and RMester ldquoView-based robot localization using spherical harmonics conceptand first experimental resultsrdquo in Pattern Recognition F Ham-precht C Schnarr and B Jahne Eds vol 4713 of Lecture Notesin Computer Science pp 21ndash31 Springer Berlin Germany 2007

[54] H Friedrich D Dederscheck M Mutz and R Mester ldquoView-based robot localization using illumination-invarant sphericalharmonics descriptorsrdquo in Proceedings of the 3rd InternationalConference on Computer Vision Theory and Applications (VISI-GRAPP rsquo08) pp 543ndash550 2008

[55] A Makadia and K Daniilidis ldquoDirect 3D-rotation estimationfrom spherical images via a generalized shift theoremrdquo in Pro-ceedings of the IEEE Computer Society Conference on ComputerVision and Pattern Recognition vol 2 pp II-217ndashII-224 IEEEJune 2003

[56] A Makadia L Sorgi and K Daniilidis ldquoRotation estimationfrom spherical imagesrdquo in Proceedings of the 17th InternationalConference on Pattern Recognition (ICPR rsquo04) vol 3 pp 590ndash593 IEEE August 2004

[57] A Makadia and K Daniilidis ldquoRotation recovery from spher-ical images without correspondencesrdquo IEEE Transactions onPatternAnalysis andMachine Intelligence vol 28 no 7 pp 1170ndash1175 2006

[58] T Schairer B Huhle and W Strasser ldquoIncreased accuracyorientation estimation from omnidirectional images using thespherical Fourier transformrdquo in Proceedings of the 3DTV Con-ference The True VisionmdashCapture Transmission and Display of3D Video pp 1ndash4 IEEE Potsdam Germany May 2009

[59] T Schairer B Huhle and W Strasser ldquoApplication of particlefilters to vision-based orientation estimation using harmonicanalysisrdquo in Proceedings of the IEEE International Conference onRobotics and Automation (ICRA rsquo10) pp 2556ndash2561 May 2010

[60] T Schairer B Huhle P Vorst A Schilling and W StrasserldquoVisual mapping with uncertainty for correspondence-freelocalization using gaussian process regressionrdquo inProceedings ofthe IEEERSJ International Conference on Intelligent Robots andSystems (IROS rsquo11) pp 4229ndash4235 IEEE San Francisco CalifUSA 2011

[61] D Cobzas and H Zhang ldquoCylindrical panoramic image-basedmodel for robot localizationrdquo in Proceedings of the IEEERSJ

International Conference on Intelligent Robots and Systems vol4 pp 1924ndash1930 IEEE November 2001

[62] LGerstmayr-HillenO SchluterMKrzykawski andRMollerldquoParsimonious loop-closure detection based on global image-descriptors of panoramic imagesrdquo in Proceedings of the IEEE15th International Conference on Advanced Robotics (ICAR rsquo11)pp 576ndash581 IEEE June 2011

[63] L Paya L Fernandez A Gil and O Reinoso ldquoMap buildingand monte carlo localization using global appearance of omni-directional imagesrdquo Sensors vol 10 no 12 pp 11468ndash11497 2010

[64] M Milford and G Wyeth ldquoPersistent navigation and mappingusing a biologically inspired SLAM systemrdquo The InternationalJournal of Robotics Research vol 29 no 9 pp 1131ndash1153 2010

[65] M Liu C Pradalier and R Siegwart ldquoVisual homing fromscale with an uncalibrated omnidirectional camerardquo IEEETransactions on Robotics vol 29 no 6 pp 1353ndash1365 2013

[66] Y Yagi S Fujimura and M Yachida ldquoRoute representation formobile robot navigation by omnidirectional route panoramaFourier transformationrdquo in Proceedings of the IEEE Interna-tional Conference on Robotics and Automation vol 2 pp 1250ndash1255 May 1998

[67] B Bonev M Cazorla and F Escolano ldquoRobot navigationbehaviors based on omnidirectional vision and informationtheoryrdquo Journal of Physical Agents vol 1 no 1 pp 27ndash35 2007

[68] S Roebert T Schmits and A Visser ldquoCreating a bird-eyeview map using an omnidirectional camerardquo in Proceedingsof the 20th Belgian-Dutch Conference on Artificial IntelligenceUniversiteit Twente Faculteit Elektrotechniek (BNAIC rsquo08) pp233ndash240 Enschede The Netherlands October 2008

[69] E Trucco and K Plakas ldquoVideo tracking a concise surveyrdquoIEEE Journal of Oceanic Engineering vol 31 no 2 pp 520ndash5292006

[70] N Pears and B Liang ldquoGround plane segmentation for mobilerobot visual navigationrdquo in Proceedings of the IEEERSJ Inter-national Conference on Intelligent Robots and Systems pp 1513ndash1518 November 2001

[71] J Zhou and B Li ldquoHomography-based ground detection for amobile robot platform using a single camerardquo in Proceedings ofthe IEEE International Conference on Robotics and Automation(ICRA rsquo06) pp 4100ndash4105 IEEE May 2006

[72] C Harris and M Stephens ldquoA combined corner and edgedetectorrdquo in Proceedings of the 4th Alvey Vision Conference pp147ndash151 September 1988

[73] S Saripalli G S Sukhatme L O Mejıas and P C CerveraldquoDetection and tracking of external features in an urbanenvironment using an autonomous helicopterrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automationpp 3972ndash3977 IEEE April 2005

[74] D G Lowe ldquoObject recognition from local scale-invariantfeaturesrdquo inProceedings of the 7th IEEE International Conferenceon Computer Vision (ICCV rsquo99) vol 2 pp 1150ndash1157 IEEESeptember 1999

[75] D G Lowe ldquoDistinctive image features from scale-invariantkeypointsrdquo International Journal of Computer Vision vol 60 no2 pp 91ndash110 2004

[76] H Bay T Tuytelaars and L Van Gool ldquoSURF speededup robust featuresrdquo in Computer VisionmdashECCV 2006 ALeonardis H Bischof and A Pinz Eds vol 3951 of LectureNotes in Computer Science pp 404ndash417 Springer Berlin Ger-many 2006

18 Journal of Sensors

[77] H Bay A Ess T Tuytelaars and L Van Gool ldquoSpeeded-Up Robust Features (SURF)rdquo Computer Vision and ImageUnderstanding vol 110 no 3 pp 346ndash359 2008

[78] M Calonder V Lepetit C Strecha and P Fua ldquoBRIEF binaryrobust independent elementary features inrdquo in Proceedings ofthe European Conference on Computer Vision pp 778ndash792Springer Crete Greece September 2010

[79] E Rublee V Rabaud K Konolige and G Bradski ldquoORB anefficient alternative to SIFT or SURFrdquo in Proceedings of the IEEEInternational Conference on Computer Vision (ICCV rsquo11) pp2564ndash2571 IEEE Barcelona Spain November 2011

[80] S Leutenegger M Chli and R Y Siegwart ldquoBRISK binaryrobust invariant scalable keypointsrdquo in Proceedings of the IEEEInternational Conference on Computer Vision (ICCV rsquo11) pp2548ndash2555 Barcelona Spain November 2011

[81] A Alahi R Ortiz and P Vandergheynst ldquoFREAK fast retinakeypointrdquo in Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo12) pp 510ndash517 Provi-dence RI USA June 2012

[82] A Angeli S Doncieux J-A Meyer and D Filliat ldquoVisualtopological SLAM and global localizationrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automation(ICRA rsquo09) pp 4300ndash4305 IEEE Kobe Japan May 2009

[83] S Se D G Lowe and J J Little ldquoVision-based global local-ization and mapping for mobile robotsrdquo IEEE Transactions onRobotics vol 21 no 3 pp 364ndash375 2005

[84] C Valgren and A J Lilienthal ldquoSIFT SURF amp seasonsAppearance-based long-term localization in outdoor environ-mentsrdquo Robotics and Autonomous Systems vol 58 no 2 pp149ndash156 2010

[85] A C Murillo J J Guerrero and C Sagues ldquoSURF featuresfor efficient robot localization with omnidirectional imagesrdquo inProceedings of the IEEE International Conference onRobotics andAutomation (ICRA rsquo07) pp 3901ndash3907 April 2007

[86] C Pan T Hu and L Shen ldquoBRISK based target localizationfor fixed-wing UAVrsquos vision-based autonomous landingrdquo inProceedings of the IEEE International Conference on Informationand Automation (ICIA rsquo15) pp 2499ndash2503 IEEE LijiangChina August 2015

[87] A Gil O Reinoso M Ballesta M Julia and L Paya ldquoEstima-tion of visual maps with a robot network equipped with visionsensorsrdquo Sensors vol 10 no 5 pp 5209ndash5232 2010

[88] A Gil O M Mozos M Ballesta and O Reinoso ldquoA compara-tive evaluation of interest point detectors and local descriptorsfor visual SLAMrdquo Machine Vision and Applications vol 21 no6 pp 905ndash920 2010

[89] M Ballesta AGil O Reinoso andD Ubeda ldquoAnalysis of visuallandmark detectors and descriptors in SLAM in indoor andoutdoor environmentsrdquo Revista Iberoamericana de Automaticae Informatica Industrial vol 7 no 2 pp 68ndash80 2010

[90] Y Matsumoto K Ikeda M Inaba and H Inoue ldquoVisual navi-gation using omnidirectional view sequencerdquo in Proceedings ofthe IEEERSJ International Conference on Intelligent Robots andSystems (IROS rsquo99) pp 317ndash322 October 1999

[91] Y Matsumoto K Sakai M Inaba and H Inoue ldquoVision-based approach to robot navigationrdquo in Proceedings of the IEEEInternational Conference on Intelligent Robots and Systems pp1702ndash1708 Takamatsu Japan Novemver 2000

[92] M Kirby Geometric Data Analysis An Empirical Approach toDimensionality Reduction and the Study of Patterns A WileyInterscience publication John Wiley amp Sons New York NYUSA 2001

[93] R Moller A Vardy S Kreft and S Ruwisch ldquoVisual hom-ing in environments with anisotropic landmark distributionrdquoAutonomous Robots vol 23 no 3 pp 231ndash245 2007

[94] B Krose R Bunschoten S ten Hagen B Terwijn and N Vlas-sis ldquoHousehold robots look and learn environment modelingand localization from an omnidirectional vision systemrdquo IEEERobotics and Automation Magazine vol 11 no 4 pp 45ndash522004

[95] A Stimec M Jogan and A Leonardis ldquoUnsupervised learningof a hierarchy of topological maps using omnidirectionalimagesrdquo International Journal of Pattern Recognition and Arti-ficial Intelligence vol 22 no 4 pp 639ndash665 2008

[96] M Jogan and A Leonardis ldquoRobust localization usingeigenspace of spinning-imagesrdquo in Proceedings of the IEEEWorkshop onOmnidirectional Vision (OMNIVIS rsquo00) pp 37ndash44Hilton Head Island SC USA June 2000

[97] M Artac M Jogan and A Leonardis ldquoMobile robot localiza-tion using an incremental eigenspace modelrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automationvol 1 pp 1025ndash1030 IEEE May 2002

[98] E Menegatti T Maeda and H Ishiguro ldquoImage-based mem-ory for robot navigation using properties of omnidirectionalimagesrdquo Robotics and Autonomous Systems vol 47 no 4 pp251ndash267 2004

[99] F Rossi A Ranganathan F Dellaert and E MenegattildquoToward topological localization with spherical Fourier trans-form and uncalibrated camera inrdquo in Proceedings of the Inter-national Conference on Simulation Modeling and Programmingfor Autonomous Robots pp 319ndash350 Springer Venice ItalyNovember 2008

[100] EMenegattiM Zoccarato E Pagello andH Ishiguro ldquoImage-based Monte Carlo localisation with omnidirectional imagesrdquoRobotics andAutonomous Systems vol 48 no 1 pp 17ndash30 2004

[101] J Kosecka L Zhou P Barber and Z Duric ldquoQualitative imagebased localization in indoors environmentsrdquo in Proceedings ofthe IEEE Computer Society Conference on Computer Vision andPattern Recognition vol 2 pp II-3ndashII-8 June 2003

[102] A CMurillo G Singh J Kosecka and J J Guerrero ldquoLocaliza-tion in urban environments using a panoramic gist descriptorrdquoIEEE Transactions on Robotics vol 29 no 1 pp 146ndash160 2013

[103] A Oliva and A Torralba ldquoBuilding the gist of ascene the roleof global image features in recognitionrdquo in Progress in BrainReasearch Special Issue on Visual Perception vol 155 pp 23ndash362006

[104] L Paya O Reinoso Y Berenguer and D Ubeda ldquoUsingomnidirectional vision to create a model of the environmenta comparative evaluation of global-appearance descriptorsrdquoJournal of Sensors vol 2016 Article ID 1209507 21 pages 2016

[105] C Zhou Y Wei and T Tan ldquoMobile robot self-localizationbased on global visual appearance featuresrdquo in Proceedings ofthe IEEE International Conference on Robotics and Automationpp 1271ndash1276 Taipei Taiwan September 2003

[106] H Murase and S K Nayar ldquoVisual learning and recognition of3-d objects from appearancerdquo International Journal of ComputerVision vol 14 no 1 pp 5ndash24 1995

[107] V P de Araujo R D Maia M F S V DAngelo and G N RDAngelo ldquoAutomatic plate detection using genetic algorithmrdquoin Proceedings of the 6th WSEAS International Conferenceon Signal Speech and Image Processing World Scientific andEngineering Academy and Society (WSEAS rsquo06) pp 43ndash48Stevens Point Wis USA 2006

Journal of Sensors 19

[108] R C Gonzalez and R E Woods Digital Image ProcessingAddison-Wesley Boston Mass USA 3rd edition 1992

[109] S Thrun ldquoRobotic mapping a survey in exploring artificialintelligence inrdquo in The New Milenium pp 1ndash35 Morgan Kauf-mann San Francisco Calif USA 2003

[110] E Garcia-Fidalgo and A Ortiz ldquoVision-based topologicalmapping and localization methods a surveyrdquo Robotics andAutonomous Systems vol 64 pp 1ndash20 2015

[111] K Konolige E Marder-Eppstein and B Marthi ldquoNavigationin hybrid metric-topological mapsrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo11) pp 3041ndash3047 May 2011

[112] M Li K Imou S Tani and S Yokoyama ldquoLocalization ofagricultural vehicle using landmarks based on omni-directionalvisionrdquo in Proceedings of the International Conference on Com-puter Engineering and Applications (ICCEA rsquo09) vol 2 pp 129ndash133 IACSIT Press Singapore 2009

[113] H Lu X Li H Zhang and Z Zheng ldquoRobust place recognitionbased on omnidirectional vision and real-time local visualfeatures for mobile robotsrdquo Advanced Robotics vol 27 no 18pp 1439ndash1453 2013

[114] Y-W Choi K-K Kwon S-I Lee J-W Choi and S-G LeeldquoMulti-robot mapping using omnidirectional-vision SLAMbased on fisheye imagesrdquo ETRI Journal vol 36 no 6 pp 913ndash923 2014

[115] C Valgren A Lilienthal and T Duckett ldquoIncremental topo-logical mapping using omnidirectional visionrdquo in Proceedingsof the IEEERSJ International Conference on Intelligent Robotsand Systems (IROS rsquo06) pp 3441ndash3447 Beijing China October2006

[116] T Goedeme M Nuttin T Tuytelaars and L Van Gool ldquoOmni-directional vision based topological navigationrdquo InternationalJournal of Computer Vision vol 74 no 3 pp 219ndash236 2007

[117] H Lu X Li H Zhang and Z Zheng ldquoRobust place recognitionbased on omnidirectional vision and real-time local visualfeatures for mobile robotsrdquo Advanced Robotics vol 27 no 18pp 1439ndash1453 2013

[118] M Liu D Scaramuzza C Pradalier R Siegwart and Q ChenldquoScene recognition with omnidirectional vision for topologicalmap using lightweight adaptive descriptorsrdquo in Proceedings ofthe IEEERSJ International Conference on Intelligent Robots andSystems (IROS rsquo09) pp 116ndash121 IEEE St Louis Mo USAOctober 2009

[119] I Ulrich and I Nourbakhsh ldquoAppearance-based place recog-nition for topological localizationrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo00) pp 1023ndash1029 April 2000

[120] L Paya F Amoros L Fernandez andOReinoso ldquoPerformanceof global-appearance descriptors in map building and localiza-tion using omnidirectional visionrdquo Sensors vol 14 no 2 pp3033ndash3064 2014

[121] A Rituerto A CMurillo and J J Guerrero ldquoSemantic labelingfor indoor topological mapping using a wearable catadioptricsystemrdquo Robotics and Autonomous Systems vol 62 no 5 pp685ndash695 2014

[122] A Ranganathan E Menegatti and F Dellaert ldquoBayesianinference in the space of topological mapsrdquo IEEE Transactionson Robotics vol 22 no 1 pp 92ndash107 2006

[123] B Huhle T Schairer A Schilling andW Straszliger ldquoLearning tolocalize with Gaussian process regression on omnidirectionalimage datardquo in Proceedings of the 23rd IEEERSJ International

Conference on Intelligent Robots and Systems (IROS rsquo10) pp5208ndash5213 October 2010

[124] A Rituerto L Puig and J Guerrero ldquoComparison of omni-directional and conventional monocular systems for visualSLAMrdquo in Proceedings of the Workshop on OmnidirectionalVision CameraNetworks andNon-classical Cameras (OMNIVISrsquo10) Zaragoza Spain 2010

[125] C Burbridge L Spacek J Condell and U NehmzowldquoMonocular omnidirectional vision based robot localisa-tion and mappingrdquo in TAROS 2008mdashTowards AutonomousRobotic Systems 2008 The University of Edinburgh UnitedKingdom pp 135ndash141 IEEE Computer Society Press 2008httpuirulsteracuk8898

[126] H Erturk G Tuna T V Mumcu and K Gulez ldquoA perfor-mance analysis of omnidirectional vision based simultaneouslocalization andmappingrdquo in Intelligent Computing Technology8th International Conference ICIC 2012 Huangshan China July25ndash29 2012 Proceedings vol 7389 of Lecture Notes in ComputerScience pp 407ndash414 Springer Berlin Germany 2012

[127] S Kim and S Oh ldquoSLAM in indoor environments using omni-directional vertical and horizontal line featuresrdquo Journal ofIntelligent and Robotic Systems vol 51 no 1 pp 31ndash43 2008

[128] KWang G Xia Q Zhu Y Yu YWu and YWang ldquoThe SLAMalgorithm of mobile robot with omnidirectional vision basedon EKFrdquo in Proceedings of the IEEE International Conferenceon Information and Automation (ICIA rsquo12) pp 13ndash18 IEEEShenyang China June 2012

[129] D Caruso J Engel and D Cremers ldquoLarge-scale direct SLAMfor omnidirectional camerasrdquo in Proceedings of the IEEERSJInternational Conference on Intelligent Robots and Systems(IROS rsquo15) pp 141ndash148 Hamburg Germany October 2015

[130] D Valiente A Gil L Fernandez and O Reinoso ldquoA compari-son of EKF and SGD applied to a view-based SLAM approachwith omnidirectional imagesrdquo Robotics and Autonomous Sys-tems vol 62 no 2 pp 108ndash119 2014

[131] L Fernandez L Paya O Reinoso and L M JimenezldquoAppearance-based approach to hybrid metric-topologicalsimultaneous localisation and mappingrdquo IET Intelligent Trans-port Systems vol 8 no 8 pp 688ndash699 2014

[132] S Thompson T Matsui and A Zelinsky ldquoLocalisation usingautomatically selected landmarks from panoramic imagesrdquoin Proceedings of the Australian Conference on Robotics andAutomation (ARAA rsquo00) pp 167ndash172 Melbourne Australia2000

[133] A A Argyros K E Bekris S C Orphanoudakis and LE Kavraki ldquoRobot homing by exploiting panoramic visionrdquoAutonomous Robots vol 19 no 1 pp 7ndash25 2005

[134] M Lourenco V Pedro and J P Barreto ldquoLocalization in indoorenvironments by querying omnidirectional visual maps usingperspective imagesrdquo in Proceedings of the IEEE InternationalConference on Robotics and Automation pp 2189ndash2195 St PaulMinn USA May 2012

[135] YMatsumotoM Inaba andH Inoue ldquoMemory-based naviga-tion using omni-view sequencerdquo in Field and Service Roboticspp 171ndash178 Springer London UK 1998

[136] Y Berenguer L Paya M Ballesta and O Reinoso ldquoPositionestimation and local mapping using omnidirectional imagesand global appearance descriptorsrdquo Sensors vol 15 no 10 pp26368ndash26395 2015

[137] A Murillo J Guerrero and C Sagues ldquoSURF features forefficient robot localization with omnidirectional imagesrdquo in

20 Journal of Sensors

Proceedings of the IEEE International Conference onRobotics andAutomation pp 3901ndash3907 Roma Italy April 2007

[138] F Amoros L Paya O Reinoso D Valiente and L FernandezldquoTowards relative altitude estimation in topological navigationtasks using the global appearance of visual informationrdquo inProceedings of the 9th International Conference on ComputerVision Theory and Applications (VISAPP rsquo14) vol 1 pp 194ndash201 SciTePressmdashScience and Technology Publications January2014
