
Bioinspir. Biomim. 12 (2017) 014001 doi:10.1088/1748-3190/12/1/014001

NOTE

Human-in-the-loop active electrosense

Sandra Fang 1, Michael Peshkin 1 and Malcolm A MacIver 1,2,3

1 Department of Mechanical Engineering, Northwestern University, Evanston, IL, USA
2 Department of Biomedical Engineering and Department of Neurobiology and Physiology, Northwestern University, Evanston, IL, USA
3 Author to whom any correspondence should be addressed.

E-mail: [email protected]

Keywords: active electrosense, virtual reality, teleoperation, human–computer interaction, augmented reality

RECEIVED 17 May 2016; REVISED 2 November 2016; ACCEPTED FOR PUBLICATION 15 November 2016; PUBLISHED 20 December 2016

© 2016 IOP Publishing Ltd

Abstract
Active electrosense is a non-visual, short-range sensing system used by weakly electric fish, enabling such fish to locate and identify objects in total darkness. Here we report initial findings from the use of active electrosense for object localization during underwater teleoperation with a virtual reality (VR) head-mounted display (HMD). The advantage of electrolocating with a VR system is that it naturally allows aspects of the task that are difficult for a person to perform to be allocated to the computer. However, interpreting weak and incomplete patterns in the incoming data is something that people are typically far better at than computers. To achieve human–computer synergy, we integrated an active electrosense underwater robot with the Oculus Rift HMD. The virtual environment contains a visualization of the electric images of the objects surrounding the robot as well as various virtual fixtures that guide users to regions of higher information value. Initial user testing shows that these fixtures significantly reduce the time taken to localize an object, but may not increase the accuracy of the position estimate. Our results highlight the advantages of translating the unintuitive physics of electrolocation into an intuitive visual representation for accomplishing tasks in environments where imaging systems fail, such as in dark or turbid water.

    1. Introduction

Underwater environments are highly unfavorable to visually-guided behavior even under ideal conditions of being near the surface on a clear day [1]. Vision fails with turbidity or darkness, and with this failure comes the cost of slowed or stopped work for underwater tasks such as repair or disaster recovery. For example, the Deepwater Horizon oil spill in 2010 demonstrated the crucial need for teleoperation systems that are robust to video 'brown outs', where the propellers of remotely operated vehicles disturb sediment and halt work for hours at a time, or where unrefined oil in the water obscures vision. These conditions hindered teleoperators on the surface from effectively interacting with objects and performing repairs. In environments like these, long-range non-visual sensing approaches such as sonar are ineffective due to scattering and clutter, while zero-range sensing such as touch may not be possible or sufficiently rich to guide the work.

We take inspiration from biological systems and use active electrosense, a near-range sensing modality used by weakly electric fish, as a possible solution for remote object manipulation in environments where vision fails. Nocturnal weakly electric fish inhabit the turbid waters of South American and African rivers and generate a small AC electric field. Objects in the field cause perturbations that can be sensed using highly sensitive electroreceptors distributed over their bodies. Any object with a conductivity or capacitance differing from that of water is electrically visible to these fish and can be distinguished based on its material or dielectric properties [2].

In this paper, we use the SensorPod, a capsule-shaped underwater robot capable of artificial electrosense (figure 1) [3–5]. The SensorPod emits an oscillating electric field from emitters at the front and rear of the capsule, and senses voltage perturbations in the field using two rows of electrodes along the midline on the left and right sides of the robot. The SensorPod measures the perturbations caused by an object by taking the difference in voltage measured across left and right side pairs.



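The quantity driving everything downstream is just this left-right difference. A minimal sketch in MATLAB (variable names are hypothetical, not from the paper's code):

    % Differential measurement for one electrode pair (e.g. S4):
    % vLeft and vRight are the demodulated voltages on each side, in volts.
    vDiff  = vLeft - vRight;   % ~0 when the field is perturbed symmetrically
    signal = abs(vDiff);       % magnitude used for the electric image below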

1.1. Prior work

There has been significant work on the problem of object characterization and localization with active electrosense for robotic systems [5–7]. Alamir attempted to solve the inverse problem by using graphical signatures to alleviate the computational burden [8]. Lebastard et al [9, 10] determined the size of a sphere by navigating an electrosense robot around the sphere, Ammari et al [11] developed shape-classification algorithms based on comparing features of the electric image that are invariant under rigid motion or scaling to a collection of learned shapes, and Bai et al [3] classified the aspect ratio, size, distance and orientation of spheroids based on signals acquired during algorithmically prescribed motions around the detected object. Bai and Snyder also identified object properties by varying frequency and phase [12, 13]. Solberg et al [14] used a probabilistic model to perform localization based on a preexisting 2D map of the space surrounding the object. Nguyen et al [15] performed sparse beamforming with an object of known electric signature in 2D space to localize 0–2 objects. Miller et al [4] calculated an ergodic trajectory that maximized the amount of time spent in information-rich areas to localize an object or distinguish it from a distractor object, and Silverman et al [16] performed a modified range-only SLAM for the SensorPod itself using an incomplete map. Electrosense for pretouch and grasping has also been studied by Smith et al [17].

Despite this progress, algorithms developed for automatic electrosense are relatively fragile and rely heavily on simplifications or assumptions such as having a map of the space beforehand and knowing the electrical signature of the target object. Not only is the discipline of artificial electrosense (or 'machine electrosense', in analogy to machine vision) in its early stages of development, the inverse problem of reconstructing an object's position and properties given its electric image is severely ill-posed [18]. The advantage of teleoperation that is augmented with information from electrosense algorithms is that it naturally allows for human-assisted computation, in which certain tasks that are computationally difficult or impossible are outsourced to the human. Adding a human in the loop creates robustness because humans are adept at extrapolating from incomplete information and making decisions quickly in unpredictable environments. A barrier to integrating a human in an electrosensory system, however, is that electrosensory signals are unintuitive for people to interpret. Thus, we endeavored to use a virtual environment (VE), rendered with the Oculus Rift head-mounted display (HMD), to assist the human operator.

1.2. VEs for teleoperation

It is worth spending some time defining the term virtual reality (VR) more precisely. We borrow heavily from Chalmers [19] by using the following definition: VR is the general term for the technology that sustains a VE, or for the environment itself. This environment typically has two or three of the following characteristics: it is (partially or fully) computer generated, it is immersive, and it is interactive. In recent times, VR has become more synonymous with using HMDs to generate a 3D virtual world, although non-immersive VR can also be achieved through the use of 2D screens.

When the VE is partially generated, such as by overlaying virtual models over a live video feed, the result is augmented reality (AR). It may be more appropriate to say that we are proposing to perform teleoperation with electrosense using AR. Although we are forced to render a complete VE because the real underwater environment is dark or opaque, the environment and model of our robot correspond closely to their physical counterparts, inasmuch as they are known. For example, the movement of the robot as seen in the head-mounted display is as it occurs in the real world. For our laboratory prototype, we also render a portion of the known geometry of the tank containing the workspace for the electrolocation trial. More or less of the geometry of the world can be rendered according to the application and what is known by the system. The only features that are purely virtual are the hints and the visual representation of the electrosensory signals.

VR has been shown to be beneficial for aiding physical object manipulation and interaction tasks,

Figure 1. (a) Image of the SensorPod over the tank. (b) A schematic of the SensorPod viewed from above. The SensorPod has seven differential voltage sensors on its midline that measure the voltage across its left (L) and right (R) sides. Emitters at each end of the robot emit an alternating ±10 V field. The front end of the SensorPod is defined to be facing the positive x-axis when the SensorPod is aligned at 0° with respect to the x-axis. For our study only the sensor pair labeled S4 was used.


especially for tasks such as computer-aided design or during teleoperation [20–23]. In addition, rendering virtual fixtures, or abstract sensorial data overlaid on the workspace, can reduce the cognitive load on the human operator [24]. We choose a 3D head-mounted display over a 2D screen as it allows the user to orient and move the camera naturally (by simply tilting their head) and also offers better depth perception. Thus, if we consider that electrosense teleoperation is likely to be integrated with robotic manipulators, there is a strong incentive to develop a 3D VR application.

We rely on the idea that one can use a predefined electric image of an object to localize its position even in an unknown environment with sparse information. The computer 'translates' electrosense readings into visual information by rendering what a user would perceive if he or she were capable of electrosense, and also renders virtual fixtures that indicate potentially information-rich locations.

    2. Setup

2.1. Hardware and communications

The setup for augmenting human sensing with active electrosense consists of four distinct components (figure 2): the SensorPod, which is attached to a 4-DoF gantry (detailed in [3]); MATLAB and Unity applications; and the Oculus Rift HMD.

For this application only the middle sensing pair, labeled S4 on the SensorPod, is used (figure 1). The gantry talks to MATLAB via TCP/IP. MATLAB determines which virtual fixtures to render, and sends this information to Unity via UDP. Unity then renders the appropriate information in the Oculus Rift. Users can respond to the information and give a control input (using the arrow keys) for the gantry position; this information is passed back to the gantry to update the physical position of the SensorPod.
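As a rough sketch of the MATLAB-to-Unity leg of this pipeline, assuming the Instrument Control Toolbox udpport interface; the port number and packet layout are invented for illustration, as the paper does not specify them:

    % Send the SensorPod state to Unity over UDP (illustrative packet layout).
    u = udpport("datagram");
    state = single([x, y, z, theta, vDiff]);   % position, orientation, ES data
    write(u, typecast(state, "uint8"), "uint8", "127.0.0.1", 8051);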

The gantry is placed over a tank of water with dimensions of 234 cm wide × 175 cm long × 90 cm high. Programmed position limits restrict the motion of the SensorPod to a smaller 153 cm × 85 cm workspace to maintain distance from the walls and bottom of the tank. The water height is 46 cm and the conductivity of the water is 0.04 S m−1. At run time, the SensorPod moves at a speed of 2 cm s−1 to minimize surface water waves that distort the electrosense signal. The SensorPod is able to move freely in the xy-plane within the soft limits but is constrained so that its orientation is always 0° with respect to the x-axis, and such that the center of the SensorPod is at a depth roughly 17 cm from the bottom of the tank.

2.2. VR interface

The user interface (SensorPodVR) consists of a model of the SensorPod, a scene of a plane indicating the reachable space, and various fixtures rendered on the plane that help visualize the relevant properties of the electrosensory scene (figure 3). Users control the SensorPod using the arrow keys, and the SensorPod then automatically gathers electrosensory data at each position. The rendering is done at 75 FPS (frames per second), although MATLAB updates the position and electrosense information at 40 Hz.

In figure 3, the dark blue surface, or tank area, corresponds to the workspace in the physical tank. Movement of the SensorPod is confined to this surface. On top of the surface, a colored trail consisting of square segments varying from blue (0 mV) to red (+3 mV) is rendered as the center of the SensorPod passes over the corresponding position in reality. The redder the segment, the greater the absolute value of the differential voltage signal at that position. Trail segments correspond to a 1 cm × 1 cm square of the workspace in reality, and thus an electrosense reading is associated with a unique position in the tank. The tank area is composed of 153 such segments in the x direction, and 83 segments in the y direction. This granularity was found to be most suitable as it provided enough detail while simultaneously maintaining performance.
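A sketch of the blue-to-red trail coloring described above; the linear blend is our assumption, as the paper only specifies the 0 and +3 mV endpoints:

    % Map |vDiff| (in mV) to a trail-segment color: blue at 0 mV, red at 3 mV.
    t   = min(abs(vDiff_mV), 3) / 3;          % clamp and normalize to [0, 1]
    rgb = (1 - t) * [0 0 1] + t * [1 0 0];    % linear blue-to-red blend
    % Each segment covers a 1 cm x 1 cm cell, so the cell index follows from
    % rounding the SensorPod's (x, y) position down to whole centimeters.
    cellIx = floor([x, y]);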

The SensorPod is rendered as a transparent capsule-like object. The camera position is locked slightly behind the SensorPod but can freely rotate depending on the orientation of the Oculus Rift. As the user navigates around the tank area, Unity renders two types of virtual fixtures. Cylindrical objects indicate the computer's current target position estimate(s); a greater amount of transparency corresponds to a greater degree of uncertainty. Green units hint at locations where the information on target location is expected to be higher, which in our case corresponds to regions of high voltage perturbation (unlike Fisher

Figure 2. Communication pathways between the four major components. The SensorPod and gantry send their position (x, y, z), orientation (θ), and voltage perturbation values (ES, or electrosense data) to MATLAB, which runs a simple algorithm to determine the relevant pieces of information to send to Unity. Unity then renders this information for the Oculus Rift. Control input is passed all the way back to the SensorPod and gantry, which will move to act out the command. Information between the gantry and MATLAB is sent via TCP/IP, whereas information between MATLAB and Unity is sent via UDP.


information maximization as used in [4]). Fixtures disappear and update according to the data gathered.

We fix the orientation of the SensorPod to be 0° with respect to the x-axis, as the electric image of the target is affected by orientation. Only one target is localized at a time. The target is an aluminum cylinder with a height and diameter of 7.62 cm, and is placed in a smaller zone (−50 cm ⩽ x ⩽ 50 cm, −30 cm ⩽ y ⩽ 30 cm) around the origin. The robot is restricted to moving in a plane above the cylinder to avoid collisions.

    3. Algorithm for object localization

3.1. Electric image of the target object

The electric image of the aluminum cylinder target in 2D space is shown in figure 4. The target (center, white) is surrounded by four reddish regions, or peaks of high voltage differential. These regions are generated as the SensorPod moves over each grid unit and records the differential voltage reading at that (x, y) position in the tank area. Note, however, that the target object itself is located in an area of low differential voltage reading. With the SensorPod directly over the object, the electric field on both sides of the SensorPod is equally perturbed, and the difference in voltage is therefore 0. This is one aspect of differential-mode electrosense that is unintuitive for humans to visualize: it conflicts with our implicit assumption that moving our sense organs closer to a target will increase the quality of information. We use the geometry of the electric image extensively to compute the target's likely positions.

3.2. Overview of algorithm for peak localization

Algorithm 1 uses the position and number of the peaks to locate the object. While the program SensorPodVR is running, the algorithm in MATLAB repeatedly

Figure 3. Sample of the application view in SensorPodVR. Users control the transparent capsule-like object representing the SensorPod in the physical world, and trace out a path where the redder the path, the bigger the voltage perturbation at that point. Various dots and cylinders (virtual fixtures) describe the potential positions of large voltage perturbations and target objects, respectively.

Figure 4. Electric image of the target object in the virtual reality application, as seen from a top-down view. We sample the absolute value of the voltage perturbations. The centers of the four regions of high voltage perturbation (red) are spaced roughly 20 cm apart from each other in the x and y directions. The cylindrical target is in the center of the electric image, at a coordinate of little voltage perturbation (blue). A model of the SensorPod (transparent capsule at bottom) is shown for scale in relation to the target in virtual reality.


interpolates the existing information at each visited (x, y) point using natural neighbor interpolation (MATLAB's scatteredInterpolant function). The more points the SensorPod has visited and the more data it has collected, the more accurate the interpolation will be. The algorithm looks for potential peaks at a position (x, y) among the visited sites by noting all grid units with the following properties (see the sketch below):

(i) The absolute value of the voltage at (x, y) is at least 3 mV.

(ii) The unit is surrounded by at least 3 visited grid units with voltage magnitudes smaller than or equal to the potential peak's.

(iii) (x, y) is not on the border of the tank area; that is, it lies more than 2 cm from every edge.

3.3. Switch cases

When two peaks have been discovered, the estimated target position is computed from the midpoint $(x_{mp}, y_{mp})$ of the two peaks $(x_1, y_1)$ and $(x_2, y_2)$, with the case depending on whether the peaks are horizontally, vertically, or diagonally adjacent:

$$
(x_{\mathrm{target}}, y_{\mathrm{target}}) =
\begin{cases}
(x_{mp},\ y_{mp}+10) \text{ or } (x_{mp},\ y_{mp}-10), & \text{horizontal},\\
(x_{mp}+10,\ y_{mp}) \text{ or } (x_{mp}-10,\ y_{mp}), & \text{vertical},\\
\left(\dfrac{x_1+x_2}{2},\ \dfrac{y_1+y_2}{2}\right), & \text{diagonal}.
\end{cases}
\tag{3.3}
$$
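In code, equation (3.3) amounts to a branch on the peaks' relative placement. A sketch, with peak coordinates p1 and p2 in cm and an illustrative tolerance:

    % Candidate target positions from two discovered peaks, per equation (3.3).
    mp  = (p1 + p2) / 2;                   % midpoint of the two peaks
    tol = 5;                               % cm; peaks sit on a ~20 cm square
    if abs(p1(2) - p2(2)) < tol            % same y: horizontally adjacent
      est = [mp + [0 10]; mp - [0 10]];    % two candidates, offset in y
    elseif abs(p1(1) - p2(1)) < tol        % same x: vertically adjacent
      est = [mp + [10 0]; mp - [10 0]];    % two candidates, offset in x
    else                                   % diagonal: the midpoint itself
      est = mp;
    end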

3.3.3. Three and four peaks

When the user has discovered three or four peaks, it is likely that the true position of the target is not more than a few centimeters from the estimated position. With three peaks, the algorithm searches for the longest distance between any two of the three peaks and takes the midpoint of the two peaks involved.

When the user has discovered four peaks, the computer finds the convex hull using MATLAB's convhull function, and takes the estimated target position to be the centroid of the convex hull. Formally, the convex hull of the set of points $p_i \in S$ is the set

$$\left\{ \sum_i \lambda_i p_i \;\middle|\; \sum_i \lambda_i = 1,\ \lambda_i \geqslant 0 \right\}.$$

The convex hull can be thought of as the enclosure created by stretching a rubber band around the set of points S. The convhull function orders the points properly so that the centroid $(C_x, C_y)$ of the quadrilateral can be calculated:

$$C_x = \frac{1}{6A} \sum_{i=0}^{n-1} (x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i), \tag{3.4}$$

$$C_y = \frac{1}{6A} \sum_{i=0}^{n-1} (y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i), \tag{3.5}$$

$$A = \frac{1}{2} \sum_{i=0}^{n-1} (x_i y_{i+1} - x_{i+1} y_i), \tag{3.6}$$

where $A$ is the signed area.
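A sketch combining MATLAB's convhull ordering with equations (3.4)–(3.6), for a 4 × 2 matrix P of peak coordinates:

    % Centroid of the convex hull of the four peaks, equations (3.4)-(3.6).
    k = convhull(P(:,1), P(:,2));          % hull indices; k(1) == k(end)
    x = P(k,1);  y = P(k,2);
    c  = x(1:end-1).*y(2:end) - x(2:end).*y(1:end-1); % x_i*y_{i+1} - x_{i+1}*y_i
    A  = sum(c) / 2;                                  % signed area, (3.6)
    Cx = sum((x(1:end-1) + x(2:end)) .* c) / (6*A);   % (3.4)
    Cy = sum((y(1:end-1) + y(2:end)) .* c) / (6*A);   % (3.5)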

3.4. Indicating regions of high perturbation

Originally, virtual fixtures consisted only of cylinders rendered at their estimated positions. However, it was found that this was confusing for users, because navigating to the position of the cylinder did not yield any additional useful information. To aid intuition, we also render green squares on the floor of the tank area to denote possible peak locations. These squares serve as hints that guide the user to a region of high information density, which will improve the algorithm's estimate of the target position. If the target's estimated position is centered at (x, y), units are rendered at (x − 10, y − 10), (x + 10, y − 10), (x − 10, y + 10), and (x + 10, y + 10).
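Equivalently, the four hint positions are the estimate plus the four diagonal 10 cm offsets:

    % Hint-square centers around an estimated target at (x, y), in cm.
    hints = [x, y] + 10 * [-1 -1; 1 -1; -1 1; 1 1];   % one row per square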

    4. System–user interaction

We perform hypothesis testing to determine the usefulness of virtual fixtures (green square hints) in aiding users to localize objects. Usefulness is measured by two metrics: how long it takes users to localize the object, and how accurate the localization is. The null hypothesis assumes that there is no difference in performance with the use of virtual fixtures.

4.1. Experimental setup

Ten students from our laboratory tested SensorPodVR. Users were given a set of instructions, briefed on the basics of electrosense, and shown the electric image in figure 4. This was done to reduce the variance in performance, as some users were more familiar with active electrosense than others. All users attempted to localize the target using two versions of the application: one with hints, and one without (control case). In the control group, users attempted to localize the object by using only the colored trail, while in the experimental group, users were able to rely on both the virtual fixtures and the colored trail. The order of the two versions was randomized for each user, because we did not want familiarity with the system to shift the mean of the time taken for localization. Half of the users ran the no-hints version of the application first, and half ran the hints version first.

The SensorPod starts at the origin (0, 0), and the target is placed at a random position in the zone (−50 cm ⩽ x ⩽ 50 cm, −30 cm ⩽ y ⩽ 30 cm) around the origin. The timer starts when the user begins to move. When the user has determined that he or she has localized the target, the user positions the center of the SensorPod directly over the guessed coordinates, and the timer stops. Because there is no visual grid overlaid on the tank area, to compare the estimated position with the actual coordinates we moved the SensorPod over the object by looking at the physical position of the object and took the difference. The value obtained is the accuracy measurement (error in distance).

4.2. Results and discussion

A total of 10 trials were taken for each version. We examined the distribution of the data using a Lilliefors test; if the distribution was close to normal, we ran a two-sample t-test to test the hypothesis, and if not, a rank-sum test was used. Results showed that virtual fixtures reduced the overall time taken to localize an object by 44.5% on average, but that there was no difference in accuracy (see tables 1 and 2). Target position estimates were within 4 cm of the physical position when excluding outliers, which is about half the body length of the object (7.62 cm diameter). This is an acceptable error, as it indicates that the majority


of attempts at centering the SensorPod over the face of the cylinder are successful. Figure 6 shows that the time taken does not correlate with the distance error for the fixture and no-fixture versions. The time savings may arise because users feel more certain of their own estimates when guided by the computer's estimate, or because the virtual fixtures outline the possible regions of high information density so that users do not have to guess their locations.

However, if the user has collected an insufficient amount of data, the computer's interpolation will be off and may mislead the user by giving a wrong estimate of position. In the figure, there are two outliers that have a relatively low accuracy, and it is likely that during these trials the users prematurely determined the position of the target after collecting only a small set of data. This is one disadvantage of using guides: the error of the interpolation is strongly tied to the number of coordinates visited, so users can be misled if they are too hasty in determining the object's location. Removing the outliers does not change the aforementioned conclusions.
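A sketch of this test-selection procedure (Statistics and Machine Learning Toolbox; the group vectors are illustrative):

    % Choose the hypothesis test based on a Lilliefors normality check.
    normalBoth = ~lillietest(hints) && ~lillietest(noHints);
    if normalBoth
      [~, p] = ttest2(hints, noHints);     % two-sample t-test
    else
      p = ranksum(hints, noHints);         % Wilcoxon rank-sum test
    end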

5. System performance

5.1. Latency

Table 3 lists the various sources of lag. Delays due to the data gathering rate and communications were calculated by averaging the time taken to loop the same task 1000 times. Note that we gather voltage data every other loop, and interpolate every 20th loop, so the calculated duration is only an average. The calculation of the lag due to interpolation during the object localization algorithm is described in the next paragraph. Input–output lag was calculated by filming a slow-motion capture of an operator controlling the SensorPod and recording the time between user input and SensorPod movement. Of all of these sources of lag, aside from the input–output lag, it was found that interpolation during the object localization algorithm

Table 1. Mean and standard deviation of the time versus accuracy data in figure 6, with outliers. We can see that with outliers, the accuracy with hints appears to be, on average, worse than the accuracy without hints. A rank-sum test was computed for accuracy, as the outliers cause the distribution to have a heavy tail. This is one drawback of using hints: users can be misled by the computer's estimate.

          Time (min)              Accuracy (cm)
          Hints     No hints      Hints     No hints
    μ     2.28      4.11          3.91      2.22
    σ     0.76      1.37          4.01      1.06
    p     0.0016 (t-test)         0.52 (rank-sum)

Table 2. Mean and standard deviation of the time versus accuracy data in figure 6, with outliers removed. Once we remove the outliers, we can see that using hints or not does not change the accuracy.

          Time (min)              Accuracy (cm)
          Hints     No hints      Hints     No hints
    μ     2.21      4.08          2.11      2.10
    σ     0.74      1.51          0.88      1.11
    p     0.0072 (t-test)         0.98 (t-test)

Figure 6. Plot of the amount of time users took to localize an object compared to how close they were to its physical position. Note that most guesses are within 4 cm of error, which is a little more than the radius of the cylinder (3.81 cm). Thus most attempts are successful in centering the SensorPod over the cylinder.

Table 3. Some sources of lag in SensorPodVR. Estimated task durations during runtime (ms).

    Unity frame refresh (FPS−1)                           13
    Data gathering rate (voltage and position)            30
    Communications (UDP)                                   4
    Interpolation during localization algorithm, O(n²)    max 200
    Input–output delay                                    1000


was the culprit for the biggest percentage of time taken.

For this interpolation, the worst-case time complexity is O(n²), which occurs when the SensorPod travels around the perimeter of the tank area (figure 7). This is because the computer needs to interpolate over the convex hull of the visited points, and so both the size of the convex hull and the number of visited coordinates influence the time needed. During a typical run in which an object is localized within 500 unique visited coordinates, the main while loop in MATLAB runs on average at 40 Hz. Other sources of lag do not tend to increase the longer the application runs.
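The throttling described above can be sketched as follows; readSensorPod is a hypothetical stand-in for the gantry/sensor read:

    % Main acquisition loop: voltage is sampled every other iteration and the
    % O(n^2) natural-neighbor interpolation runs only every 20th iteration.
    samples = zeros(0, 3);                 % rows of [x, y, vDiff]
    n = 0;
    while running
      n = n + 1;
      if mod(n, 2) == 0
        [pos, v] = readSensorPod();        % hypothetical gantry/sensor read
        samples(end+1, :) = [pos, v];      %#ok<AGROW>
      end
      if mod(n, 20) == 0
        F = scatteredInterpolant(samples(:,1), samples(:,2), ...
                                 abs(samples(:,3)), 'natural');
      end
    end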

Although the lag between user input and SensorPod movement is large, users did not report feeling disoriented. This may be because the lag does not affect Unity's frame refresh rate, so the VE still responds to the user's head orientation sufficiently fast. Some users commented that the lag made it difficult to center the SensorPod correctly over the target towards the end of the trial, which may have resulted in positioning errors of 1–2 cm. However, this error is well below the acceptable error of 4 cm and thus does not affect the conclusion that users benefit from the use of virtual fixtures.

6. Noise and distortion of the electric field

In a real system there will always be some sort of noise or non-ideal factor that decreases the precision of the instrument. This is especially true for the SensorPod, as it relies on a differential voltage reading: any noise or distortion that causes asymmetry in the electric field is picked up as a voltage difference by the differential op-amps across the sensor pair. Sources of noise include electrical noise due to the sensor circuitry, voltage offsets due to surface waves at the air–water boundary, noise due to turbulence and inhomogeneous resistivity of the water, and distortions in the electric field due to the tank walls.

All of these noise factors combine into a drifting voltage offset. This offset is the baseline: the electric image of the tank at a fixed depth without any objects in the water. To reduce offsets due to the baseline, the SensorPod periodically sweeps through a series of 10 random straight-line trajectories in the tank and samples the current position and the perceived voltage differential at each point along the trajectory. This information is interpolated to create a map of the baseline at each point in the workspace (figure 8). Corner points of the reachable workspace are always included in these trajectories. This baseline voltage is subtracted from the readings we obtain during the runtime of SensorPodVR, and is recalculated every 2–3 h to counteract the effects of drift. As the baseline cannot be recalculated while the experiment is running, we also use the VR application to compare the trail color of various points in the workspace before and after each localization attempt, to ensure that the offset has not drastically changed.
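The correction itself reduces to building an interpolant from the sweep samples and subtracting it at runtime; a sketch with illustrative variable names:

    % Build the baseline map from the no-object sweep, then correct readings.
    B = scatteredInterpolant(xb, yb, vb, 'natural');  % baseline sweep samples
    vCorrected = vRaw - B(x, y);                      % drift-corrected reading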

Finally, as mentioned earlier, we restrict the placement of the target object to a smaller zone (−50 cm ⩽ x ⩽ 50 cm, −30 cm ⩽ y ⩽ 30 cm) centered around the origin of the tank, as our algorithm does not yet address the effects of electric field distortion near the tank walls [3, 16].

    7. Conclusions

We have demonstrated the feasibility of operating the SensorPod using a VE to localize a target object with a

Figure 7. Interpolation is only performed once every 20 iterations of the program's main while loop. However, as the number of visited positions grows, the time to interpolate increases sharply and causes a noticeable amount of lag in the application. Note that the interpolation time grows as the square of the number of coordinates visited, but then decreases as the SensorPod begins to fill in the area enclosed by its circuit.


specific set of properties. By visualizing electrosense information from the target as well as virtual fixtures for guidance, SensorPodVR helps users respond effectively to information from unintuitive sensing modalities and decreases the time needed for localization via human-based computation. By combining user intuition with machine computation, SensorPodVR has proven to be an effective early-stage teleoperation system for localization using electrosense.

As SensorPodVR is still in its early stages of development, there are a few directions its development could take. An immediate next step would be to improve the electrolocation algorithm. For simplicity, this demonstration and the corresponding algorithm assume a fixed target shape and material; however, our prior work on estimating the shape, size, distance, and material of targets could be integrated to make the algorithm more general and robust [3, 13]. In addition to providing the user with hints for discrete locations where information is highest within an electrosensory scene, we could hint at the optimal path to sample the space proportional to its expected information density by integrating ergodicity [4].

    Acknowledgments

This work was supported by NSF Grant IIS 1427419.

    References

[1] Whitcomb L L, Yoerger D R, Singh H and Howland J 2000 Advances in underwater robot vehicles for deep ocean exploration: navigation, control, and survey operations Robotics Research: The Ninth Int. Symp. ed J Hollerbach and D Koditschek (London: Springer-Verlag) pp 439–48

[2] Lissmann H W and Machin K E 1958 The mechanism of object location in Gymnarchus niloticus and similar fish J. Exp. Biol. 35 451–86

[3] Bai Y, Snyder J B, Peshkin M A and MacIver M A 2015 Finding and identifying simple objects underwater with active electrosense Int. J. Robot. Res. 34 1255–77

[4] Miller L M, Silverman Y, MacIver M A and Murphey T D 2016 Ergodic exploration of distributed information IEEE Trans. Robot. 32 36–52

[5] Neveln I D, Bai Y, Snyder J B, Solberg J R, Curet O M, Lynch K M and MacIver M A 2013 Biomimetic and bio-inspired robotics in electric fish research J. Exp. Biol. 216 2501–14

[6] MacIver M A and Nelson M E 2001 Towards a biorobotic electrosensory system Auton. Robots 11 263–6

[7] Boyer F, Gossiaux P B, Jawad B, Lebastard V and Porez M 2012 Model for a sensor inspired by electric fish IEEE Trans. Robot. 28 492–505

[8] Alamir M, Omar O, Servagent N, Girin A, Bellemain P, Lebastard V, Gossiaux P B, Boyer F and Bouvier S 2010 On solving inverse problems for electric fish like robots 2010 IEEE Int. Conf. on Robotics and Biomimetics (ROBIO) (Piscataway, NJ: IEEE) pp 1081–86

[9] Lebastard V, Chevallereau C, Amrouche A, Jawad B, Girin A, Boyer F and Gossiaux P B 2010 Underwater robot navigation around a sphere using electrolocation sense and Kalman filter 2010 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (Piscataway, NJ: IEEE) pp 4225–30

[10] Lebastard V, Chevallereau C, Girin A, Servagent N, Gossiaux P B and Boyer F 2013 Environment reconstruction and navigation with electric sense based on a Kalman filter Int. J. Robot. Res. 32 172–88

[11] Ammari H, Boulier T, Garnier J and Wang H 2014 Shape recognition and classification in electro-sensing Proc. Natl Acad. Sci. 111 11652–7

[12] Bai Y, Snyder J, Silverman Y, Peshkin M A and MacIver M A 2012 Sensing capacitance of underwater objects in bio-inspired electrosense 2012 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (Piscataway, NJ: IEEE) pp 1467–72

[13] Bai Y, Neveln I D, Peshkin M and MacIver M A 2016 Enhanced detection performance in electrosense through capacitive sensing Bioinspir. Biomim. 11 055001

[14] Solberg J R, Lynch K M and MacIver M A 2008 Active electrolocation for underwater target localization Int. J. Robot. Res. 27 529–48

[15] Nguyen N, Wiegand I and Jones D L 2009 Sparse beamforming for active underwater electrolocation 2009 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (Piscataway, NJ: IEEE) pp 2033–36

Figure 8. The interpolated map of the baseline (no-object differential voltage readings). The corners have significantly larger voltage differences than the center of the tank. For this reason, we restrict object localization to the center.


[16] Silverman Y, Bai Y, Snyder J B and MacIver M A 2012 Location and orientation estimation with an electrosensing robot 2012 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (Piscataway, NJ: IEEE) pp 4218–23

[17] Smith J R, Garcia E, Wistort R and Krishnamoorthy G 2007 Electric field imaging pretouch for robotic graspers 2007 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (Piscataway, NJ: IEEE) pp 676–83

[18] Uhlmann G 2009 Electrical impedance tomography and Calderón's problem Inverse Problems 25 123011

[19] Chalmers D J 2016 The virtual and the real Disputatio at press

[20] Burdea G C 1999 Invited review: the synergy between virtual reality and robotics IEEE Trans. Robot. Autom. 15 400–10

[21] Jankowski J and Grabowski A 2015 Usability evaluation of VR interface for mobile robot teleoperation Int. J. Hum.–Comput. Interact. 31 882–9

[22] Lin Q and Kuo C 1997 Virtual tele-operation of underwater robots Proc. 1997 IEEE Int. Conf. on Robotics and Automation vol 2 (Piscataway, NJ: IEEE) pp 1022–27

[23] Bejczy A K, Kim W S and Venema S C 1990 The phantom robot: predictive displays for teleoperation with time delay Proc. 1990 IEEE Int. Conf. on Robotics and Automation (Piscataway, NJ: IEEE) pp 546–51

[24] Rosenberg L B 1993 Virtual fixtures: perceptual tools for telerobotic manipulation 1993 IEEE Virtual Reality Annual Int. Symp. (Piscataway, NJ: IEEE) pp 76–82

