
COORDINATED TRACKING OF AN ACOUSTIC SIGNAL BY A TEAM OF AUTONOMOUS UNDERWATER VEHICLES

Darren K. Maczka1, Davide Spinello1, Daniel J. Stilwell∗1, Aditya S. Gadre1, and Wayne L. Neu2

1Bradley Department of Electrical and Computer Engineering, Virginia Tech
2Aerospace and Ocean Engineering Department, Virginia Tech

*Phone: (540) 231-3204, Fax: (540) 231-3362, Email: [email protected]

We describe an approach to decentralized control and distributed data fusion that enables a team of autonomous underwater vehicles to cooperatively and autonomously localize and track a source of acoustic noise, and we report on field experiments that demonstrate the efficacy of our approach. A principal challenge of subsea coordination is the extremely low-bandwidth acoustic communication channel that is available underwater. To address this challenge, and to enable coordination despite extremely limited communication, we have devised a new class of decentralized control algorithms that utilize local models of the environment embedded onboard each AUV. Each AUV implements a centralized control law, but with locally estimated variables in the place of state variables that would normally be communicated between vehicles. We show both in theory and in successful field trials that surprisingly little communication is required to implement meaningful underwater vehicle coordination.

INTRODUCTION

Cooperative sensing of an object is one of the many proposed capabilities for mobile sensor networks. Mobility, coupled with data fusion algorithms and motion control algorithms, enables a team of mobile sensors to adjust their relative geometry in real time to enhance sensing performance. These ideas have been applied to a variety of applications, including target tracking [1, 2, 3, 4], formation and coverage control [5, 6, 7, 8], and environmental tracking and monitoring [9, 10, 11]. We apply these ideas to the problem of tracking and maneuvering relative to an acoustic source using a team of autonomous underwater vehicles equipped with towed hydrophone arrays. Unlike in-air applications, which utilize radio-frequency communications, underwater applications are challenged by the severely bandwidth-limited acoustic subsea communication channel. We address the challenge of limited communication by introducing a new class of data fusion and motion control algorithms that are well-suited for applications in which communication between sensor nodes is infrequent.

We describe initial results from field trials in which a team of autonomous underwater vehicles (AUVs) cooperatively localize a source of acoustic noise and maneuver to minimize the joint localization error of the acoustic source. This is equivalent to directing the AUVs to locations at which their geometry relative to the acoustic source yields improved localization performance. Each AUV tows a small hydrophone array that measures the bearing angle between itself and the acoustic source. To estimate the location of the acoustic source, we utilize a generalized extended Kalman filter for which local estimates from different sensors can be easily fused. Although similar demonstrations have been performed with systems in air, we successfully demonstrate that our approach to decentralized control yields significantly reduced communication requirements and is well-suited to subsea applications.

In works that address control of sensor motion in mobile sensor networks, the estimation problem is commonly solved by assuming that the observation noise is independent of the process state; see for example [1, 12, 3, 4]. However, as pointed out in [13], this assumption is not realistic when bearing-only sensors are employed. In order to account for state-dependent measurement noise, we compute acoustic source location estimates using a generalized extended Kalman filter that is proposed in [14].

Cooperative object localization with limited communication is addressed in [4] by coupling estimation and motion control algorithms with consensus filters in order to achieve asymptotic agreement between agents. A similar idea appears in [9] for environmental tracking applications. In this work we address constrained underwater communication due to low communication bandwidth by considering a new class of distributed motion control algorithms that were developed to operate with only occasional communication between vehicles. To enable coordination despite limited communication, we embed a local observer on board each AUV to estimate states of the other AUVs in the team. The local observer used herein generalizes the observer proposed in [15] to the case of sensor networks with limited communication between vehicles. Each AUV simultaneously implements a centralized control law, but with locally estimated variables in the place of state variables that would be communicated directly from other AUVs in a centralized implementation. Asymptotic convergence of the estimators allows for the implementation of a system which is asymptotically equivalent to the centralized one proposed in [1].

PLATFORMS AND SENSORS

Virginia Tech 475 AUV

Experiments are conducted using the Virginia Tech 475 autonomous underwater vehicle (AUV). The 475 AUV is a small, low-cost, yet fully field-deployable AUV that is utilized for a wide variety of experimental activities and demonstrations. It costs only $10,000 in components and machining labor, and it has hosted a variety of mission sensors and mission software modules that were developed by researchers at Virginia Tech and at other institutions. Specifications of the 475 are listed in Table 1. The 475 was designed to facilitate rapid integration of new payloads. Hardpoints on the bottom of the AUV are available for mounting external payloads, and a watertight bulkhead connector is also available to provide regulated power and a bi-directional data interface to the payload.

The AUV uses the WHOI micromodem for communication and ranging. The synchronous transmission feature of the modem enables an AUV to compute the range to a node in the network whenever the AUV receives a data packet from that node. For the experimental activities described herein, navigation is accomplished by ranging between AUVs using a distributed navigation filter.

Table 1. Specification of the 475 AUV.

Parameter            Specification
Length               34 inches
Diameter             4.75 inches
Mass                 18.3 lbs.
Propulsion/Control   Brushless direct-drive DC motor and four independently controlled flaps
CPU/Software         x86 compatible; Linux OS; database server architecture utilizing TCP/IP client/server connections
Communications       900 MHz RF modem; Wi-Fi with external antenna; WHOI micromodem for acoustic communication
Navigation           GPS; transponder-based acoustic navigation; time-synchronized acoustic navigation for AUV-to-AUV ranging and AUV-to-chase-boat ranging; gyro-stabilized dead reckoning
Endurance            8+ hours at 3 knots

Bearing sensor

A custom towed hydrophone array was designed and fabricated to support experiments in distributed data fusion and decentralized control. Shown in Figure 2, the towed array is a uniform linear array consisting of eight hydrophone transducer elements enclosed in a streamlined package. As the intent was to tow the array with the existing Virginia Tech 475 AUVs, a primary design constraint was the physical size and the hydrodynamic efficiency of the array. To meet these requirements, the array was designed to be as small as possible while still maintaining neutral buoyancy and acoustic transparency. Onboard electronics includes an Ethernet interface for communicating with the AUV, analog signal conditioning, frequency shifting (mixing), and data acquisition.

No data processing occurs on the array hardware; this task is offloaded to the host vehicle. The software written for the host vehicle accepts raw data transmitted over Ethernet and processes it to identify a usable signal. Once a usable signal has been identified, a beamforming algorithm extracts bearing information, which is then fed through a generalized extended Kalman filter [16]. The output of the filter is made available to other processes for tracking and control.

Bearing measurements of an acoustic source by the towed array are most accurate when the acoustic source is broadside to the sensor, and accuracy becomes increasingly poor as the acoustic source appears closer to endfire. In other words, the noise statistics of the bearing sensor are a function of the bearing angle.


Figure 1. Virginia Tech 475 AUV.

Figure 2. Virginia Tech 475 AUV with towed hydrophone array.


Figure 3. Virginia Tech 475 AUV with hydrophone mount.

While this fact is well known, it plays an important role in our work, and we briefly present a noise model for the bearing sensor.

As depicted in Figure 4, the position of the acoustic source is denoted x and the position of sensor i is denoted q_i. The heading angle of sensor i, as might be measured by a magnetic compass, is denoted ψ_i. The vectors e_N and e_E are the north and east basis vectors, respectively. Each sensor obtains a bearing measurement to the source, denoted z_i and defined by

z_i = h(x, q_i, ψ_i) + v_i    (1)

h(x, q_i, ψ_i) = γ(x, q_i, ψ_i) − π/2    (2)

where γ(x, q_i, ψ_i) is the relative angle of the sensor with respect to the source, as shown in Figure 4. The bearing angle measured by the sensor is zero when the acoustic source is broadside to the sensor, thus the term −π/2 in (2). The sensor noise v_i is zero-mean Gaussian,

v_i ∼ N(0, σ_i(x, q_i, ψ_i))

As the sensor obtains discrete-time measurements, we further assume that each sample v_i[k] is independent. Of particular interest is that the covariance of the measurement noise σ_i is dependent on the state of the sensor and the acoustic source. For a uniform linear acoustic array it is shown in [17] that

σ_i = E{v_i^T v_i} = κ g / cos²(h)    (3)


where κ is a constant depending on physical parameters of the sensor array and g is the inverse of the signal-to-noise ratio, which depends on the distance between the acoustic source and the sensor; see [1].
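To make the sensor model concrete, the following sketch evaluates the bearing function of (1)–(2) and the state-dependent variance of (3) for a planar source/sensor pair. The constant kappa and the range-dependent inverse-SNR model g are illustrative placeholders, not values or models from the paper.

```python
import numpy as np

def bearing_model(x, q_i, psi_i, kappa=1e-3, snr_ref=100.0, ref_range=100.0):
    """Bearing h(x, q_i, psi_i) and state-dependent variance sigma_i, eqs. (1)-(3).

    x, q_i : (north, east) positions of the acoustic source and of sensor i.
    psi_i  : heading of sensor i, measured clockwise from north, in radians.
    kappa, snr_ref, ref_range : illustrative placeholders for the array constant
    and the range-dependent SNR model; they are not values from the paper.
    """
    d = x - q_i
    # Angle of the source relative to the sensor heading (gamma in Figure 4).
    gamma = np.arctan2(d[1], d[0]) - psi_i
    # Bearing is zero when the source is broadside to the array, eq. (2).
    h = gamma - np.pi / 2.0
    # Assumed inverse-SNR model: SNR falls off with range (placeholder).
    g = np.linalg.norm(d) / (snr_ref * ref_range)
    # State-dependent variance of eq. (3); it grows without bound toward endfire.
    sigma = kappa * g / np.cos(h) ** 2
    return h, sigma

# A source 200 m due east of a sensor heading north is broadside: h is near zero
# and the variance is at its minimum for that range.
h, sigma = bearing_model(np.array([0.0, 200.0]), np.zeros(2), psi_i=0.0)
```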

Figure 4. Geometric configuration of a bearing-only sensor and acoustic source at time instant k.

Generalized extended Kalman filter for state-dependent sensor noise

The objective is to estimate the source position x[k] using time-varying measurements z_i[k]. Typically this class of source tracking problems is solved by applying an extended Kalman filter, see for example [1] and [18], which assumes that the measurement noise is independent of the state of the system. However, in our case the sensor noise statistics are dependent on the state of the system, which violates the assumptions required for the Kalman filter. To correctly address the state-dependent noise in the measurements, we utilize a generalized extended Kalman filter described in [14] to generate an estimate x̂_i and covariance P_i.

As with the extended Kalman filter, the modified algorithm consists of prediction and update steps. The prediction steps are identical to those of a standard Kalman filter. The local state and covariance update equations are

x̂_i[k|k] = x̂_i[k|k−1] − [ P_i^{-1}[k|k−1] + R_i[k] ]^{-1} s_i[k]    (4a)

s_i[k] = [ −(ζ_i/σ_i) ∇_x^T h_i + (1/(2σ_i)) (1 − ζ_i²/σ_i) ∇_x^T σ_i ]_{x̂_i[k|k−1]}    (4b)

R_i[k] = [ (1/σ_i) ∇_x^T h_i ∇_x h_i + (ζ_i/(2σ_i²)) (∇_x^T h_i ∇_x σ_i + ∇_x^T σ_i ∇_x h_i) + (1/(4σ_i²)) (ζ_i²/σ_i + 1/ln σ_i) ∇_x^T σ_i ∇_x σ_i ]_{x̂_i[k|k−1]}    (4c)

P_i^{-1}[k|k] = P_i^{-1}[k|k−1] + U_i[k]    (4d)

U_i[k] = [ (1/σ_i) ∇_x^T h_i ∇_x h_i + (1/(2σ_i²)) ∇_x^T σ_i ∇_x σ_i ]_{x̂_i[k|k−1]}    (4e)

where ζ_i = z_i − h_i. Note that if σ_i did not depend on x, then the gradient term ∇_x σ_i would be zero and (4) would reduce to the standard extended Kalman filter update equations. Since our nonlinear control formulation involves computing gradients of the covariance with respect to the system state, using a filter that correctly treats state-dependent noise is shown to make a significant difference in the performance of the closed-loop system.
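A minimal sketch of one local measurement update in the spirit of (4) follows, for a 2-D source state with scalar bearing measurements. The helper names and the numerical gradients are ours, and for brevity the Fisher-information term U_i of (4e) stands in for R_i in the state step, a Gauss–Newton style simplification of (4a)–(4c); it is not a verbatim implementation of the filter in [14].

```python
import numpy as np

def gekf_update(x_pred, P_pred, z, h_fn, sigma_fn, eps=1e-6):
    """One local update of a generalized EKF with state-dependent noise.

    x_pred, P_pred : predicted estimate and covariance, x_i[k|k-1], P_i[k|k-1].
    z              : bearing measurement z_i[k].
    h_fn, sigma_fn : callables returning h_i(x) and sigma_i(x); the sensor pose
                     is assumed to be folded into these callables.
    """
    def grad(f, x):
        # Central-difference gradient; the paper's gradients are analytic.
        g = np.zeros_like(x)
        for j in range(x.size):
            dx = np.zeros_like(x); dx[j] = eps
            g[j] = (f(x + dx) - f(x - dx)) / (2 * eps)
        return g

    h, sig = h_fn(x_pred), sigma_fn(x_pred)
    dh, dsig = grad(h_fn, x_pred), grad(sigma_fn, x_pred)
    zeta = z - h                                     # innovation

    # Score s_i of (4b): measurement term plus state-dependent-noise term.
    s = -(zeta / sig) * dh + (1.0 / (2.0 * sig)) * (1.0 - zeta**2 / sig) * dsig

    # Information increment U_i of (4e).
    U = np.outer(dh, dh) / sig + np.outer(dsig, dsig) / (2.0 * sig**2)

    P_inv = np.linalg.inv(P_pred)
    x_upd = x_pred - np.linalg.solve(P_inv + U, s)   # (4a), with U in place of R_i
    P_upd = np.linalg.inv(P_inv + U)                 # (4d)
    return x_upd, P_upd
```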

Data Fusion

Note that a generalized extended Kalman filter (4) is implemented on each sensor. Whenever a sensor receives data from another sensor, the external information is fused to obtain a source position estimate that accounts for the shared data. In [16] data fusion equations are derived by considering the joint probability distribution of measurements taken from different sensors. Using the maximum likelihood approach one obtains equations analogous to (4) that account for shared measurements and predictions. In practice, unknown and unequal biases in the sensors' measurements, along with long periods of no communication, restrict the utility of a rigorously developed approach to data fusion based on joint likelihood functions. Instead, we have found that consensus algorithms work very well in practice for the data fusion problem addressed herein. Consensus algorithms represent a wide class of weighted averaging protocols whose asymptotic behaviors are well known in both deterministic and stochastic settings (see, for example, [19, 20]). Indeed, we find that a true average of estimates is appropriate for our measurements.

Let I_i[k] be the set of indices of all sensors that communicate with vehicle i at time k, and |I_i[k]| the cardinality of the set I_i[k]. Also note that i ∈ I_i[k] for all k. The fused estimate of the source state for vehicle i is obtained as

x̂_i^f[k|k] = (1/|I_i[k]|) Σ_{j ∈ I_i[k]} x̂_j[k|k]    (5)

Note that since i ∈ I_i[k], the operation on the right-hand side of (5) is always well defined. In particular, if at time k vehicle i does not receive any estimates, then x̂_i^f[k|k] = x̂_i[k|k], where x̂_i[k|k] is given by (4).
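The fusion rule (5) is a plain average over whatever estimates arrived at time k, including the local one; a direct transcription (the function name is ours):

```python
import numpy as np

def fuse_estimates(x_local, received):
    """Consensus-style fusion of eq. (5): average the local estimate with any
    estimates received at this time step (received may be an empty list)."""
    stack = [x_local] + list(received)
    return np.mean(stack, axis=0)

# With no incoming packets the fused estimate reduces to the local one.
assert np.allclose(fuse_estimates(np.array([1.0, 2.0]), []), [1.0, 2.0])
```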

Time-varying network topology

Since communication between vehicles is sparse, the corresponding network topology is always disconnected. Networks that are not connected in a frozen-time sense pose technical challenges for the analysis and design of decentralized control systems. To address this challenge, we have shown that it is sufficient that a formally defined average network be connected if the network switches sufficiently fast among topologies. The required switching rate defines our notion of a network time-constant. For a given mission specification, we can compute the network time-constant as a function of the system dynamics and determine how fast agents must communicate. These ideas are discussed more fully and applied to multi-vehicle coordination problems in [20, 21], among others.

OBSERVER-BASED DECENTRALIZED CONTROL

To enable coordination with limited communication, we embed a local observer onboard each AUV to estimate the location, sensor output, and target estimate of every other AUV. Then each AUV independently implements the same centralized control law, but with estimated variables in the place of state variables that would be communicated directly from other vehicles in a centralized implementation. In general, this approach presents significant technical challenges because there is no separation principle for decentralized systems. In other words, the closed-loop system composed of observers and control laws embedded on each AUV may not be stable even if the control law and the observer error dynamics are each individually stable. Thus our principal contribution is to develop methods for simultaneously designing control laws and distributed observers with guaranteed performance properties (e.g., stability).

We consider an acoustic source modeled by the dynamics

x[k + 1] = f_s(x[k]) + v[k]

where v[k] ∼ N(0, σ). The team of N mobile sensors (e.g., AUVs) is modeled by the dynamics

q_i[k + 1] = f_i(q_i[k], u_i[k]),    i ∈ {1, . . . , N}

where u_i is the control signal for sensor i.

We make no assumptions on the fidelity of the sensor (AUV) motion model within our general framework, although we have found that a simple point-mass model, whose states consist of position variables and perhaps velocity variables, is useful. We have also employed a kinematic model that explicitly accounts for the heading of the sensor for situations where heading is important.

The state of the entire system, consisting of N sensors and an acoustic source, is represented

q = [q_s^T, q_1^T, . . . , q_N^T]^T

Each mobile sensor maintains an estimate of the entire system state, denoted q̂_i. Note that q̂_i is an estimate of q and not an estimate of q_i. For notational convenience, we collect all of the state estimates, the outputs, and the inputs (control signals) into the vectors

q̂ = [q̂_1^T, . . . , q̂_N^T]^T

g(q̂) = [g(q̂_1)^T, . . . , g(q̂_N)^T]^T

u(q̂) = [u_1(q̂_1)^T, . . . , u_N(q̂_N)^T]^T


Our task is now to implement control signals u_i(q̂_i) on each mobile sensor so that the system behaves as desired, and to implement observers on each mobile sensor so that each estimate q̂_i asymptotically agrees with all other estimates and with the true system state q. Our approach is to simultaneously minimize an objective function J corresponding to the desired behavior of the team of sensors and an additional quadratic term corresponding to the observers' estimation error. Thus we seek approaches that minimize the functional

F(q, q̂, k) = J(q̂) + (1/2) (g(q̂) − g(q))^T A[k] (g(q̂) − g(q))    (6)

The matrix A[k] models the intermittent communication that occurs between mobile sensors. When there is no communication between two sensors, the corresponding terms in A are zero and the right-most term in (6) has no effect on the corresponding components of q̂. When there is communication, the corresponding terms in A are unity.

The desired behavior of the team is encoded in the functional J. A variety of behaviors are possible, including trajectory (or location) estimate error minimization, formation flying, and multiple source discrimination, among many others. In this paper, we consider minimization of the trajectory estimation error and formation flying relative to the estimated location of the acoustic source.

Gradient descent

Minimization of (6) using gradient descent is discussed in detail in [22], and we provide only an outline here. Each mobile sensor computes the control signal

u(q̂) = −Γ ∇_q̂ J(q̂)

using its estimate of the system state q̂, where Γ is a diagonal matrix of control gains. Sensor i uses only the component u_i of u(q̂) for its local control decision, but uses the remaining components of u(q̂) to update its estimate of the state of other mobile sensors in the absence of communication from other sensors. When communicated information from other sensors is available, the same gradient approach yields

q̂[k + 1] = q̂[k] + T ( u(q̂[k]) + K(q̂[k]) A[k] ( g(q̂[k]) − 1_N ⊗ g(q[k]) ) )

where the observer gain K(q̂) is derived in [22], T is the period between control updates, and 1_N is the N-vector with all entries unity. Note that the gradient approach yields a standard observer structure when information from other sensors is available.
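A compact sketch of the gradient step for one sensor follows, under stated assumptions: J is a user-supplied callable differentiated numerically, a constant scalar gain stands in for the observer gain K(q̂) of [22], and the communicated output of another vehicle is assumed to be (a noisy copy of) its own state block so that it can be compared directly with the corresponding components of q̂. The caller applies only its own block of the returned control vector.

```python
import numpy as np

def gradient_step(q_hat, J, Gamma, T, k_obs, received=None, eps=1e-5):
    """One decentralized gradient-descent step on sensor i.

    q_hat    : sensor i's estimate of the stacked system state.
    J        : callable objective encoding the desired team behavior.
    Gamma    : diagonal matrix of control gains.
    T        : control update period.
    k_obs    : constant observer gain standing in for K(q_hat) (assumption).
    received : dict mapping index slices of q_hat to state blocks that arrived
               this step; empty or None when there is no communication.
    """
    def grad_J(q):
        g = np.zeros_like(q)
        for j in range(q.size):
            dq = np.zeros_like(q); dq[j] = eps
            g[j] = (J(q + dq) - J(q - dq)) / (2 * eps)
        return g

    u = -Gamma @ grad_J(q_hat)       # control for all vehicles, from the estimate
    q_next = q_hat + T * u           # propagate the local model between packets
    for idx, block in (received or {}).items():
        # Standard observer correction on the components that were communicated.
        q_next[idx] += T * k_obs * (block - q_hat[idx])
    return u, q_next
```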

Receding horizon nonlinear optimization

Gradient descent works well in many circumstances and has a small computational requirement. However, state trajectories resulting from gradient descent can become trapped in local minima of the objective function F, and these local minima may represent physically undesirable solutions. To address the phenomenon of local minima, we adopt a receding horizon control (RHC) approach [23]. RHC is a control technique in which finite-time optimal controls are computed for a finite time horizon H. The optimal control is usually recomputed periodically with a period that is much shorter than H.

The objective function is defined

V_H(x, u, k) = Σ_{i=k}^{k+H} L(x[i], u[i]) + Q(x[k + H])    (7)

where L is the integral cost and Q defines the terminal cost. At each control time k, an optimization problem is solved that generates a control sequence u* = {u*[k], . . . , u*[k + H]} minimizing the objective function (7) and attaining the value

V_H^*(x, u) = min { V_H(x, u) | u ∈ U, x ∈ X }    (8)

where U and X are sets that describe constraints on the input and state, respectively. This finite-time constrained optimization problem can be solved numerically using a variety of methods, including the nonmonotone spectral projected gradient method [24].
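The receding horizon step amounts to minimizing (7) over a finite control sequence and applying only its first element. The sketch below illustrates this loop using scipy's general-purpose SLSQP solver purely as a stand-in for the spectral projected gradient method of [24]; the function and parameter names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def rhc_step(x0, f, L, Q, H, u_dim, u_bounds):
    """One receding horizon step: minimize V_H of (7) over u[k..k+H], return u[k].

    f        : one-step dynamics, x[i+1] = f(x[i], u[i]).
    L, Q     : stage and terminal costs of (7).
    H        : horizon length.
    u_bounds : (low, high) box constraints defining the input set U.
    """
    def V(u_flat):
        u_seq = u_flat.reshape(H, u_dim)
        x, cost = x0, 0.0
        for i in range(H):
            cost += L(x, u_seq[i])
            x = f(x, u_seq[i])
        return cost + Q(x)

    bounds = [u_bounds] * (H * u_dim)
    res = minimize(V, np.zeros(H * u_dim), bounds=bounds, method="SLSQP")
    u_star = res.x.reshape(H, u_dim)
    return u_star[0]   # apply only the first control; re-solve at the next step
```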

RESULTS

Field Experiments

Gradient descent was demonstrated in a field trial at Claytor Lake, a 4,500-acre hydroelectric impoundment of the New River near Dublin, VA. For this initial demonstration, conducted in summer 2008, a static acoustic source was utilized, and the objective of the mobile sensors was to maneuver so that the localization error of the acoustic source was minimized. That is, we chose J in (6) to be

J(q) = det U^{-1}(q)    (9)

where U is defined in (4e) and q consists of position variables, since it is assumed that the acoustic source is stationary.
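For the stationary-source case this objective can be evaluated directly from the summed information of (4e) at an estimated source position and candidate sensor poses, which is what the gradient controller differentiates. The sketch below is one such evaluation; h_fn(x, pose) and sigma_fn(x, pose) are assumed helpers in the spirit of the earlier bearing_model sketch, with pose bundling a sensor's position and heading.

```python
import numpy as np

def localization_cost(x_src, poses, h_fn, sigma_fn, eps=1e-6):
    """J(q) of (9): determinant of the inverse of the summed information of (4e),
    evaluated at an estimated source position and candidate sensor poses."""
    def grad(f, x):
        g = np.zeros_like(x)
        for j in range(x.size):
            dx = np.zeros_like(x); dx[j] = eps
            g[j] = (f(x + dx) - f(x - dx)) / (2 * eps)
        return g

    U = np.zeros((x_src.size, x_src.size))
    for pose in poses:
        sig = sigma_fn(x_src, pose)
        dh = grad(lambda x: h_fn(x, pose), x_src)
        dsig = grad(lambda x: sigma_fn(x, pose), x_src)
        U += np.outer(dh, dh) / sig + np.outer(dsig, dsig) / (2 * sig**2)
    return np.linalg.det(np.linalg.inv(U))
```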

Two Virginia Tech 475 AUVs were equipped with towed array sensors to measure the relative bearing to an acoustic source located on a support craft. The vehicles were able to communicate with one another using a WHOI micromodem. The goal of the experiments was to demonstrate that, with only intermittent communication, the two vehicles could control their trajectories to jointly minimize the cost function (6). Intuition, along with inspection of the cost function, suggests that at steady state both vehicles should circle the estimated location of the acoustic source so that their sensors are broadside to the acoustic source and so that the vehicles are separated by π/2 around the circle.

The results of the demonstration are shown in Figure 5. The acoustic source was mounted on a chase boat that moved throughout the experiment due to wind. Its position was within the area enclosed by the solid circle for the duration of the mission. The two vehicles start at the position denoted by the solid triangle and are commanded on an initial straight-line path. At the position denoted by the solid square the towed array sensor reports an initial bearing measurement to the acoustic source, and the gradient descent controller is activated. The curved path around the source position results from the controller driving the relative bearing of each vehicle to π/2. The AUVs choose to do this because the bearing sensor works best when the acoustic source is broadside to the sensor. The dotted lines are visual aids that show same-time positions for the AUVs. It is evident that the velocity of the red AUV increases while the velocity of the blue AUV decreases, and that the relative separation between vehicles increases over the length of the mission.

The commanded speed of each vehicle is plotted in Figure 6. The dots in Figure 6 denote times when communication allowed a fusion event to occur. Blue dots indicate that the blue AUV received a data packet, while red dots indicate that the red AUV received a data packet. Speed commands are in the range of 0.8 m/s to 1.5 m/s. Several fusion events occur at the beginning of the mission, which cause each local observer to achieve a reasonable level of agreement. There are few fusion events in the middle of the mission, but each local observer maintains an estimate of the evolving state of the other vehicle, and the combined actions of the vehicles proceed as expected. It is interesting to note that while no fusion events occur toward the end of the mission, the commanded speeds begin fluctuating to slow the sensor separation, based on information inferred by the local observers indicating that the vehicles are approaching their optimal positions. The magnitude of the acoustic source estimation error is plotted in Figure 7, which shows that the local position estimates converge when fusion events occur. During a period of no communication in the middle of the experiment the two estimates diverge from each other, which is expected due to unmodeled biases present in each sensor.

The vehicles achieve a desired relative bearing and geometry with respect to the target that minimizes the joint localization error of the acoustic source. Importantly, we do not explicitly command one vehicle to speed up while the other slows down. The behavior of each vehicle is an emergent consequence of making joint decisions, aided by a local observer when no communication occurs, to minimize the cost function (6).


Figure 5. Vehicle trajectories; triangles denote initial conditions, squares denote the locations of the vehicles when the bearing angle is first measured, and the black circle indicates the true position of the acoustic source throughout the experiment.

Figure 6. Speed of AUVs during demonstration shown in Figure 5.


Figure 7. Agreement between AUVs on the location estimate of the acoustic source during the demonstration shown in Figure 5.


Receding horizon control example

Receding horizon optimal control has been implemented in simulation with the dual objectives of estimating the trajectory of a moving acoustic source and formation flying relative to the acoustic source. Initially, two AUVs utilize an objective function similar to (9). The functionals in the receding horizon objective function (7) are

L(q, u) = c_1 det U^{-1}(q)    (10)

Q(q) = c_2 det U^{-1}(q)    (11)

where c_1 and c_2 are constants. Once an initial maneuver to reduce trajectory estimation error is complete, the AUVs utilize a new objective function that enables them to perform formation flying relative to the moving acoustic source. Distributed trajectory estimation continues during formation flying, although the AUVs no longer maneuver explicitly to decrease estimation error. During formation flying, the functionals in the receding horizon objective function encode the distance between the two AUVs, the orientation of the formation with respect to the velocity vector of the acoustic source, and the relative angle between the average velocity of the AUVs and the velocity of the acoustic source.

The simulation results show the trajectories of two sensors and a single acoustic source over a time period of three minutes. During the first 60 seconds of the simulation the sensors use the estimation covariance objective function to improve estimation performance. The determinant of the estimation error covariance is shown in Figure 8(b). The initial maneuver, in which the AUVs maneuver to decrease estimation error, appears in Figure 8(a). We see that the sensors maneuver to simultaneously increase their relative spacing with respect to the acoustic source and orient themselves so that the acoustic source is broadside to the sensor.

After 60 seconds the controller switches to the formation objective, as shown in Figure 9. The objectives are that the distance between the two AUVs is 20 m, the orientation of the formation with respect to the velocity of the acoustic source is zero, and the relative angle between the average velocity of the AUVs and the velocity of the acoustic source is π/2. These choices cause the AUVs to maneuver away at a right angle from the estimated path of the acoustic source. For purposes of illustration, the acoustic source changes direction during the time period between Figures 9(b) and (c). Throughout the simulation, we assume that each AUV successfully transmits a data packet every 20 seconds.
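One plausible way to encode these formation objectives as a quadratic stage cost is sketched below; the weights, the quadratic form, and the function name are our illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def formation_stage_cost(q1, q2, v1, v2, v_src, w=(1.0, 1.0, 1.0),
                         d_des=20.0, phi_des=0.0, theta_des=np.pi / 2):
    """Illustrative stage cost penalizing deviation from the three formation
    objectives: inter-vehicle distance, formation orientation relative to the
    source velocity, and relative angle between the average AUV velocity and
    the source velocity."""
    def angle(a, b):
        # Signed angle from vector a to vector b in the plane.
        return np.arctan2(a[0] * b[1] - a[1] * b[0], np.dot(a, b))

    d_err = np.linalg.norm(q1 - q2) - d_des
    phi_err = angle(v_src, q2 - q1) - phi_des              # formation orientation
    theta_err = angle(v_src, 0.5 * (v1 + v2)) - theta_des  # heading of the pair
    return w[0] * d_err**2 + w[1] * phi_err**2 + w[2] * theta_err**2
```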


Figure 8. Simulation of two AUVs maneuvering relative to a moving acoustic source; (a) initial maneuver to minimize trajectory estimation error; (b) determinant of the estimation error covariance with respect to time.


Figure 9. Continuation of the simulation shown in Figure 8, where the two AUVs have switched to formation control relative to a moving acoustic source; (a) t = 60 seconds, (b) t = 90 seconds, (c) t = 120 seconds, (d) t = 150 seconds.


ACKNOWLEDGMENT

The authors are extremely grateful for the support of the Office of Naval Research via grant N000140710434.

REFERENCES

[1] T. H. Chung, J. W. Burdick, and R. M. Murray, "Decentralized motion control of mobile sensing agents in a network," in Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, Florida, May 2006.

[2] T. H. Chung, V. Gupta, J. W. Burdick, and R. M. Murray, "On a decentralized active sensing strategy using mobile sensor platforms in a network," in Proceedings of the IEEE Conference on Decision and Control, Paradise Island, Bahamas, December 2004.

[3] S. Martínez and F. Bullo, "Optimal sensor placement and motion coordination for target tracking," Automatica, vol. 42, no. 4, pp. 661–668, 2006.

[4] P. Yang, R. A. Freeman, and K. M. Lynch, "Distributed cooperative active sensing using consensus filters," in Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, February 2007.

[5] C. Belta and V. Kumar, "Abstraction and control for groups of robots," IEEE Transactions on Robotics, vol. 20, no. 5, pp. 865–875, October 2004.

[6] J. Cortés, S. Martínez, T. Karatas, and F. Bullo, "Coverage control for mobile sensing networks," IEEE Transactions on Robotics and Automation, vol. 20, no. 2, pp. 243–255, 2004.

[7] J. A. Fax and R. M. Murray, "Information flow and cooperative control of vehicle formations," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1465–1476, September 2004.

[8] R. A. Freeman, P. Yang, and K. M. Lynch, "Distributed estimation and control of swarm formation statistics," in Proceedings of the American Control Conference, Minneapolis, Minnesota, USA, June 14–16, 2006, pp. 749–755.

[9] M. Porfiri, D. G. Roberson, and D. J. Stilwell, "Tracking and formation control of multiple autonomous agents: A two-level consensus approach," Automatica, vol. 43, no. 8, pp. 1318–1328, 2007.

[10] S. Simic and S. Sastry, "Distributed environmental monitoring using random sensor networks," in Proceedings of the 2nd International Workshop on Information Processing in Sensor Networks, Palo Alto, CA, 2003, pp. 582–592.

[11] S. Susca, S. Martínez, and F. Bullo, "Monitoring environmental boundaries with a robotic sensor network," in Proceedings of the American Control Conference, 2006, pp. 2072–2077.

[12] A. Farina, "Target tracking with bearings-only measurements," Signal Processing, vol. 78, pp. 61–78, 1999.

[13] A. Logothetis, A. Isaksson, and R. J. Evans, "An information theoretic approach to observer path design for bearings-only tracking," in Proceedings of the 36th Conference on Decision and Control, San Diego, California, December 1997, pp. 3132–3137.

[14] D. Spinello and D. J. Stilwell, "Nonlinear estimation with state-dependent Gaussian observation noise," submitted for publication.

[15] K. Shimizu, "Nonlinear state observers by the gradient descent method," Anchorage, Alaska, USA, pp. 616–622, September 25–27, 2000.

[16] D. Spinello and D. J. Stilwell, "Nonlinear estimation with state-dependent Gaussian observation noise," Virginia Polytechnic Institute and State University, Tech. Rep., 2008. [Online]. Available: http://www.unmanned.vt.edu/discovery/reports.html

[17] A. Gadre, M. Roan, and D. J. Stilwell, "Sensor error model for a uniform linear array," Virginia Polytechnic Institute and State University, Tech. Rep., 2008. [Online]. Available: http://www.unmanned.vt.edu/discovery/reports.html

[18] P. Yang, R. Freeman, and K. Lynch, "Distributed cooperative active sensing using consensus filters," in Proceedings of the IEEE International Conference on Robotics and Automation, 2007, pp. 405–410.

[19] W. Ren, R. Beard, and E. Atkins, "A survey of consensus problems in multi-agent coordination," in Proceedings of the American Control Conference, 2005, pp. 1859–1864, vol. 3.

[20] M. Porfiri and D. Stilwell, "Consensus seeking over random weighted directed graphs," vol. 52, no. 9, pp. 1767–1773, 2007.

[21] M. Porfiri, D. G. Roberson, and D. J. Stilwell, "Fast switching analysis of linear switched systems using exponential splitting," SIAM Journal on Control and Optimization, vol. 47, no. 5, pp. 2582–2597, 2008.

[22] A. S. Gadre, D. K. Maczka, D. Spinello, B. R. McCarter, D. J. Stilwell, W. L. Neu, M. J. Roan, and J. H. Hennage, "Cooperative localization of an acoustic source using towed hydrophone arrays," in Proceedings of the IEEE Workshop on Autonomous Underwater Vehicles, 2008.

[23] H. Michalska and D. Q. Mayne, "Robust receding horizon control of constrained nonlinear systems," vol. 38, no. 11, pp. 1623–1633, 1993.

[24] E. G. Birgin, J. M. Martínez, and M. Raydan, "Nonmonotone spectral projected gradient methods on convex sets," SIAM Journal on Optimization, vol. 10, no. 4, pp. 1196–1211, 2000. [Online]. Available: http://link.aip.org/link/?SJE/10/1196/1
