
Sum of Squares Method for Sensor Network Localization

Jiawang Nie ∗

November 27, 2006

Abstract

We formulate the sensor network localization problem as finding the global minimizer of a quartic polynomial. Then sum of squares (SOS) relaxations can be applied to solve it. However, the general SOS relaxations are too expensive to implement for large problems. Exploiting the special features of this polynomial, we propose a new structured SOS relaxation and discuss its various properties. When distances are given exactly, this SOS relaxation often returns the true sensor locations. At each step of interior point methods solving this SOS relaxation, the complexity is $O(n^3)$, where $n$ is the number of sensors. When the distances have small perturbations, we show that the sensor locations given by this SOS relaxation are accurate within a constant factor of the perturbation error under some technical assumptions. The performance of this SOS relaxation is tested on some randomly generated problems.

Key words: Sensor network localization, graph realization, distance geometry, polynomials, semidefinite program (SDP), sum of squares (SOS), error bound.

1 Introduction

An important problem in communication and information theory that has received much attention recently is sensor network localization. The basic description of this problem is as follows. For a sequence of unknown vectors (also called sensors) $x_1, x_2, \cdots, x_n$ in Euclidean space $\mathbb{R}^d$ ($d = 1, 2, \cdots$), we need to find their coordinates such that the distances (not necessarily all) between these sensors and the distances (not necessarily all) to other fixed sensors $a_1, \cdots, a_m$ (also called anchors) are equal to some given numbers. To be more specific, let $\mathcal{A} = \{(i,j) \in [n] \times [n] : \|x_i - x_j\|_2 = d_{ij}\}$ and $\mathcal{B} = \{(i,k) \in [n] \times [m] : \|x_i - a_k\|_2 = e_{ik}\}$, where $d_{ij}, e_{ik}$ are given distances and $[n] = \{1, 2, \cdots, n\}$. Then the problem of sensor network localization is to find vectors $\{x_1, x_2, \cdots, x_n\}$ such that $\|x_i - x_j\|_2 = d_{ij}$ for every $(i,j) \in \mathcal{A}$ and $\|x_i - a_k\|_2 = e_{ik}$ for every $(i,k) \in \mathcal{B}$. Denote by $G(\mathcal{A})$ the graph whose nodes are $[n]$ and whose edge set is $\mathcal{A}$. For simplicity of notation, let $D = (d_{ij}, e_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$ be the given distance data.

Sensor network localization is also known as graph realization or distance geometry [7]. Given a graph $G = (V, E)$, graph realization is to assign each vertex a vector such that the distances between these vectors equal the given numbers associated with the edges. The distance geometry problem is to find the atom positions of a molecule so that the distances between some atom pairs equal some given numbers. Distance geometry problems arise in the determination of protein structure.

∗Institute for Mathematics and its Applications (IMA), University of Minnesota, 400 Lind Hall, 207 Church SE, Minneapolis, MN 55455. Email: [email protected]


The sensor locations can be determined by solving the polynomial system

\[
\|x_i - x_j\|_2^2 = d_{ij}^2, \quad (i,j) \in \mathcal{A},
\qquad
\|x_i - a_k\|_2^2 = e_{ik}^2, \quad (i,k) \in \mathcal{B}.
\]
For a small number of sensors, it might be possible to compute sensor locations by solving these equations. We refer to [29] for methods of solving polynomial equations. However, solving a polynomial system can be very expensive when there are many sensors. Furthermore, this polynomial system may be inconsistent if the distances $d_{ij}$ or $e_{ik}$ have errors, which often occur in practice. In general, the sensor network localization problem is NP-hard ([1, 18, 26]), even for the simplest case $d = 1$.

Sensor network localization can also be formulated in terms of global optimization. Obviously, $x_1, \cdots, x_n$ are true sensor locations if and only if the optimal value of the problem
\[
\min_{x_1, \cdots, x_n \in \mathbb{R}^d}\ \sum_{(i,j)\in\mathcal{A}} \bigl| \|x_i - x_j\|_2^2 - d_{ij}^2 \bigr| + \sum_{(i,k)\in\mathcal{B}} \bigl| \|x_i - a_k\|_2^2 - e_{ik}^2 \bigr| \tag{1.1}
\]
is zero. So the localization problem is equivalent to finding the global minimizer of this problem. This optimization problem is nonsmooth and nonconvex, and it is also NP-hard to find global solutions, since it is equivalent to the sensor network localization problem. So approximation methods are of great interest. Recently, semidefinite programming (SDP) and second-order cone programming (SOCP) relaxations have been proposed to solve this problem approximately.

The basic idea of the SDP relaxation is to treat the quadratic terms as new variables and add one linear matrix inequality (LMI) they satisfy. Let $X = [x_1, x_2, \cdots, x_n] \in \mathbb{R}^{d\times n}$ be the matrix variable of sensor locations. Notice the identity
\[
\|x_i - x_j\|_2^2 = h_{ij}^T X^T X\, h_{ij}
\]
where $h_{ij} = e_i - e_j$. Here $e_i$ is the $i$-th standard unit vector in $\mathbb{R}^n$. By introducing a new variable $Y$, problem (1.1) becomes
\[
\min_{X\in\mathbb{R}^{d\times n}}\ \sum_{(i,j)\in\mathcal{A}} \bigl| h_{ij}^T Y h_{ij} - d_{ij}^2 \bigr|
+ \sum_{(i,k)\in\mathcal{B}} \left|
\begin{bmatrix} e_i \\ -a_k \end{bmatrix}^{\!T}
\begin{bmatrix} Y & X^T \\ X & I_d \end{bmatrix}
\begin{bmatrix} e_i \\ -a_k \end{bmatrix} - e_{ik}^2 \right|
\quad \text{s.t.}\quad Y = X^T X.
\]
Here the quadratic constraint $Y = X^T X$ is nonconvex. The SDP relaxation replaces this nonconvex equality by the convex inequality $Y \succeq X^T X$, which is the same as
\[
\begin{bmatrix} Y & X^T \\ X & I_d \end{bmatrix} \succeq 0.
\]

The SDP relaxation of (1.1) is

\[
\min_{X\in\mathbb{R}^{d\times n},\ Y\in\mathbb{R}^{n\times n}}\ \sum_{(i,j)\in\mathcal{A}} \bigl| h_{ij}^T Y h_{ij} - d_{ij}^2 \bigr|
+ \sum_{(i,k)\in\mathcal{B}} \left|
\begin{bmatrix} e_i \\ -a_k \end{bmatrix}^{\!T}
\begin{bmatrix} Y & X^T \\ X & I_d \end{bmatrix}
\begin{bmatrix} e_i \\ -a_k \end{bmatrix} - e_{ik}^2 \right| \tag{1.2}
\]
\[
\text{s.t.}\quad \begin{bmatrix} Y & X^T \\ X & I_d \end{bmatrix} \succeq 0. \tag{1.3}
\]
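For readers who want to experiment, below is a minimal sketch of the relaxation (1.2)-(1.3) written with the CVXPY modeling package in Python. The tiny three-sensor instance, all variable names, and the data layout are illustrative assumptions of ours, not data or code from this paper.

```python
import numpy as np
import cvxpy as cp

# Toy instance: 3 sensors, 2 anchors, a few edges (made-up data, for illustration only).
d, n = 2, 3
anchors = np.array([[1.0, 0.0], [0.0, 1.0]])      # rows a_1, a_2
A_edges = {(0, 1): 1.0, (1, 2): 1.2}              # (i, j) -> d_ij
B_edges = {(0, 0): 0.9, (2, 1): 1.1}              # (i, k) -> e_ik

# Z plays the role of the block matrix [[Y, X^T], [X, I_d]] in (1.3).
Z = cp.Variable((n + d, n + d), PSD=True)
Y, X = Z[:n, :n], Z[n:, :n]
constraints = [Z[n:, n:] == np.eye(d)]            # pin the lower-right block to I_d

terms = []
for (i, j), dij in A_edges.items():
    h = np.zeros(n); h[i], h[j] = 1.0, -1.0       # h_ij = e_i - e_j
    terms.append(cp.abs(h @ Y @ h - dij ** 2))
for (i, k), eik in B_edges.items():
    v = np.zeros(n + d); v[i] = 1.0; v[n:] = -anchors[k]
    terms.append(cp.abs(v @ Z @ v - eik ** 2))

prob = cp.Problem(cp.Minimize(sum(terms)), constraints)
prob.solve()
print("relaxation value:", prob.value)
# Columns of X.value are the relaxed sensor locations (not necessarily unique
# for such a sparsely connected toy instance).
print(X.value)
```

Declaring the whole block matrix as a single PSD variable is one convenient way to impose (1.3); the blocks $Y$ and $X$ are then just slices of it.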

We refer to [8, 9, 15, 27] for more details about SDP relaxation methods.

The SOCP relaxation comes about in a similar way. By introducing new variables $t_{ij}$ and $t_{ik}$, problem (1.1) becomes
\[
\min_{x_i,\, t_{ij},\, t_{ik}}\ \sum_{(i,j)\in\mathcal{A}} \bigl| t_{ij} - d_{ij}^2 \bigr| + \sum_{(i,k)\in\mathcal{B}} \bigl| t_{ik} - e_{ik}^2 \bigr|
\quad \text{s.t.}\quad t_{ij} = \|x_i - x_j\|_2^2,\ \ t_{ik} = \|x_i - a_k\|_2^2.
\]


Replacing the nonconvex equalities $t_{ij} = \|x_i - x_j\|_2^2$ ($t_{ik} = \|x_i - a_k\|_2^2$) by the convex inequalities $t_{ij} \geq \|x_i - x_j\|_2^2$ ($t_{ik} \geq \|x_i - a_k\|_2^2$), we get the SOCP relaxation
\[
\min_{x_i,\, t_{ij},\, t_{ik}}\ \sum_{(i,j)\in\mathcal{A}} \bigl| t_{ij} - d_{ij}^2 \bigr| + \sum_{(i,k)\in\mathcal{B}} \bigl| t_{ik} - e_{ik}^2 \bigr| \tag{1.4}
\]
\[
\text{s.t.}\quad t_{ij} \geq \|x_i - x_j\|_2^2, \tag{1.5}
\]
\[
\phantom{\text{s.t.}\quad} t_{ik} \geq \|x_i - a_k\|_2^2. \tag{1.6}
\]
Usually the SOCP relaxation (1.4)-(1.6) is weaker than the SDP relaxation (1.2)-(1.3), but (1.4)-(1.6) is easier to solve. We refer to [11, 30] for work in this area.

One common feature of the SDP and SOCP relaxations is that the computed sensor locations are very inaccurate when the solution of the localization problem is not unique, because many numerical schemes for SDP or SOCP, such as primal-dual interior point methods, often return the analytic center of the solution set. The motivation of this paper is to seek other efficient approaches which can find multiple (if possible) locations.

Now let us come back to the equivalent optimization problem (1.1). Obviously, only the global solutions to (1.1) give true sensor locations, while local solutions do not. In general, there are no efficient algorithms for finding global minimizers of general nonlinear functions. However, if the objective function is a multivariate polynomial, sum of squares (SOS) relaxations can be applied to solve the problem approximately, and in many situations the global minimizers can be found. We propose to apply SOS relaxations to solve the sensor network localization problem.

In problem (1.1), the objective is a sum of absolute values, and hence not a polynomial. So the SOS methods are not applicable. However, if we replace the absolute values by squares, we obtain a new optimization problem

\[
f^* := \min_{X\in\mathbb{R}^{d\times n}}\ f(X) :=
\sum_{(i,j)\in\mathcal{A}} \bigl( \|x_i - x_j\|_2^2 - d_{ij}^2 \bigr)^2
+ \sum_{(i,k)\in\mathcal{B}} \bigl( \|x_i - a_k\|_2^2 - e_{ik}^2 \bigr)^2. \tag{1.7}
\]

Here $f(X)$ can be thought of as the squared $L_2$ norm of the error for a given guess $X$. The good property is that $f(X)$ is now a polynomial function of degree four. Therefore, the SOS methods are applicable.
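As a small illustration of (1.7), here is a sketch (Python/NumPy) of a routine that evaluates $f(X)$ for a candidate $X$; the data layout (dictionaries keyed by index pairs) is our own choice for the example, not notation from the paper.

```python
import numpy as np

def f_value(X, anchors, A_edges, B_edges):
    """Evaluate the quartic objective f(X) of (1.7).

    X: d-by-n array (sensor locations as columns), anchors: d-by-m array,
    A_edges: {(i, j): d_ij}, B_edges: {(i, k): e_ik}.
    """
    val = 0.0
    for (i, j), dij in A_edges.items():
        val += (np.sum((X[:, i] - X[:, j]) ** 2) - dij ** 2) ** 2
    for (i, k), eik in B_edges.items():
        val += (np.sum((X[:, i] - anchors[:, k]) ** 2) - eik ** 2) ** 2
    return val
```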

However, if we apply the general SOS methods directly, the computation is very expensive and not practical for large problems. The main contribution of this paper is to propose a sparse SOS relaxation based on the special structure of $f(X)$, which is much easier to solve. The properties and implementation of this particular relaxation are discussed. Usually the true sensor locations can be returned. When the distances are perturbed, the solutions returned by this SOS relaxation are shown to be accurate within a factor of the perturbation error.

The notations used in this paper are: $\mathbb{R}$ denotes the set of real numbers; $\mathbb{N}$ denotes the set of nonnegative integers; $A \succeq 0$ ($\succ 0$) means matrix $A$ is symmetric positive semidefinite (definite); $A^T$ denotes the transpose of matrix $A$; $\mathcal{S}^N_+$ denotes the cone of symmetric positive semidefinite matrices of length $N$; for a finite set $S$, $|S|$ denotes its cardinality; $\equiv$ denotes an identity; for any $\beta = (\beta_1, \cdots, \beta_d) \in \mathbb{N}^d$ and $x_i = (x_{1i}, \cdots, x_{di}) \in \mathbb{R}^d$, $x_i^\beta = x_{1i}^{\beta_1} \cdots x_{di}^{\beta_d}$; $|\beta| = \beta_1 + \cdots + \beta_d$; $\mathrm{supp}(p)$ denotes the support of a polynomial $p(x)$; for any $\alpha = (\alpha_1, \cdots, \alpha_n)$ with each $\alpha_i \in \mathbb{N}^d$ and $X = [\,x_1, \cdots, x_n\,] \in \mathbb{R}^{d\times n}$, $X^\alpha = \prod_{1\leq i\leq n} x_i^{\alpha_i}$ and $|\alpha| = |\alpha_1| + \cdots + |\alpha_n|$; for any $x \in \mathbb{R}^n$, $\|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}$; for any $X = (X_{ij}) \in \mathbb{R}^{d\times n}$, $\|X\|_F = \sqrt{\sum_{ij} X_{ij}^2}$. For any two given symmetric matrices $W$ and $S$, $W \bullet S$ denotes their inner product, i.e., $W \bullet S = \sum_{i,j} W_{ij} S_{ij} = \mathrm{trace}(WS)$.

This paper is organized as follows. Section 2 gives a brief overview of SOS relaxations for minimizing polynomials; Section 3 proposes a structured SOS relaxation to minimize the polynomial (1.7); Section 4 derives an error bound for the proposed SOS relaxation when distances have errors; Section 5 presents some numerical simulations; lastly, Section 6 draws some conclusions.

2 Sum of squares (SOS) method

Recently, SOS relaxation has received considerable attention in the global optimization of multivariate polynomial functions. In this section, we give a very brief introduction to this area. We refer to [17, 19, 20, 21] for more details.

2.1. SOS polynomials

A polynomial $p(z)$ in $z = (z_1, \cdots, z_N)$ is said to be SOS if $p(z) \equiv \sum_i p_i(z)^2$ for some polynomials $p_i(z)$. Obviously, if $p(z)$ is SOS, then $p(z)$ is nonnegative, i.e., $p(z) \geq 0$ for all $z \in \mathbb{R}^N$. For instance, the polynomial
\[
z_1^4 + z_2^4 + z_3^4 + z_4^4 - 4 z_1 z_2 z_3 z_4
\equiv \frac{1}{3}\Bigl\{
(z_1^2 - z_2^2 + z_3^2 - z_4^2)^2 + (z_1^2 + z_2^2 - z_3^2 - z_4^2)^2 + (z_1^2 - z_2^2 - z_3^2 + z_4^2)^2
+ 2(z_1 z_4 - z_2 z_3)^2 + 2(z_1 z_2 - z_3 z_4)^2 + 2(z_1 z_3 - z_2 z_4)^2
\Bigr\}
\]

is SOS. This identity immediately implies that
\[
z_1^4 + z_2^4 + z_3^4 + z_4^4 - 4 z_1 z_2 z_3 z_4 \geq 0, \quad \forall\, (z_1, z_2, z_3, z_4) \in \mathbb{R}^4,
\]

which is an arithmetic-geometric mean inequality. However, nonnegative polynomials are not necessarily SOS. In other words, the set of SOS polynomials (which is a cone) is properly contained in the set of nonnegative polynomials (which is a larger cone). The process of approximating nonnegative polynomials by SOS polynomials is called SOS relaxation.

The advantage of SOS polynomials over nonnegative polynomials is that it is more tractable to check whether a polynomial is SOS. Testing whether a polynomial is SOS is equivalent to testing the feasibility of some SDP [20, 21], for which efficient numerical methods exist. To illustrate this, suppose the polynomial $p(z)$ has degree $2\ell$ (SOS polynomials must have even degree). Then $p(z)$ is SOS if and only if [20, 21] there exists a symmetric matrix $W \succeq 0$ such that

\[
p(z) \equiv m_\ell(z)^T\, W\, m_\ell(z)
\]
where $m_\ell(z)$ is the column vector of monomials up to degree $\ell$. For instance,
\[
m_2(z_1, z_2) = [\, 1,\ z_1,\ z_2,\ z_1^2,\ z_1 z_2,\ z_2^2 \,]^T.
\]

As is well known, the number of monomials in $z$ with degrees up to $\ell$ is $\binom{N+\ell}{\ell}$. Thus the size of the matrix $W$ is $\binom{N+\ell}{\ell}$. This number can be very large. However, for fixed $\ell$ (e.g., $\ell = 2$), $\binom{N+\ell}{\ell}$ is polynomial in $N$. On the other hand, it is NP-hard (in terms of $N$) to tell whether a polynomial is nonnegative whenever $2\ell \geq 4$ (even when $\ell$ is fixed) [17].

2.2. SOS relaxation in polynomial optimization

Let $g(z) = \sum_{\alpha\in\mathcal{G}} g_\alpha z^\alpha$ be a polynomial in $z$. Here $\mathcal{G}$ is the support of $g(z)$. Consider the global optimization problem
\[
g^* := \min_{z\in\mathbb{R}^N}\ g(z).
\]


This problem is NP-hard when $\deg(g) \geq 4$. Recently, SOS relaxation has attracted much attention for solving this problem. The standard SOS relaxation is
\[
g^*_{sos} := \max_{\gamma}\ \gamma \quad \text{s.t.}\quad g(z) - \gamma \ \text{is SOS}.
\]
Obviously we have $g^*_{sos} \leq g^*$. In practice, SOS relaxation provides very good approximations, and often gives the exact global minimum, i.e., $g^*_{sos} = g^*$, even though theoretically there are many more nonnegative polynomials than SOS polynomials [6]. The SOS relaxation is said to be exact if $g^*_{sos} = g^*$. In terms of SDP, the SOS relaxation can also be written as
\[
g^*_{sos} := \max_{\gamma, W}\ \gamma \tag{2.1}
\]
\[
\text{s.t.}\quad g(z) - \gamma \equiv m_\ell(z)^T W\, m_\ell(z), \tag{2.2}
\]
\[
\phantom{\text{s.t.}\quad} W \succeq 0, \tag{2.3}
\]
where $2\ell = \deg(g)$. Notice that (2.2) is an identity in $z$. The above program is convex in $(\gamma, W)$. A lower bound $g^*_{sos}$ can be computed by solving the resulting SDP. It can be shown [17] that the dual of (2.1)-(2.3) is
\[
g^*_{mom} := \min_{y}\ \sum_{\alpha\in\mathcal{G}} g_\alpha y_\alpha \tag{2.4}
\]
\[
\text{s.t.}\quad M_\ell(y) \succeq 0, \tag{2.5}
\]
\[
\phantom{\text{s.t.}\quad} y_{(0,\cdots,0)} = 1. \tag{2.6}
\]

Here $M_\ell(y)$ is the moment matrix generated by the moment vector $y = (y_\alpha)$. Each $y_\alpha$ is called the $\alpha$-th moment. The rows and columns of the moment matrix $M_\ell(y)$ are indexed by integer vectors in $\mathbb{N}^N$, and each entry is defined as $M_\ell(y)(\alpha, \beta) := y_{\alpha+\beta}$ for all $|\alpha|, |\beta| \leq \ell$. For instance, when $\ell = 2$ and $N = 2$, the vector
\[
y = [\, 1,\ y_{1,0},\ y_{0,1},\ y_{2,0},\ y_{1,1},\ y_{0,2},\ y_{3,0},\ y_{2,1},\ y_{1,2},\ y_{0,3},\ y_{4,0},\ y_{3,1},\ y_{2,2},\ y_{1,3},\ y_{0,4} \,]
\]
has moment matrix
\[
M_2(y) = \begin{bmatrix}
1 & y_{1,0} & y_{0,1} & y_{2,0} & y_{1,1} & y_{0,2} \\
y_{1,0} & y_{2,0} & y_{1,1} & y_{3,0} & y_{2,1} & y_{1,2} \\
y_{0,1} & y_{1,1} & y_{0,2} & y_{2,1} & y_{1,2} & y_{0,3} \\
y_{2,0} & y_{3,0} & y_{2,1} & y_{4,0} & y_{3,1} & y_{2,2} \\
y_{1,1} & y_{2,1} & y_{1,2} & y_{3,1} & y_{2,2} & y_{1,3} \\
y_{0,2} & y_{1,2} & y_{0,3} & y_{2,2} & y_{1,3} & y_{0,4}
\end{bmatrix}.
\]

For the SOS relaxation (2.1)-(2.3) and its dual (2.4)-(2.6), strong duality holds [17], i.e., $g^*_{sos} = g^*_{mom}$. Hence $g^*_{mom}$ is also a lower bound for the global minimum $g^*$ of $g(z)$.

Now let us show how to extract minimizer(s) from optimal solutions to (2.4)-(2.6). Let $y^*$ be one optimal solution. If the moment matrix $M_\ell(y^*)$ has rank one, then there exists a vector $w$ such that $M_\ell(y^*) = w w^T$. Normalize $w$ so that $w_{(0,\cdots,0)} = 1$, and set $z^* = w(2 : N+1)$. Then $M_\ell(y^*) = w w^T$ immediately implies that $y^* = m_\ell(z^*)$, i.e., $y^*_\alpha = (z^*)^\alpha$. So $g^*_{mom} = g(z^*)$. This says that a lower bound of $g(z)$ is attained at the point $z^*$, so $z^*$ is a global minimizer. When the moment matrix $M_\ell(y^*)$ has rank more than one, the process described above does not work. However, if $M_\ell(y^*)$ satisfies the so-called flat extension condition
\[
\operatorname{rank} M_k(y^*) = \operatorname{rank} M_{k+1}(y^*)
\]

for some $0 \leq k \leq \ell - 1$, we can extract more than one minimizer (in this case the global solution is not unique). When the flat extension condition is met, it can be shown [10] that there exist distinct vectors $u_1, \cdots, u_r$ such that
\[
M_k(y^*) = \lambda_1\, m_k(u_1) m_k(u_1)^T + \cdots + \lambda_r\, m_k(u_r) m_k(u_r)^T
\]
for some $\lambda_i > 0$ with $\sum_{i=1}^{r} \lambda_i = 1$. Here $r = \operatorname{rank} M_k(y^*)$. The set $\{u_1, \cdots, u_r\}$ is called an $r$-atomic representing support of the moment matrix $M_k(y^*)$. The vectors $u_1, \cdots, u_r$ can be shown to be global minimizers of the polynomial $g(z)$. These minimizers can be obtained numerically by solving a particular eigenvalue problem. We refer to [10] for flat extension conditions in moment problems and [14] for extracting minimizers.

2.3. Exploiting sparsity in SOS relaxation

As mentioned in Subsection 2.1, the length of the matrix $W$ in the SOS relaxation is $\binom{N+\ell}{\ell}$, which can be huge if either $N$ or $\ell$ is large. So the SOS relaxation is expensive to solve when either $N$ or $\ell$ is large. However, if the polynomial $g(z)$ is sparse, i.e., its support $\mathcal{G} = \mathrm{supp}(g)$ is small, the size of the resulting SDP can be reduced significantly. Without loss of generality, assume $(0, \cdots, 0) \in \mathcal{G}$. Then $\mathrm{supp}(g) = \mathrm{supp}(g - \gamma)$ for any number $\gamma$.

Suppose $g(z) - \gamma = \sum_i \phi_i(z)^2$ is an SOS decomposition. Then by Theorem 1 in [24],
\[
\mathrm{supp}(\phi_i) \subset \mathcal{G}_0 := \text{the convex hull of } \tfrac{1}{2}\mathcal{G}_e,
\]
where $\mathcal{G}_e = \{\alpha \in \mathcal{G} : \alpha \text{ is an even integer vector}\}$. The size of the set $\mathcal{G}_0$ can be further reduced [16, 31]. Here we briefly describe the technique proposed by Waki et al. [31].

For a polynomial $g(z)$, define its associated graph $G = ([N], E)$ such that $(i,j) \in E$ if and only if $z_i z_j$ appears in some monomial of $g(z)$. [31] proposed to represent $g(z) - \gamma$ as
\[
g(z) - \gamma \equiv \sum_{i=1}^{K} s_i(z), \quad \text{each } s_i(z) \text{ being SOS with } \mathrm{supp}(s_i) \subset C_i,
\]
where $\{C_1, C_2, \cdots, C_K\}$ is the set of all maximal cliques of the graph $G$ (such a representation need not exist even when $g(z) - \gamma$ is SOS). Usually it is difficult to find all the maximal cliques of a general graph $G$. [31] proposed to replace $\{C_1, C_2, \cdots, C_K\}$ by the set of all maximal cliques of one chordal extension of $G$. We refer to [5] for properties of chordal graphs; for chordal graphs, there are efficient methods for finding all maximal cliques. The chordal extension essentially uses sparse symbolic Cholesky factorization. Let $R$ be the correlative sparsity pattern (csp) matrix of $g(z)$, i.e., $R$ is a random symmetric matrix such that $R(i,j) = 0$ whenever $z_i z_j$ ($i \neq j$) does not appear in any monomial of $g(z)$. Using the csp matrix, [31] proposed to find a chordal extension $G'$ of $G$ whose maximal cliques can be found efficiently.

There is much work on exploiting sparsity in SOS relaxations for minimizing polynomials. We refer to [12, 16, 22, 31] and the references therein.

3 SOS relaxations for sensor network localization

This section discusses how to apply SOS relaxations to solve problem (1.7). The variable is $X = (x_{ji})_{1\leq j\leq d,\ 1\leq i\leq n}$, where each $x_i = [\,x_{1i}, \cdots, x_{di}\,]^T$ is the $i$-th sensor location to be computed. Since $f(X)$ is a quartic polynomial, SOS relaxations can be applied directly to find global minimizers. The standard SOS relaxation for (1.7) is
\[
f^*_{sos} := \max\ \gamma \tag{3.1}
\]
\[
\text{s.t.}\quad f(X) - \gamma \equiv m_2(X)^T W\, m_2(X), \tag{3.2}
\]
\[
\phantom{\text{s.t.}\quad} W \succeq 0. \tag{3.3}
\]


Here $m_2(X)$ stands for the vector of all monomials in $X$ (in graded reverse lexicographic order) of degrees up to two. Its dual is
\[
f^*_{mom} := \min_{y}\ \sum_{\alpha\in\mathrm{supp}(f)} f_\alpha y_\alpha \tag{3.4}
\]
\[
\text{s.t.}\quad M_2(y) \succeq 0, \tag{3.5}
\]
\[
\phantom{\text{s.t.}\quad} y_{(0,\cdots,0)} = 1. \tag{3.6}
\]
By strong duality, we have $f^*_{sos} = f^*_{mom} \leq f^*$. The total number of decision variables in problem (1.7) is $d \cdot n$. The size of the matrices $W$ and $M_2(y)$ is $\binom{nd+2}{2} = O(n^2 d^2)$, which is polynomial in $n$ and $d$. There are polynomial time algorithms (e.g., interior point methods) to solve (3.1)-(3.6). On the other hand, problem (1.7) is NP-hard, so theoretically we cannot expect the SOS relaxation (3.1)-(3.3) to solve (1.7) correctly for every instance. But in practice, SOS relaxations usually provide very good approximations.

Theorem 3.1. If the sensor network localization problem admits a solution for the given data $D = (d_{ij}, e_{ik})$, then the SOS relaxation is exact, i.e., $f^*_{sos} = f^* = 0$.

Proof. Since the problem admits a solution, there exists $X = [x_1, \cdots, x_n] \in \mathbb{R}^{d\times n}$ such that
\[
\|x_i - x_j\|_2 = d_{ij},\ \forall\,(i,j)\in\mathcal{A}, \qquad \|x_i - a_k\|_2 = e_{ik},\ \forall\,(i,k)\in\mathcal{B}.
\]
Thus $f(X) = 0$ and then $f^* \leq 0$. Since $f(X)$ is already SOS, we have $f^*_{sos} \geq 0$. Therefore $f^*_{sos} = 0$ since $f^*_{sos} \leq f^*$. $\Box$

Remark 3.2. When the distance data $D$ admit a solution, the above conclusion that $f^*_{sos} = f^* = 0$ is trivial. Does our SOS relaxation then make any sense? In this case, the obtained lower bound $f^*_{sos} = 0$ is not interesting. However, the solution $y^*$ to the dual problem (3.4)-(3.6) can help find the sensor locations $x_1, x_2, \cdots, x_n$, as will be shown in the following example.

Example 3.3. Consider a simple example studied in [30], with $n = 1$, $d = 2$, $m = 2$, $\mathcal{A} = \emptyset$, $\mathcal{B} = \{(1,1), (1,2)\}$, and $e_{11} = e_{12} = 2$. The anchors are $(\pm 1, 0)$. For this problem, the SDP and SOCP relaxations return the origin $(0,0)$, while the true solutions are obviously $(0, \pm\sqrt{3})$. Now we formulate it as a polynomial optimization problem and then apply the SOS relaxation to solve it. Problem (1.7) now becomes
\[
\min_{x_{11}, x_{21}}\ p(x_{11}, x_{21}) := \bigl((x_{11}+1)^2 + x_{21}^2 - 4\bigr)^2 + \bigl((x_{11}-1)^2 + x_{21}^2 - 4\bigr)^2.
\]
The SOS relaxation (3.1)-(3.3) is
\[
\max\ \gamma \quad \text{s.t.}\quad p(x_{11}, x_{21}) - \gamma \equiv m_2(x_{11}, x_{21})^T W\, m_2(x_{11}, x_{21}),\quad W \succeq 0.
\]
Its dual problem is
\[
\min_{y}\ 18 - 4 y_{2,0} - 12 y_{0,2} + 2 y_{4,0} + 4 y_{2,2} + 2 y_{0,4}
\quad \text{s.t.}\quad
\begin{bmatrix}
1 & y_{1,0} & y_{0,1} & y_{2,0} & y_{1,1} & y_{0,2} \\
y_{1,0} & y_{2,0} & y_{1,1} & y_{3,0} & y_{2,1} & y_{1,2} \\
y_{0,1} & y_{1,1} & y_{0,2} & y_{2,1} & y_{1,2} & y_{0,3} \\
y_{2,0} & y_{3,0} & y_{2,1} & y_{4,0} & y_{3,1} & y_{2,2} \\
y_{1,1} & y_{2,1} & y_{1,2} & y_{3,1} & y_{2,2} & y_{1,3} \\
y_{0,2} & y_{1,2} & y_{0,3} & y_{2,2} & y_{1,3} & y_{0,4}
\end{bmatrix} \succeq 0.
\]


Primal-dual interior point methods can be used to solve these two SDPs simultaneously. Here we use the software SeDuMi [28] to solve them. The optimal solution is $\gamma^* = -2.21 \cdot 10^{-9}$ and
\[
W^* = \begin{bmatrix}
18.0000 & 0.0000 & -0.0000 & -3.4082 & 0.0000 & -6.0000 \\
0.0000 & 2.8165 & 0.0000 & 0.0000 & 0.0000 & -0.0000 \\
-0.0000 & 0.0000 & 0.0000 & -0.0000 & 0.0000 & -0.0000 \\
-3.4082 & 0.0000 & -0.0000 & 2.0000 & 0.0000 & 1.1361 \\
0.0000 & 0.0000 & 0.0000 & 0.0000 & 1.7278 & -0.0000 \\
-6.0000 & -0.0000 & -0.0000 & 1.1361 & -0.0000 & 2.0000
\end{bmatrix},
\]
\[
M^* = \begin{bmatrix}
1.0000 & -0.0000 & 0.0000 & 0.0000 & -0.0000 & 3.0000 \\
-0.0000 & 0.0000 & -0.0000 & -0.0000 & 0.0000 & -0.0000 \\
0.0000 & -0.0000 & 3.0000 & 0.0000 & -0.0000 & 0.0000 \\
0.0000 & -0.0000 & 0.0000 & 0.0000 & -0.0000 & 0.0000 \\
-0.0000 & 0.0000 & -0.0000 & -0.0000 & 0.0000 & -0.0000 \\
3.0000 & -0.0000 & 0.0000 & 0.0000 & -0.0000 & 8.9999
\end{bmatrix}.
\]

The moment matrix $M^*$ has rank two and satisfies the flat extension condition. Using the technique from [14], we can extract two points $(0.0000, \pm 1.7321)$. This example shows that the SOS relaxation not only finds the global minimum of $f(X)$, but also returns two global minimizers.
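For completeness, here is a sketch of the dual moment SDP of Example 3.3 written directly in terms of the $6\times 6$ moment matrix, using CVXPY. The bookkeeping of the moment-consistency constraints is our own, and solver output may differ slightly from the figures reported above.

```python
import itertools
import numpy as np
import cvxpy as cp

# Monomial basis [1, x11, x21, x11^2, x11*x21, x21^2]; M is the candidate M_2(y).
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
M = cp.Variable((6, 6), PSD=True)

# Entries of M indexed by the same monomial must carry the same moment value.
cons, first = [M[0, 0] == 1], {}
for (p, ap), (q, aq) in itertools.combinations_with_replacement(list(enumerate(alphas)), 2):
    key = (ap[0] + aq[0], ap[1] + aq[1])
    if key in first:
        cons.append(M[p, q] == M[first[key]])
    else:
        first[key] = (p, q)

def y(i, j):                      # read the moment y_{i,j} from the matrix
    return M[first[(i, j)]]

# Objective 18 - 4 y20 - 12 y02 + 2 y40 + 4 y22 + 2 y04 (the expansion of p).
obj = 18 - 4 * y(2, 0) - 12 * y(0, 2) + 2 * y(4, 0) + 4 * y(2, 2) + 2 * y(0, 4)
prob = cp.Problem(cp.Minimize(obj), cons)
prob.solve()
print("lower bound:", prob.value)                         # should be close to 0
print("numerical rank of M2(y*):",
      np.linalg.matrix_rank(M.value, tol=1e-5))           # typically 2: two minimizers
```

Extracting the two minimizers $(0, \pm\sqrt{3})$ from the computed moment matrix then follows the technique of [14], as in the text.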

The software packages Gloptipoly [13] and SOSTOOLS [23] can be applied directly to solve problem (1.7). If we have a small number of sensors, Gloptipoly and SOSTOOLS work very well. However, for practical problems with many sensors, Gloptipoly and SOSTOOLS are usually very expensive. The reason is that the size of the matrix in (3.1)-(3.3) or (3.4)-(3.6) is $\binom{nd+2}{2}$, which can be very large; for instance, when $n = 50$ and $d = 2$, this number is bigger than 5000. So it is not practical to apply the general SOS solvers directly to the sensor network localization problem.

However, it makes a big difference if we exploit the sparsity of the polynomial $f(X)$. From (1.7), we can see that $f(X)$ has a very particular structure: it only has a small number of monomials of degree up to four. Therefore, the sparsity-exploiting techniques introduced in Section 2.3 can be applied here. The method described in [31] has been implemented in the software SparsePOP [32], which can help solve larger problems.

We should mention that the performance of SparsePOP depends on the distribution of the edges in $\mathcal{A}$. For instance, when $G(\mathcal{A})$ is sequentially connected, i.e.,
\[
\mathcal{A} = \{(1, 2), (2, 3), \cdots, (n-1, n)\},
\]
SparsePOP can solve the localization problem for $(d, n) = (2, 100)$ on an ordinary PC (e.g., a laptop with 512MB of memory), provided there are enough anchors to make the localization unique. On the other hand, if the edge set $\mathcal{A}$ is randomly sparse, SparsePOP is still very expensive. SparsePOP randomly generates a positive definite csp matrix (see Section 2.3) and then uses it to find a chordal extension by sparse Cholesky factorization. So if the csp matrix of $f(X)$ is banded with small bandwidth, SparsePOP works very well; however, for general unstructured csp matrices, the Cholesky factors are often dense. So when $\mathcal{A}$ is sparse but not banded, SparsePOP is still expensive.

What more can we do in applying SOS? This is the motivation for the rest of this section.

3.1. A more structured SOS relaxation

In the previous analysis of exploiting the sparsity, we ignored the fact that $f(X)$ is already given in SOS form:
\[
f(X) = \sum_{(i,j)\in\mathcal{A}} \left[ \bigl(\|x_i - x_j\|_2^2 - d_{ij}^2\bigr)^2 + \frac{1}{|S_i|} \sum_{k:\,(i,k)\in\mathcal{B}} \bigl(\|x_i - a_k\|_2^2 - e_{ik}^2\bigr)^2 \right]
\]

where $S_i = \{j : (i,j) \in \mathcal{A}\}$. This leads us to represent $f(X) - \gamma$ as
\[
f(X) - \gamma = \sum_{(i,j)\in\mathcal{A}} s_{ij}(x_i, x_j)
\]
where the $s_{ij}(x_i, x_j)$ are SOS polynomials only in the variables $(x_i, x_j)$, instead of all of $x_1, \cdots, x_n$. This structure can save computation significantly. For this SOS representation, the corresponding SOS relaxation has the special form

\[
f^{**}_{sos} := \max\ \gamma \tag{3.7}
\]
\[
\text{s.t.}\quad f(X) - \gamma \equiv \sum_{(i,j)\in\mathcal{A}} m_2(x_i, x_j)^T\, W_{ij}\, m_2(x_i, x_j), \tag{3.8}
\]
\[
\phantom{\text{s.t.}\quad} W_{ij} \succeq 0,\ \forall\,(i,j)\in\mathcal{A}. \tag{3.9}
\]
The size of each matrix $W_{ij}$ is $\binom{2d+2}{2} = (d+1)(2d+1)$, which is independent of $n$. The total number of decision variables in (3.7)-(3.9) is $O(d^4 |\mathcal{A}|)$. The dual of (3.7)-(3.9) is

\[
f^{**}_{mom} := \min_{y}\ \sum_{(i,j)\in\mathcal{A}} \sum_{\substack{\alpha=(\alpha_1,\alpha_2)\in\mathbb{N}^d\times\mathbb{N}^d\\ |\alpha_1|+|\alpha_2|\leq 4}} f^{ij}_{\alpha}\, y_{\nu_{ij}(\alpha)} \;+\; \sum_{(i,k)\in\mathcal{B}} \sum_{\substack{\beta\in\mathbb{N}^d\\ |\beta|\leq 4}} f^{ik}_{\beta}\, y_{\eta_i(\beta)} \tag{3.10}
\]
where the minimum is over $y = \bigl(y_{\nu_{ij}(\alpha)} : (i,j)\in\mathcal{A},\ \alpha\in\mathbb{N}^{2d},\ |\alpha|\leq 4\bigr)$, subject to
\[
M_2(\pi_{ij}(y)) \succeq 0, \quad (i,j)\in\mathcal{A}, \tag{3.11}
\]
\[
y_{(0,\cdots,0)} = 1, \tag{3.12}
\]

where $f^{ij}_{\alpha}, f^{ik}_{\beta}$ are the coefficients of the polynomials
\[
\bigl(\|x_i - x_j\|_2^2 - d_{ij}^2\bigr)^2 = \sum_{\substack{\alpha=(\alpha_1,\alpha_2)\in\mathbb{N}^d\times\mathbb{N}^d\\ |\alpha_1|+|\alpha_2|\leq 4}} f^{ij}_{\alpha}\, x_i^{\alpha_1} x_j^{\alpha_2},
\qquad
\bigl(\|x_i - a_k\|_2^2 - e_{ik}^2\bigr)^2 = \sum_{\substack{\beta\in\mathbb{N}^d\\ |\beta|\leq 4}} f^{ik}_{\beta}\, x_i^{\beta},
\]

and $\nu_{ij}(\alpha), \eta_i(\beta) \in \mathbb{N}^{nd}$ are the index vectors such that
\[
\nu_{ij}(\alpha)(k) = \begin{cases}
0 & k \in [1,\ d(i-1)] \\
\alpha(k - di + d) & k \in [d(i-1)+1,\ di] \\
0 & k \in [di+1,\ d(j-1)] \\
\alpha(k - dj + 2d) & k \in [d(j-1)+1,\ dj] \\
0 & k \in [dj+1,\ dn]
\end{cases}
\qquad
\eta_i(\beta)(k) = \begin{cases}
0 & k \in [1,\ d(i-1)] \\
\beta(k - di + d) & k \in [d(i-1)+1,\ di] \\
0 & k \in [di+1,\ dn].
\end{cases}
\]
Here $\pi_{ij}$ denotes the projection map onto the index set $\mathbb{N}^{2d}$ of the pair $(i,j)$, defined on moment vectors by
\[
\pi_{ij}(y)(\zeta) = y_{\nu_{ij}(\zeta)}, \quad \forall\, \zeta \in \mathbb{N}^{2d}.
\]

Theorem 3.4. For the SOS relaxation (3.7)-(3.9) and its dual (3.10)-(3.12), strong duality holds: $f^{**}_{sos} = f^{**}_{mom}$.


Proof. By standard duality theory for convex programs, it suffices to show that (3.11) admits a strict interior point. We choose $y_\alpha$ as
\[
y_\alpha := \frac{\int_{\mathbb{R}^{d\times n}} X^\alpha \exp\{-\|X\|_F^2\}\, dX}{\int_{\mathbb{R}^{d\times n}} \exp\{-\|X\|_F^2\}\, dX}.
\]
For every nonzero vector $c = (c_\alpha)_{|\alpha|\leq 2,\ \alpha=(\alpha_1,\alpha_2)\in\mathbb{N}^{2d}}$, we have
\[
c^T M_2(\pi_{ij}(y))\, c = \frac{\int_{\mathbb{R}^{d\times n}} \bigl(\sum_{|\alpha|\leq 2} c_\alpha\, x_i^{\alpha_1} x_j^{\alpha_2}\bigr)^2 \exp\{-\|X\|_F^2\}\, dX}{\int_{\mathbb{R}^{d\times n}} \exp\{-\|X\|_F^2\}\, dX} > 0.
\]
Then $M_2(\pi_{ij}(y)) \succ 0$ for every $(i,j) \in \mathcal{A}$, which completes the proof. $\Box$

Theorem 3.5. If the sensor network localization problem admits a solution for the given data $D = (d_{ij}, e_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$, then the SOS relaxation (3.7)-(3.9) is exact, i.e., $f^{**}_{sos} = f^* = 0$.

Proof. From Theorem 3.1, we know $f^* = 0$. Since $f(X)$ is already SOS with a representation of the form (3.8), we have $f^{**}_{sos} \geq 0$, and then $f^{**}_{sos} = 0$ since $f^{**}_{sos} \leq f^*$. $\Box$

Remark 3.6. Similar to Remark 3.2, when the distance data $D = (d_{ij}, e_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$ admit a solution, the obtained lower bound $f^{**}_{sos} = 0$ is not interesting. But the solution to the dual problem (3.10)-(3.12) can help find the sensor locations $x_1, x_2, \cdots, x_n$. Let us illustrate this by another example.

Example 3.7. Consider the sensor network problem, as described in Figure 1, with four sensors and four anchors
\[
a_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad
a_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad
a_3 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, \quad
a_4 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]

[Figure 1: $\Box$: anchor locations, $\circ$: sensor locations. Each sensor is linked to two neighboring sensors (edges of length 0.5858) and to one anchor (edges of length 1).]

The network is as follows:
\[
\mathcal{A} = \{(1,2), (1,4), (2,3), (3,4)\}, \qquad \mathcal{B} = \{(1,1), (2,2), (3,3), (4,4)\}.
\]
The distances are given by
\[
d_{12} = d_{14} = d_{23} = d_{34} = s = 2 - \sqrt{2}, \qquad e_{11} = e_{22} = e_{33} = e_{44} = 1.
\]


Let $X = [x_1, x_2, x_3, x_4]$. Then the polynomial given by (1.7) is
\[
\begin{aligned}
f(X) = {} & \bigl(\|x_1 - x_2\|_2^2 - s^2\bigr)^2 + \bigl(\|x_1 - x_4\|_2^2 - s^2\bigr)^2 + \bigl(\|x_2 - x_3\|_2^2 - s^2\bigr)^2 + \bigl(\|x_3 - x_4\|_2^2 - s^2\bigr)^2 \\
& + \bigl(\|x_1 - a_1\|_2^2 - 1\bigr)^2 + \bigl(\|x_2 - a_2\|_2^2 - 1\bigr)^2 + \bigl(\|x_3 - a_3\|_2^2 - 1\bigr)^2 + \bigl(\|x_4 - a_4\|_2^2 - 1\bigr)^2.
\end{aligned}
\]

For this problem, the SOS relaxation (3.7)-(3.9) is
\[
\begin{aligned}
\max\ & \gamma \\
\text{s.t.}\ & f(X) - \gamma \equiv
m_2\!\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)^{\!T}\! W_{12}\, m_2\!\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)
+ m_2\!\left(\begin{bmatrix} x_1 \\ x_4 \end{bmatrix}\right)^{\!T}\! W_{14}\, m_2\!\left(\begin{bmatrix} x_1 \\ x_4 \end{bmatrix}\right) \\
& \phantom{f(X) - \gamma \equiv{}}
+ m_2\!\left(\begin{bmatrix} x_2 \\ x_3 \end{bmatrix}\right)^{\!T}\! W_{23}\, m_2\!\left(\begin{bmatrix} x_2 \\ x_3 \end{bmatrix}\right)
+ m_2\!\left(\begin{bmatrix} x_3 \\ x_4 \end{bmatrix}\right)^{\!T}\! W_{34}\, m_2\!\left(\begin{bmatrix} x_3 \\ x_4 \end{bmatrix}\right), \\
& W_{12},\ W_{14},\ W_{23},\ W_{34} \in \mathcal{S}^{15}_{+}.
\end{aligned}
\]

The dual problem (3.10)-(3.12) can be written down similarly. For example, the LMI for the pair $(1,2)$ looks like
\[
\begin{bmatrix} Y_{11} & Y_{12} \\ Y_{12}^T & Y_{22} \end{bmatrix} \succeq 0
\]
where, writing $\mathbf{0}$ for the zero block $0000$ carried by the variables of $x_3$ and $x_4$,
\[
Y_{11} = \begin{bmatrix}
y_{0000\mathbf{0}} & y_{1000\mathbf{0}} & y_{0100\mathbf{0}} & y_{0010\mathbf{0}} & y_{0001\mathbf{0}} \\
y_{1000\mathbf{0}} & y_{2000\mathbf{0}} & y_{1100\mathbf{0}} & y_{1010\mathbf{0}} & y_{1001\mathbf{0}} \\
y_{0100\mathbf{0}} & y_{1100\mathbf{0}} & y_{0200\mathbf{0}} & y_{0110\mathbf{0}} & y_{0101\mathbf{0}} \\
y_{0010\mathbf{0}} & y_{1010\mathbf{0}} & y_{0110\mathbf{0}} & y_{0020\mathbf{0}} & y_{0011\mathbf{0}} \\
y_{0001\mathbf{0}} & y_{1001\mathbf{0}} & y_{0101\mathbf{0}} & y_{0011\mathbf{0}} & y_{0002\mathbf{0}}
\end{bmatrix},
\]
\[
Y_{12} = \begin{bmatrix}
y_{2000\mathbf{0}} & y_{1100\mathbf{0}} & y_{1010\mathbf{0}} & y_{1001\mathbf{0}} & y_{0200\mathbf{0}} & y_{0110\mathbf{0}} & y_{0101\mathbf{0}} & y_{0020\mathbf{0}} & y_{0011\mathbf{0}} & y_{0002\mathbf{0}} \\
y_{3000\mathbf{0}} & y_{2100\mathbf{0}} & y_{2010\mathbf{0}} & y_{2001\mathbf{0}} & y_{1200\mathbf{0}} & y_{1110\mathbf{0}} & y_{1101\mathbf{0}} & y_{1020\mathbf{0}} & y_{1011\mathbf{0}} & y_{1002\mathbf{0}} \\
y_{2100\mathbf{0}} & y_{1200\mathbf{0}} & y_{1110\mathbf{0}} & y_{1101\mathbf{0}} & y_{0300\mathbf{0}} & y_{0210\mathbf{0}} & y_{0201\mathbf{0}} & y_{0120\mathbf{0}} & y_{0111\mathbf{0}} & y_{0102\mathbf{0}} \\
y_{2010\mathbf{0}} & y_{1110\mathbf{0}} & y_{1020\mathbf{0}} & y_{1011\mathbf{0}} & y_{0210\mathbf{0}} & y_{0120\mathbf{0}} & y_{0111\mathbf{0}} & y_{0030\mathbf{0}} & y_{0021\mathbf{0}} & y_{0012\mathbf{0}} \\
y_{2001\mathbf{0}} & y_{1101\mathbf{0}} & y_{1011\mathbf{0}} & y_{1002\mathbf{0}} & y_{0201\mathbf{0}} & y_{0111\mathbf{0}} & y_{0102\mathbf{0}} & y_{0021\mathbf{0}} & y_{0012\mathbf{0}} & y_{0003\mathbf{0}}
\end{bmatrix},
\]
and
\[
Y_{22} = \begin{bmatrix}
y_{4000\mathbf{0}} & y_{3100\mathbf{0}} & y_{3010\mathbf{0}} & y_{3001\mathbf{0}} & y_{2200\mathbf{0}} & y_{2110\mathbf{0}} & y_{2101\mathbf{0}} & y_{2020\mathbf{0}} & y_{2011\mathbf{0}} & y_{2002\mathbf{0}} \\
y_{3100\mathbf{0}} & y_{2200\mathbf{0}} & y_{2110\mathbf{0}} & y_{2101\mathbf{0}} & y_{1300\mathbf{0}} & y_{1210\mathbf{0}} & y_{1201\mathbf{0}} & y_{1120\mathbf{0}} & y_{1111\mathbf{0}} & y_{1102\mathbf{0}} \\
y_{3010\mathbf{0}} & y_{2110\mathbf{0}} & y_{2020\mathbf{0}} & y_{2011\mathbf{0}} & y_{1210\mathbf{0}} & y_{1120\mathbf{0}} & y_{1111\mathbf{0}} & y_{1030\mathbf{0}} & y_{1021\mathbf{0}} & y_{1012\mathbf{0}} \\
y_{3001\mathbf{0}} & y_{2101\mathbf{0}} & y_{2011\mathbf{0}} & y_{2002\mathbf{0}} & y_{1201\mathbf{0}} & y_{1111\mathbf{0}} & y_{1102\mathbf{0}} & y_{1021\mathbf{0}} & y_{1012\mathbf{0}} & y_{1003\mathbf{0}} \\
y_{2200\mathbf{0}} & y_{1300\mathbf{0}} & y_{1210\mathbf{0}} & y_{1201\mathbf{0}} & y_{0400\mathbf{0}} & y_{0310\mathbf{0}} & y_{0301\mathbf{0}} & y_{0220\mathbf{0}} & y_{0211\mathbf{0}} & y_{0202\mathbf{0}} \\
y_{2110\mathbf{0}} & y_{1210\mathbf{0}} & y_{1120\mathbf{0}} & y_{1111\mathbf{0}} & y_{0310\mathbf{0}} & y_{0220\mathbf{0}} & y_{0211\mathbf{0}} & y_{0130\mathbf{0}} & y_{0121\mathbf{0}} & y_{0112\mathbf{0}} \\
y_{2101\mathbf{0}} & y_{1201\mathbf{0}} & y_{1111\mathbf{0}} & y_{1102\mathbf{0}} & y_{0301\mathbf{0}} & y_{0211\mathbf{0}} & y_{0202\mathbf{0}} & y_{0121\mathbf{0}} & y_{0112\mathbf{0}} & y_{0103\mathbf{0}} \\
y_{2020\mathbf{0}} & y_{1120\mathbf{0}} & y_{1030\mathbf{0}} & y_{1021\mathbf{0}} & y_{0220\mathbf{0}} & y_{0130\mathbf{0}} & y_{0121\mathbf{0}} & y_{0040\mathbf{0}} & y_{0031\mathbf{0}} & y_{0022\mathbf{0}} \\
y_{2011\mathbf{0}} & y_{1111\mathbf{0}} & y_{1021\mathbf{0}} & y_{1012\mathbf{0}} & y_{0211\mathbf{0}} & y_{0121\mathbf{0}} & y_{0112\mathbf{0}} & y_{0031\mathbf{0}} & y_{0022\mathbf{0}} & y_{0013\mathbf{0}} \\
y_{2002\mathbf{0}} & y_{1102\mathbf{0}} & y_{1012\mathbf{0}} & y_{1003\mathbf{0}} & y_{0202\mathbf{0}} & y_{0112\mathbf{0}} & y_{0103\mathbf{0}} & y_{0022\mathbf{0}} & y_{0013\mathbf{0}} & y_{0004\mathbf{0}}
\end{bmatrix}.
\]
In the above indices, $\mathbf{0} = 0000$. Solving the SOS relaxation (3.7)-(3.9), we get the lower bound $f^{**}_{sos} = -1.7719 \cdot 10^{-9}$. Solving the dual (3.10)-(3.12), we get
\[
\begin{aligned}
& y^*_{1000\mathbf{0}} = 0.2929, \quad y^*_{0100\mathbf{0}} = 0.2929, \quad y^*_{0010\mathbf{0}} = 0.2929, \quad y^*_{0001\mathbf{0}} = -0.2929, \\
& y^*_{\mathbf{0}1000} = -0.2929, \quad y^*_{\mathbf{0}0100} = -0.2929, \quad y^*_{\mathbf{0}0010} = -0.2929, \quad y^*_{\mathbf{0}0001} = 0.2929.
\end{aligned}
\]


The moment matrices $M_2(\pi_{ij}(y^*))$ have rank one, so the minimizer can be extracted easily; it is
\[
x^*_1 = \begin{bmatrix} 0.2929 \\ 0.2929 \end{bmatrix}, \quad
x^*_2 = \begin{bmatrix} 0.2929 \\ -0.2929 \end{bmatrix}, \quad
x^*_3 = \begin{bmatrix} -0.2929 \\ -0.2929 \end{bmatrix}, \quad
x^*_4 = \begin{bmatrix} -0.2929 \\ 0.2929 \end{bmatrix}.
\]
These are exactly the true locations $\bigl(\pm(1 - \tfrac{\sqrt{2}}{2}),\ \pm(1 - \tfrac{\sqrt{2}}{2})\bigr)$ (ignoring rounding errors).

3.2. The algorithm and complexity

Now we discuss how to extract minimizer(s) $X^* = [\,x^*_1, \cdots, x^*_n\,]$ from the SOS relaxation (3.7)-(3.9). For each $1 \leq i \leq n$, the $i$-th sensor location $x^*_i$ can be extracted from the moment matrix $M_2(\pi_i(y^*))$ if it satisfies the flat extension condition, where $\pi_i$ is the projection map defined by
\[
\pi_i(y)(\zeta) = y_{\eta_i(\zeta)}, \quad \forall\, \zeta \in \mathbb{N}^d.
\]
Let $V_i$ be the set of all vectors which can be extracted from the moment matrix $M_2(\pi_i(y^*))$. So $x^*_i \in V_i$. If $V_i$ is a singleton, then $x^*_i$ has a unique choice.

The situation becomes subtle if some $V_i$ has cardinality more than one. Suppose for some $(i,j) \in \mathcal{A}$ we have $|V_i| > 1$ and $|V_j| > 1$. Can $x^*_i$ ($x^*_j$) be chosen arbitrarily from the set $V_i$ ($V_j$)? The answer is usually no. Suppose there are two sensors $x_1$ and $x_2$ and the distance data $D$ is given such that the only two possible locations are

j ) be arbitrary from the set Vi (Vj)? Theanswer to is usually no. Let us suppose there are two sensors x1 and x2 and the distance dataD is given such that the only two possible locations are

\[
x^*_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix},\ x^*_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix},
\qquad \text{or} \qquad
x^*_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix},\ x^*_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.
\]
Then we can see
\[
V_1 = V_2 = \left\{ \begin{bmatrix} 1 \\ -1 \end{bmatrix},\ \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}.
\]

But obviously we cannot choose $x^*_1$ and $x^*_2$ such that
\[
x^*_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix},\ x^*_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix},
\qquad \text{or} \qquad
x^*_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix},\ x^*_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]

Now what is the rule for choosing $x^*_i$ ($x^*_j$) from $V_i$ ($V_j$)? So far we have not yet used the information in the moment matrix $M_2(\pi_{ij}(y^*))$. If $M_2(\pi_{ij}(y^*))$ also satisfies the flat extension condition, we can extract a pair of sensor locations $(x^*_i, x^*_j)$ from $M_2(\pi_{ij}(y^*))$. Let $\mathcal{X}_{ij}$ be the set of all such pairs that can be extracted from $M_2(\pi_{ij}(y^*))$. We now ask whether $V_i$, $V_j$ and $\mathcal{X}_{ij}$ are consistent, i.e., does $(x^*_i, x^*_j) \in \mathcal{X}_{ij}$ imply that $x^*_i \in V_i$ and $x^*_j \in V_j$? This induces the following theorem.

Theorem 3.8. Let $y^*$ be one optimal solution to (3.10)-(3.12). Suppose all $M_2(\pi_{ij}(y^*))$ ($(i,j) \in \mathcal{A}$) satisfy the flat extension condition. Let $V_i, \mathcal{X}_{ij}$ be defined as above. Then for any $(x^*_i, x^*_j) \in \mathcal{X}_{ij}$ we must have $x^*_i \in V_i$ and $x^*_j \in V_j$.

Proof. Let $\mathcal{X}_{ij} = \{(x^{(1)}_i, x^{(1)}_j), \cdots, (x^{(r)}_i, x^{(r)}_j)\}$ be one $r$-atomic representing support of the moment matrix $M_2(\pi_{ij}(y^*))$. Then we have the decomposition
\[
M_2(\pi_{ij}(y^*)) = \sum_{\ell=1}^{r} \lambda_\ell\, m_2\!\left(\begin{bmatrix} x^{(\ell)}_i \\ x^{(\ell)}_j \end{bmatrix}\right) m_2\!\left(\begin{bmatrix} x^{(\ell)}_i \\ x^{(\ell)}_j \end{bmatrix}\right)^{\!T}
\]
for some $\lambda_1, \cdots, \lambda_r > 0$ with $\sum_{\ell=1}^{r} \lambda_\ell = 1$. Notice that $M_2(\pi_i(y^*))$ is a submatrix of $M_2(\pi_{ij}(y^*))$, which immediately implies
\[
M_2(\pi_i(y^*)) = \sum_{\ell=1}^{r} \lambda_\ell\, m_2(x^{(\ell)}_i)\, m_2(x^{(\ell)}_i)^T.
\]
This means that $\{x^{(1)}_i, \cdots, x^{(r)}_i\}$ is one $r$-atomic representing support of the moment matrix $M_2(\pi_i(y^*))$ (some $x^{(\ell)}_i$ might coincide). By the definition of $V_i$, we have $\{x^{(1)}_i, \cdots, x^{(r)}_i\} \subset V_i$. Similarly, $\{x^{(1)}_j, \cdots, x^{(r)}_j\} \subset V_j$. $\Box$

Theorem 3.9. Let $y^*$ be one optimal solution to (3.10)-(3.12). Assume all $M_2(\pi_{ij}(y^*))$ ($(i,j) \in \mathcal{A}$) satisfy the flat extension condition. Then any $X^* = [\,x^*_1, \cdots, x^*_n\,]$ such that $(x^*_i, x^*_j) \in \mathcal{X}_{ij}$ ($(i,j) \in \mathcal{A}$) is a global solution to (1.7).

Proof. Fix $X^* = [\,x^*_1, \cdots, x^*_n\,]$ with $(x^*_i, x^*_j) \in \mathcal{X}_{ij}$ ($(i,j) \in \mathcal{A}$). Since the moment matrix $M_2(\pi_{ij}(y^*))$ satisfies the flat extension condition, we have the decomposition
\[
M_2(\pi_{ij}(y^*)) = \lambda_{ij}\, m_2\!\left(\begin{bmatrix} x^*_i \\ x^*_j \end{bmatrix}\right) m_2\!\left(\begin{bmatrix} x^*_i \\ x^*_j \end{bmatrix}\right)^{\!T} + M_{ij}
\]
where $1 \geq \lambda_{ij} > 0$ and $M_{ij} \succeq 0$. Now let $\lambda = \min_{(i,j)\in\mathcal{A}} \lambda_{ij} > 0$ and
\[
\widehat{M}_{ij} = (\lambda_{ij} - \lambda)\, m_2\!\left(\begin{bmatrix} x^*_i \\ x^*_j \end{bmatrix}\right) m_2\!\left(\begin{bmatrix} x^*_i \\ x^*_j \end{bmatrix}\right)^{\!T} + M_{ij} \succeq 0.
\]
Notice that $M_{ij}$ and $\widehat{M}_{ij}$ are also moment matrices. Without loss of generality, we can assume $\lambda < 1$, since otherwise each $M_2(\pi_{ij}(y^*))$ has rank one and $X^*$ is obviously a global minimizer. Define the vector $\hat{y} = (\hat{y}_{\nu_{ij}(\alpha)} : (i,j)\in\mathcal{A},\ |\alpha| \leq 4,\ \alpha \in \mathbb{N}^{2d})$ by
\[
\hat{y}_{\nu_{ij}(\alpha)} = (x^*_i)^{\alpha_1} (x^*_j)^{\alpha_2}.
\]
Let $\bar{y} = (\bar{y}_{\nu_{ij}(\alpha)} : (i,j)\in\mathcal{A},\ |\alpha| \leq 4,\ \alpha \in \mathbb{N}^{2d})$ be such that $y^* = \lambda \hat{y} + (1 - \lambda)\bar{y}$. Then we have
\[
M_2(\pi_{ij}(y^*)) = \lambda M_2(\pi_{ij}(\hat{y})) + (1 - \lambda) M_2(\pi_{ij}(\bar{y}))
\]
and $(1 - \lambda) M_2(\pi_{ij}(\bar{y})) = \widehat{M}_{ij}$. Obviously the vector $\bar{y}$ is feasible for (3.11)-(3.12) since
\[
M_2(\pi_{ij}(\bar{y})) = \frac{1}{1 - \lambda}\, \widehat{M}_{ij} \succeq 0.
\]
Let $L(y)$ be the linear objective function defined in (3.10). Since $y^*$ is optimal, we have $L(y^*) \leq L(\hat{y})$ and $L(y^*) \leq L(\bar{y})$. By the linearity of $L$, it holds that
\[
L(y^*) = \lambda L(\hat{y}) + (1 - \lambda) L(\bar{y}) = f^{**}_{mom}.
\]
Therefore $L(\hat{y}) = f^{**}_{mom}$ since $0 < \lambda < 1$. On the other hand, by the definition of $\hat{y}$, we get $f(X^*) = L(\hat{y}) = f^{**}_{mom}$, which implies that $X^*$ is a global minimizer of (1.7). $\Box$

Remark 3.10. In the above theorems about $V_i$ and $\mathcal{X}_{ij}$, we need the assumption that the flat extension condition holds. It poses restrictions on the ranks of the moment matrices and their submatrices. What is the geometric meaning behind this condition? Actually, flat extension is a notion frequently used in the area of truncated moment problems. Consider a finite sequence of moments $\gamma = (\gamma_\alpha)_{\alpha\in\mathbb{N}^n,\,|\alpha|\leq 2s}$ ($s$ an integer). The flat extension condition says that there exists an integer $0 \leq k < s$ such that
\[
\operatorname{rank} M_k(\gamma) = \operatorname{rank} M_{k+1}(\gamma).
\]
If this condition holds, then $\gamma$ has a finite atomic representing measure, i.e.,
\[
M_s(\gamma) = \lambda_1\, m_s(u_1) m_s(u_1)^T + \cdots + \lambda_r\, m_s(u_r) m_s(u_r)^T
\]
where $\lambda_i > 0$, the $u_i$ are distinct, and $r = \operatorname{rank} M_k(\gamma)$. In other words, the flat extension condition guarantees a finite atomic representing measure whose support has cardinality $r$. But the converse might not be true; that is, it is possible that the flat extension condition fails while $\gamma$ has a representing measure whose support is infinite, or finite but with more than $r$ points. When the flat extension condition fails, the rank of $M_s(\gamma)$ must be greater than the rank of $M_{s-1}(\gamma)$. Then the truncated moment sequence $\gamma$ might have a finite atomic representing measure but with support cardinality greater than $r$, might have an infinite representing measure, or might not have a representing measure at all. We refer to [10] for more details about the meaning and role of the flat extension condition in truncated moment problems.

Now we come back to the sensor network localization problem. Consider the moment matrix $M_2(\pi_i(y^*))$. If it satisfies the flat extension condition, then there exists $0 \leq k \leq 1$ such that
\[
\operatorname{rank} M_k(\pi_i(y^*)) = \operatorname{rank} M_{k+1}(\pi_i(y^*)).
\]
We know $\operatorname{rank} M_0(\pi_i(y^*)) = 1$ and $\operatorname{rank} M_1(\pi_i(y^*)) \leq d+1$, since the size of $M_1(\pi_i(y^*))$ is $d+1$. The flat extension condition guarantees that $M_2(\pi_i(y^*))$ has a finite atomic representing measure with support cardinality $\operatorname{rank} M_1(\pi_i(y^*))$. When this condition fails, it is because either $M_2(\pi_i(y^*))$ does not have a representing measure, or it has an infinite representing measure, or it has a finite representing measure whose support has cardinality greater than $\operatorname{rank} M_1(\pi_i(y^*))$. Thus, if sensor $x_i$ has more than $d+1$ possible locations, the flat extension condition might fail and then we are not able to extract $x_i$ by applying the technique in [14]. When sensor $x_i$ has at most $d+1$ possible locations, it is also possible that $M_2(\pi_i(y^*))$ fails the flat extension condition; the geometric meaning behind this phenomenon is not clear. Notice that the sensor network localization problem is NP-hard even if the localization solution is unique, while the sparse SOS relaxation (3.7)-(3.9) and its dual (3.10)-(3.12) are solvable in polynomial time. So even if sensor $x_i$ has a unique location, we cannot expect that $M_2(\pi_i(y^*))$ satisfies the flat extension condition, unless NP$=$P. A similar analysis applies to the moment matrix $M_2(\pi_{ij}(y^*))$: if the sensor pair $(x_i, x_j)$ has more than $2d+1$ possible locations, the flat extension condition might fail, and even when $(x_i, x_j)$ has at most $2d+1$ possible locations this condition might still fail. The geometric meaning behind this is not clear; it is an important and interesting topic for future work.

Now we present the algorithm to solve sensor network localization as follows.

Algorithm 3.11 (Sensor Network Localization via Structured SOS Relaxation).

Input: $d$, $n$, $D = (d_{ij}, e_{ik})_{(i,j)\in\mathcal{A},\ (i,k)\in\mathcal{B}}$.

Output: $V_1, V_2, \cdots, V_n$ and $\mathcal{X}_{ij}$ ($(i,j)\in\mathcal{A}$).

Begin

Step 1: Solve the SDP problem (3.10)-(3.12). Get an optimal solution $y^*$.

Step 2: For each $1 \leq i \leq n$, find the set $V_i$ of vectors $x^*_i$ from $M_2(\pi_i(y^*))$.

Step 3: For every $(i,j) \in \mathcal{A}$ with $|V_i| + |V_j| > 2$, find the set $\mathcal{X}_{ij}$.

End

Remark 3.12. Algorithm 3.11 works when all the moment matrices $M_2(\pi_i(y^*))$ and $M_2(\pi_{ij}(y^*))$ satisfy the flat extension condition. If some $M_2(\pi_i(y^*))$ or $M_2(\pi_{ij}(y^*))$ does not satisfy the flat extension condition, it is usually because some sensor locations cannot be determined (see Remark 3.10). In such situations, we can apply a tiny perturbation to the coefficients in (3.10), and then simply set $x^*_{ji} = y^*_{\eta_i(e_j)}$ ($e_j$ being the $j$-th unit vector in $\mathbb{R}^d$), which often gives a good approximation to the true location.

Now let us give the complexity of solving the sparse SOS relaxation (3.7)-(3.9) and its dual (3.10)-(3.12). Notice that the total number of dual variables is
\[
n \binom{d+4}{4} + |\mathcal{A}| \left\{ \binom{2d+4}{4} - 2\binom{d+4}{4} \right\}.
\]


In practice $d$ is usually 1, 2 or 3, so we can assume $d$ is a constant. Thus the total number of dual variables is $O(n + |\mathcal{A}|)$. If we use interior point methods (e.g., the dual-scaling algorithm [2, 3, 4]) to solve this problem, the complexity at each step would be $O\bigl((n + |\mathcal{A}|)^3\bigr)$. However, if we exploit the special structure of this sparse SOS relaxation, this complexity bound can be further reduced to $O(n^3)$.

Notice that in the moment matrix $M_2(\pi_{ij}(y))$ each entry is a single moment of the form $y_{(\alpha_1, \alpha_2, \cdots, \alpha_n)}$ with $\alpha_k \in \mathbb{N}^d$ and $|\alpha_i| + |\alpha_j| \leq 4$, where only $\alpha_i, \alpha_j$ are nonzero and all other $\alpha_k$ ($k \neq i, j$) are zero vectors. Fix the pair $(i,j) \in \mathcal{A}$. Then a moment $y_{(0\cdots 0\, \alpha_i\, 0\cdots 0\, \alpha_j\, 0\cdots 0)}$ with $\alpha_i \neq 0$, $\alpha_j \neq 0$ will not appear in any other moment matrix $M_2(\pi_{k\ell}(y))$ ($(k,\ell) \neq (i,j)$). The number of all such moments in $M_2(\pi_{ij}(y))$ is
\[
\sigma(d) := \binom{2d+4}{4} - 2\binom{d+4}{4} = \frac{1}{12}(d+1)(d+2)(7d^2 + 9d - 6).
\]

Order the edge pairs in $\mathcal{A}$ such that
\[
\mathcal{A} = \{(i_1, j_1), (i_2, j_2), \cdots, (i_{|\mathcal{A}|}, j_{|\mathcal{A}|})\}.
\]
Then we relabel the moments $y_\alpha$ as follows. Let the moments which appear only in $M_2(\pi_{i_1 j_1}(y))$ be $z_1, z_2, \cdots, z_{\sigma(d)}$, let the moments which appear only in $M_2(\pi_{i_2 j_2}(y))$ be $z_{\sigma(d)+1}, \cdots, z_{2\sigma(d)}$, and similarly for the subsequent pairs $(i_k, j_k)$ ($k = 3, \cdots, |\mathcal{A}|$); the moments which appear only in $M_2(\pi_{i_{|\mathcal{A}|} j_{|\mathcal{A}|}}(y))$ are $z_{(|\mathcal{A}|-1)\sigma(d)+1}, \cdots, z_{|\mathcal{A}|\sigma(d)}$. Now consider the moments of the form $y_{(0\cdots 0\, \alpha_i\, 0\cdots 0)}$, i.e., only the $i$-th subvector of length $d$ in the index of $y_\alpha$ is nonzero. Such a moment appears in $M_2(\pi_{k\ell}(y))$ whenever $k = i$ or $\ell = i$. For each fixed $i$, the number of such moments is $\binom{d+4}{4}$, so the total number of such moments for one sensor is $\omega(d) := \binom{d+4}{4}$. Now let the moments of the form $y_{(\alpha_1\, 0\cdots 0)}$ be $z_{|\mathcal{A}|\sigma(d)+1}, \cdots, z_{|\mathcal{A}|\sigma(d)+\omega(d)}$, let the moments of the form $y_{(0\cdots 0\, \alpha_2\, 0\cdots 0)}$ be $z_{|\mathcal{A}|\sigma(d)+\omega(d)+1}, \cdots, z_{|\mathcal{A}|\sigma(d)+2\omega(d)}$, and similarly for all subsequent $k \geq 3$. In this way we obtain a new labeling of the dual variables $y_\alpha$ in (3.10)-(3.12), namely
\[
z_1, \cdots, z_{|\mathcal{A}|\sigma(d)}, \cdots, z_{|\mathcal{A}|\sigma(d)+n\omega(d)}.
\]

Let $N = |\mathcal{A}|\sigma(d) + n\omega(d)$. Then the sparse SOS relaxation (3.7)-(3.9) and its dual (3.10)-(3.12) can be written in the standard SDP form
\[
\min\ C \bullet W \quad \text{s.t.}\quad \mathcal{A}(W) = b,\ W \succeq 0, \tag{3.13}
\]
\[
\max\ b^T z \quad \text{s.t.}\quad \mathcal{A}^* z + S = C,\ S \succeq 0. \tag{3.14}
\]
In (3.13)-(3.14), $\mathcal{A}^* z = \sum_{k=1}^{N} z_k A_k$ and $\mathcal{A}(W)_k = A_k \bullet W$ ($k = 1, \cdots, N$). The $A_k$ and $C$ are constant matrices and $b$ is a constant vector; their definitions are explicit from (3.10)-(3.12). Notice that $A_k, C, W, S$ are block diagonal matrices of length $|\mathcal{A}|\binom{2d+2}{2}$, with each block of size $\binom{2d+2}{2}$. Here $W$ is the primal variable and $S, z$ are the dual variables.

Now we apply the dual-scaling algorithm [2, 3, 4] to solve (3.13)-(3.14). Applying Newton's method to $\mathcal{A}(W) = b$, $\mathcal{A}^* z + S = C$, and $W = \nu S^{-1}$ generates the system

\[
\begin{aligned}
\mathcal{A}(W + \Delta W) &= b, \\
\mathcal{A}^* \Delta z + \Delta S &= 0, \\
\nu S^{-1} \Delta S\, S^{-1} + \Delta W &= \nu S^{-1} - W.
\end{aligned}
\]

Once $\Delta z$ is known, $\Delta S$ and $\Delta W$ are given explicitly. $\Delta z$ satisfies the linear system
\[
\nu \begin{bmatrix}
A_1 \bullet S^{-1} A_1 S^{-1} & \cdots & A_1 \bullet S^{-1} A_N S^{-1} \\
\vdots & \ddots & \vdots \\
A_N \bullet S^{-1} A_1 S^{-1} & \cdots & A_N \bullet S^{-1} A_N S^{-1}
\end{bmatrix} \Delta z = b - \nu\, \mathcal{A}(S^{-1}). \tag{3.15}
\]


From the labeling of $z$ and the block structure of the $A_k$, we can see that $A_k$ and $A_\ell$ do not have common nonzero blocks whenever $z_k$ and $z_\ell$ ($k, \ell \leq |\mathcal{A}|\sigma(d)$) come from different moment matrices $M_2(\pi_{ij}(y))$. Since $S$ is also a block matrix like $A_k, A_\ell$, it holds that $A_k \bullet S^{-1} A_\ell S^{-1} = 0$ whenever $z_k$ and $z_\ell$ ($k, \ell \leq |\mathcal{A}|\sigma(d)$) come from different moment matrices. Therefore the coefficient matrix in (3.15) has the sparsity pattern
\[
\begin{bmatrix} \operatorname{diag}(F_1, F_2, \cdots, F_{|\mathcal{A}|}) & G \\ G^T & H \end{bmatrix}.
\]

Each block matrix $F_k$ has constant size $\sigma(d)$. The matrix $G$ has dimension $|\mathcal{A}|\sigma(d) \times n\omega(d)$, and $H$ has dimension $n\omega(d) \times n\omega(d)$. If we use block LU factorization to solve the linear system (3.15), it can be done in $O\bigl(n^3 + n|\mathcal{A}|\bigr)$ operations. Since $S$ is block diagonal, computing $S^{-1}$ costs $O(|\mathcal{A}| + n^3)$ operations. Obviously, the cardinality of $\mathcal{A}$ is at most $\frac{1}{2}n(n-1)$. So the complexity of the dual-scaling algorithm at one step is $O(n^3)$.

Similarly, we can also get the complexity bound for the SDP relaxation (1.2)-(1.3). But the resulting linear system in interior point methods (e.g., the dual-scaling algorithm) for (1.2)-(1.3) usually does not have the sparsity pattern that (3.15) does. The complexity for (1.2)-(1.3) at each step is usually $O\bigl((n + |\mathcal{A}|)^3\bigr)$. When $|\mathcal{A}| = O(n)$, the complexities are almost the same. However, if $|\mathcal{A}| = O(n^{1+\tau})$ with $0 < \tau \leq 1$, the complexity for the sparse SOS relaxation is better. In this case, the sparse SOS relaxation is faster to solve for large problems.

4 Error bound for the perturbation

In practice, the distance data $D = (d_{ij}, e_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$ may not be given exactly and often has errors due to inaccuracy in measurements. We now consider the case where $D$ is perturbed by random noises $\epsilon_{ij}$ and $\epsilon_{ik}$, i.e.,
\[
d_{ij} = \bar{d}_{ij}(1 + \epsilon_{ij}),\ (i,j) \in \mathcal{A}, \qquad
e_{ik} = \bar{e}_{ik}(1 + \epsilon_{ik}),\ (i,k) \in \mathcal{B}.
\]
Here $\bar{d}_{ij}$ is the true distance between sensors $x_i$ and $x_j$, and $\bar{e}_{ik}$ is the true distance between sensor $x_i$ and anchor $a_k$. Throughout this section, we make two assumptions.

Assumption 4.1. The true distance data $\bar{D} = (\bar{d}_{ij}, \bar{e}_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$ is localizable, i.e., there exists $\bar{X} = [\bar{x}_1, \cdots, \bar{x}_n]$ such that $\|\bar{x}_i - \bar{x}_j\|_2 = \bar{d}_{ij}$ and $\|\bar{x}_i - a_k\|_2 = \bar{e}_{ik}$.

Assumption 4.2. The graph $G(\mathcal{A})$ is connected.

If $G(\mathcal{A})$ is not connected, then it can be decomposed into several connected components; on each component, we can solve an independent sensor network localization problem.

For convenience of notation, let $\varepsilon = (\epsilon_{ij}, \epsilon_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$ be the perturbation and let $\delta$ be the maximum error
\[
\delta = \|\varepsilon\|_\infty = \max\Bigl( \max_{(i,j)\in\mathcal{A}} |\epsilon_{ij}|,\ \max_{(i,k)\in\mathcal{B}} |\epsilon_{ik}| \Bigr).
\]
Without loss of generality, assume $\varepsilon = \varepsilon(t) = t\tau$ for some constant vector $\tau = (\tau_{ij}, \tau_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$ with $\|\tau\|_\infty = 1$. Here $t$ is a parameter belonging to the interval $[0, \delta]$.


Let $y^*$ be one optimal solution to (3.10)-(3.12) with distance data $D = (d_{ij}, e_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$, and let $X^* = [x^*_1, \cdots, x^*_n]$ be the sensor locations extracted from its moment matrices. Let $\bar{y}$ be one optimal solution to (3.10)-(3.12) for the true distance data $\bar{D} = (\bar{d}_{ij}, \bar{e}_{ik})_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}}$, and let $\bar{X} = [\bar{x}_1, \cdots, \bar{x}_n]$ be the true sensor locations. The goal of this section is to estimate the errors $\|x^*_i - \bar{x}_i\|$ ($1 \leq i \leq n$).

Notice that the coefficients $f^{ij}_{\alpha}, f^{ik}_{\beta}$ in (3.10) are linear functions of the vector
\[
\bigl(d_{ij}^2,\ d_{ij}^4,\ e_{ik}^2,\ e_{ik}^4\bigr)_{(i,j)\in\mathcal{A},\,(i,k)\in\mathcal{B}},
\]

and hence they are also functions of $t$; in fact they are univariate polynomials in $t$ of degree four. Denote the objective function in (3.10) by $L(y, \varepsilon(t))$. We can see that $L(y, \varepsilon(t))$ is linear with respect to $y$, but nonlinear (a quartic polynomial) in $\varepsilon(t)$. In terms of the parameter $t$, problem (3.10)-(3.12) becomes
\[
f^*(\varepsilon(t)) := \min_{y}\ L(y, \varepsilon(t)) \tag{4.1}
\]
\[
\text{s.t.}\quad M_2(\pi_{ij}(y)) \succeq 0,\ \forall\,(i,j)\in\mathcal{A}, \tag{4.2}
\]
\[
\phantom{\text{s.t.}\quad} y_{(0,\cdots,0)} = 1. \tag{4.3}
\]
Denote by $y(t)$ the optimal solution of the above. Then $y(0) = \bar{y}$ and $y(\delta) = y^*$.

Assumption 4.3. For each $t \in [0, \delta]$, the solution $y(t)$ to problem (4.1)-(4.3) is unique, and all the moment matrices $M_2(\pi_{ij}(y(t)))$ have rank one.

Remark 4.4. If some moment matrix $M_2(\pi_{ij}(y(t)))$ has rank more than one, it is usually because either the SOS relaxation (3.7)-(3.9) fails ($f^{**}_{sos} < f^*$, which is very rare in our computational experience) or there is more than one global solution to (1.7). In the latter case, the sensor locations cannot be uniquely determined from the given distance data $D$; the solution set is not a singleton and its error analysis is more complicated, so we do not discuss that case here.

Under Assumption 4.3, we can extract a unique minimizer $X(t) = [x_1(t), \cdots, x_n(t)]$ from the moment matrices $M_2(\pi_{ij}(y(t)))$. Then we have $X(0) = \bar{X}$ and $X(\delta) = X^*$.

Lemma 4.5. As $t \to 0$, it holds that $\|X(t) - \bar{X}\|_2 \to 0$.

Proof. First, we prove that $X(t)$ must be bounded as $t \to 0$. Otherwise, there exists a sequence $\{t_k\}$ converging to zero such that $\|X(t_k)\|_2 \to \infty$. Under Assumption 4.3, $X(t)$ is the unique global minimizer of the polynomial $f(X)$ built from the perturbed data $\varepsilon(t)$. So we have
\[
f(X(t)) \leq f(0) = \sum_{(i,j)\in\mathcal{A}} d_{ij}^4 + \sum_{(i,k)\in\mathcal{B}} e_{ik}^4
\leq (1 + \delta)^4 \Bigl( \sum_{(i,j)\in\mathcal{A}} \bar{d}_{ij}^4 + \sum_{(i,k)\in\mathcal{B}} \bar{e}_{ik}^4 \Bigr),
\]
which implies that $\{f(X(t_k))\}$ is bounded. Let $\mathcal{I}_0 = \{i \in [n] : \exists\, \ell\ (i,\ell) \in \mathcal{B}\}$. If for some $i \in \mathcal{I}_0$, $x_i(t_k)$ goes to infinity as $t_k \to 0$, then
\[
f(X(t_k)) \geq \bigl( \|x_i(t_k) - a_\ell\|_2^2 - e_{i\ell}^2 \bigr)^2 \to \infty,
\]
which is impossible. So for every $i \in \mathcal{I}_0$, the sequence $\{x_i(t_k)\}_{k=1}^{\infty}$ is bounded. Now let
\[
\mathcal{I}_1 = \mathcal{I}_0 \cup \bigl\{ 1 \leq i \leq n : \exists\, j \in \mathcal{I}_0 \ \text{s.t.}\ (i,j) \in \mathcal{A} \ \text{or}\ (j,i) \in \mathcal{A} \bigr\}.
\]
Since the graph $G(\mathcal{A})$ is connected, $\mathcal{I}_1$ contains $\mathcal{I}_0$ properly if $\mathcal{I}_0 \neq [n]$. If for some $i \in \mathcal{I}_1 \setminus \mathcal{I}_0$, $x_i(t_k)$ goes to infinity, then there exists $j \in \mathcal{I}_0$ with
\[
f(X(t_k)) \geq \bigl( \|x_i(t_k) - x_j(t_k)\|_2^2 - d_{ij}^2 \bigr)^2 \to \infty,
\]
which is also impossible since $\{x_j(t_k)\}$ is bounded. So for every $i \in \mathcal{I}_1$, $\{x_i(t_k)\}$ is bounded. Repeating this process, we obtain a chain of strictly increasing subsets of $[n]$,
\[
\mathcal{I}_0 \subsetneq \mathcal{I}_1 \subsetneq \mathcal{I}_2 \subsetneq \mathcal{I}_3 \subsetneq \cdots,
\]
such that for every $i \in \mathcal{I}_\ell$, $\{x_i(t_k)\}$ is bounded. Since each $\mathcal{I}_\ell$ is a subset of the finite set $[n]$, there must exist an integer $r$ such that $\mathcal{I}_r = [n]$. Thus for every $i \in [n]$, $x_i(t_k)$ is bounded, which contradicts $\|X(t_k)\|_2 \to \infty$.

Now we show that $\|X(t) - \bar{X}\|_2 \to 0$. Otherwise, there exists a sequence $\{t_k\} \to 0$ such that $X(t_k) \to \widehat{X} \neq \bar{X}$. Since $X(t)$ is bounded as $t \to 0$, the minimizers of $f(X, \varepsilon(t))$ must be contained in a ball $B(0, R)$ for $R$ large enough. So $f^*(\varepsilon(t)) = \min\{f(X, \varepsilon(t)) : X \in B(0, R)\}$. By continuity, we know $f^*(\varepsilon(t_k)) \to f^*(\varepsilon(0))$. By Assumption 4.1, the sensor network localization problem is localizable for $\varepsilon(0) = 0$, so $f^*(0) = 0 = f(\bar{X})$. Since $X(t_k) \to \widehat{X}$, we also get $f(\widehat{X}) = 0$. This implies that problem (4.1)-(4.3) has two distinct optimal solutions, which is not possible by Assumption 4.3. $\Box$

Now we are able to derive the error bound.

Theorem 4.6. For problem (4.1)-(4.3), assume the strong second order sufficient conditions hold and the optimal solution set of the linearized (with respect to $\varepsilon(t)$) problem is nonempty (see Section 4.1 in [33]). Then, under Assumption 4.3, it holds that:

(i) $\|X^* - \bar{X}\|_F \leq L \cdot \delta$ for some constant $L$;

(ii) $f^{**}_{mom} \leq \delta^2 (2 + \delta)^2 \Bigl\{ \sum_{(i,j)\in\mathcal{A}} \bar{d}_{ij}^4 + \sum_{(i,k)\in\mathcal{B}} \bar{e}_{ik}^4 \Bigr\}$.

Proof. (i) We prove this by applying Theorem 4.1.12 in [33]. Consider $(y, \varepsilon)$ as $(x, u)$ in Theorem 4.1.12 of [33]; we just check that the assumptions there hold. First, by Lemma 4.5, we know that $X(t) \to \bar{X}$. Second, by the proof of Theorem 3.4, there exists a vector $y$ such that $M_2(\pi_{ij}(y)) \succ 0$ for every $(i,j) \in \mathcal{A}$, so the Robinson constraint qualification holds at $\bar{y}$ for $\varepsilon = 0$. Third, the other two assumptions of Theorem 4.1.12 in [33] are also satisfied here. Thus we have $\|y(t) - \bar{y}\|_2 = O(t)$.

By Assumption 4.3, the minimizer $X(t)$ can be read off directly from the vector $y(t)$ (see the extraction technique in Section 2.2): the entries $x_i(t)$ are the entries of $y(t)$ whose indices correspond to monomials of degree one. So we have $\|X(t) - \bar{X}\|_F \leq \|y(t) - \bar{y}\|_2 = O(t)$. Letting $t = \delta$ gives $\|X^* - \bar{X}\|_F = O(\delta)$, and hence claim (i) holds.

(ii) Since $y^*$ is optimal, we have
\[
f^{**}_{mom} \leq f(\bar{X}) = \sum_{(i,j)\in\mathcal{A}} \bigl( \|\bar{x}_i - \bar{x}_j\|_2^2 - d_{ij}^2 \bigr)^2 + \sum_{(i,k)\in\mathcal{B}} \bigl( \|\bar{x}_i - a_k\|_2^2 - e_{ik}^2 \bigr)^2.
\]
Then it holds that
\[
f^{**}_{mom} \leq \sum_{(i,j)\in\mathcal{A}} \bar{d}_{ij}^4 \bigl( (1+\epsilon_{ij})^2 - 1 \bigr)^2 + \sum_{(i,k)\in\mathcal{B}} \bar{e}_{ik}^4 \bigl( (1+\epsilon_{ik})^2 - 1 \bigr)^2
\leq \delta^2 (2 + \delta)^2 \Bigl( \sum_{(i,j)\in\mathcal{A}} \bar{d}_{ij}^4 + \sum_{(i,k)\in\mathcal{B}} \bar{e}_{ik}^4 \Bigr),
\]
which justifies claim (ii). $\Box$

Theorem 4.6 shows that, under some technical assumptions, the perturbed solution is accurate to within a constant factor of the perturbation error occurring in the distance data $D$. This is observed in Example 5.3.


5 Some numerical simulations

In this section, we show some numerical experiments solving sensor network localization via SOS methods. As discussed in Section 3, the general SOS solvers cannot be applied to solve large problems, so we use the sparse SOS relaxation (3.7)-(3.12) introduced in Section 3.1.

We randomly generate test problems similar to those given in [9], and then test the performance of the SOS relaxation (3.7)-(3.12). Here 500 sensors $x^*_1, \cdots, x^*_{500}$ ($n = 500$) are randomly generated from the unit square $[-0.5, 0.5] \times [-0.5, 0.5]$. The anchors ($a_1, a_2, a_3, a_4$; $m = 4$) are chosen to be the four points $(\pm 0.45, \pm 0.45)$. We apply the sparse SOS relaxation (3.7)-(3.12) to find the sensor locations. The computed locations are denoted by $\hat{x}_1, \cdots, \hat{x}_n$. The accuracy of the computed locations is measured by the Root Mean Square Distance (RMSD), defined as
\[
\mathrm{RMSD} = \Bigl( \frac{1}{n} \sum_{i=1}^{n} \|\hat{x}_i - x^*_i\|_2^2 \Bigr)^{\frac{1}{2}}.
\]
We use the RMSD and the consumed CPU time to measure performance. The SOS relaxation (3.7)-(3.12) is solved by the sparse SDP solver SeDuMi [28]. The computations are all implemented on a Linux machine with 0.98 GB memory and a 1.46 GHz CPU. For these randomly generated problems, the SDP relaxation (1.2)-(1.3) cannot be implemented due to the shortage of memory, but the SOS relaxation (3.7)-(3.12) can be solved efficiently.

Example 5.1. Randomly generate 500 sensors x∗1, · · · , x∗500 from the unit square [−0.5, 0.5] × [−0.5, 0.5]. The edge set A is chosen as follows. Initially set A = ∅. Then, for each i from 1 to 500, compute the set Ii = {j ∈ [500] : ‖x∗i − x∗j‖2 ≤ 0.3, j ≥ i}; if |Ii| ≥ 10, let Ai be the subset of Ii consisting of the 10 smallest integers; otherwise, let Ai = Ii; then let A = A ∪ {(i, j) : j ∈ Ai}. The edge set B is chosen such that B = {(i, k) ∈ [n] × [m] : ‖x∗i − ak‖2 ≤ 0.3}, i.e., every anchor is connected to all the sensors within distance 0.3. For every (i, j) ∈ A and (i, k) ∈ B, let the distances be

\[
d_{ij} = \| x_i^* - x_j^* \|_2, \qquad e_{ik} = \| x_i^* - a_k \|_2.
\]

There are no errors in the distances. The computed results are plotted in Figure 2. The true sensor locations (denoted by circles) and the computed locations (denoted by stars) are connected by solid lines.
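The instance generation described above can be sketched roughly as follows; the random seed and variable names are illustrative, and we let j range strictly above i so that the trivial pair (i, i), which carries a zero distance, is excluded.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility only
n, radio_range, max_degree = 500, 0.3, 10

# Sensors uniform in the unit square; anchors at the four points (+/-0.45, +/-0.45).
sensors = rng.uniform(-0.5, 0.5, size=(n, 2))
anchors = np.array([[0.45, 0.45], [0.45, -0.45], [-0.45, 0.45], [-0.45, -0.45]])

A, B = [], []
for i in range(n):
    # Indices j > i within the radio range; keep at most max_degree smallest ones.
    I_i = [j for j in range(i + 1, n)
           if np.linalg.norm(sensors[i] - sensors[j]) <= radio_range]
    A += [(i, j) for j in I_i[:max_degree]]
    # Every anchor within the radio range of sensor i is connected to it.
    B += [(i, k) for k in range(len(anchors))
          if np.linalg.norm(sensors[i] - anchors[k]) <= radio_range]

# Exact (noise-free) distances attached to the edges.
d = {(i, j): np.linalg.norm(sensors[i] - sensors[j]) for (i, j) in A}
e = {(i, k): np.linalg.norm(sensors[i] - anchors[k]) for (i, k) in B}
```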

From Figure 2, we find that all the stars are located inside circles, which implies that the SOS relaxation provides high quality locations. In this example, all the moment matrices M2(ij(y∗)) in (3.10)-(3.12) have numerical rank one, so every Vi has cardinality one. By Theorem 3.9, X∗ = [x∗1, · · · , x∗n] with x∗i ∈ Vi is the global minimizer of (1.7) and hence a solution to sensor network localization. The RMSD for SOS relaxation (3.7)-(3.12) is 2.9 · 10−6 (the computed locations would be exact if we ignored the rounding errors of floating point operations). Computing the coefficients for SOS relaxation (3.7)-(3.12) and the preprocessing of SeDuMi take about 4045 CPU seconds (1.12 hours). The interior point method in SeDuMi consumes about 1079 CPU seconds (18 minutes). We generated this random example 20 times; each time the RMSD is of order O(10−6) and the CPU time consumed by SOS relaxation (3.7)-(3.12) is almost the same.
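The numerical-rank-one test mentioned above can be carried out, for instance, by counting singular values above a relative tolerance; the tolerance ratio used below is a hypothetical choice, not a value specified in the paper.

```python
import numpy as np

def numerical_rank(M, tol_ratio=1e-6):
    """Number of singular values of M exceeding tol_ratio times the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol_ratio * s[0]))

# A moment matrix is accepted as numerically rank one when numerical_rank(M) == 1;
# the sensor coordinates are then read off from its degree-one entries (Section 2.3).
```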

In Example 5.1, the sets A and B are sufficient to determine the sensor locations uniquely. However, if A and B do not contain enough edges, some sensors may be uniquely localizable while others are not. In such situations, the SOS relaxation (3.7)-(3.12) is able to find the correct locations for those sensors that are uniquely localizable, and gives estimates for the other sensors that are not. Let us see another example.


Figure 2: 500 sensors, sufficient edges, SOS relaxation (3.7)-(3.12) (RMSD = 2.9e-6).

Example 5.2. We generate random test problems in almost the same way as in Example 5.1, except for the following: if |Ii| ≥ 3, let Ai be the subset of Ii consisting of the 3 smallest integers; otherwise, let Ai = Ii. Then the number of edges might not be sufficient to determine the sensor locations. Assume there are no distance errors. The computed results are plotted in Figure 3. The true sensor locations (denoted by circles) and the computed locations (denoted by stars) are connected by solid lines.

From Figure 3, we can see that most stars are located inside circles, while a few are outside. The RMSD for SOS relaxation (3.7)-(3.12) is 0.0118. For most edges (i, j), the moment matrices M2(ij(y∗)) satisfy the flat extension condition; only 22 moment matrices M2(ij(y∗)) do not satisfy it. Accordingly, only 22 sensor locations are not correct (with error greater than 10−2), and all the others are correct (with accuracy 10−4). The computational time consumed is now less than in the previous example. Computing the coefficients for SOS relaxation (3.7)-(3.12) and the preprocessing of SeDuMi take about 315 CPU seconds (5.25 minutes). The interior point method in SeDuMi consumes about 718 CPU seconds (11.9 minutes). From Section 3.2, we know the SOS relaxation has complexity O(n3 + n|A|). This example has a much smaller set A than the previous one, so it takes much less processing time to initialize SeDuMi.

The above two examples show that the SOS relaxation (3.7)-(3.12) can solve large scale sensor network localization problems very efficiently and accurately. We conclude this section with one example demonstrating the relationship between the distance error and the sensor location error.


Figure 3: 500 sensors, insufficient edges, SOS relaxation (3.7)-(3.12) (RMSD = 1.1e-2).

Example 5.3. Choose A, B similarly as in Example 5.1. This usually makes the sensor network localization problem have a unique solution. Then perturb the distances as follows:
\[
d_{ij} = \| x_i^* - x_j^* \|_2 \, (1 + \epsilon \cdot \mathrm{randn}), \qquad
e_{ik} = \| x_i^* - a_k \|_2 \, (1 + \epsilon \cdot \mathrm{randn}),
\]
where ǫ ∈ {10−4, 5 · 10−4, 10−3, 5 · 10−3, 10−2, 5 · 10−2, 0.1, 0.2, 0.3} and randn ∼ N(0, 1). The computed errors RMSD are listed in the following table:

ǫ       0.0001    0.0005    0.0010    0.0050    0.0100    0.0500    0.1000    0.2000    0.3000
RMSD    3.1e-5    1.8e-4    2.8e-4    0.0017    0.0031    0.0195    0.0420    0.0799    0.1852

The plot of RMSD/ǫ versus the distance error ǫ is shown in Figure 4.

We can see that the sensor location error RMSD has the same magnitude as the distance errorǫ, which is consistent with the error analysis in Section 4.
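A small sketch of the perturbation model of Example 5.3 is given below; treating randn as an independent standard normal sample per edge, as well as the function name, seed, and toy data, are illustrative assumptions.

```python
import numpy as np

def perturb_distances(d, e, eps, rng):
    """Multiply every exact distance by (1 + eps * randn) with randn ~ N(0, 1)."""
    d_noisy = {key: val * (1.0 + eps * rng.standard_normal()) for key, val in d.items()}
    e_noisy = {key: val * (1.0 + eps * rng.standard_normal()) for key, val in e.items()}
    return d_noisy, e_noisy

# Toy distance data standing in for the dictionaries d, e built from an instance.
d = {(0, 1): 0.21, (1, 2): 0.17}
e = {(0, 0): 0.28}

# Sweep the noise levels used in Example 5.3; for each level one would solve the
# SOS relaxation (3.7)-(3.12) with the noisy data and record the resulting RMSD.
for eps in [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 0.1, 0.2, 0.3]:
    d_noisy, e_noisy = perturb_distances(d, e, eps, np.random.default_rng(0))
```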

6 Conclusions

In this paper, we formulate the sensor network localization problem as finding the global minimizer(s) of a quartic polynomial. Exploiting the special features of this polynomial, we propose a sparse SOS relaxation and discuss its properties. Under some technical assumptions, we show that the computed sensor locations are accurate within a constant factor of the perturbation error. Our numerical simulations show that this SOS relaxation can solve large localization problems efficiently and accurately.


Figure 4: The plot of RMSD/ǫ versus the accuracy parameter ǫ.


One natural concern is the reliability of SOS relaxation (3.7)-(3.12). By Theorem 3.5, we know the SOS relaxation is exact whenever the given distance data admit a solution. Furthermore, under the flat extension condition, by Theorem 3.9, we can also find the true sensor locations (more than one solution can be returned if the localization is not unique). If the given distances have errors, (1.7) can be considered as a least squares problem. However, as discussed in Section 3, our SOS relaxation is not guaranteed to find the true sensor locations for every instance, since the problem is NP-hard. To increase our confidence in the SOS relaxation, one may replace the polynomial in (1.7) by the randomly weighted polynomial

\[
\sum_{(i,j)\in A} \xi_{ij} \left( \| x_i - x_j \|_2^2 - d_{ij}^2 \right)^2
+ \sum_{(i,k)\in B} \varsigma_{ik} \left( \| x_i - a_k \|_2^2 - e_{ik}^2 \right)^2,
\]
where ξij, ςik are random positive numbers.
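As a rough illustration, the randomly weighted objective could be assembled as below; drawing ξij and ςik uniformly from a positive interval is only one possible choice, since the argument above requires positivity only, and all names and toy data here are placeholders.

```python
import numpy as np

def weighted_objective(X, anchors, d, e, xi, sigma):
    """Randomly weighted quartic objective; X holds one sensor location per row."""
    val = 0.0
    for (i, j), dij in d.items():
        val += xi[(i, j)] * (np.linalg.norm(X[i] - X[j]) ** 2 - dij ** 2) ** 2
    for (i, k), eik in e.items():
        val += sigma[(i, k)] * (np.linalg.norm(X[i] - anchors[k]) ** 2 - eik ** 2) ** 2
    return val

# One random positive weight per edge, e.g. uniform on [0.001, 1).
rng = np.random.default_rng(0)
d = {(0, 1): 0.21}                      # toy distance data for illustration
e = {(0, 0): 0.28}
xi = {key: rng.uniform(1e-3, 1.0) for key in d}
sigma = {key: rng.uniform(1e-3, 1.0) for key in e}
```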

One important and interesting direction for future work is to study the geometric meaning of the flat extension condition. The perturbation error analysis when the localization solution is not unique is also interesting.

Acknowledgement. The author would like to thank Pratik Biswas, Jim Demmel, Laurent El Ghaoui, Bernd Sturmfels, Paul Tseng and Yinyu Ye for helpful discussions.

References

[1] J. Aspnes, D. Goldberg and Y.R. Yang. On the computational complexity of sensor network localization. Lecture Notes in Computer Science (3121), Springer-Verlag, 2004, pp. 32-44.

[2] S. J. Benson, Y. Ye, and X. Zhang. Solving large-scale sparse semidefinite programs for combinatorial optimization. SIAM Journal on Optimization, 10 (2000), pp. 443-461.


[3] S. J. Benson and Y. Ye. DSDP3: Dual scaling algorithm for general positive semidefinite programming. Tech. Report ANL/MCS-P851-1000, Mathematics and Computer Science Division, Argonne National Laboratory, Feb. 2001.

[4] S. J. Benson and Y. Ye. DSDP5: A software package implementing the dual-scaling algorithm for semidefinite programming. Tech. Report ANL/MCS-TM-255, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, June 2002.

[5] J. Blair and B. Peyton. An introduction to chordal graphs and clique trees. In J. George, J. Gilbert, and J. Liu, editors, Graph Theory and Sparse Matrix Computations, pages 1-30, Springer Verlag, 1993.

[6] G. Blekherman. Volumes of nonnegative polynomials, sums of squares, and powers of linear forms, preprint, arXiv:math.AG/0402158.

[7] L. Blumenthal. Theory and applications of distance geometry, Chelsea Publishing Company, Bronx, New York, 1970.

[8] P. Biswas and Y. Ye. Semidefinite programming for ad hoc wireless sensor network localization. Proc. 3rd IPSN, pp. 46-54, 2004.

[9] P. Biswas, T.C. Liang, K.C. Toh, T.C. Wang and Y. Ye. Semidefinite Programming Approaches for Sensor Network Localization with Noisy Distance Measurements. IEEE Transactions on Automation Science and Engineering, 3:4 (2006), 360-371.

[10] R.E. Curto and L.A. Fialkow. The truncated complex K-moment problem. Trans. Amer. Math. Soc., 352 (2000), 2825-2855.

[11] L. Doherty, K. S. J. Pister and L. El Ghaoui. Convex Position Estimation in Wireless Sensor Networks. Proc. 20th IEEE Infocom, Vol. 3 (2001), 1655-1663.

[12] K. Gatermann and P. Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Journal of Pure and Appl. Algebra, Vol. 192, No. 1-3, pp. 95-128, 2004.

[13] D. Henrion and J. Lasserre. GloptiPoly: Global optimization over polynomials with Matlab and SeDuMi. ACM Trans. Math. Soft., 29:165-194, 2003.

[14] D. Henrion and J. Lasserre. Detecting global optimality and extracting solutions in GloptiPoly. In Positive Polynomials in Control, D. Henrion and A. Garulli, eds., Lecture Notes on Control and Information Sciences, Springer Verlag, 2005.

[15] N. Krislock, V. Piccialli and H. Wolkowicz. Robust semidefinite programming approaches for sensor network localization with anchors. CORR 2006-12, May 2006. http://orion.uwaterloo.ca/~hwolkowi/.

[16] M. Kojima, S. Kim and H. Waki. Sparsity in Sums of Squares of Polynomials. Mathematical Programming, Vol. 103 (1), 45-62.

[17] J. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11 (2001), No. 3, 796-817.

[18] J. More and Z. Wu. Global continuation for distance geometry problems. SIAM Journal on Optimization, 7 (1997), 814-836.

[19] J. Nie, J. Demmel and B. Sturmfels. Minimizing Polynomials via Sum of Squares over the Gradient Ideal. Mathematical Programming, Series A, Vol. 106 (2006), No. 3, 587-606.

[20] P. Parrilo and B. Sturmfels. Minimizing polynomial functions. Proceedings of the DIMACS Workshop on Algorithmic and Quantitative Aspects of Real Algebraic Geometry in Mathematics and Computer Science (March 2001), (eds. S. Basu and L. Gonzalez-Vega), American Mathematical Society, 2003, pp. 83-100.

[21] P. Parrilo. Semidefinite Programming relaxations for semialgebraic problems. Mathematical Programming, Ser. B 96 (2003), No. 2, 293-320.


[22] P. Parrilo. Exploiting structure in sum of squares programs. Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, 2003.

[23] S. Prajna, A. Papachristodoulou and P. Parrilo. SOSTOOLS User's Guide. Website: http://www.mit.edu/~parrilo/SOSTOOLS/.

[24] B. Reznick. Extremal psd forms with few terms. Duke Math. J., 45, 363-374 (1978).

[25] B. Reznick. Some concrete aspects of Hilbert's 17th problem. In Contemporary Mathematics, volume 253, pages 251-272. American Mathematical Society, 2000.

[26] J. Saxe. Embeddability of weighted graphs in k-space is strongly NP-hard. In Proc. 17th Allerton Conference in Communications, Control, and Computing, Monticello, IL, 1979, 480-489.

[27] A. Man-Cho So and Y. Ye. The theory of semidefinite programming for sensor network localization. To appear in Math. Prog. Website: http://www.stanford.edu/~yyye.

[28] J.F. Sturm. SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11&12 (1999), 625-653.

[29] B. Sturmfels. Solving systems of polynomial equations. American Mathematical Society, CBMS Regional Conference Series, No. 97, Providence, Rhode Island, 2002.

[30] P. Tseng. Second-order cone programming relaxation of sensor network localization. August 2005, to appear in SIAM Journal on Optimization.

[31] H. Waki, S. Kim, M. Kojima and M. Muramatsu. Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity. SIAM Journal on Optimization, Vol. 17 (1), 218-242 (2006).

[32] H. Waki, S. Kim, M. Kojima and M. Muramatsu. SparsePOP: a sparse semidefinite programming relaxation of polynomial optimization problems. http://www.is.titech.ac.jp/~kojima.

[33] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of semidefinite programming. Kluwer Academic Publishers, 2000.
