


Efficient Algorithms for Geometric Optimization

PANKAJ K. AGARWAL

Duke University

AND

MICHA SHARIR

Tel Aviv University

We review the recent progress in the design of efficient algorithms for various problems in geometric optimization. We present several techniques used to attack these problems, such as parametric searching, geometric alternatives to parametric searching, prune-and-search techniques for linear programming and related problems, and LP-type problems and their efficient solution. We then describe a wide range of applications of these and other techniques to numerous problems in geometric optimization, including facility location, proximity problems, statistical estimators and metrology, placement and intersection of polygons and polyhedra, and ray shooting and other query-type problems.

Categories and Subject Descriptors: A.1 [General]: Introductory and Survey; F.2.2 [Theory of Computation]: Analysis of Algorithms and Problems—Geometrical problems and computations; I.1.2 [Computing Methodologies]: Algorithms—Analysis of algorithms

General Terms: Algorithms, Design

Additional Key Words and Phrases: Clustering, collision detection, linear programming, matrix searching, parametric searching, proximity problems, prune-and-search, randomized algorithms

1. INTRODUCTION

Combinatorial optimization typically deals with problems of maximizing or minimizing a function of one or more variables subject to a large number of inequality (and equality) constraints. Many problems can be formulated as combinatorial optimization problems, which has made this a very active area

Both authors are supported by a grant from the U.S.-Israeli Binational Science Foundation. P. K. Agarwal has also been supported by National Science Foundation Grant CCR-93-01259, Army Research Office MURI grant DAAH04-96-1-0013, a Sloan fellowship, and an NYI award and matching funds from Xerox Corp. M. Sharir has also been supported by NSF Grants CCR-94-24398 and CCR-93-11127, a Max-Planck Research Award, and a grant from the G.I.F., the German-Israeli Foundation for Scientific Research and Development. A preliminary version of this article appeared as: P. K. Agarwal and M. Sharir, Algorithmic techniques for geometric optimization, in Computer Science Today: Recent Trends and Developments, LNCS 1000, J. van Leeuwen, Ed., 1995, pp. 234–253.

Authors' addresses: P. K. Agarwal, Center for Geometric Computing, Department of Computer Science, Box 90129, Duke University, Durham, NC 27708-0129; M. Sharir, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel, and Courant Institute of Mathematical Sciences, New York University, New York, NY 10012.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.

© 1999 ACM 0360-0300/99/1200–0412 $5.00

ACM Computing Surveys, Vol. 30, No. 4, December 1998


of research during the past half century. In many applications, the underlying optimization problem involves a constant number of variables and a large number of constraints that are induced by a given collection of geometric objects; we refer to such problems as geometric-optimization problems. In such cases one expects that faster and simpler algorithms can be developed by exploiting the geometric nature of the problem. Much work has been done on geometric-optimization problems during the last 20 years, and many new, elegant, and sophisticated techniques have been developed and successfully applied to them. The aim of this article is to survey the main techniques and applications of this kind.

The first part of this survey describes several general techniques that have led to efficient algorithms for a variety of geometric-optimization problems, the most notable of which is linear programming. The second part lists many geometric applications of these techniques and discusses some of them in more detail.

The first technique that we present is called parametric searching. Although restricted versions of parametric searching existed earlier (see, e.g., Eisner and Severance [1976]), the full-scale technique was presented by Megiddo [1979, 1983a] in the late 1970s and early 1980s. The technique was originally motivated by so-called parametric-optimization problems in combinatorial optimization, and did not receive much attention from the computational-geometry community until the late 1980s. In the last decade, though, it has become one of the major techniques for solving geometric-optimization problems efficiently. We outline the technique in detail in Section 2, first exemplifying it on the slope-selection problem [Cole et al. 1989], and then presenting various extensions of the technique.

Despite its power and versatility, parametric searching has certain drawbacks, which we discuss next. Consequently, there have been several recent attempts to replace parametric searching by alternative techniques, including randomization,¹ expander graphs,² geometric cuttings [Agarwal et al. 1993c; Bronnimann and Chazelle 1994], and matrix searching.³ We present these alternative techniques in Section 3.

Almost concurrently with the development of the parametric-searching technique, Megiddo [1983b, 1984] devised another ingenious technique for solving linear programming and several related optimization problems. This technique, now known as decimation or prune-and-search, was later refined and extended by Dyer [1984], Clarkson [1986], and others. The technique can be viewed as an optimized version of parametric searching, in which certain special properties of the problem allow one to improve further the efficiency of the algorithm. For example, this technique yields linear-time deterministic algorithms for linear programming and for several related problems, including the smallest-enclosing-ball problem, when the dimension is fixed. (However, the dependence of the running time of these algorithms on the dimension is at best exponential.) We illustrate the technique in Section 4 by applying it to linear programming.

In the past decade, randomized algorithms have been developed for a variety of problems in computational geometry and in other fields; see, for example, the books by Mulmuley [1994] and by Motwani and Raghavan [1995]. Clarkson [1995] and Seidel [1991] give randomized algorithms for linear programming, whose expected time is linear in any fixed dimension, and which are much simpler than their earlier deterministic counterparts. The dependence on the dimension of the running time of these algorithms is better (although still exponential). Actually, Clarkson's technique is rather general, and is also applicable to a variety of other geometric-optimization problems. We describe this technique in Section 5.

¹ Please see Agarwal and Sharir [1996a], Chan [1998], Clarkson and Shor [1989], and Matousek [1991b].
² Please see Ajtai and Megiddo [1996], Katz [1995], and Katz and Sharir [1993, 1997].
³ Please see Frederickson [1991], Frederickson and Johnson [1982, 1983, 1984], and Glozman et al. [1995].

Further significant progress in linear programming was made in the beginning of the 1990s, when new randomized algorithms for linear programming were obtained independently by Kalai [1992] and by Matousek et al. [1996; Sharir and Welzl 1992] (these two algorithms are essentially dual versions of the same technique). The expected number of arithmetic operations performed by these algorithms is subexponential in the input size, and is still linear in any fixed dimension, so the algorithms constitute an important step toward the still-open goal of obtaining strongly polynomial algorithms for linear programming. (Recall that the polynomial-time algorithms by Khachiyan [1980] and by Karmarkar [1984] are not strongly polynomial, as the number of arithmetic operations performed by these algorithms depends on the size of the coefficients of the input constraints.) This new technique is presented in Section 6. The algorithm in Matousek et al. [1996] and Sharir and Welzl [1992] is actually formulated in a general abstract framework, which fits not only linear programming but many other problems as well. Such LP-type problems are also reviewed in Section 6, including the connection, recently noted by Amenta [1994a,b], between abstract linear programming and Helly-type theorems.

In the second part of this article, we survey many geometric applications of the techniques described in the first part. These applications include problems involving facility location (e.g., finding p congruent disks of smallest possible radius whose union covers a given planar point set), geometric proximity (e.g., computing the diameter of a point set in three dimensions), statistical estimators and metrology (e.g., computing the smallest-width annulus that contains a given planar point set), placement and intersection of polygons and polyhedra (e.g., finding the largest similar copy of a convex polygon that fits inside a given polygonal environment), and query-type problems (e.g., the ray-shooting problem, in which we want to preprocess a given collection of objects, so that the first object hit by a query ray can then be determined efficiently).

Numerous nongeometric optimization problems have also benefited from the techniques presented here (see Agarwala and Fernandez-Baca [1996], Cohen and Megiddo [1993], Frederickson [1991], Gusfield et al. [1994], and Norton et al. [1992] for a sample of such applications), but we focus only on geometric applications.

Although the common theme of most of the applications reviewed here is that they can be solved efficiently using parametric-searching, prune-and-search, LP-type, or related techniques, each of them requires a problem-specific, and often fairly sophisticated, approach. For example, the heart of a typical application of parametric searching is the design of efficient sequential and parallel algorithms for solving the appropriate problem-specific "decision procedure" (see the following for details). We provide details of these solutions for some of the problems, but omit them for most of the applications due to lack of space.

PART I: TECHNIQUES

The first part of the survey describes several techniques commonly used in geometric-optimization problems. We describe each technique and illustrate it by giving an example.

2. PARAMETRIC SEARCHING

We begin by outlining the parametric-searching technique, then illustrate it with an example of its application, and finally discuss various extensions of parametric searching.



2.1 Outline of the Technique

The parametric-searching technique of Megiddo [1979, 1983a] can be described in the following general terms (which are not as general as possible, but suffice for our purposes). Suppose we have a decision problem P(λ) that depends on a real parameter λ, and is monotone in λ, meaning that if P(λ_0) is true for some λ_0, then P(λ) is true for all λ < λ_0. Our goal is to find λ*, the maximum λ for which P(λ) is true, assuming such a maximum exists. Suppose further that P(λ) can be solved by a (sequential) algorithm A_s that takes λ and a set of data objects (independent of λ) as the input, and that, as a byproduct, A_s can also determine whether the given λ is equal to, smaller than, or larger than the desired value λ* (in general, the monotonicity of P(λ) makes this a rather easy task). Assume moreover that the control flow of A_s is governed by comparisons, each of which amounts to testing the sign of some low-degree polynomial in λ.

Megiddo's technique then runs A_s generically at the unknown optimum λ* and maintains an open interval I that is known to contain λ*. Initially, I is the whole line. Whenever A_s reaches a branching point that depends on some comparison with an associated polynomial p(λ), it computes all the real roots λ_1, λ_2, . . . of p and computes P(λ_i) by running (the standard, nongeneric version of) A_s with the value λ = λ_i. If one of the λ_i is equal to λ*, we stop, since we have found the value of λ*. Otherwise, we have determined the open interval (λ_i, λ_{i+1}) that contains λ*. Since the sign of p(λ) remains the same for all λ ∈ (λ_i, λ_{i+1}), we can compute the sign of p(λ*) by evaluating, say, p((λ_i + λ_{i+1})/2). The sign of p(λ*) also determines the outcome of the comparison at λ*. We now set I to be I ∩ (λ_i, λ_{i+1}), and the execution of the generic A_s is resumed. As we proceed through this execution, each comparison that we resolve constrains the range where λ* can lie even further. We thus obtain a sequence of progressively smaller intervals, each known to contain λ*, until we either reach the end of A_s with a final interval I, or hit λ* at one of the comparisons of A_s. Since the value of P(λ) changes at λ*, the generic algorithm will always make a comparison whose associated polynomial vanishes at λ*, which will then cause the overall algorithm to terminate with the desired value of λ*.

If A_s runs in time T_s and makes C_s comparisons, then, in general, the cost of the procedure just described is O(C_s T_s), and is thus generally quadratic in the original complexity. To speed up the execution, Megiddo proposes to implement the generic algorithm by a parallel algorithm A_p (under Valiant's [1975] comparison model of computation; see the following). If A_p uses P processors and runs in T_p parallel steps, then each parallel step involves at most P independent comparisons; that is, we do not need to know the outcome of such a comparison to be able to execute other comparisons in the same batch. We can then compute the O(P) roots of all the polynomials associated with these comparisons, and perform a binary search to locate λ* among them, using (the nongeneric) A_s at each binary step. The cost of simulating a parallel step of A_p is thus O(P + T_s log P) (there is no need to sort the O(P) roots; instead, one can use repeated median finding, whose overall cost is only O(P)), for a total running time of O(P T_p + T_p T_s log P). In most cases, the second term dominates the running time.
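As a concrete illustration of one step of this simulation, the following sketch (ours, not from the article; the names `locate` and `decide` are illustrative assumptions) narrows the interval containing λ* using the roots produced by one batch of independent comparisons, calling the monotone decision procedure O(log P) times:

```python
def locate(roots, decide, lo, hi):
    """Narrow an interval (lo, hi) known to contain lam*, given the real
    roots of the comparison polynomials from one batch of independent
    comparisons.  decide(lam) is the (nongeneric) decision procedure:
    True iff lam <= lam*.  Makes O(log len(roots)) calls to decide."""
    rs = sorted(r for r in roots if lo < r < hi)
    a, b = 0, len(rs)
    while a < b:                     # binary search for the largest
        m = (a + b) // 2             # root that is still <= lam*
        if decide(rs[m]):
            a = m + 1
        else:
            b = m
    new_lo = rs[a - 1] if a > 0 else lo   # lam* lies in [new_lo, new_hi)
    new_hi = rs[a] if a < len(rs) else hi
    return new_lo, new_hi
```

Once the interval is narrowed, every comparison in the batch is resolved by the side of the returned interval on which its root lies.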

This technique can be generalized in many ways. For example, given a concave function f(λ), we can compute the value λ* of λ that maximizes f(λ), provided we have sequential and parallel algorithms for the decision problem (of comparing λ* with a specific λ); these algorithms compute f(λ) and its derivative, from which the relative location of the maximum of f is easy to determine. We can compute λ* even if the decision algorithm A_s cannot distinguish between, say, λ < λ* and λ = λ*; in this case we maintain a half-closed interval I containing λ*, and the right endpoint of the final I gives the desired value λ*. See Agarwal and Sharir [1994], Cole et al. [1987], and Toledo [1991, 1993a] for these and some other generalizations.
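For intuition, in one dimension the concave-maximization variant reduces to bisecting on the sign of the derivative, which plays the role of the decision procedure. A minimal numerical sketch (ours, not the article's, assuming f is differentiable and `dfdx` is available):

```python
def argmax_concave(dfdx, lo, hi, eps=1e-9):
    """Locate the maximizer of a concave differentiable function on
    [lo, hi] by bisecting on the sign of its derivative dfdx:
    dfdx(lam) > 0 means lam < lam*; dfdx(lam) <= 0 means lam >= lam*."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if dfdx(mid) > 0:
            lo = mid        # maximum lies to the right of mid
        else:
            hi = mid        # maximum lies at or to the left of mid
    return (lo + hi) / 2.0
```

For f(λ) = −(λ − 2)², with dfdx(λ) = −2(λ − 2), the search converges to λ* = 2.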

It is instructive to observe that parallelism is used here in a very weak sense, since the overall algorithm remains sequential and just simulates the parallel algorithm. The only feature that we require is that the parallel algorithm perform only a small number of batches of comparisons, and that the comparisons in each batch are independent of each other. We can therefore assume that the parallel algorithm runs in the parallel comparison model of Valiant [1975], which measures parallelism only in terms of the comparisons being made, and ignores all other operations, such as bookkeeping, communication between the processors, and data allocation. Moreover, any portion of the algorithm that is independent of λ can be performed sequentially in any manner we like. These observations simplify the technique considerably in many cases.

2.2 An Example: The Slope-Selection Problem

As an illustration, consider the slope-selection problem: given a set of n points in the plane and an integer k ≤ (n choose 2), determine a segment connecting two input points that has the kth smallest slope among all such segments. Using the duality transform [Edelsbrunner 1987], we can formulate this problem as follows. We are given a set L of n nonvertical lines in the plane and an integer 1 ≤ k ≤ (n choose 2), and we wish to find an intersection point between two lines of L that has the kth smallest x-coordinate. (We assume, for simplicity, general position of the lines, so that no three lines are concurrent, and no two intersection points have the same x-coordinate.) We are thus seeking the kth leftmost vertex of the arrangement A(L) of the lines in L; see Edelsbrunner [1987] and Sharir and Agarwal [1995] for more details concerning arrangements.

We define P(λ) to be true if the x-coordinates of at most k vertices of A(L) are smaller than or equal to λ. Obviously, P(λ) is monotone, and λ*, the maximum value of λ for which P(λ) is true, is the x-coordinate of the desired vertex. After having computed λ*, the actual vertex is rather easy to compute, and in fact the algorithm described in the following can compute the vertex without any additional work. In order to apply the parametric-searching technique, we need an algorithm that, given a vertical line ℓ: x = λ, can compare λ with λ*. Let k_λ be the number of vertices of A(L) whose x-coordinates are less than or equal to λ. If k_λ < k, then clearly λ < λ*. If k_λ ≥ k, then λ > λ* (respectively, λ = λ*) if and only if no (respectively, one) vertex lies on the vertical line x = λ.

Let (ℓ_1, ℓ_2, . . . , ℓ_n) denote the sequence of lines in L sorted in decreasing order of their slopes, and let (ℓ_π(1), ℓ_π(2), . . . , ℓ_π(n)) denote the sequence of these lines sorted by their intercepts with x = λ. An easy observation is that two lines ℓ_i, ℓ_j, with i < j, intersect to the left of x = λ if and only if π(i) > π(j). In other words, the number of intersection points to the left of x = λ can be counted, in O(n log n) time, by counting the number of inversions in the permutation π, using a straightforward tree-insertion procedure [Knuth 1973]. Moreover, we can implement this inversion-counting procedure by a parallel sorting algorithm that takes O(log n) parallel steps and uses O(n) processors (e.g., the one in Ajtai et al. [1983]). Hence, we can count the number of inversions in O(n log n) time sequentially, or in O(log n) parallel time using O(n) processors. Plugging these algorithms into the parametric-searching paradigm, we obtain an O(n log³ n)-time algorithm for the slope-selection problem.
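The sequential decision procedure is easy to implement directly. The sketch below (our illustration, with hypothetical helper names; lines are assumed to be given as (slope, intercept) pairs with distinct slopes) counts the vertices of the arrangement to the left of x = λ via a merge-sort inversion count:

```python
def count_inversions(a):
    """Merge-sort based inversion count, O(n log n)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, li = count_inversions(a[:mid])
    right, ri = count_inversions(a[mid:])
    merged, inv = [], li + ri
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i    # all remaining left entries exceed right[j]
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inv

def vertices_left_of(lines, lam):
    """Number of pairwise intersection points strictly left of x = lam.
    lines: list of (slope, intercept) pairs; assumes distinct slopes.
    In decreasing-slope order, a pair has already crossed to the left of
    x = lam exactly when its y-order at lam is inverted."""
    lines = sorted(lines, key=lambda l: -l[0])
    ys = [a * lam + b for a, b in lines]
    return count_inversions(ys)[1]
```

For example, for the lines y = x, y = 0, and y = −x + 2 (vertices at x = 0, 1, 2), `vertices_left_of` at λ = 1.5 reports two vertices.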



2.3 Improvements and Extensions

Cole [1987] observed that in certain applications of parametric searching, including the slope-selection problem, the running time can be improved to O((P + T_s)T_p), as follows. Consider a parallel step of the preceding generic algorithm. Suppose that, instead of invoking the decision procedure O(log P) times in this step to resolve all comparisons, we call it only O(1) times, say, three times. This will determine the outcome of 7/8 of the comparisons, and will leave 1/8 of them unresolved (we assume here, for simplicity, that each comparison has only one critical value of λ where its outcome changes; this is the case in the slope-selection problem). Suppose further that each of the unresolved comparisons can influence only a constant (and small) number, say, two, of comparisons executed at the next parallel step. Then 3/4 of these comparisons can still be simulated generically with the currently available information. This leads to a modified scheme that mixes the parallel steps of the algorithm, since we now have to perform together new comparisons and yet unresolved old comparisons. Nevertheless, Cole shows that if carefully implemented (by assigning an appropriate time-dependent weight to each unresolved comparison, and by choosing the weighted median at each step of the binary search), the number of parallel steps of the algorithm increases only by an additive logarithmic term, which leads to the stated improvement. An ideal setup for Cole's improvement is when the parallel algorithm is described as a circuit (or network), each of whose gates has a constant fanout. Since sorting can be implemented efficiently by such a network [Ajtai et al. 1983], Cole's technique is applicable to problems whose decision procedure is based on sorting.

Cole's idea therefore improves the running time of the slope-selection algorithm to O(n log² n). (Note that the only step of the algorithm that depends on λ is the sorting that produces π. The subsequent inversion-counting step is independent of λ, and can be executed sequentially. In fact, this step can be totally suppressed in the generic execution, since it does not provide us with any additional information about λ.) Using additional machinery, Cole et al. [1989] gave an optimal O(n log n)-time solution. They observe that one can compare λ* with a value λ that is far away from λ* in a faster manner, by counting inversions only approximately. This approximation is progressively refined as λ approaches λ* in subsequent comparisons. Cole et al. show that the overall cost of O(log n) calls to the approximating decision procedure is only O(n log n), so this also bounds the running time of the whole algorithm. This technique was subsequently simplified in Bronnimann and Chazelle [1994]. Chazelle et al. [1994] have shown that the algorithm of Cole et al. [1989] can be extended to compute, in O(n log n) time, the kth leftmost vertex in an arrangement of n line segments.

The slope-selection problem is only one of many problems in geometric optimization that have been efficiently solved using parametric searching. See Agarwal et al. [1993a,c], Agarwal and Matousek [1993], Agarwal and Sharir [1994], Chazelle et al. [1993], and Pellegrini [1996] for a sample of other problems, many of which are described in Part II, that benefit from parametric searching.

The parametric-searching technique can be extended to higher dimensions in a natural manner. Suppose we have a d-variate (strictly) concave function F(λ), where λ varies over R^d. We wish to compute λ* ∈ R^d at which F(λ) attains its maximum value. Let A_s, A_p be, as previously, sequential and parallel algorithms that can compute F(λ_0) for any given λ_0. As previously, we run A_p generically at λ*. Each comparison involving λ now amounts to evaluating the sign of a d-variate polynomial p(λ_1, . . . , λ_d), and each parallel step requires resolving P such independent comparisons at λ*. Resolving a comparison is now more difficult because p(λ_1, . . . , λ_d) = 0 is now a (d − 1)-dimensional variety. Cohen and Megiddo [1993] described a recursive procedure to execute a parallel step for the case in which the polynomial corresponding to each of the comparisons is a linear function. The total running time in simulating A_p, using their procedure, is 2^O(d²) T_s (T_p log P)^d. Agarwala and Fernandez-Baca [1996] improved the running time slightly by extending Cole's idea to multidimensional parametric searching; see also Megiddo [1984] and Papadimitriou [1981]. The running time was further improved by Agarwal et al. [1993c] to d^O(d) T_s (T_p log P)^d. Later, Toledo [1993b] extended these techniques to comparisons involving nonlinear polynomials, using Collins's [1975] cylindrical algebraic decomposition. The total running time of his procedure is O(T_s (T_p log P)^(2d−1)). For the sake of completeness, we present these rather technical higher-dimensional extensions in the Appendix.

3. ALTERNATIVE APPROACHES TO PARAMETRIC SEARCHING

Despite its power and versatility, the parametric-searching technique has some shortcomings.

(1) Parametric searching requires the design of an efficient parallel algorithm for the generic version of the decision procedure. This is not always easy, even though it only needs to be done in the weak comparison model, and it often tends to make the overall solution quite complicated and impractical.

(2) The generic algorithm requires exact computation of the roots of the polynomials p(λ) whose signs determine the outcome of the comparisons made by the algorithm. In general, the roots of a polynomial cannot be computed exactly; therefore, one has to rely on computational-algebra techniques to isolate the roots of p(λ) and to determine the sign of p(λ*) without computing the roots explicitly. These techniques are rather expensive.

(3) Finally, from an aesthetic point of view, the execution of an algorithm based on parametric searching may appear to be somewhat chaotic. Such an algorithm neither gives any insight into the problem, nor does its execution resemble any "intuitive" flow of execution for solving the problem.

These shortcomings have led several researchers to look for alternative approaches to parametric searching for geometric-optimization problems. Roughly speaking, parametric searching effectively conducts an implicit binary search over a set Λ = {λ_1, . . . , λ_t} of critical values of the parameter λ, to locate the optimum λ* among them. (For example, in the slope-selection problem, the critical values are the Θ(n²) x-coordinates of the vertices of the arrangement A(L).) The power of the technique stems from its ability to perform the binary search by generating only a small number of critical values during the search, without computing the entire set Λ explicitly. In this section we describe some alternative ways of performing such a binary search that also generate only a small set of critical values.

3.1 Randomization

Randomization is a natural approach to performing an implicit binary search over the critical values [Matousek 1991b; Varadarajan 1996; Zemel 1987]. Suppose we know that λ* lies in some interval I = [a, b]. Suppose further that we can randomly choose an element λ_0 ∈ I ∩ Λ, where each item is chosen with probability 1/|I ∩ Λ|. Then it follows that, by comparing λ* with a few randomly chosen elements of I ∩ Λ (i.e., by executing the decision algorithm at these values), we can shrink I to an interval I′ that is guaranteed to contain λ* and that is expected to contain significantly fewer critical values.
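In outline, the randomized search looks as follows. This toy sketch (ours, with hypothetical names) takes the critical set explicitly, whereas the real algorithms only ever sample from it implicitly; `decide(lam)` is the monotone decision procedure, True iff lam ≤ λ*:

```python
import random

def shrink(critical, decide, lo, hi, goal=16):
    """Randomized binary search over a set of critical values.
    Repeatedly compares lam* with a random critical value in (lo, hi)
    and shrinks the interval until few candidates remain."""
    cand = [c for c in critical if lo < c < hi]
    while len(cand) > goal:
        pivot = random.choice(cand)
        if decide(pivot):
            lo = pivot          # lam* lies to the right of pivot
        else:
            hi = pivot          # lam* lies to the left of pivot
        cand = [c for c in cand if lo < c < hi]
    return lo, hi, cand
```

Each comparison discards, in expectation, a constant fraction of the remaining candidates, so an expected O(log |Λ|) calls to the decision procedure suffice.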

The difficult part is, of course, choosing a random element from I ∩ Λ. In many cases, a procedure for computing |I ∩ Λ| can be converted into a procedure for generating a random element of I ∩ Λ. For example, in the slope-selection problem, given a set L of n lines in the plane and a vertical strip W = (l, r) × R, an inversion-counting algorithm for computing the number of vertices of A(L) within W can be used to generate a multiset of q random vertices of A(L) ∩ W in time O(n log n + q) [Matousek 1991b]. Based on this observation, Matousek [1991b] obtained the following simple slope-selection algorithm. Each step of the algorithm maintains a vertical strip W(a, b) = {(x, y) | a ≤ x ≤ b} that is guaranteed to contain the kth leftmost vertex; initially a = −∞ and b = +∞. Let m be the number of vertices of A(L) lying inside W. We repeat the following step until the algorithm terminates.

If m ≤ n, the kth leftmost vertex of A(L) can be computed in O(n log n) time by a sweep-line algorithm (through W). Otherwise, set k* to be the number of vertices lying to the left of the line x = a. Let j = (k − k*) · n/m, j_a = j − 3√n, and j_b = j + 3√n. We choose n random vertices of A(L) lying inside W(a, b). If the kth leftmost vertex lies in W(j_a, j_b) and the vertical strip W(j_a, j_b) contains at most cm/√n vertices, for some appropriate constant c > 0, we set a = j_a, b = j_b, and repeat this step. Otherwise, we discard the random sample of vertices and draw a new sample. It can be shown, using Chernoff's bound [Motwani and Raghavan 1995], that the expected running time of the preceding algorithm is O(n log n).
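The counting primitive underlying these algorithms can be made concrete: two lines cross inside a vertical strip exactly when their top-to-bottom orders along the two strip boundaries differ, so the number of vertices of A(L) inside the strip equals the number of inversions between the two orders, which a merge-sort count delivers in O(n log n) time. The sketch below is only an illustration of this idea (the function names are ours, and degeneracies such as coincident heights on a strip boundary are not handled carefully):

```python
def count_inversions(seq):
    """Number of pairs (i, j) with i < j and seq[i] > seq[j], by merge sort."""
    if len(seq) <= 1:
        return 0, list(seq)
    mid = len(seq) // 2
    inv_left, left = count_inversions(seq[:mid])
    inv_right, right = count_inversions(seq[mid:])
    inv, merged, i, j = inv_left + inv_right, [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            inv += len(left) - i   # every remaining left element exceeds right[j]
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return inv, merged

def vertices_in_strip(lines, a, b):
    """Number of vertices of the arrangement of `lines` (pairs (slope, intercept))
    whose x-coordinate lies strictly between a and b."""
    order_at_a = sorted(lines, key=lambda l: (l[0] * a + l[1], l[0]))
    heights_at_b = [s * b + t for (s, t) in order_at_a]
    inv, _ = count_inversions(heights_at_b)
    return inv
```

For the three lines y = 0, y = x, and y = −x + 2, whose pairwise intersections lie at x = 0, 1, and 2, the strip (−1, 3) contains all three vertices, while (0.5, 1.5) contains exactly one.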

Shafer and Steiger [1993] give a slightly different O(n log n) expected-time algorithm for the slope-selection problem. They choose a random subset of u = O(n log n) vertices of A(L). Let a_1, a_2, . . . , a_u be the x-coordinates of these vertices. Using the algorithm by Cole et al. [1989] for counting the number of inversions approximately, they determine in O(n log n) time the vertical strip W(a_i, a_{i+1}) that contains the kth leftmost vertex of A(L). They prove that, with high probability, W(a_i, a_{i+1}) contains only O(n) vertices of A(L), and therefore the desired vertex can be computed in an additional O(n log n) time by a sweep-line algorithm. See Dillencourt et al. [1991] for yet another randomized slope-selection algorithm. In the following we mention a few more applications of this randomized approach.

Clarkson and Shor [1989] gave another randomized technique for solving a variety of geometric-optimization problems. Originally, they had proposed the algorithm for computing the diameter of a set of points in R³ (see Section 8.1). Their algorithm was extended by Agarwal and Sharir [1996a]. Chan [1998] used a different randomized technique, based on the following simple observation. Suppose we wish to compute the minimum of n given numbers. If we examine these numbers in a random order then, whereas each number needs to be compared in turn against the current minimum, the current minimum will be reset only O(log n) times in expectation. Chan used this simple observation to solve combinatorial-optimization problems whose input can be decomposed into a collection of subproblems of smaller size, so that the optimal value for the whole problem is the minimum of the values of the subproblems. His algorithm processes the subproblems in a random order. For each subproblem, the algorithm compares its optimal value with the current minimum; this, however, amounts to solving the corresponding decision problem. The expected number of subproblems for which the optimum value is actually computed (to reset the current minimum) is only O(log n). Each of these values is computed recursively. Chan showed that in many instances the expected running time of this technique is asymptotically the same as that of the decision procedure.
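The random-order observation at the heart of Chan's technique is easy to state in code. In the toy sketch below (our own illustration, not Chan's actual algorithm), `decide(sub, t)` stands for the cheap decision procedure ("is the optimum of this subproblem smaller than t?") and `solve(sub)` for the expensive computation of the subproblem's optimum; over n subproblems in random order, `solve` is invoked only about ln n times in expectation:

```python
import random

def random_order_minimum(subproblems, decide, solve, rng):
    """Compute min(solve(s) for s in subproblems), calling the expensive
    `solve` only when the cheap `decide` reports that the current minimum
    must change.  The expected number of solve() calls is O(log n)."""
    order = list(subproblems)
    rng.shuffle(order)
    best, full_solves = None, 0
    for sub in order:
        if best is None or decide(sub, best):
            best = solve(sub)        # expensive: full optimum of this subproblem
            full_solves += 1
    return best, full_solves
```

With plain numbers standing in for subproblems (so that `decide` is just a comparison and `solve` is the identity), a thousand "subproblems" trigger only a handful of full solves.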

3.2 Expanders and Cuttings

In many cases, the preceding randomized approach can be derandomized, without affecting the asymptotic running time, using techniques based on expanders or on geometric cuttings. Expander graphs are special graphs that can be constructed deterministically (in a rather simple manner), have a linear number of edges, and share many properties with random graphs; see Alon and Spencer [1993] for more details. For example, in the slope-selection problem, we can construct expander graphs whose vertices are the given lines and whose edges correspond to appropriate vertices of the arrangement (each edge corresponds to the intersection of the two lines that it connects). If we search among these vertices for the optimal λ*, we obtain a slab that contains the vertex of A(L) whose x-coordinate is λ* and that does not contain any of the expander-induced vertices. One can then show, using properties of expander graphs, that the number of vertices of A(L) within the slab is rather small, so that the search for λ* within the slab can proceed in a more efficient manner. This is similar to the randomized solution in Matousek [1991b]. (The construction of the specific expander graph is somewhat involved, and is described in Katz and Sharir [1993].)

Although expanders have been extensively used in many areas, including parallel computation, complexity theory, communication networks, and VLSI routing, their applications in computational geometry have been rather sparse so far. Ajtai and Megiddo [1996] gave an efficient parallel linear-programming algorithm based on expanders, and later Katz [1995] and Katz and Sharir [1993, 1997] applied expanders to solve several geometric-optimization problems, including the application to the slope-selection problem just mentioned.

Chazelle et al. [1993] developed an O(n log² n)-time deterministic algorithm for the slope-selection problem, using cuttings.⁴ The bound was subsequently improved by Bronnimann and Chazelle [1994] to O(n log n). For the sake of completeness, we give a brief sketch of the algorithm by Chazelle et al. [1993]. The algorithm works in O(log n) stages. In the beginning of the jth stage, we have a vertical strip W_j, which contains λ*, and a triangulation T_j of W_j. For each triangle Δ ∈ T_j, we store the vertices of Δ, the subset L_Δ of lines in L that intersect the interior of Δ, and the intersection points of L_Δ with the edges of Δ. We refer to the vertices of T_j and the intersection points of L_Δ with the edges of Δ, over all Δ ∈ T_j, as critical points. The algorithm maintains the following two invariants.

(C1) The total number of critical points is at most c_1 n, for some constant c_1 > 0.

(C2) For every triangle Δ ∈ T_j, |L_Δ| ≤ n/c_2^j, where c_2 ≥ 2 is a constant.

By (C1) and (C2), W_j contains O(n) vertices of A(L) for j > log n, so we can then find λ* by a sweep-line algorithm. We set W_1 to be a vertical strip containing all the vertices of A(L), and T_1 consists of a single unbounded triangle, namely, W_1 itself. Suppose we are at the beginning of the jth stage. For every triangle Δ with |L_Δ| > n/c_2^j, we compute a (1/4c_2)-cutting Ξ_Δ of L_Δ, clip each triangle τ ∈ Ξ_Δ within Δ, retriangulate τ ∩ Δ, and compute the critical points for the new triangulation of W_j. If the total number of critical points after this refinement is at most c_1 n, we move to the next stage. Otherwise, we shrink the strip W_j as follows. We choose a critical point with the median x-coordinate, say x = λ_m, and, using the decision algorithm described in Section 2.2, determine whether λ* is greater than, smaller than, or equal to λ_m. If λ* = λ_m, then we stop; otherwise we shrink W_j = (l, r) × R to either (l, λ_m) × R or (λ_m, r) × R, depending on whether λ* is smaller or larger than λ_m. In either case, the new strip contains only half of the critical points. After repeating this procedure a constant number of times, we can ensure that the number of critical points in the current strip is at most c_1 n/4. We set W_{j+1} to this strip, clip the refined triangulation T′_j within W_{j+1}, retriangulate every clipped triangle, and merge two triangles if their union is a triangle intersecting at most n/c_2^j lines of L. The total number of critical points after these steps can be proved to be at most c_1 n. As shown in Chazelle et al. [1993], each stage takes O(n log n) time, so the overall running time is O(n log² n). Using the same idea as in Cole et al. [1989] (of counting the number of inversions approximately), Bronnimann and Chazelle [1994] managed to improve the running time to O(n log n).

⁴ A (1/r)-cutting for a set H of n hyperplanes in R^d is a partition Ξ of R^d into O(r^d) simplices with pairwise disjoint interiors, such that the interior of each simplex intersects at most n/r hyperplanes of H. For any given r, a (1/r)-cutting of size O(r^d) always exists and can be computed in O(nr^{d−1}) time [Chazelle 1993].

3.3 Matrix Searching

An entirely different alternative to parametric searching was proposed by Frederickson and Johnson [Frederickson 1991; Frederickson and Johnson 1983, 1984]; it is based on searching in sorted matrices. It is applicable in cases where the set Λ of candidate critical values for the optimum parameter λ* can be represented in an n × n matrix A, each of whose rows and columns is sorted. The size of the matrix is too large for an explicit binary search through its elements, so an implicit search is needed. Here we assume that each entry of the matrix A can be computed in O(1) time. We give a brief sketch of this matrix-searching technique.

Let us assume that n = 2^k for some k ≥ 0. The algorithm works in phases. The first phase, which consists of k steps, maintains a collection of disjoint submatrices of A so that λ* is guaranteed to be an element of one of these submatrices. In the beginning of the ith step, for i ≥ 0, the algorithm has at most B_i = 2^{i+2} − 1 matrices, each of size 2^{k−i+1} × 2^{k−i+1}. The ith step splits every such matrix into four square submatrices, each of size 2^{k−i} × 2^{k−i}, and discards some of these submatrices, so as to be left with only B_{i+1} matrices. After the kth step, we are left with O(n) singleton matrices, so we can perform a binary search on these O(n) critical values to obtain λ*.

The only nontrivial step in the preceding algorithm is determining which of the submatrices should be discarded in each step of the first phase, so that at most B_{i+1} matrices are left after the ith step. After splitting each submatrix, we construct two sets U and V: U is the set of the smallest (i.e., upper-leftmost) elements of the submatrices, and V is the set of their largest (i.e., bottom-rightmost) elements. We choose the median elements λ_U and λ_V of U and V, respectively. We run the decision algorithm at λ_U and λ_V, to compare them with λ*. If either of them is equal to λ*, we are done. Otherwise, there are the following cases to consider.

(1) If λ_U < λ*, we discard all those matrices whose largest elements are smaller than λ_U.

(2) If λ_U > λ*, we discard all those matrices whose smallest elements are larger than λ_U; at least half of the matrices are discarded in this case.

(3) If λ_V < λ*, we discard all those matrices whose largest elements are smaller than λ_V; at least half of the matrices are discarded in this case.

(4) If λ_V > λ*, we discard all those matrices whose smallest elements are larger than λ_V.

It can be shown that this pruning step retains at most B_{i+1} submatrices [Frederickson 1991; Frederickson and Johnson 1983, 1984], as desired. In conclusion, we can find the optimum λ* by executing only O(log n) calls to the decision procedure, so the running time of this matrix-searching technique is O(log n) times the cost of the decision procedure. The technique, when applicable, is both more efficient and simpler than standard parametric searching.
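A compact (and simplified) rendering of this split-and-prune scheme is sketched below, for an implicitly given sorted matrix and a decision procedure decide(t) reporting whether t is smaller than, equal to, or larger than λ*. The names, the padding to a power of two, and the final scan over the surviving singletons are our own simplifications of the technique described above:

```python
def sorted_matrix_search(f, n, decide):
    """Find lambda* among the entries of an implicit n x n matrix whose rows
    and columns are sorted in increasing order; f(i, j) returns entry (i, j)
    and decide(t) returns -1, 0, or +1 according as t < lambda*,
    t == lambda*, or t > lambda*."""
    INF = float('inf')
    size = 1
    while size < n:                       # conceptually pad n to a power of two
        size *= 2
    def entry(i, j):
        return f(i, j) if i < n and j < n else INF
    mats = [(0, 0, size)]                 # (top row, left column, side length)
    while mats[0][2] > 1:
        quads = []                        # split every submatrix into quadrants
        for (r, c, s) in mats:
            h = s // 2
            quads += [(r, c, h), (r, c + h, h), (r + h, c, h), (r + h, c + h, h)]
        mats = quads
        smallest = sorted(entry(r, c) for (r, c, s) in mats)
        largest = sorted(entry(r + s - 1, c + s - 1) for (r, c, s) in mats)
        for t in (smallest[len(mats) // 2], largest[len(mats) // 2]):
            side = decide(t)
            if side == 0:
                return t
            if side < 0:    # t < lambda*: submatrices lying entirely below t are out
                mats = [(r, c, s) for (r, c, s) in mats
                        if entry(r + s - 1, c + s - 1) >= t]
            else:           # t > lambda*: submatrices lying entirely above t are out
                mats = [(r, c, s) for (r, c, s) in mats
                        if entry(r, c) <= t]
    for (r, c, s) in mats:                # search the surviving singletons
        t = entry(r, c)
        if t < INF and decide(t) == 0:
            return t
    return None
```

Since the submatrix containing λ* always survives both pruning rules, the search terminates with λ* among the singletons.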

Aggarwal et al. [Aggarwal and Klawe 1987; Aggarwal et al. 1987, 1990; Aggarwal and Park 1988] studied a different matrix-searching technique for optimization problems. They gave a linear-time algorithm for computing the minimum (or maximum) element of every row of a totally monotone matrix; a matrix A = {a_{i,j}} is called totally monotone if a_{i1,j1} < a_{i1,j2} implies that a_{i2,j1} < a_{i2,j2}, for any 1 ≤ i1 < i2 ≤ m and 1 ≤ j1 < j2 ≤ n. Totally monotone matrices arise in many geometric, as well as nongeometric, optimization problems. For example, the farthest neighbors of all vertices of a convex polygon and the geodesic diameter of a simple polygon can be computed in linear time using such matrices [Aggarwal et al. 1987; Hershberger and Suri 1993].
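The monotone structure is easy to exploit even without the full linear-time machinery: total monotonicity implies that the column index of the leftmost row minimum never decreases as the row index grows, which yields a simple O((m + n) log m) divide-and-conquer, a weaker but much simpler cousin of the linear-time (SMAWK) algorithm of Aggarwal et al. The code and names below are ours:

```python
def row_minima(f, m, n):
    """Index of the leftmost minimum of every row of an implicit m x n
    totally monotone matrix; f(i, j) returns entry (i, j)."""
    result = [0] * m
    def solve(top, bottom, lo, hi):
        # rows top..bottom-1; their minima are known to lie in columns lo..hi-1
        if top >= bottom:
            return
        mid = (top + bottom) // 2
        best = min(range(lo, hi), key=lambda j: (f(mid, j), j))
        result[mid] = best
        solve(top, mid, lo, best + 1)       # rows above: minima in columns <= best
        solve(mid + 1, bottom, best, hi)    # rows below: minima in columns >= best
    solve(0, m, 0, n)
    return result
```

For instance, f(i, j) = (j − i)² is a Monge (hence totally monotone) matrix, and its row minima lie on the diagonal.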

4. THE PRUNE-AND-SEARCH TECHNIQUE AND LINEAR PROGRAMMING

Like parametric searching, the prune-and-search (or decimation) technique also performs an implicit binary search over the finite set of candidate values for λ*, but, while doing so, it also tries to eliminate input objects that are guaranteed not to affect the value of λ*. Each phase of the technique eliminates a constant fraction of the remaining objects. Therefore, after a logarithmic number of steps, the problem size becomes a constant, and the problem can be solved in a final, brute-force step. Because of the "decimation" of input objects, the overall cost of the resulting algorithm remains proportional to the cost of the first pruning phase. The prune-and-search technique was originally introduced by Megiddo [1983b, 1984], in developing a linear-time algorithm for linear programming with n constraints in two and three dimensions. Later he extended the approach to obtain an O(2^{2^d} n)-time algorithm for linear programming in R^d. Since then the prune-and-search technique has been applied to many other geometric-optimization problems. We illustrate the technique by describing Megiddo's two-dimensional linear-programming algorithm.

We are given a set H = {h_1, . . . , h_n} of n halfplanes and a vector c, and we wish to minimize cx over the feasible region K = ∩_{i=1}^n h_i. Without loss of generality, assume that c = (0, 1) (i.e., we seek the lowest point of K). Let L denote the set of lines bounding the halfplanes of H, and let L⁺ (respectively, L⁻) denote the subset of lines ℓ_i ∈ L whose associated halfplane h_i lies below (respectively, above) ℓ_i. For simplicity, we assume that L contains no vertical lines; nevertheless, it is easy to modify the algorithm to handle vertical lines as well. The algorithm pairs up the lines of L into disjoint pairs (ℓ_1, ℓ_2), (ℓ_3, ℓ_4), . . . , so that either both lines in a pair belong to L⁺ or both belong to L⁻. The algorithm computes the intersection points of the lines in each pair, and chooses the median, x_m, of their x-coordinates. Let x* denote the x-coordinate of the optimal (i.e., lowest) point of K (if such a point exists). The algorithm then uses a linear-time decision procedure (whose details are omitted here, although some of them are discussed in the following) that determines whether x_m = x*, x_m < x*, or x_m > x*. If x_m = x*, we stop, since we have found the optimum. Suppose that x_m < x*. If (ℓ, ℓ′) is a pair of lines, both of which belong to L⁻ and whose intersection point lies to the left of x_m, then we can discard the line with the smaller slope from any further consideration, because that line is known to pass below the optimal point of K. All other cases can be treated in a fully symmetric manner, so we have managed to discard about n/4 lines.

We have thus computed, in O(n) time, a subset H′ ⊆ H of about 3n/4 constraints such that the optimal point of K′ = ∩_{h∈H′} h is the same as that of K.


We now apply the whole procedure once again to H′, and keep repeating this (for O(log n) stages) until either the number of remaining lines falls below some small constant, in which case we solve the problem by brute force (in constant time), or the algorithm hits x* accidentally, in which case it stops right away. (We omit here the description of the linear-time decision procedure, and of the handling of cases in which K is empty or unbounded; see Edelsbrunner [1987] and Megiddo [1983b, 1984] for details.) It is now easy to see that the overall running time of the algorithm is O(n).
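As an illustration, the sketch below implements the special case in which every halfplane lies above its bounding line, so the task reduces to minimizing the upper envelope g(x) = max_i (a_i x + b_i). It follows the pairing/median/discard pattern just described, but, for clarity, runs the decision procedure over all original lines (giving O(n log² n) rather than Megiddo's O(n)); it assumes the minimum exists and occurs at a crossing of two lines, and it does not handle degenerate inputs:

```python
import statistics

def lowest_feasible_point(constraints):
    """Lowest point of K = {(x, y) : y >= a*x + b for all (a, b) in constraints}."""
    constraints = list(constraints)

    def g(x):                      # the upper envelope
        return max(a * x + b for (a, b) in constraints)

    def side(x):                   # decision: optimum left of (-1), at (0), right of (+1) x
        top = g(x)
        slopes = [a for (a, b) in constraints if abs(a * x + b - top) < 1e-9]
        if min(slopes) > 0:
            return -1
        if max(slopes) < 0:
            return 1
        return 0

    lines = list(constraints)
    while len(lines) > 3:
        survivors, pairs = [], []
        it = iter(lines)
        for l1, l2 in zip(it, it):
            if l1[0] == l2[0]:     # parallel pair: only the upper line matters
                survivors.append(max(l1, l2, key=lambda l: l[1]))
            else:
                pairs.append((l1, l2, (l2[1] - l1[1]) / (l1[0] - l2[0])))
        if len(lines) % 2:
            survivors.append(lines[-1])
        if not pairs:
            lines = survivors
            continue
        xm = statistics.median(x for (_, _, x) in pairs)
        s = side(xm)
        if s == 0:
            return xm, g(xm)
        for l1, l2, x in pairs:
            low, high = sorted((l1, l2))          # order the pair by slope
            if s < 0 and x >= xm:
                survivors.append(low)     # the high-slope line passes below the optimum
            elif s > 0 and x <= xm:
                survivors.append(high)    # the low-slope line passes below the optimum
            else:
                survivors.extend((l1, l2))
        lines = survivors

    # the optimum is at a crossing of two surviving lines; evaluate the true envelope
    xs = [(l2[1] - l1[1]) / (l1[0] - l2[0])
          for i, l1 in enumerate(lines) for l2 in lines[i + 1:] if l1[0] != l2[0]]
    x_best = min(xs, key=g)
    return x_best, g(x_best)
```

Because a discarded line is strictly below its surviving partner at every minimizer, the two lines that determine the optimum always survive, so the final brute-force step over the few remaining crossings is safe.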

Remark. It is instructive to compare this method with the parametric-searching technique in the context of two-dimensional linear programming. In both approaches, the decision procedure aims to compare some given x_0 with the optimal value x*. This is done by computing the maximum and minimum values of the intercepts of the lines of L⁻ and of L⁺, respectively, with the line x = x_0. A trivial method for computing the maximum and minimum in parallel proceeds in a binary-tree manner, computing the maximum or minimum of pairs of lines, then of pairs of pairs, and so on. Both techniques begin by implementing generically the first parallel step of this decision procedure. The improved performance of the prune-and-search algorithm stems from the realization that (a) there is no need to perform the full binary search over the critical values of the comparisons in that stage, since a single binary search step suffices (this is similar to Cole's enhancement of parametric searching, mentioned previously), and (b) this single comparison allows us to discard a quarter of the given lines, so there is no point in continuing the generic simulation; it is better to restart the whole algorithm from scratch with the surviving lines. From this point of view, the prune-and-search technique can be regarded as an optimized variant of parametric searching.

This technique can be extended to higher dimensions, although it becomes more complicated, and it requires recursive invocations of the algorithm on subproblems in lower dimensions. It yields a deterministic algorithm for linear programming that runs in O(C_d n) time, where C_d is a constant depending on d. One of the difficult steps in higher dimensions is to develop a procedure that can discard a fraction of the input constraints from further consideration by invoking the (d − 1)-dimensional linear-programming algorithm a constant number of times; the value of C_d depends on this constant. The original approach of Megiddo [1984] gives C_d = 2^{2^d}, which was improved by Clarkson [1986] and Dyer [1986] to 3^{d²}. Their procedure can be simplified and improved using geometric cuttings, as follows (see Agarwal et al. [1993c] and Dyer and Frieze [1989]). Let H be the set of hyperplanes bounding the input constraints. Choose r to be a constant, and compute a (1/r)-cutting Ξ for H. By invoking the (d − 1)-dimensional linear-programming algorithm recursively O(r^d) times (at most three times for each hyperplane h supporting a facet of a simplex in Ξ: on h itself, and on two hyperplanes parallel to h, one on each side and lying very close to h), one can determine the simplex Δ of Ξ that contains x*. The constraints whose bounding hyperplanes do not intersect Δ can be discarded, because they do not determine the optimum value. We then solve the problem recursively on the remaining n/r constraints. Dyer and Frieze [1989] (see also Agarwal et al. [1993c]) have shown that the number of calls to the recursive algorithm can be reduced to O(dr). This yields a d^{O(d)} n-time algorithm for linear programming in R^d. An entirely different algorithm with a similar running time was given by Chazelle and Matousek [1996]. It is an open problem whether faster deterministic algorithms, with running time linear in n, can be developed. Although no progress has been made on this front, there have been significant developments on randomized algorithms for linear programming, which we discuss in the next two sections.

Recently there has been considerable interest in parallelizing Megiddo's prune-and-search algorithm. Deng [1990] gave an O(log n)-time, O(n)-work algorithm for two-dimensional linear programming under the CRCW model of computation. His algorithm, however, does not extend to higher dimensions. Alon and Megiddo [1990] gave a randomized algorithm under the CRCW model that runs, with high probability, in O(1) time using O(n) processors. Ajtai and Megiddo [1996] gave an O((log log n)^d)-time deterministic algorithm using O(n) processors under Valiant's model of computation. Goodrich and Ramos [1997] and Sen [1996] gave an O((log log n)^{d+2})-time, O(n)-work algorithm under the CRCW model; see also Dyer [1995].

5. RANDOMIZED ALGORITHMS FOR LINEAR PROGRAMMING

Random sampling has become one of the most powerful and versatile techniques in computational geometry, so it is no surprise that this technique has also been successful in solving many geometric-optimization problems. See the book by Mulmuley [1994] and the survey papers by Clarkson [1992] and Seidel [1993] for applications of the random-sampling technique in computational geometry. In this section, we describe a randomized algorithm for linear programming by Clarkson [1995], based on random sampling; it is actually quite general, and can also be applied to geometric set-cover and related problems [Agarwal and Desikan 1997; Bronnimann and Goodrich 1995; Clarkson 1993]. Other randomized algorithms for linear programming, which run in expected linear time for any fixed dimension, were proposed by Dyer and Frieze [1989], Seidel [1991], and Matousek et al. [1996].

Clarkson’s algorithm proceeds as fol-lows. Let H be the set of constraints. Weassign a weight m(h) [ Z to each con-straint; initially m(h) 5 1 for all h [ H.For any A # H, let m(A) 5 (h[A m(h).The algorithm works in rounds, each ofwhich consists of the following steps.Set r 5 6d2. If uHu # 6d2, we computethe optimal solution using the simplexalgorithm. Otherwise, choose a randomsample R , H such that m(R) 5 r. (Wecan regard H as a multiset in whicheach constraint h appears m(h) times,and we choose a multiset R [ (r

H) of rconstraints.) We compute the optimalsolution xR for R and the subset V ,H \ R of constraints that xR violates(i.e., the subset of constraints that donot contain xR). If V 5 À, the algorithmreturns xR. If m(V) # 3m(H)/d, we dou-ble the weight of each constraint in V;in any case, we repeat the samplingprocedure. See Figure 1 for apseudocode of the algorithm.
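As a concrete (and unofficial) rendering of this reweighting scheme, the following toy implementation handles linear programming in the plane (minimize c·x subject to a·x ≤ b). The brute-force vertex enumeration stands in for the simplex step on the O(d²)-size sample, all names are ours, and a nonempty, bounded feasible region is assumed:

```python
import itertools, random

def clarkson_lp(constraints, c, rng, dim=2):
    """Clarkson-style iterated reweighting for 2-dimensional LP.
    constraints: list of ((a1, a2), b) meaning a1*x + a2*y <= b."""
    EPS = 1e-9

    def brute_force(cons):
        # optimal vertex of the sample: try all intersections of two boundaries
        best = None
        for ((a1, b1), (a2, b2)) in itertools.combinations(cons, 2):
            det = a1[0] * a2[1] - a1[1] * a2[0]
            if abs(det) < EPS:
                continue
            x = ((b1 * a2[1] - b2 * a1[1]) / det,
                 (a1[0] * b2 - a2[0] * b1) / det)
            if all(a[0] * x[0] + a[1] * x[1] <= b + EPS for (a, b) in cons):
                val = c[0] * x[0] + c[1] * x[1]
                if best is None or val < best[0]:
                    best = (val, x)
        return best[1]

    n = len(constraints)
    r = 6 * dim * dim
    if n <= r:
        return brute_force(constraints)
    weight = [1] * n
    while True:
        sample_idx = set(rng.choices(range(n), weights=weight, k=r))
        x = brute_force([constraints[i] for i in sample_idx])
        violated = [i for i, (a, b) in enumerate(constraints)
                    if a[0] * x[0] + a[1] * x[1] > b + EPS]
        if not violated:
            return x                      # x satisfies every constraint: optimal
        if sum(weight[i] for i in violated) <= sum(weight) / (3 * dim):
            for i in violated:            # successful round: double the violators
                weight[i] *= 2
```

Each successful round doubles the weight of some constraint of the optimal basis, so those constraints quickly dominate the sampling distribution and end up in the sample together.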

Fig. 1. Clarkson's randomized LP algorithm.

Let B be the set of d constraints whose boundaries are incident to the optimal solution. A round is called successful if μ(V) ≤ μ(H)/(3d). Using the fact that R is a random subset, one can argue that each round is successful with probability at least 1/2. Every successful round increases μ(H) by a factor of at most (1 + 1/(3d)), so the total weight μ(H) after kd successful rounds is at most n(1 + 1/(3d))^{kd} < ne^{k/3}. On the other hand, each successful round doubles the weight of at least one constraint in B (it is easily verified that V must contain such a constraint), which implies that after kd successful rounds μ(H) ≥ μ(B) ≥ 2^k. Hence, after kd successful rounds, 2^k ≤ μ(H) ≤ ne^{k/3}. This implies that the preceding algorithm terminates in at most 3d ln n successful rounds. Since each round takes O(d^d) time to compute x_R and O(dn) time to compute V, the expected running time of the algorithm is O((d²n + d^{d+1}) log n). By combining this algorithm with a randomized recursive algorithm, Clarkson improved the expected running time to O(d²n) + d^{d/2+O(1)} log n.

6. ABSTRACT LINEAR PROGRAMMING

In this section we present an abstract framework that captures linear programming, as well as many other geometric-optimization problems, including computing smallest enclosing balls (or ellipsoids) of finite point sets in R^d, computing largest balls (or ellipsoids) inscribed in convex polytopes in R^d, computing the distance between polytopes in R^d, general convex programming, and many other problems. Sharir and Welzl [1992] and Matousek et al. [1996] presented a randomized algorithm for optimization problems in this framework, whose expected running time is linear in the number of constraints whenever the combinatorial dimension d (whose precise definition, in this abstract framework, is given in the following) is fixed. More important, the running time is subexponential in d for many of the LP-type problems, including linear programming. This is the first subexponential "combinatorial" bound for linear programming (a bound that counts the number of arithmetic operations and is independent of the bit complexity of the input), and it is a first step toward the major open problem of obtaining a strongly polynomial algorithm for linear programming. The papers by Gartner and Welzl [1996] and Goldwasser [1995] also survey the known results on LP-type problems. A dual version of the algorithm was independently obtained by Kalai [1992], but only in the context of linear programming.

6.1 An Abstract Framework

Let us consider optimization problems specified by a pair (H, w), where H is a finite set and w: 2^H → W is a function into a linearly ordered set (W, ≤); we assume that W has a minimum value −∞. The elements of H are called constraints, and, for G ⊆ H, w(G) is called the value of G. Intuitively, w(G) denotes the smallest value attainable by a certain objective function while satisfying all the constraints of G. The goal is to compute a minimal subset B_H of H with w(B_H) = w(H) (from which, in general, the value of H is easy to determine), assuming the availability of three basic operations, which we specify in the following.

Such a minimization problem is called LP-type if the following axioms are satisfied.

Axiom 1 (Monotonicity). For any F, G with F ⊆ G ⊆ H, we have w(F) ≤ w(G).

Axiom 2 (Locality). For any F ⊆ G ⊆ H with −∞ < w(F) = w(G) and any h ∈ H,

w(G) < w(G ∪ {h})  implies  w(F) < w(F ∪ {h}).

Linear programming is easily shown to be an LP-type problem: Set w(G) to be the vertex of the feasible region that minimizes the objective function and that is coordinate-wise lexicographically smallest (this definition is important for satisfying Axiom 2), and extend the definition of w(G) in an appropriate manner to handle empty or unbounded feasible regions.

A basis B ⊆ H is a set of constraints satisfying −∞ < w(B) and w(B′) < w(B) for all proper subsets B′ of B. For G ⊆ H with −∞ < w(G), a basis of G is a minimal subset B of G with w(B) = w(G). (For linear programming, a basis of G is a minimal set of halfspace constraints in G such that the minimal vertex of their intersection is the minimal vertex of G.) A constraint h is violated by G if w(G) < w(G ∪ {h}), and it is extreme in G if w(G − {h}) < w(G). The combinatorial dimension of (H, w), denoted dim(H, w), is the maximum cardinality of any basis. We call an LP-type problem basis regular if, for any basis B with |B| = dim(H, w) and for any constraint h, every basis of B ∪ {h} has exactly dim(H, w) elements. (Clearly, linear programming is basis regular, where the dimension of every basis is d.)

We assume that the following primitive operations are available.

(Violation test) h is violated by B: for a constraint h and a basis B, tests whether h is violated by B.

(Basis computation) basis(B, h): for a constraint h and a basis B, computes a basis of B ∪ {h}.

(Initial basis) initial(H): an initial basis B_0 with exactly dim(H, w) elements is available.

For linear programming, the first operation can be performed in O(d) time, by substituting the coordinates of the vertex w(B) into the equation of the hyperplane defining h. The second operation can be regarded as a dual version of the pivot step in the simplex algorithm, and can be implemented in O(d²) time. The third operation is also easy to implement.

We are now in a position to describe the algorithm. Using the initial-basis primitive, we compute a basis B_0 and call SUBEX_lp(H, B_0), where SUBEX_lp is the recursive algorithm, given in Figure 2, for computing a basis B_H of H.
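The recursion of Figure 2 is short enough to reconstruct from the text. The sketch below does so, instantiated for a toy LP-type problem of combinatorial dimension 2 (the smallest interval enclosing a set of points on the real line, whose basis consists of the two extreme points); the primitive implementations and all names are our own:

```python
import random

def subex_lp(H, C, violates, basis_of, rng):
    """Return a basis of the constraint set H, given a basis C contained in H."""
    rest = sorted(set(H) - set(C))
    if not rest:
        return C
    h = rng.choice(rest)                       # random constraint outside the basis
    B = subex_lp([g for g in H if g != h], C, violates, basis_of, rng)
    if violates(h, B):                         # primitive 1: violation test
        return subex_lp(H, basis_of(B, h), violates, basis_of, rng)
    return B

# toy LP-type problem: smallest enclosing interval of points on the real line
def violates(p, B):
    return p < min(B) or p > max(B)

def basis_of(B, p):                            # primitive 2: basis computation
    pts = list(B) + [p]
    return (min(pts), max(pts))
```

Here w(G) is the length of the smallest interval enclosing G; monotonicity and locality are immediate, and any two points of H serve as the initial basis.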

A simple inductive argument shows that the expected number of primitive operations performed by the algorithm is O(2^d n), where n = |H| and d = dim(H, w) is the combinatorial dimension. However, using a more involved analysis, which can be found in Matousek et al. [1996], one can show that basis-regular LP-type problems can be solved with an expected number of at most

e^{2√(d ln((n−d)/√d)) + O(√d + ln n)}

violation tests and basis computations. This is the subexponential bound to which we alluded.

Fig. 2. A randomized algorithm for LP-type problems.

Matousek [1994] has given examples of abstract LP-type problems of combinatorial dimension d, with 2d constraints, for which the preceding algorithm requires Ω(e^{√(2d)}/(4√d)) primitive operations. Here is an example of such a problem. Let A be a lower-triangular d × d {0, 1}-matrix with all diagonal entries 0; that is, a_{i,j} ∈ {0, 1} for 1 ≤ j < i ≤ d, and a_{ij} = 0 for 1 ≤ i ≤ j ≤ d. Let x_1, . . . , x_d denote variables over Z_2, and suppose that all additions and multiplications are performed modulo 2. We define a set of 2d constraints

H(A) = {h_i^c | 1 ≤ i ≤ d, c ∈ {0, 1}},

where

h_i^c : x_i ≥ Σ_{j=1}^{i−1} a_{ij} x_j + c.

That is, x_i = 1 if the right-hand side of the constraint is 1 modulo 2, and x_i ∈ {0, 1} if the right-hand side is 0 modulo 2. For a subset G ⊆ H, we define w(G) to be the lexicographically smallest point of ∩_{h∈G} h. It can be shown that the preceding example is an instance of a basis-regular LP-type problem, with combinatorial dimension d. Matousek showed that if A is chosen randomly (i.e., each entry a_{ij}, for 1 ≤ j < i ≤ d, is chosen independently, with Pr[a_{ij} = 0] = Pr[a_{ij} = 1] = 1/2) and the initial basis is also chosen randomly, then the expected number of primitive operations performed by SUBEX_lp is Ω(e^{√(2d)}/(4√d)).

6.2 Linear Programming

We are given a set H of n halfspaces in R^d. We assume that the objective vector is c = (1, 0, . . . , 0), and the goal is to minimize cx over all points in the common intersection ∩_{h∈H} h. For a subset G ⊆ H, define w(G) to be the lexicographically smallest point (vertex) of the intersection of the halfspaces in G. (As noted, some care is needed to handle unbounded or empty feasible regions; we omit here the details concerning this issue.)

As noted previously, linear programming is a basis-regular LP-type problem with combinatorial dimension d, and each violation test or basis computation can be implemented in O(d) or O(d²) time, respectively. In summary, we obtain a randomized algorithm for linear programming that performs an expected number of

e^{2√(d ln(n/√d)) + O(√d + ln n)}

arithmetic operations. Using SUBEX_lp instead of the simplex algorithm for solving the small-size problems in the RANDOM_lp algorithm (given in Figure 1), the expected number of arithmetic operations can be reduced to O(d²n) + e^{O(√(d log d))}.

In view of Matousek’s lower bound,one should aim to exploit additionalproperties of linear programming to ob-tain a better bound on the performanceof the algorithm for linear program-ming; this is still a major open problem.

6.3 Extensions

Recently, Chazelle and Matousek [1996]gave a deterministic algorithm for solv-ing LP-type problems in time O(dO(d)n),provided an additional axiom holds (to-gether with an additional computa-tional assumption). Still, these extra re-quirements are satisfied in manynatural LP-type problems. Matousek[1995b] has investigated the problem offinding the best solution, for an abstractLP-type problem, that satisfies all but kof the given constraints. He proved thatthe number of bases that violate at mostk constraints in a nondegenerate in-stance of an LP-type problem is O((k 11)d), where d is the combinatorial di-mension of the problem, and that theycan be computed in time O(n(k 1 1)d).In some cases the running time can beimproved using appropriate data struc-tures. For example, given a set H of nhalfplanes in the plane and an integerk # n, the point with the smallest y-coordinate that lies in at least n 2 khalfplanes can be computed in time

O(n log k + k³ log² n)

[Matousek 1995b; Roos and Widmayer 1994]. For larger values of k, the running time is O(n log n + nk) [Chan 1998]. If the intersection of the halfspaces is nonempty, the running time can be improved to O(n + k(n/k)^ε log n) for any ε > 0.

ACM Computing Surveys, Vol. 30, No. 4, December 1998

Amenta [1994a] considers the following extension of the abstract framework. Suppose we are given a family of LP-type problems (H, w_λ), monotonically parameterized by a real parameter λ; the underlying ordered value set W has a maximum element +∞ representing infeasibility. The goal is to find the smallest λ for which (H, w_λ) is feasible (i.e., w_λ(H) < +∞). See Amenta [1994a,b] for more details and related work.

6.4 Abstract Linear Programming and Helly-Type Theorems

In this subsection we describe an interesting connection between Helly-type theorems and LP-type problems, as originally noted by Amenta [1994a].

Let K be an infinite collection of sets in R^d, and let t be an integer. We say that K satisfies a Helly-type theorem, with Helly number t, if the following holds: if F is a finite subcollection of K with the property that every subcollection of t elements of F has a nonempty intersection, then ∩F ≠ ∅. (The best known example of a Helly-type theorem is Helly's [1930] theorem itself, which applies to the collection K of all convex sets in R^d, with Helly number d + 1; see Danzer et al. [1963] and Eckhoff et al. [1993] for excellent surveys on this topic.) Suppose further that we are given a collection F(λ), consisting of n sets K_1(λ), . . . , K_n(λ) that are parameterized by a real parameter λ, with the property that K_i(λ) ⊆ K_i(λ′), for i = 1, . . . , n and for λ ≤ λ′, and that, for any fixed λ, the family {K_1(λ), . . . , K_n(λ)} admits a Helly-type theorem, with a fixed Helly number t. Our goal is to compute the smallest λ for which ∩_{i=1}^{n} K_i(λ) ≠ ∅, assuming that such a minimum exists. Amenta proved that this problem can be transformed into an LP-type problem, whose combinatorial dimension is at most t.
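The simplest concrete instance of this definition lives in R¹, where convex sets are intervals and the Helly number is 2: if every pair of closed intervals in a finite family meets, the whole family has a common point. A few lines verify this (an illustration of the definition only; the function names are ours):

```python
def pairwise_intersect(intervals):
    # True iff every pair of closed intervals [l, r] has a common point.
    return all(max(l1, l2) <= min(r1, r2)
               for i, (l1, r1) in enumerate(intervals)
               for (l2, r2) in intervals[i + 1:])

def common_point(intervals):
    # Helly in R^1 (Helly number t = 2): pairwise intersection forces a
    # common point, namely the maximum of the left endpoints, which is
    # then <= the minimum of the right endpoints.
    lo = max(l for l, _ in intervals)
    hi = min(r for _, r in intervals)
    assert lo <= hi, 'pairwise intersection must hold first'
    return lo
```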

As an illustration, consider the smallest-enclosing-ball problem. Let P = {p_1, . . . , p_n} be the given set of n points in R^d, and let K_i(λ) be the ball of radius λ centered at p_i, for i = 1, . . . , n. Since the K_i's are convex, the collection in question has Helly number d + 1. It is easily seen that the minimal λ for which the K_i(λ)'s have a nonempty intersection is the radius of the smallest enclosing ball of P. This shows that the smallest-enclosing-ball problem is LP-type, and can thus be solved in O(n) randomized expected time in any fixed dimension. See the following for more details.

There are several other examples where Helly-type theorems can be turned into LP-type problems. They include computing a line transversal to a family of translates of a convex object in the plane, and of certain families of convex objects in R³, and computing a smallest homothet of a given convex set that intersects (or contains, or is contained in) every member of a given collection of n convex sets in R^d. We refer the reader to Amenta [1994a,b] for more details and for additional examples.

PART II: APPLICATIONS

In the first part of the article we focused on general techniques for solving geometric-optimization problems. In this second part, we list numerous problems in geometric optimization that can be attacked using some of the techniques reviewed in the foregoing. For the sake of completeness, we also review variants of these problems for which the preceding techniques are not applicable.

7. FACILITY-LOCATION PROBLEMS

A typical facility-location problem is defined as follows. Given a set D = {d_1, . . . , d_n} of n demand points in R^d, a parameter p, and a distance function d, we wish to find a set S of p supply objects (points, lines, segments, etc.) so


that the maximum distance between a demand point and its nearest supply object is minimized. That is, we minimize, over all possible appropriate sets S of supply objects, the following objective function:

c(D, S) = max_{1≤i≤n} min_{s∈S} d(d_i, s).

Instead of minimizing the preceding quantity, one can choose other objective functions, such as

c′(D, S) = Σ_{i=1}^{n} min_{s∈S} d(d_i, s).

In some applications, a weight w_i is assigned to each point d_i ∈ D, and the distance from d_i to a point x ∈ R^d is defined as w_i d(d_i, x). The book by Drezner [1995] describes many other variants of the facility-location problem.
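For concreteness, the two objective functions c(D, S) and c′(D, S), with the optional weights w_i, can be evaluated directly (a straightforward transcription; the function names are ours):

```python
from math import dist

def center_cost(D, S, w=None):
    # c(D, S) = max_i min_{s in S} w_i * d(d_i, s): the p-center objective.
    w = w or [1.0] * len(D)
    return max(wi * min(dist(p, s) for s in S) for p, wi in zip(D, w))

def median_cost(D, S, w=None):
    # c'(D, S) = sum_i min_{s in S} w_i * d(d_i, s): the p-median objective.
    w = w or [1.0] * len(D)
    return sum(wi * min(dist(p, s) for s in S) for p, wi in zip(D, w))
```

Evaluating either objective for a candidate set S takes O(np) distance computations; the hard part, of course, is choosing S.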

The set S = {s_1, . . . , s_p} of supply objects partitions D into p clusters, D_1, . . . , D_p, so that s_i is the nearest supply object to all points in D_i. Therefore, a facility-location problem can also be regarded as a clustering problem. These facility-location (or clustering) problems arise in many areas, including operations research, shape analysis [Han and Myaeng 1996; Meghini 1995; Schroeter and Bigun 1995], data compression and vector quantization [Makhoul 1985], information retrieval [Cutting et al. 1992, 1993], drug design [Finn et al. 1997], and data mining [Agrawal et al. 1992; Brinkhoff and Kriegel 1994; Shafer et al. 1996]. A useful extension of the facility-location problem, which has been widely studied, is the capacitated facility-location problem, in which we have the additional constraint that the size of each cluster be at most c, for some parameter c ≥ n/p.

If p is considered as part of the input, most facility-location problems are NP-hard, even in the plane, and even when only an approximate solution is sought.⁵ Although many of these problems can be solved in polynomial time for a fixed value of p, some of them still remain intractable. In this section we review efficient algorithms for a few specific facility-location problems, to which the techniques introduced in Part I can be applied; in these applications, p is usually a small constant.

7.1 Euclidean p-Center

Given a set D of n demand points in R^d, we wish to find a set S of p supply points so that the maximum Euclidean distance between a demand point and its nearest neighbor in S is minimized. This problem can be solved efficiently, when p is small, using the parametric-searching technique. The decision problem in this case is to determine, for a given radius r, whether D can be covered by the union of p balls of radius r. In some applications, S is required to be a subset of D, in which case the problem is referred to as the discrete p-center problem.

General Results. A naive procedure for the p-center problem runs in time O(n^{dp+2}): the critical radius r* is determined by at most d + 1 points, which also determine one of the balls; similarly, there are O(n^{d(p−1)}) choices for the other p − 1 balls, and it takes O(n) time to verify whether a specific choice of balls covers D. For the planar case, Drezner [1981] gave an improved O(n^{2p+1})-time algorithm, which was subsequently improved by Hwang et al. [1993b] to n^{O(√p)}. Hwang et al. [1993a] have given another n^{O(√p)}-time algorithm for computing a discrete p-center. Recently, Agarwal and Procopiuc [1998] extended and simplified the technique of Hwang et al. [1993b] to obtain an n^{O(p^{1−1/d})}-time algorithm for computing a p-center of n points in R^d. Therefore, for a fixed value of p, the Euclidean p-center (and also the Euclidean discrete p-center) problem can be solved in polynomial time in any fixed dimension. However, both problems are NP-complete for d ≥ 2 if p is part of the input [Fowler et al. 1981; Megiddo and Supowit 1984]. This has led researchers to develop efficient algorithms for approximate solutions and for small values of p and d.

⁵ Please see Feder and Greene [1988], Gonzalez [1985], Ko et al. [1990], Maass [1986], Megiddo [1990], and Megiddo and Supowit [1984].

Approximation Algorithms. Let r* be the minimum value of r for which p disks of radius r cover D. Feder and Greene [1988] showed that computing a set S of p supply points so that c(D, S) ≤ 1.822r* under the Euclidean distance function, or c(D, S) < 2r* under the L∞-metric, is NP-hard. The greedy algorithm described in Figure 3, originally proposed by Gonzalez [1985] and by Hochbaum and Shmoys [1985, 1986], computes in O(np) time a set S of p points so that c(D, S) ≤ 2r*.

This algorithm works equally well for any metric and for the discrete p-center problem. The running time was improved to O(n log p) by Feder and Greene [1988]. Agarwal and Procopiuc [1998] showed that, for any ε > 0, a set S of p supply points with c(D, S) ≤ (1 + ε)r* can be computed in O(n log p) + (p/ε^d)^{O(p^{1−1/d})} time.
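The farthest-point greedy strategy of Figure 3 is short enough to sketch directly (our own transcription of the Gonzalez / Hochbaum–Shmoys idea, running in O(np) time as stated; the 2r* guarantee holds in any metric):

```python
from math import dist

def greedy_p_center(D, p):
    """Farthest-point greedy: start from an arbitrary demand point, then
    repeatedly add the point farthest from the centers chosen so far.
    Returns the p centers and their covering radius c(D, S) <= 2 r*."""
    S = [D[0]]
    d = [dist(q, S[0]) for q in D]           # distance to nearest center
    while len(S) < p:
        i = max(range(len(D)), key=d.__getitem__)   # farthest point
        S.append(D[i])
        d = [min(dq, dist(q, D[i])) for dq, q in zip(d, D)]
    return S, max(d)
```

The correctness argument is the classical one: when the algorithm stops, the p chosen centers plus the farthest remaining point are p + 1 points that are pairwise more than max(d) apart, so some optimal ball contains two of them and r* ≥ max(d)/2.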

Dyer and Frieze [1985] modified the greedy algorithm to obtain an approximation algorithm for the weighted p-center problem, in which a weight w(u) is associated with each demand point u ∈ D and the objective function is c̄(D, S) = max_{u∈D} min_{v∈S} w(u) d(u, v). Their algorithm computes a solution with approximation factor min{3, 1 + b}, where b is the ratio of the maximum

and the minimum weight. Plesnik [1987] improved the approximation factor to 2, as in the unweighted case. Bar-Ilan et al. [1993] proposed a polynomial-time approximation algorithm for the capacitated p-center problem that computes a solution within a constant factor. The approximation factor was later improved by Khuller and Sussmann [1996]. See Gonzalez [1991] and Ko et al. [1990] for other approximation algorithms.

Another way of seeking an approximation is to find a small number of balls of a fixed radius, say r, that cover all demand points. Computing k*, the minimum number of balls of radius r that cover D, is also NP-complete [Fowler et al. 1981]. A greedy algorithm can construct k* log n balls of radius r that cover D. Hochbaum and Maass [1985] gave a polynomial-time algorithm to compute a cover of size (1 + ε)k*, for any ε > 0; see also Bronnimann and Goodrich [1995], Feder and Greene [1988], and Gonzalez [1991]. No constant-factor approximation algorithm is known for the capacitated covering problem with unit-radius disks, that is, the problem of partitioning a given point set S in the plane into the minimum number of clusters, each of which consists of at most c points and can be covered by a disk of unit radius. Nevertheless, the greedy algorithm can be modified to obtain an O(log n)-factor approximation for this problem [Bar-Ilan et al. 1993].

The general results reviewed so far do not make use of parametric searching: since there are only O(n^{d+1}) candidate

Fig. 3. Greedy algorithm for approximate p-center.


values for the optimum radius r*, one can simply enumerate all these values and run a standard binary search over them. The improvement that one can gain from parametric searching is significant only when p is relatively small, which is what we discuss next.

Euclidean 1-Center. The 1-center problem is to compute the smallest ball enclosing D. The decision procedure for the 1-center problem is thus to determine whether D can be covered by a ball of radius r. For d = 2, the decision problem can be solved in O(log n) parallel steps using O(n) processors (e.g., by testing whether the intersection of the disks of radius r centered at the points of D is nonempty). This yields an O(n log³ n)-time algorithm for the planar Euclidean 1-center problem. Using the prune-and-search paradigm, one can, however, solve the 1-center problem in linear time [Dyer 1986], and this approach extends to higher dimensions, where, for any fixed d, the running time is d^{O(d)} n [Agarwal et al. 1993c; Chazelle and Matousek 1996; Dyer and Frieze 1989]. Megiddo [1989; Megiddo and Zemel 1986] extended this approach to obtain a linear-time algorithm for the weighted 1-center problem. Dynamic data structures for maintaining the smallest enclosing ball of a set of points, as points are inserted and deleted, are given in Agarwal and Matousek [1995] and Bar-Yehuda et al. [1993]. See Drezner [1981, 1989], Drezner et al. [1992], Follert et al. [1995], and Megiddo [1983c] for other variants of the 1-center problem. A natural extension of the 1-center problem is to find a disk of the smallest radius that contains k of the n input points. The best known deterministic algorithm runs in time O(n log n + nk log k) using O(n + k² log k) space [Datta et al. 1995; Eppstein and Erickson 1994] (see also Efrat et al. [1994]), and the best known randomized algorithm runs in O(n log n + nk) expected time using O(nk) space, or in O(n log n + nk log k) expected time using O(n) space [Matousek 1995a]. Matousek [1995b] also showed that the smallest disk covering all but k points can be computed in time⁶ O(n log n + k³n^ε). Chan [1998] presented a randomized algorithm for computing the discrete 1-center in R³ whose expected running time is O(n log n).

The smallest-enclosing-ball problemis an LP-type problem, with combinato-rial dimension d 1 1 [Sharir and Welzl1992; Welzl 1991]. Indeed, the con-straints are the given points, and thefunction w maps each subset G to theradius of the smallest ball containing G.Monotonicity of w is trivial, and localityfollows easily from the uniqueness ofthe smallest enclosing ball of a given setof points. The combinatorial dimensionis d 1 1 because at most d 1 1 pointsare needed to determine the smallestenclosing ball. This problem, however,is not basis regular (the smallest enclos-ing ball may be determined by any num-ber, between 2 and d 1 1, of points),and a naive implementation of the ba-sis-changing operation may be quitecostly (in d). Nevertheless, Gartner[1995] showed that this operation canbe performed in this case using expectedeO(=d) arithmetic operations. Hence, theexpected running time of the algorithmis O(d2n) 1 eO(=d log d).

There are several extensions of the smallest-enclosing-ball problem. They include (i) computing the smallest enclosing ellipsoid of a point set,⁷ (ii) computing the largest ellipsoid (or ball) inscribed inside a convex polytope in R^d [Gartner 1995], (iii) computing a smallest ball that intersects (or contains) a given set of convex objects in R^d (see Megiddo [1989]), and (iv) computing a

⁶ In this article, the meaning of complexity bounds that depend on an arbitrary parameter ε > 0, such as the one stated here, is that given any ε > 0, we can fine-tune the algorithm so that its complexity satisfies the stated bound. In these bounds the constant of proportionality usually depends on ε, and tends to infinity as ε tends to zero.
⁷ Please see Chazelle and Matousek [1996], Dyer [1992], Post [1984], and Welzl [1991].


smallest-area annulus containing a given planar point set. All these problems are known to be LP-type, and thus can be solved using the algorithm described in Section 6. However, not all of them run in subexponential expected time, because they are not all basis regular. Linear-time deterministic algorithms, based on the prune-and-search technique, have also been developed for many of these problems in two dimensions.⁸

Euclidean 2-Center. In this problem we want to cover a set D of n points in R^d by two balls of smallest possible common radius. There is a trivial O(n^{d+1})-time algorithm for the 2-center problem in R^d, because the clusters D_1 and D_2 in an optimal solution can be separated by a hyperplane [Drezner 1984b]. Faster algorithms have been developed for the planar case using parametric searching. Agarwal and Sharir [1994] gave an O(n² log n)-time algorithm for determining whether D can be covered by two disks of radius r. Their algorithm proceeds as follows. There are O(n²) distinct subsets of D that can be covered by a disk of radius r, and these subsets can be computed in O(n² log n) time, by processing the arrangement of the n disks of radius r centered at the points of D. For each such subset D_1, the algorithm checks whether D \ D_1 can be covered by another disk of radius r. Using a dynamic data structure, the total time spent is shown to be O(n² log n). Plugging this algorithm into the parametric-searching machinery, one obtains an O(n² log³ n)-time algorithm for the Euclidean 2-center problem. Matousek [1991b] gave a simpler randomized algorithm with O(n² log² n) expected time by replacing parametric searching with randomization. The running time of the decision algorithm was improved by Hershberger [1993] to O(n²), which has been utilized in the best near-quadratic solution, by Jaromczyk and Kowaluk [1994], which runs in O(n² log n) time; see also Jaromczyk and Kowaluk [1995a].

Major progress on this problem was recently made by Sharir [1997], who gave an O(n log⁹ n)-time algorithm by combining the parametric-searching technique with several additional techniques, including a variant of the matrix-searching algorithm of Frederickson and Johnson [1984]. Eppstein [1997] simplified Sharir's algorithm, using randomization and better data structures, and obtained an improved solution whose expected running time is O(n log² n).

Recently, Agarwal et al. [1997] developed an O(n^{4/3} log⁵ n)-time algorithm for the discrete 2-center problem. The decision problem in this case asks whether, given a radius r > 0, there exist two points p, q ∈ D so that the disks of radius r centered at p and q cover all the points of D. If we let B(p) denote the disk of radius r centered at a point p, then the problem is equivalent to determining whether all disks in the set ℬ = {B(p) | p ∈ D} can be hit by two points of D. For a point p ∈ D, let K_p = ∩{B ∈ ℬ | p ∉ B}. If p is one of the two points in a solution to the hitting-set problem, then the other point has to lie in K_p. Hence, two points of D hit all the disks of ℬ if and only if ∪_{p∈D} K_p contains at least one point of D. Exploiting several geometric and topological properties of ∪_{p∈D} K_p, Agarwal et al. presented an O(n^{4/3} log⁴ n)-time algorithm for deciding whether ∪_{p∈D} K_p ∩ D = ∅, which leads to the overall performance of the algorithm as stated previously. It is an open question whether a near-linear algorithm exists for this problem.

Rectilinear p-Center. In this problem the metric is the L∞-distance, so the decision problem is now to cover the given set D by a set of p axis-parallel cubes, each of side length 2r. The problem is NP-hard if p is part of the input and d ≥ 2, or if d is part of the input and p ≥ 3 [Fowler et al. 1981; Megiddo

⁸ Please see Bhattacharya et al. [1991a,b], Bhattacharya and Toussaint [1991], and Jadhav et al. [1996].


1990]. Ko et al. [1990] showed that computing a solution set S with c(D, S) < 2r* is also NP-hard.

The rectilinear 1-center problem is trivially solved in linear time, and a polynomial-time algorithm for the rectilinear 2-center problem, even if d is unbounded, is given in Megiddo [1990]. A linear-time algorithm for the planar rectilinear 2-center problem is given by Drezner [1987] (see also Ko and Ching [1992]); Ko and Lee [1991] give an O(n log n)-time algorithm for the weighted case. Recently, Sharir and Welzl [1996] developed a linear-time algorithm for the rectilinear 3-center problem by showing that it is an LP-type problem (as is the rectilinear 2-center problem). They also obtained an O(n log n)-time algorithm for computing a rectilinear 4-center, using the matrix-searching technique of Frederickson and Johnson, and showed that this algorithm is worst-case optimal. Recently, Chan [1998] developed an O(n log n) expected-time randomized algorithm for computing a rectilinear 5-center. See Katz and Nielsen [1996] and Sharir and Welzl [1996] for additional related results.

7.2 Euclidean p-Line-Center

Let D be a set of n points in R^d, and let d be the Euclidean distance function. We wish to compute the smallest real value w* so that D can be covered by the union of p strips of width w*. Megiddo and Tamir [1982] showed that the problem of determining whether w* = 0 (i.e., whether D can be covered by p lines) is NP-hard, which not only proves that the p-line-center problem is NP-complete, but also that approximating w* within a constant factor is NP-complete. Approximation algorithms for this problem are given in Hassin and Megiddo [1991].

The 1-line-center is the classical width problem. For d = 2, an O(n log n)-time algorithm was given by Houle and Toussaint [1988]. A matching lower bound was proved by Lee and Wu

[1986]. They also gave an O(n² log n)-time algorithm for the weighted case, which was improved to O(n log n) by Houle et al. [1989].
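The width of a planar point set is attained between a hull edge and the hull vertex farthest from its supporting line, which suggests the following sketch (convex hull plus an O(h²) scan over hull edges, h the hull size; the O(n log n) algorithms cited above replace the scan with rotating calipers):

```python
from math import hypot

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 for a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain, O(n log n); duplicates are discarded."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def width(pts):
    """Minimum, over hull edges, of the maximum distance from the
    edge's supporting line to a hull vertex (the 1-line-center cost)."""
    h = convex_hull(pts)
    if len(h) < 3:
        return 0.0
    best = float('inf')
    for i in range(len(h)):
        (x1, y1), (x2, y2) = h[i], h[(i + 1) % len(h)]
        ex, ey = x2 - x1, y2 - y1
        far = max(abs(ex * (py - y1) - ey * (px - x1)) for px, py in h)
        best = min(best, far / hypot(ex, ey))
    return best
```

The strip of width w* realizing the 1-line-center is bounded by the minimizing edge's supporting line and the parallel line through the farthest vertex.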

For the 2-line-center problem in the plane, Agarwal and Sharir [1994] (see also Agarwal and Sharir [1991]) gave an O(n² log⁵ n)-time algorithm, using parametric searching. This algorithm is very similar to their 2-center algorithm; that is, the decision algorithm finds all subsets of S that can be covered by a strip of width w and, for each such subset S_1, it determines whether S \ S_1 can be covered by another strip of width w. The heart of this decision procedure is an efficient algorithm for the following offline width problem: given a sequence S = (s_1, . . . , s_n) of insertions and deletions of points in a planar set D and a real number w, is there an i such that, after performing the first i updates, the width of the current point set is at most w? A solution to this offline width problem, running in O(n² log³ n) time, is given in Agarwal and Sharir [1991]. The running time for the optimization problem was improved to O(n² log⁴ n) by Katz and Sharir [1993] and Glozman et al. [1995], using expander graphs and the Frederickson–Johnson matrix-searching technique, respectively. The best known algorithm, by Jaromczyk and Kowaluk [1995b], runs in O(n² log² n) time. It is an open problem whether a subquadratic algorithm exists for computing a 2-line-center.

7.3 Euclidean p-Median

Let D be a set of n points in R^d. We wish to compute a set S of p supply points so that the sum of the distances from each demand point to its nearest supply point is minimized (i.e., we want to minimize the objective function c′(D, S)). This problem can be solved in polynomial time for d = 1 (for d = 1 and p = 1 the solution is the median of the given points, whence the problem derives its name), and it is NP-hard for d ≥ 2 [Megiddo and Supowit 1984]. The special case d = 2, p = 1 is the classical


Fermat–Weber problem, and it goes back to the seventeenth century. It is known that the solution of the Fermat–Weber problem is unique and algebraic, provided that the points of D are not all collinear. Several numerical approaches have been proposed to compute an approximate solution. See Chandrasekaran and Tamir [1990] and Wesolowsky [1993] for the history of the problem and for the known algorithms, and Papadimitriou [1981] for some heuristics for the p-median problem that work well for a set of random points. Recently, Arora et al. [1998] described a (1 + ε)-approximation algorithm for the p-median problem in the plane whose running time is n^{O(1/ε)}. For d > 2, the running time of their algorithm is n^{O((log n/ε)^{d−1})}.
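One classical numerical approach of the kind alluded to here is Weiszfeld's fixed-point iteration for the p = 1 case (a bare-bones version; the eps safeguard against the iterate landing exactly on a data point is a crude simplification of the careful fixes in the numerical literature):

```python
from math import dist

def weiszfeld(points, iters=200, eps=1e-12):
    """Approximate the Fermat-Weber point: each step replaces x by the
    average of the demand points weighted by 1/dist(x, p_i), which is
    exactly the minimizer of the local quadratic surrogate."""
    x = [sum(c) / len(points) for c in zip(*points)]   # start at centroid
    for _ in range(iters):
        ws = [1.0 / max(dist(x, p), eps) for p in points]
        x = [sum(w * pc for w, pc in zip(ws, col)) / sum(ws)
             for col in zip(*points)]
    return x
```

The iteration converges to the unique optimum whenever the points are not all collinear and no iterate coincides with a data point, consistent with the uniqueness statement above.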

7.4 Segment-Center

Given a set D of n demand points in R², and a segment e, we wish to find a translated and rotated copy of e so that the maximum distance from each point of D to this copy is minimized. This problem was originally considered by Imai et al. [1992], who gave an O(n⁴ log n)-time algorithm. An improved solution, based on parametric searching, with O(n² α(n) log³ n) running time, was later obtained by Agarwal et al. [1993b] (here α(n) denotes the extremely slowly growing inverse of Ackermann's function). The decision problem in this case is to determine whether there exists a translated and rotated copy of the hippodrome H = e ⊕ B_r, the Minkowski sum of the segment e with a disk of radius r, that fully contains D. Since H is convex, this is equivalent to H containing P = conv(D). Hence, the decision procedure can be stated as follows: given a convex polygon P and the hippodrome H, does H contain a translated and rotated copy of P? See Figure 4. Note that placements of P can be specified in terms of three parameters, two for the translation and one for the rotation. Let F_P ⊆ R³ denote the set of placements of P at which P lies inside

H. Using Davenport–Schinzel sequences [Sharir and Agarwal 1995], Agarwal et al. [1993b] showed that the complexity of F_P is O(n² 2^{α(n)}), and that it can be computed in time O(n² 2^{α(n)} log n). By exploiting various geometric and combinatorial properties of F_P and using some elegant results from combinatorial geometry, Efrat and Sharir [1996] showed that the complexity of F_P is only O(n log n), and that one can determine in time O(n^{1+ε}) whether F_P ≠ ∅. Plugging this into the parametric-searching technique, one obtains an O(n^{1+ε})-time solution to the segment-center problem.

7.5 Other Facility-Location Problems

In addition to the problems discussed, several other variants of the facility-location problem have been studied. For example, Hershberger [1992] described an O(n²/log log n)-time algorithm for partitioning a given set S of n points into two subsets so that the sum of their diameters is minimized. If we want to minimize the maximum of the two diameters, the running time can be improved to O(n log n) [Hershberger and Suri 1991]. Glozman et al. [1995] studied problems of covering S by several different kinds of shapes. Maass [1986] showed that the problem of covering S with the minimum number of unit-width annuli is NP-hard even for d = 1 (a unit-width annulus in R¹ is a union of two unit-length intervals), and Hochbaum and Maass [1987] gave an approximation algorithm for covering points with annuli. There has been some work on hierarchical clustering

Fig. 4. The segment-center problem.


[Eppstein 1998; Yianilos 1993] and on dynamic algorithms for clustering [Can 1993; Charikar et al. 1997].

8. PROXIMITY PROBLEMS

8.1 Diameter in R³

Given a set S of n points in R³, we wish to compute the diameter of S, that is, the maximum distance between any two points of S. The decision procedure here is to determine, for a given radius r, whether the intersection of the balls of radius r centered at the points of S contains S. Since the intersection of congruent balls in R³ has linear complexity [Grunbaum 1956; Heppes 1956], it is natural to ask whether the intersection of n congruent balls can be computed in O(n log n) time. (Checking whether all points of S lie in the intersection can then be performed in additional O(n log n) time, using straightforward point-location techniques.) Clarkson and Shor [1989] gave a simple randomized algorithm with O(n log n) expected time (which is worst-case optimal) for computing the intersection, and then used a randomized prune-and-search algorithm, summarized in Figure 5, to compute the diameter of S.

The correctness of the preceding algorithm is easy to check. The only nontrivial step in the algorithm is computing I and S_1. If d is the Euclidean metric, I can be computed in O(|S|) expected time, using the ball-intersection algorithm. S_1 can then be computed in additional O(|S| log |S|) time, using any optimal planar point-location algorithm (see, e.g., Sarnak and Tarjan [1986]). Hence, each recursive step of the algorithm takes O(|S| log |S|) expected time. Since p is chosen randomly, |S_1| ≤ 2|S|/3 with high probability, which implies that the expected running time of the overall algorithm is O(n log n).
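The decision procedure underlying this approach — S has diameter at most r exactly when S lies in the intersection of the balls of radius r centered at its own points, i.e., when all pairwise distances are at most r — can be emulated naively in O(n²) time (a brute-force stand-in for the O(n log n) ball-intersection machinery; function names are ours):

```python
from math import dist

def diam_at_most(S, r):
    # Decision procedure: diam(S) <= r  iff  every point of S lies in the
    # intersection of the balls of radius r centered at the points of S,
    # which for a finite set reduces to all pairwise distances being <= r.
    return all(dist(p, q) <= r for i, p in enumerate(S) for q in S[i + 1:])

def diameter(S):
    # Brute-force O(n^2) optimum, against which the decision test is run.
    return max(dist(p, q) for i, p in enumerate(S) for q in S[i + 1:])
```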

It was a challenging open problem whether an O(n log n)-time deterministic algorithm can be developed for computing the intersection of n congruent balls in R³. This was answered in the affirmative by Amato et al. [1994], following a series of near-linear, but weaker, deterministic algorithms [Chazelle et al. 1993; Matousek and Schwarzkopf 1996; Ramos 1997b]. Amato et al. derandomized the Clarkson–Shor algorithm, using several sophisticated techniques.⁹ Their algorithm yields an O(n log³ n)-time algorithm for computing the diameter. Recently, Ramos [1997a] and Bespamyatnikh [1998] obtained O(n log² n)-time algorithms for computing the diameter. Obtaining an optimal O(n log n)-time deterministic algorithm for computing the diameter in R³ still remains elusive.

Fig. 5. A randomized algorithm for computing the diameter in 3-D.

8.2 Closest Line Pair

Given a set L of n lines in R³, we wish to compute a closest pair of lines in L. Let d(L, L′) denote the Euclidean distance between the closest pair of lines in L × L′, for two disjoint sets L, L′ of lines. Two algorithms for this problem, both based on parametric searching, were given independently by Chazelle et al. [1993] and by Pellegrini [1994]; both algorithms run in O(n^{8/5+ε}) time. Using Plucker coordinates [Chazelle et al. 1996; Sommerville 1951] and range-searching data structures, the algorithms construct, in O(n^{8/5+ε}) time, a family of pairs {(L_1, L′_1), . . . , (L_k, L′_k)}, so that every line in L_i lies below (in the z-direction) all the lines of L′_i, and d(L, L′) = min_{1≤i≤k} d(L_i, L′_i). Hence, it suffices to compute a closest pair in L_i × L′_i, for each i ≤ k, which can be done using parametric searching. The decision procedure can be stated as follows: for a given real number r, determine whether d(L_i, L′_i) ≤ r, for each i ≤ k. Since lines in R³ have four degrees of freedom, each of these subproblems can be transformed into the following point-location problem in R⁴. Given a set S of n points in R⁴ (representing the lines in L_i) and a set G of m surfaces, each being the graph of an algebraic trivariate function of constant degree (each surface is the locus of lines in R³ that pass above a line of L′_i at distance r from it), determine whether every point of S lies below all the surfaces of G. It is shown in Chazelle et al. [1993] that this point-location problem can be solved in time O(n^{4/5+ε} m^{4/5+ε}), which implies an O(n^{8/5+ε})-time algorithm for computing d(L, L′). Agarwal and Sharir [1996a] have shown that d(L_i, L′_i) can be computed in O(n^{3/4+ε} m^{3/4+ε}) expected time, by replacing parametric searching with randomization and by exploiting certain geometric properties that the surfaces in G possess. Roughly speaking, this is accomplished by generalizing the Clarkson–Shor algorithm for computing the diameter, described in Figure 5. However, this algorithm does not improve the running time for computing d(L, L′), because we still need O(n^{8/5+ε}) time to construct the pairs (L_i, L′_i).

⁹ An earlier attempt by Bronnimann et al. [1993] to derandomize the Clarkson–Shor algorithm had an error.

If we are interested in computing a pair of lines with the minimum vertical distance, the running time can be improved to O(n^{4/3+ε}) [Pellegrini 1994].

8.3 Distance Between Polytopes

We wish to compute the Euclidean distance d(P₁, P₂) between two given convex polytopes P₁ and P₂ in R^d. If the polytopes intersect, then this distance is 0. If they do not intersect, then this distance equals the maximum distance between two parallel hyperplanes separating the polytopes; such a pair of hyperplanes is unique, and they are orthogonal to the segment connecting two points a ∈ P₁ and b ∈ P₂ with d(a, b) = d(P₁, P₂). It is shown by Gärtner [1995] that this problem is LP-type, with combinatorial dimension at most d + 2 (or d + 1, if the polytopes do not intersect). It is also shown there that the primitive operations can be performed with expected e^{O(√d)} arithmetic operations. Hence, the problem can be solved by the general LP-type algorithm, whose expected number of arithmetic operations is O(d²n) + e^{O(√d log d)}, where n is the total number of facets in P₁ and P₂. For d = 2, the maximum and the minimum distance between two convex polygons can be computed in O(log n) time, assuming that the vertices of each P_i are stored in an array, sorted in clockwise order [Edelsbrunner 1985].
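For d = 2 and polygons given as vertex arrays, the distance can even be computed naively in O(mn) time by minimizing over all pairs of boundary edges (valid when the polygons are disjoint). The following Python sketch is our own illustration of the objective being optimized, not the O(log n) or LP-type methods cited above:

```python
import math

def _pt_seg_dist(p, a, b):
    # distance from point p to segment ab
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx*dx + dy*dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px-ax)*dx + (py-ay)*dy) / L2))
    return math.hypot(px - (ax + t*dx), py - (ay + t*dy))

def seg_dist(a, b, c, d):
    # distance between segments ab and cd (0 if they properly cross)
    def orient(p, q, r):
        return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    if orient(a, b, c)*orient(a, b, d) < 0 and orient(c, d, a)*orient(c, d, b) < 0:
        return 0.0
    return min(_pt_seg_dist(c, a, b), _pt_seg_dist(d, a, b),
               _pt_seg_dist(a, c, d), _pt_seg_dist(b, c, d))

def polygon_distance(P, Q):
    """Naive O(mn) distance between two disjoint convex polygons, given
    as vertex lists in order; returns 0 if their boundaries cross."""
    m, n = len(P), len(Q)
    return min(seg_dist(P[i], P[(i+1) % m], Q[j], Q[(j+1) % n])
               for i in range(m) for j in range(n))
```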

8.4 Selecting Distances

Let S be a set of n points in the plane, and let 1 ≤ k ≤ (n choose 2) be an integer. We wish to compute the kth smallest distance between a pair of points of S. This can be done using parametric searching. The decision problem is to compute, for a given real r, the sum

Σ_{p∈S} |D_r(p) ∩ (S − {p})|,

where D_r(p) is the closed disk of radius r centered at p. (This sum is twice the number of pairs of points of S at distance ≤ r.) Agarwal et al. [1993a] gave a randomized algorithm, with O(n^{4/3} log^{4/3} n) expected time, for the decision problem, using the random-sampling technique of Clarkson and Shor [1989], which yields an O(n^{4/3} log^{8/3} n) expected-time algorithm for the distance-selection problem. Goodrich [1993] derandomized this algorithm, at a cost of an additional polylogarithmic factor in the running time. Katz and Sharir [1997] obtained an expander-based O(n^{4/3} log^{3+ε} n)-time (deterministic) algorithm for this problem. See also Salowe [1989].
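A brute-force Python sketch (our own, running in O(n² log n) rather than the roughly O(n^{4/3}) bounds above) makes both the decision problem and the selection problem concrete:

```python
from itertools import combinations
import math

def decision(S, r):
    """Number of pairs of points of S at distance <= r: half the sum
    over p of |D_r(p) ∩ (S - {p})| from the text (naive O(n^2) version)."""
    return sum(1 for p, q in combinations(S, 2) if math.dist(p, q) <= r)

def kth_distance(S, k):
    """k-th smallest interpoint distance, 1 <= k <= C(n,2), by brute
    force; the cited algorithms avoid enumerating all pairs."""
    d = sorted(math.dist(p, q) for p, q in combinations(S, 2))
    return d[k - 1]
```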

8.5 Shape Matching

Let P and Q be two polygons with m and n edges, respectively. The problem is to measure the resemblance between P and Q, that is, to determine how well a copy of P can fit Q, if we allow P to translate or to both translate and rotate. The Hausdorff distance is one of the common ways of measuring resemblance between two (fixed) sets P and Q [Huttenlocher and Kedem 1990]; it is defined as

H(P, Q) = max{ max_{a∈P} min_{b∈Q} d(a, b), max_{a∈Q} min_{b∈P} d(a, b) },

where d(·, ·) is the underlying metric, usually the Euclidean distance.
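For finite point sets, the definition translates directly into code. The following Python sketch (a naive O(mn) evaluation of the formula, written by us; it is not one of the cited algorithms) computes H(P, Q):

```python
import math

def hausdorff(P, Q):
    """H(P, Q) = max{ max_{a in P} min_{b in Q} d(a, b),
                      max_{a in Q} min_{b in P} d(a, b) },
    computed naively for finite point sets under the Euclidean metric."""
    def directed(A, B):
        # one-sided Hausdorff distance from A to B
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(directed(P, Q), directed(Q, P))
```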

If we allow P to translate only, then we want to compute min_v H(P + v, Q). The problem has been solved by Agarwal et al. [1994], using parametric searching, in O((mn)² log³(mn)) time, which is significantly faster than the previously best known algorithm by Alt et al. [1995]. If P and Q are finite sets of points, with |P| = m and |Q| = n, a more efficient solution, not based on parametric searching, is proposed by Huttenlocher et al. [1993]. Their solution, however, does not apply to the case of polygons. If we measure distance by the L∞-metric, faster algorithms, based on parametric searching, are developed in Chan [1998], Chew et al. [1995], and Chew and Kedem [1992].

If we allow P to translate and rotate, then computing the minimum Hausdorff distance becomes significantly harder. Chew et al. [1997] gave an O(m²n² log³ mn)-time algorithm when both P and Q are finite point sets, and an O(m³n² log³ mn)-time algorithm when P and Q are polygons.

Another way of measuring the resemblance between two polygons P and Q is by computing the area of their intersection (or, rather, of their symmetric difference). Suppose we wish to minimize the area of the symmetric difference between P and Q, under translation of P. For this case, de Berg et al. [1996] gave an O((m + n) log(m + n))-time algorithm, using the prune-and-search paradigm. Their algorithm can be extended to higher dimensions at a polylogarithmic cost, using parametric searching.

8.6 Surface Simplification

A generic surface-simplification problem is defined as follows. Given a polyhedral object P in R³ and an error parameter ε > 0, compute a polyhedral approximation P′ of P with the minimum number of vertices, so that the maximum distance between P and P′ is at most ε. There are several ways of defining the maximum distance between P and P′, depending on the application. We refer to an object that lies within ε distance from P as an ε-approximation of P. Surface simplification is a central problem in graphics, geographic information systems, scientific computing, and visualization.

One way of solving the problem is to run a binary search on the number of vertices of the approximating surface. We then need to solve the decision problem of determining whether there exists an ε-approximation with at most k vertices, for some given k. Unfortunately, this problem is NP-hard [Agarwal and Suri 1994], so one seeks efficient techniques for computing an ε-approximation of size (number of vertices) close to k_OPT, where k_OPT is the minimum size of an ε-approximation. Although several ad hoc algorithms have been developed for computing an ε-approximation,¹⁰ none of them guarantees any reasonable bound on the size of the output, and many of them do not even ensure that the maximum distance between the input and the output surface is indeed at most ε. There has been some recent progress on developing polynomial-time approximation algorithms for computing ε-approximations in some special cases.

The simplest, but nevertheless an interesting, special case is when P is a convex polytope (containing the origin).

¹⁰ Please see De Floriani [1987], DeHaemer and Zyda [1991], Hoppe et al. [1993, 1994], and Magillo and De Floriani [1996].


In this case we wish to compute another convex polytope Q with the minimum number of vertices so that (1 − ε)P ⊆ Q ⊆ (1 + ε)P (or so that P ⊆ Q ⊆ (1 + ε)P). We can thus pose a more general problem: given two convex polytopes P₁ ⊆ P₂ in R³, compute a convex polytope Q with the minimum number of vertices such that P₁ ⊆ Q ⊆ P₂. Das and Joseph [1990] have attempted to prove that this problem is NP-hard, but their proof contains an error, and it still remains an open problem. Mitchell and Suri [1995] have shown that there exists a nested polytope Q with at most 3k_OPT vertices, whose vertices are a subset of the vertices of P₂. The problem can now be formulated as a hitting-set problem, and, using a greedy approach, they presented an O(n³)-time algorithm for computing a nested polytope with O(k_OPT log n) vertices. Clarkson [1993] showed that the randomized technique described in Section 5 can compute a nested polytope with O(k_OPT log k_OPT) vertices in O(n log^c n) expected time, for some constant c > 0. Brönnimann and Goodrich [1995] extended Clarkson's algorithm to obtain a polynomial-time deterministic algorithm that constructs a nested polytope with O(k_OPT) vertices.
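The O(k_OPT log n) guarantee quoted above is the standard greedy set-cover (equivalently, hitting-set) bound. As a generic illustration of that greedy step, written by us and divorced from the geometric reduction itself:

```python
def greedy_set_cover(universe, sets):
    """Generic greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements.  Returns the indices of the chosen sets and
    achieves the classical O(log n)-factor approximation, mirroring the
    O(k_OPT log n) size bound quoted in the text."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```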

A widely studied special case of surface simplification, motivated by applications in geographic information systems and scientific computing, is when P is a polyhedral terrain (i.e., the graph of a continuous piecewise-linear bivariate function). In most of the applications, P is represented as a finite set of n points, sampled from the input surface, and the goal is to compute a polyhedral terrain Q with the minimum number of vertices, such that the vertical distance between any point of P and Q is at most ε. Agarwal and Suri [1994] showed that this problem is NP-hard. They also gave a polynomial-time algorithm for computing an ε-approximation of size O(k_OPT log k_OPT), by reducing the problem to a geometric set-cover problem, but the running time of their algorithm is O(n⁸), which is rather high. Agarwal and Desikan [1997] have shown that Clarkson's randomized algorithm can be extended to compute a polyhedral terrain of size O(k_OPT² log² k_OPT) in expected time O(n^{2+δ} + k_OPT³ log³ k_OPT). The survey paper by Heckbert and Garland [1995] summarizes most of the known results on terrain simplification.

Instead of fixing ε and minimizing the size of the approximating surface, we can fix the size and ask for the best approximation. That is, given a polyhedral surface P and an integer k, compute an approximating surface Q that has at most k vertices, whose distance from P is the smallest possible. Very little is known about this problem, except in the plane. Goodrich [1995] showed that, given a set S of n points in the plane, an x-monotone polygonal chain Q with at most k vertices that minimizes the maximum vertical distance between Q and the points of S can be computed in O(n log n) time. His algorithm is based on the parametric-searching technique and uses Cole's improvement of parametric searching. (See Goodrich [1995] for related work on this problem.) If the vertices of Q are required to be a subset of S, the best known algorithm is by Varadarajan [1996]; it is based on parametric searching, and its running time is O(n^{4/3+ε}).

9. STATISTICAL ESTIMATORS AND RELATED PROBLEMS

9.1 Plane Fitting

Given a set S of n points in R³, we wish to fit a plane h through S so that the maximum distance between h and the points of S is minimized. This is the same problem as computing the width of S (the smallest distance between a pair of parallel supporting planes of S), which is considerably harder than the two-dimensional variant mentioned in Section 7.2. Houle and Toussaint [1988] gave an O(n²)-time algorithm for computing the width in R³. This can be improved using parametric searching. The decision procedure is to determine, for a given distance w, whether the convex hull of S has two antipodal edges, such that the two parallel planes containing these edges are supporting planes of S and lie at distance ≤ w. (One also needs to consider pairs of parallel planes, one containing a facet of conv(S) and the other passing through a vertex. However, it is easy to test all these pairs in O(n log n) time.) The major technical issue here is to avoid having to test the quadratically many pairs of antipodal edges that may exist in the worst case. Chazelle et al. [1993] gave an algorithm that is based on parametric searching and runs in time O(n^{8/5+ε}) (see Agarwal et al. [1997a] for an improved bound). They reduced the width problem to the problem of computing a closest pair between two sets L, L′ of lines in R³ (each line containing an edge of the convex hull of S), such that each line in L lies below all the lines of L′. The fact that this latter problem now has an improved O(n^{3/2+ε}) expected-time solution (see Section 8.2 and Agarwal and Sharir [1996a]) implies that the width can also be computed in expected time O(n^{3/2+ε}). See Korneenko and Martini [1993], Matoušek et al. [1993], Stein and Werman [1992a,b], and Varadarajan and Agarwal [1995] for other results on hyperplane fitting.

9.2 Circle Fitting

Given a set S of n points in the plane, we wish to fit a circle C through S so that the maximum distance between the points of S and C is minimized. This is equivalent to finding an annulus of minimum width that contains S. Ebara et al. [1989] observed that the center of a minimum-width annulus is either a vertex of the closest-point Voronoi diagram of S, a vertex of the farthest-point Voronoi diagram, or an intersection point of a pair of edges of the two diagrams. Based on this observation, they obtained a quadratic-time algorithm. Using parametric searching, Agarwal et al. [1994] showed that the center of the minimum-width annulus can be found without checking all of the O(n²) candidate intersection points explicitly; their algorithm runs in O(n^{8/5+ε}) time; see also Agarwal et al. [1997a] for an improved solution. Using randomization and an improved analysis, the expected running time was improved to O(n^{3/2+ε}) by Agarwal and Sharir [1996a]. Finding an annulus of minimum area that contains S is a simpler problem, since it can be formulated as an instance of linear programming in R⁴, and can thus be solved in O(n) time [Megiddo 1984]. Recently, researchers have studied the problem of fitting a circle of a given radius through S so that the maximum distance between S and the circle is minimized. This problem is considerably simpler and can be solved in O(n log n) time [de Berg et al. 1997; Duncan et al. 1997]. In certain applications [Hocken 1993; Roy and Zhang 1992; Voelcker 1993], one wants to fit a circle C through S so that the sum of distances between C and the points of S is minimized. No algorithm is known for computing an exact solution, although several numerical techniques have been proposed; see Berman [1989], Landau [1987], and Thomas and Chen [1989]. See García-López and Ramos [1997], Le and Lee [1991], Mehlhorn et al. [1997], and Swanson [1995] for other variants of the circle-fitting problem and for some special cases.
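The objective in the minimum-width annulus problem is easy to state in code. The following Python sketch (ours) evaluates the width of the thinnest annulus centered at a candidate point; Ebara et al.'s observation restricts the candidate centers to Voronoi vertices and edge intersections, which we abstract here as an explicit candidate list:

```python
import math

def annulus_width(c, S):
    """Width of the thinnest annulus centered at c that contains S:
    the maximum minus the minimum distance from c to the points."""
    d = [math.dist(c, p) for p in S]
    return max(d) - min(d)

def best_center(S, candidates):
    """Naive stand-in for the Voronoi-based candidate search of Ebara
    et al.: evaluate the width objective over an explicit candidate set
    and return the best center."""
    return min(candidates, key=lambda c: annulus_width(c, S))
```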

9.3 Cylinder Fitting

Given a set S of n points in R³, we wish to find a cylinder of the smallest radius that contains S. Using parametric searching, the decision problem in this case can be rephrased as follows: given a set B of n balls of a fixed radius r in R³, determine whether there exists a line that intersects all the balls of B (the balls are centered at the points of S, and the line is the symmetry axis of a cylinder of radius r that contains S). Agarwal and Matoušek [1994] showed that finding such a line can be reduced to computing the convex hull of a set of n points in R⁹, which, combined with parametric searching, leads to an O(n⁴ log^{O(1)} n)-time algorithm for finding a smallest cylinder enclosing S; see, for example, Schömer et al. [1996]. The bound was recently improved by Agarwal et al. [1997b] to O(n^{3+ε}), by showing that the combinatorial complexity of the space of all lines that intersect all the balls of B is O(n^{3+ε}), and by designing a different algorithm, also based on parametric searching, whose decision procedure calculates this space of lines and determines whether it is nonempty. Faster algorithms have been developed for some special cases [Follert et al. 1995; Schömer et al. 1996]. Agarwal et al. [1997b] also gave an O(n/δ²)-time algorithm to compute a cylinder of radius ≤ (1 + δ)r* containing all the points of S, where r* is the radius of the smallest cylinder enclosing S.
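For a fixed axis direction, the smallest enclosing cylinder reduces to a planar smallest-enclosing-circle computation on the points projected orthogonally to the axis. The Python sketch below is our own illustration; it uses a naive O(n⁴) enclosing-circle routine where Welzl's algorithm would give expected linear time, and the algorithms cited above additionally optimize over the axis direction:

```python
import math, itertools

def _circle2(a, b):
    # circle with diameter ab
    return ((a[0]+b[0])/2, (a[1]+b[1])/2), math.dist(a, b)/2

def _circle3(a, b, c):
    # circumcircle of a triangle; None for (near-)collinear triples
    d = 2*(a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def min_enclosing_radius(pts):
    # naive: the optimal circle is determined by 2 or 3 of the points
    best = 0.0 if len(pts) < 2 else None
    cands = [_circle2(a, b) for a, b in itertools.combinations(pts, 2)]
    cands += [c for t in itertools.combinations(pts, 3) if (c := _circle3(*t))]
    for center, r in cands:
        if all(math.dist(center, p) <= r + 1e-9 for p in pts):
            best = r if best is None else min(best, r)
    return best

def cylinder_radius(points3d, axis):
    """Radius of the thinnest cylinder with the given axis direction that
    contains the points: project onto a plane orthogonal to the axis and
    take the smallest enclosing circle of the projections."""
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    ax, ay, az = ax/n, ay/n, az/n
    # orthonormal basis (u, v) of the plane orthogonal to the axis
    u = (-ay, ax, 0.0) if abs(az) < 0.9 else (0.0, -az, ay)
    nu = math.sqrt(sum(x*x for x in u)); u = tuple(x/nu for x in u)
    v = (ay*u[2]-az*u[1], az*u[0]-ax*u[2], ax*u[1]-ay*u[0])
    proj = [(sum(p[i]*u[i] for i in range(3)), sum(p[i]*v[i] for i in range(3)))
            for p in points3d]
    return min_enclosing_radius(proj)
```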

Note that this problem is different from those considered in the two previous subsections. The problem analogous to those studied there would be to find a cylindrical shell (a region enclosed between two concentric cylinders) of smallest width (the difference between the radii of the cylinders) that contains a given point set S. This problem is considerably harder, and no solution for it that improves upon the naive brute-force technique is known.

9.4 Center Points

Given a set S of n points in the plane, we wish to compute a center point s ∈ R², such that any halfplane containing s also contains at least n/3 points of S. It is a known consequence of Helly's theorem that s always exists [Edelsbrunner 1987]. In a dual setting, let L be the set of lines dual to the points in S, and let K₁, K₂ be the convex hulls of the n/3- and 2n/3-levels of the arrangement A(L), respectively.¹¹ The dual of a center point of S is a line separating K₁ and K₂. This implies that the set of center points is a convex polygon with at most 2n edges.

Cole et al. [1987] gave an O(n log³ n)-time algorithm for computing a center point, using multidimensional parametric searching. Using the prune-and-search paradigm, Matoušek [1991a] obtained an O(n log³ n)-time algorithm for computing K₁ and K₂, which in turn yields the set of all center points. Recently, Jadhav and Mukhopadhyay [1994] gave a linear-time algorithm for computing a center point, using a direct and elegant technique.

Near-quadratic algorithms for computing a center point in three dimensions were developed in Cole et al. [1987] and Naor and Sharir [1990]. Clarkson et al. [1996] gave an efficient algorithm for computing an approximate center point in R^d.
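The defining property of a center point is straightforward to verify, since the binding halfplanes are those whose boundary passes through s. The following Python sketch is our own heuristic check: it samples candidate normal directions derived from the vectors s → p, whereas an exact verifier would sweep through all O(n) angular intervals around s:

```python
import math

def is_center_point(s, S):
    """Heuristic check of the center-point property: every closed
    halfplane whose boundary passes through s should contain at least
    ceil(n/3) points of S.  Candidate normals are taken from the point
    directions and their 90-degree rotations."""
    n = len(S)
    need = math.ceil(n / 3)
    dirs = [(p[0] - s[0], p[1] - s[1]) for p in S if p != s]
    for dx, dy in dirs:
        for nx, ny in ((dx, dy), (-dx, -dy), (-dy, dx), (dy, -dx)):
            if sum(1 for p in S if nx*(p[0]-s[0]) + ny*(p[1]-s[1]) >= 0) < need:
                return False
    return True
```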

9.5 Ham-Sandwich Cuts

Let S₁, . . . , S_d be d (finite) point sets in R^d. A ham-sandwich cut is a hyperplane h such that each of the two open halfspaces bounded by h contains at most |S_i|/2 points of S_i, for each 1 ≤ i ≤ d. The ham-sandwich theorem (see, e.g., Edelsbrunner [1987]) guarantees the existence of such a cut. For d = 2, there is always a ham-sandwich cut whose dual is an intersection point of the median levels of A(L₁) and A(L₂), where L_i is the set of lines dual to the points in S_i, for i = 1, 2. It can be shown that the number of intersection points between the median levels of A(L₁) and of A(L₂) is always odd.

Several prune-and-search algorithms have been proposed for computing a ham-sandwich cut in the plane. Megiddo [1985] gave a linear-time algorithm for the special case in which S₁ and S₂ are linearly separable. Modifying this algorithm, Edelsbrunner and Waupotitsch [1986] gave an O(n log n)-time algorithm when S₁ and S₂ are not necessarily linearly separable. A linear-time, recursive algorithm for this general case is given by Lo and Steiger [1990]. It works as follows. At each level of recursion, the algorithm maintains two sets of lines, R and B, and two integers p, q, such that any intersection point between K_R^p, the p-level of A(R), and K_B^q, the q-level of A(B), is dual to a ham-sandwich cut of the original sets; moreover, the number of such intersections is guaranteed to be odd. The goal is to compute an intersection point of K_R^p and K_B^q. Initially, R and B are the sets of lines dual to S₁ and S₂, and p = |S₁|/2, q = |S₂|/2. Let r be a sufficiently large constant. One then computes a (1/r)-cutting Ξ of R ∪ B. At least one of the triangles of Ξ contains an odd number of intersection points of the levels. By computing the intersection points of the edges of Ξ with R ∪ B, such a triangle Δ can be found in linear time. Let R_Δ ⊆ R and B_Δ ⊆ B be the subsets of lines in R and B, respectively, that intersect Δ, and let p′ (respectively, q′) be the number of lines in R (respectively, B) that lie below Δ. We then solve the problem recursively for R_Δ and B_Δ with the respective levels p* = p − p′ and q* = q − q′. It easily follows that the p*-level of A(R_Δ) and the q*-level of A(B_Δ) intersect at an odd number of points. Since |R_Δ| + |B_Δ| = O(n/r), the total running time of the recursive algorithm is O(n). Lo et al. [1994] extended this approach to R³, and obtained an O(n^{3/2})-time algorithm for computing ham-sandwich cuts in three dimensions.

¹¹ The level of a point p with respect to A(L) is the number of lines lying strictly below p. The k-level of A(L) is the closure of the set of edges of A(L) whose level is k (the level is fixed over an edge of A(L)); each k-level is an x-monotone connected polygonal chain.
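For intuition, a quadratic-time brute-force search (ours, in contrast to the linear-time algorithm just described) finds a planar ham-sandwich cut through one point of each set; for odd-size sets in general position such a cut always exists:

```python
def ham_sandwich_cut(S1, S2):
    """Brute-force planar ham-sandwich cut: try every line through one
    point of S1 and one point of S2, and accept when each open halfplane
    contains at most half of each set."""
    def side(a, b, p):
        # sign of the cross product: which side of line ab the point p is on
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    def ok(a, b, S):
        pos = sum(1 for p in S if side(a, b, p) > 0)
        neg = sum(1 for p in S if side(a, b, p) < 0)
        return pos <= len(S)//2 and neg <= len(S)//2
    for a in S1:
        for b in S2:
            if a != b and ok(a, b, S1) and ok(a, b, S2):
                return a, b
    return None
```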

10. PLACEMENT AND INTERSECTION

10.1 Intersection of Polyhedra

Given a set 𝒫 = {P₁, . . . , P_m} of m convex polyhedra in R^d, with a total of n facets, is their common intersection I = ∩_{i=1}^m P_i nonempty? Of course, this is an instance of linear programming in R^d with n constraints, but the goal is to obtain faster algorithms that depend on m more significantly than they depend on n. Reichling [1988a] presented an O(m log² n)-time prune-and-search algorithm for d = 2. His algorithm maintains a vertical strip W bounded by two vertical lines b_l, b_r, such that I ⊆ W. Let k be the total number of vertices of all the P_i lying inside W. If k ≤ m log n, the algorithm explicitly computes I ∩ W in O(m log² n) time. Otherwise, it finds a vertical line ℓ inside W such that both W₁ and W₂ contain at least k/4 vertices, where W₁ (respectively, W₂) is the portion of W lying to the right (respectively, to the left) of ℓ. To do so, it finds the vertex with the median x-coordinate of each P_i, and takes ℓ: x = x_m, where x_m is an appropriately weighted median of these median coordinates.

By running a binary search on each P_i, one can determine whether ℓ intersects P_i and, if so, obtain the top and bottom edges of P_i intersecting ℓ. This allows us to compute the intersection I ∩ ℓ as the intersection of m intervals, in O(m) time. If this intersection is nonempty, we stop, since we have found a point in I. Otherwise, if one of the polygons of 𝒫 lies fully to the right (respectively, to the left) of ℓ, then I cannot lie in W₂ (respectively, in W₁); if one polygon lies fully in W₂ and another lies fully in W₁, then clearly I = ∅. Finally, if ℓ intersects all the P_i, but their intersection along ℓ is empty, then, following the same technique as in Megiddo's [1983b] two-dimensional linear-programming algorithm, one can determine, in additional O(m) time, which of W₁, W₂ can be asserted not to contain I. Hence, if the algorithm has not stopped, it needs to recurse in only one of the slabs W₁, W₂. Since the algorithm prunes a fraction of the vertices at each stage, it terminates after O(log n) stages, from which the asserted running time follows easily. Reichling [1988b] and Eppstein [1992] extended this approach to d = 3, but their approaches do not extend to higher dimensions. However, if we have a comparison-based data structure that can determine in O(log n) time whether a query point lies in a specified P_i, then, using multidimensional parametric searching, we can determine in time O(m log^{O(1)} n) whether I ≠ ∅.
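In two dimensions, the feasibility question "is I nonempty?" asks whether the half-planes defining all the P_i have a common point. The Python sketch below (ours; a naive O(k³) vertex-enumeration test over k half-planes, in contrast to Reichling's prune-and-search) illustrates the underlying LP feasibility problem for bounded feasible regions:

```python
import itertools

def common_point(halfplanes, eps=1e-9):
    """Each halfplane is a triple (a, b, c) meaning ax + by <= c.  If the
    (bounded) feasible region is nonempty, some vertex of it, i.e. an
    intersection of two boundary lines, lies in every halfplane; we test
    all such candidate vertices.  Returns a feasible point or None."""
    for (a1, b1, c1), (a2, b2, c2) in itertools.combinations(halfplanes, 2):
        det = a1*b2 - a2*b1
        if abs(det) < eps:
            continue  # parallel boundary lines: no candidate vertex
        x = (c1*b2 - c2*b1) / det
        y = (a1*c2 - a2*c1) / det
        if all(a*x + b*y <= c + eps for a, b, c in halfplanes):
            return (x, y)
    return None
```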

10.2 Polygon Placement

Let P be a convex m-gon, and let Q be a closed planar polygonal environment with n edges. We wish to compute the largest similar copy of P (under translation, rotation, and scaling) that can be placed inside Q. Using the generalized Delaunay triangulation induced by P within Q, Chew and Kedem [1993] obtained an O(m⁴n 2^{2α(n)} log n)-time algorithm. Faster algorithms have been developed using parametric searching [Agarwal et al. 1997c; Sharir and Toledo 1994]. The decision problem in this case can be defined as follows. Given a convex polygon B with m edges (a scaled copy of P) and a planar polygonal environment Q with n edges, can B be placed inside Q (allowing translation and rotation)? Each placement of B can be represented as a point in R³, using two coordinates for translation and one for rotation. Let FP denote the resulting three-dimensional space of all free placements of B inside Q. FP is the union of a collection of cells of an arrangement of O(mn) contact surfaces in R³. Leven and Sharir [1987] have shown that the complexity of FP is O(mn λ₆(mn)), where λ_s(n) is the maximum length of a Davenport–Schinzel sequence of order s composed of n symbols [Sharir and Agarwal 1995] (it is almost linear in n for any fixed s). Sharir and Toledo [1994] gave an O(m²n λ₆(mn) log mn)-time algorithm to determine whether FP ≠ ∅: they first compute a superset of the vertices of FP, in O(mn λ₆(mn) log mn) time, and then spend O(m log n) time for each of these vertices to determine whether the corresponding placement of B is free, using a triangle range-searching data structure. Recently, Agarwal et al. [1997c] gave an O(mn λ₆(mn) log mn) expected-time randomized algorithm to compute FP. Plugging these algorithms into the parametric-searching machinery, one can obtain an O(m²n λ₆(mn) log³ mn log log mn)-time deterministic algorithm, or an O(mn λ₆(mn) log⁴ mn) expected-time randomized algorithm, for computing a largest similar placement of P inside Q.

Faster algorithms are known for computing a largest placement of P inside Q in some special cases. If both P and Q are convex, then a largest similar copy of P inside Q can be computed in time O(mn² log n) [Agarwal et al. 1998]; if P is not allowed to rotate, then the running time is O(m + n log² n) [Toledo 1991].

The biggest-stick problem is another interesting special case of the largest-placement problem; here Q is a simple polygon and P is a line segment. In this case, we are interested in finding the longest segment that can be placed inside Q. This problem can be solved using a divide-and-conquer algorithm, developed in Agarwal et al. [1994], and later refined in Agarwal et al. [1997a] and Agarwal and Sharir [1996a]. It proceeds as follows. Partition Q into two simple polygons Q₁, Q₂ by a diagonal ℓ, so that each of Q₁ and Q₂ has at most 2n/3 vertices. Recursively compute the longest segment that can be placed in each Q_i, and then determine the longest segment that can be placed in Q and that intersects the diagonal ℓ. The decision algorithm for the merge step is to determine whether there exists a placement of a line segment of length w that lies inside Q and crosses ℓ. Agarwal et al. [1994] have shown that this problem can be reduced to the following. We are given a set S of points and a set Γ of algebraic surfaces in R⁴, where each surface is the graph of a trivariate function, and we wish to determine whether every point of S lies below all the surfaces of Γ. Agarwal and Sharir [1996a] gave a randomized algorithm with O(n^{3/2+ε}) expected running time for this point-location problem. Using randomization, instead of parametric searching, they obtained an O(n^{3/2+ε}) expected-time procedure for the overall merge step (finding the biggest stick that crosses ℓ). The total running time of the algorithm is therefore also O(n^{3/2+ε}). Finding a longest segment inside Q whose endpoints are vertices of Q is a simpler problem, and can be solved by a linear-time algorithm due to Hershberger and Suri [1993b].

10.3 Collision Detection

Let P and Q be two (possibly nonconvex) polyhedra in R³. P is assumed to be fixed and Q to move along a given trajectory π. The goal is to determine the first position on π, if any, at which Q intersects P. This problem can be solved using parametric searching. Suppose, for example, that Q is only allowed to translate along a line. Then the decision problem is to determine whether Q intersects P as it translates along a segment e in R³. Let Q_e = Q ⊕ e be the Minkowski sum of Q and e. Then Q intersects P as it translates along e if and only if Q_e intersects P. This intersection problem can be solved in O(n^{8/5+ε}) time using simplex range-searching data structures [Pellegrini 1993; Schömer and Thiel 1995]. Plugging this into the parametric-searching machinery, we can compute, in O(n^{8/5+ε}) time, the first intersection of Q with P as Q moves along a line.

If Q rotates around a fixed axis ℓ, then the decision problem is to determine whether Q intersects P as it rotates by a given angle θ from its initial position. In this case, each edge of Q sweeps a section of a hyperboloid. Schömer and Thiel [1995] have shown that, using a standard linearization technique (as described in Agarwal and Matoušek [1994] and Yao and Yao [1985]), the intersection-detection problem can be formulated as an instance of simplex range searching in R⁵, and can be solved in time O(n^{8/5+ε}). Plugging this algorithm into the parametric-searching technique, we can also compute the first intersection in time O(n^{8/5+ε}).

Gupta et al. [1994] have studied various collision-detection problems for a set of moving points in the plane. For example, they give an O(n^{5/3} log^{6/5} n)-time algorithm for determining whether a collision occurs in a set of points, each moving in the plane along a line with constant velocity.
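For points moving with constant velocities, the earliest collision time is the minimum, over all pairs, of the solution of a small linear system. A naive O(n²) Python sketch of this check (ours, versus the O(n^{5/3} log^{6/5} n) bound above):

```python
def first_collision(points, velocities, eps=1e-12):
    """Earliest time t >= 0 at which two points p_i + t*v_i and
    p_j + t*v_j coincide, or None if no pair ever collides."""
    best = None
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            dp = [points[j][k] - points[i][k] for k in range(2)]
            dv = [velocities[j][k] - velocities[i][k] for k in range(2)]
            if abs(dv[0]) < eps and abs(dv[1]) < eps:
                continue  # identical velocities: relative position is fixed
            # need dp + t*dv = (0, 0) in both coordinates simultaneously
            ts = [-dp[k] / dv[k] for k in range(2) if abs(dv[k]) >= eps]
            t = ts[0]
            if t >= 0 and all(abs(dp[k] + t * dv[k]) < 1e-9 for k in range(2)):
                best = t if best is None else min(best, t)
    return best
```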

11. QUERY-TYPE PROBLEMS

Parametric searching has also been successfully applied in designing efficient data structures for a number of query-type problems. In this section we discuss a few of these problems, including ray shooting and linear-optimization queries.

11.1 Ray Shooting

The general ray-shooting problem can be defined as follows: preprocess a given set S of objects in R^d (usually d = 2 or 3), so that the first object hit by a query ray can be computed efficiently. The ray-shooting problem arises in computer graphics, visualization, and in many other geometric problems [Agarwal and Matoušek 1993, 1994, 1995; Pellegrini 1996]. The connection between ray shooting and parametric searching was observed by Agarwal and Matoušek [1993]. Here the decision problem is to determine, for a specified point σ on the query ray ρ, whether the initial segment sσ of ρ intersects any object in S (where s is the origin of ρ). Hence, we need to execute generically, in the parametric-searching style, an appropriate intersection-detection query procedure on the segment sσ, where σ is the (unknown) first intersection point of ρ and the objects in S. Based on this technique, several efficient ray-shooting data structures have been developed.¹² We illustrate this technique by giving a simple example.

Let S be a set of n lines in the plane, and let S* be the set of points dual to the lines of S. A segment e intersects a line of S if and only if the double wedge e* dual to e contains a point of S*. Hence, a segment intersection-detection query for S can be answered by preprocessing S* into a triangle (or a wedge) range-searching structure; see, for example, Matoušek [1992, 1993]. Roughly speaking, we construct a partition tree T on S* as follows. We fix a sufficiently large constant r. If |S*| ≤ 2r, T consists of a single node storing S*. Otherwise, using a result of Matoušek [1992], we construct, in O(n) time, a family of pairs Π = {(S*₁, Δ₁), . . . , (S*_u, Δ_u)} such that (i) S*₁, . . . , S*_u form a partition of S*; (ii) n/r ≤ |S*_i| ≤ 2n/r for each i; (iii) each Δ_i is a triangle containing S*_i; and (iv) every line intersects at most c√r triangles Δ_i of Π, for some absolute constant c (independent of r). We recursively construct a partition tree T_i on each S*_i, and attach it as the ith subtree of T. The root of T_i stores the simplex Δ_i. The size of T is linear, and the time spent in constructing T is O(n log n).

¹² Please see Agarwal et al. [1997a], Agarwal and Matoušek [1993], and Agarwal and Sharir [1996b,c].

Let e be a query segment, and let e* be its dual double wedge. To determine whether e intersects any line of S (i.e., whether e* contains any point of S*), we traverse T in a top-down fashion, starting from the root. Let v be a node visited by the algorithm. If v is a leaf, we explicitly check whether any point of S*_v lies in the double wedge e*. Suppose then that v is an internal node. If Δ_v ⊆ e*, then clearly e* ∩ S* ≠ ∅, and we stop. If Δ_v ∩ e* = ∅, then we stop processing v and do not visit any of its children. If ∂e* intersects Δ_v, we recursively visit all the children of v. Let Q(n_v) denote the number of nodes in the subtree rooted at v visited by the query procedure (n_v is the size of S*_v). By construction, a line intersects at most c√r triangles of P_v (the partition constructed at v), so ∂e* intersects at most 2c√r triangles of P_v. Hence, we obtain the following recurrence:

Q(n_v) ≤ 2c√r · Q(2n_v/r) + O(r).

The solution of the preceding recurrence is O(n^{1/2+ε}), for any ε > 0, provided r is chosen sufficiently large (as a function of ε). Since the height of T is O(log n), we can answer a query in O(log n) parallel time, using O(n^{1/2+ε}) processors, by visiting the nodes of the same level in parallel.
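As a quick sanity check on this bound (not part of the original analysis), one can unroll the recurrence: the recursion tree has branching factor 2c√r and depth about log_{r/2} n, so the query cost is roughly n^α with α = log(2c√r)/log(r/2), which tends to 1/2 as r grows. The following sketch (with an arbitrary stand-in value c = 2) makes the convergence visible:

```python
import math

def query_exponent(r: float, c: float = 2.0) -> float:
    """Exponent alpha such that Q(n) ~ n^alpha for the recurrence
    Q(n) <= 2*c*sqrt(r) * Q(2n/r) + O(r): the recursion tree has
    branching factor 2*c*sqrt(r) and depth log_{r/2} n."""
    return math.log(2 * c * math.sqrt(r)) / math.log(r / 2)

# The exponent decreases toward 1/2 from above as r grows.
for r in (10**2, 10**4, 10**8, 10**16):
    print(r, round(query_exponent(r), 4))
```

Choosing r larger pushes the exponent toward 1/2 but inflates the constants hidden in the O(r) term, which is the trade-off behind choosing "r sufficiently large as a function of ε."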

Returning to the task of answering a ray-shooting query, let ρ be a query ray with origin s, and let σ be the (unknown) first intersection point of ρ with a line of S. We compute σ by running the parallel version of the segment intersection-detection procedure generically, in the parametric-searching style, on the segment γ = sσ. At each node v that the query procedure visits, it tests whether Δ_v ⊆ γ* or Δ_v ∩ ∂γ* ≠ ∅. Since σ is the only indeterminate in these tests, the tests reduce to determining, for each vertex p of Δ_v, whether p lies above, below, or on the line σ* dual to σ; see Figure 6. Let ξ_p be the intersection point of the dual line p* and the line containing ρ, and set γ_p = sξ_p. By determining whether the segment γ_p intersects a line of S, we can determine whether p lies above or below σ*. Hence, using this parametric-searching approach, a ray-shooting query can be answered in O(n^{1/2+ε}) time.

Fig. 6. A ray-shooting query.
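The duality underlying these queries is easy to test numerically. Under the standard duality that maps a point p = (a, b) to the line p*: y = ax − b, a non-vertical line y = mx + k maps to the point (m, −k), and a segment e = uv meets that line exactly when the dual point lies in the double wedge e* bounded by u* and v*. The sketch below (our own toy verification; the coordinate convention is an assumption, not the paper's notation) checks the equivalence on random inputs:

```python
import random

def side(p, m, k):
    """Sign of point p relative to the line y = m*x + k (+1 above, -1 below, 0 on)."""
    s = p[1] - (m * p[0] + k)
    return (s > 0) - (s < 0)

def segment_meets_line(u, v, m, k):
    """Primal test: segment uv crosses y = m*x + k iff its endpoints
    do not lie strictly on the same side of the line."""
    return side(u, m, k) * side(v, m, k) <= 0

def dual_point_in_double_wedge(u, v, a, b):
    """Dual test: the point (a, b) lies in the double wedge bounded by
    the dual lines u*: y = u[0]*x - u[1] and v*: y = v[0]*x - v[1]."""
    su = b - (u[0] * a - u[1])
    sv = b - (v[0] * a - v[1])
    return su * sv <= 0

# The two tests agree: the line y = m*x + k dualizes to the point (m, -k).
random.seed(1)
for _ in range(1000):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    m, k = random.uniform(-5, 5), random.uniform(-5, 5)
    assert segment_meets_line(u, v, m, k) == dual_point_in_double_wedge(u, v, m, -k)
```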

444 • P. K. Agarwal and M. Sharir


Several other ray-shooting data structures based on this technique have been developed in Agarwal et al. [1997a], Agarwal and Matousek [1993, 1994, 1995], Mohaban and Sharir [1997], and Pellegrini [1996].

11.2 Linear-Optimization Queries

We wish to preprocess a set H of halfspaces in R^d into a linear-size data structure so that, for a linear objective function c, we can efficiently compute the vertex of ∩H that minimizes c. Using multidimensional parametric searching and data structures for answering halfspace-emptiness queries, Matousek [1993] presented an efficient algorithm that, for any n ≤ m ≤ n^⌊d/2⌋, can answer linear-optimization queries in O((n/m^{1/⌊d/2⌋}) log^{2d−1} n) time using O(m log^{O(1)} n) space and preprocessing. A simpler randomized algorithm with O(n) space and O(n^{1−1/⌊d/2⌋} 2^{O(log* n)}) expected query time was proposed by Chan [1996]. Linear-optimization queries can be used to answer many other queries. For example, using Matousek's technique and a dynamic data structure for halfspace range searching, the 1-center of a set S of points in R^d can be maintained dynamically, as points are inserted into or deleted from S. See Agarwal et al. [1995], Agarwal and Matousek [1995], and Matousek [1993] for additional applications of multidimensional parametric searching for query-type problems.
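For intuition, the query answered by these structures can be stated very concretely in the plane: among all vertices of the feasible polygon ∩H, report the one minimizing c·x. A naive O(n³)-time baseline (our own illustrative formulation, with each halfspace written as a·x ≤ b) simply enumerates candidate vertices; the data structures above exist precisely to avoid this brute-force scan at query time:

```python
from itertools import combinations

def linear_opt_query(halfspaces, c, eps=1e-9):
    """Brute-force 2D linear-optimization query: halfspaces are pairs
    ((a1, a2), b) meaning a1*x + a2*y <= b; returns the feasible vertex
    minimizing c.x, or None if no vertex is feasible."""
    best = None
    for ((a1, a2), b1), ((a3, a4), b2) in combinations(halfspaces, 2):
        det = a1 * a4 - a2 * a3
        if abs(det) < eps:             # parallel boundaries: no vertex
            continue
        x = (b1 * a4 - a2 * b2) / det  # Cramer's rule for the two boundary lines
        y = (a1 * b2 - b1 * a3) / det
        if all(a[0] * x + a[1] * y <= b + eps for a, b in halfspaces):
            val = c[0] * x + c[1] * y
            if best is None or val < best[0]:
                best = (val, (x, y))
    return None if best is None else best[1]

# Minimize x + y over the unit square [0, 1]^2 -> vertex (0, 0).
square = [((1, 0), 1), ((-1, 0), 0), ((0, 1), 1), ((0, -1), 0)]
print(linear_opt_query(square, (1, 1)))
```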

11.3 Extremal Placement Queries

Let S be a set of n points in R^d. We wish to preprocess S into a data structure so that queries of the following form can be answered efficiently. Let Δ(t), for t ∈ R, be a family of simplices so that Δ(t_1) ⊆ Δ(t_2), for any t_1 ≤ t_2, and such that for any t, Δ(t) can be computed in O(1) time. The goal is to compute the largest value t = t_max for which the interior of Δ(t_max) does not contain any point of S. For example, let Δ be a fixed simplex and let Δ(t) = tΔ be the simplex obtained by scaling Δ by a factor of t. We thus want to find the largest dilated copy of Δ that does not contain any point of S. As another example, let ℓ_1, ℓ_2 be two lines, and let m be a constant. Define Δ(t) to be the triangle formed by ℓ_1, ℓ_2, and the line with slope m at distance t from the intersection point ℓ_1 ∩ ℓ_2; see Figure 7. These problems arise in many applications, including hidden surface removal [Pellegrini 1996] and Euclidean shortest paths [Mitchell 1993].

By preprocessing S into a simplex range-searching data structure, we can determine whether Δ(t_0) ∩ S = ∅, for any given t_0, and then plug this query procedure into the parametric-searching machinery, thereby obtaining the desired t_max. The best known data structure for simplex range searching can answer a query in time O((n/m^{1/d}) log^{d+1} n), using O(m) space, so an extremal placement query can be answered in time O((n/m^{1/d}) log^{2(d+1)} n).
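Since the simplices Δ(t) are nested, the emptiness predicate is monotone in t, and a plain numerical binary search already approximates t_max; parametric searching is what turns this into an exact algorithm. The sketch below (a simplified stand-in, using the scaled-triangle example with the scaling center at the origin) shows only the monotone-predicate skeleton:

```python
def in_open_triangle(p, a, b, c):
    """True if p lies strictly inside the counterclockwise triangle abc."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    return cross(a, b, p) > 0 and cross(b, c, p) > 0 and cross(c, a, p) > 0

def t_max(points, tri, hi=1e6, iters=60):
    """Largest t (up to binary-search precision) such that the interior of
    t*tri (scaled about the origin) avoids all points: the emptiness
    predicate is monotone in t, which parametric searching exploits exactly."""
    def empty(t):
        a, b, c = [(t * x, t * y) for x, y in tri]
        return not any(in_open_triangle(p, a, b, c) for p in points)
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if empty(mid):
            lo = mid
        else:
            hi = mid
    return lo

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
pts = [(0.5, 0.5), (2.0, 2.0)]
print(round(t_max(pts, tri), 6))  # the point (0.5, 0.5) blocks growth at t = 1
```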

12. DISCUSSION

In this survey we have reviewed several techniques for geometric optimization, and discussed many geometric problems that benefit from these techniques. There are, of course, quite a few nongeometric optimization problems that can also be solved efficiently using these techniques. For example, parametric searching has been applied to develop efficient algorithms for these problems: (i) let G be a directed graph, in which the weight of each edge e is a d-variate

Fig. 7. An extremal-placement query.


linear function w_e(x), and let s and t be two vertices in G; find the point x ∈ R^d so that the maximum flow from s to t is maximized over all points x ∈ R^d [Cohen and Megiddo 1993]; (ii) compute the minimum edit distance between two given sequences, where the cost of performing an insertion, deletion, or substitution is a univariate linear function [Gusfield et al. 1994].

This survey mostly reviewed theoretical results in geometric optimization. Many of the algorithms discussed here are relatively simple and are widely used in practice, but some of the algorithms, especially those that involve computing the roots of polynomials of degree greater than four, are quite sophisticated and have little practical value. Much work has been done on developing heuristics for clustering [Agrawal et al. 1998; Jain and Dubes 1988; Kaufman and Rousseeuw 1990], shape analysis [Eric and Grimson 1990; Goodrich et al. 1994; Huttenlocher et al. 1993], surface simplification [Cohen et al. 1996; Hoppe 1997; Luebke and Erikson 1997], and ray tracing [Foley et al. 1990; Mitchell et al. 1994; Whang et al. 1995].

There are several other geometric-optimization problems that are not discussed here. We conclude by mentioning two classes of such problems. The first one is the optimal motion-planning problem, where we are given a moving robot B, an environment with obstacles, and two placements of the robot, and we want to compute an "optimal" collision-free path for B between the given placements. The cost of a path depends on B and on the application. In the simplest case, B is a point robot, O is a set of polygonal obstacles in the plane or polyhedral obstacles in R^3, and the cost of a path is its Euclidean length. We then face the Euclidean shortest-path problem, which has been studied intensively in the past decade; see Canny and Reif [1987], Hershberger and Suri [1993a], and Reif and Storer [1994]. The problem becomes much harder if B is not a point, because even the notion of optimality is not well defined. See Mitchell [1996] for an excellent survey on this topic.

The second class of problems that we want to mention can be called geometric graph problems. Given a set S of points in R^d, we can define a weighted complete graph induced by S, where the weight of an edge (p, q) is the distance between p and q under some suitable metric (two of the most commonly used metrics are the Euclidean and the rectilinear metrics). We can now pose many optimization problems on this graph, including the Euclidean traveling salesperson, Euclidean matching, Euclidean (rectilinear) Steiner trees, and minimum-weight triangulation. Although all these problems can be solved using techniques known for general graphs, the hope is that better and/or simpler algorithms can be developed by exploiting the geometry of the problem. There have been several significant developments on geometric graph problems over the last few years, of which the most exciting are the (1 + ε)-approximation algorithms for the Euclidean traveling salesperson and related problems [Arora 1996, 1997; Rao and Smith 1998]. The best known algorithm, due to Rao and Smith, can compute a (1 + ε)-shortest tour of a set of n points in R^d in time

O(2^{(d/ε)^{O(d)}} n + (d/ε)^d n log n).

We refer the reader to Bern and Eppstein [1997] for a survey on approximation algorithms for such geometric-optimization problems.

APPENDIX: MULTIDIMENSIONAL PARAMETRIC SEARCHING

In this appendix we describe how to extend the parametric-searching technique to higher dimensions. Suppose we have a d-variate (strictly) concave function F(λ), where λ varies over R^d. We wish to compute the point λ* ∈ R^d at which F(λ) attains its maximum value. Let A_s be, as before, an algorithm that can compute F(λ_0) for any given λ_0. As


in the one-dimensional parametric searching, we assume that the control flow of A_s is governed by comparisons, each of which amounts to computing the sign of a d-variate polynomial p(λ) of constant maximum degree. We also need a few additional assumptions on A_s. We call a variable in A_s dynamic if its value depends on λ. The only operations allowed on dynamic variables are: (i) evaluating a polynomial p(λ) of degree at most δ, where δ is a constant, and assigning the value to a dynamic variable; (ii) adding two dynamic variables; and (iii) multiplying a dynamic variable by a constant. These assumptions imply that if λ is indeterminate, then each dynamic variable is a polynomial in λ of degree at most δ, and that F is a piecewise polynomial, each piece being a polynomial of degree at most δ.

We run A_s generically at λ*. Each comparison involving λ now amounts to evaluating the sign of a d-variate polynomial p(λ_1, . . . , λ_d) at λ*.

First consider the case where p is a linear function of the form

p(λ) = a_0 + Σ_{1≤i≤d} a_i λ_i,

such that a_d ≠ 0. Consider the hyperplane

h: λ_d = −(a_0 + Σ_{i=1}^{d−1} a_i λ_i)/a_d.

It suffices to describe an algorithm for computing the point λ*_h ∈ h such that F(λ*_h) = max_{λ∈h} F(λ). By invoking this algorithm on h and on two other hyperplanes h_ε⁺ and h_ε⁻, where

h_ε⁺: λ_d = −(a_0 + ε + Σ_{i=1}^{d−1} a_i λ_i)/a_d

and

h_ε⁻: λ_d = −(a_0 − ε + Σ_{i=1}^{d−1} a_i λ_i)/a_d,

for some arbitrarily small constant ε, we can determine, exploiting the concavity of F, whether λ* ∈ h, λ* ∈ h⁺, or λ* ∈ h⁻, where h⁺ and h⁻ are the two open halfspaces bounded by h. (Technically, one can, and should, treat ε as an infinitesimal quantity; see Megiddo [1984] for details. Also, a similar perturbation scheme works when a_d = 0.) We solve the following more general problem. Let g be a k-flat in R^d

contained in a (k + 1)-flat h, and let g⁺ ⊂ h (resp., g⁻ ⊂ h) be the halfspace of h lying above (resp., below) g, relative to a direction in h orthogonal to g. We wish to compute the point λ*_g ∈ g such that F(λ*_g) = max_{λ∈g} F(λ). Denote by A_s^{(k)} an algorithm for solving this problem. As previously, by running A_s^{(k)} on g and on two infinitesimally shifted copies of g within h, we can determine whether the point λ*_h where F attains its maximum on h lies in g, in g⁺, or in g⁻. Notice that A_s^{(0)} = A_s, and that A_s^{(d−1)} is the algorithm for computing λ*_h. Inductively, assume that we have an algorithm A_s^{(k−1)} that can solve this problem for any (k − 1)-dimensional flat. We run A_s generically at λ*_g, where λ varies over g. Each comparison involves the determination of the side of a (k − 1)-flat g′ ⊂ g that contains λ*_g. Running A_s^{(k−1)} on g′ and on two other infinitesimally shifted copies, g′_ε⁺ and g′_ε⁻, of g′ within g, we can perform the desired location of λ*_g with respect to g′, and thereby resolve the comparison. When the simulation of A_s terminates, λ*_g will be found.
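The step that drives this recursion — locating λ* relative to a flat by comparing the constrained maxima on the flat and on two shifted copies — can be illustrated numerically in the plane. In the sketch below, a ternary search stands in for the recursive oracle and a small fixed ε replaces the infinitesimal perturbation (the exact method treats ε symbolically):

```python
def max_on_line(F, m, k, lo=-100.0, hi=100.0, iters=200):
    """Maximize the concave function x -> F(x, m*x + k) by ternary search,
    standing in for the lower-dimensional recursive oracle."""
    for _ in range(iters):
        x1, x2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if F(x1, m * x1 + k) < F(x2, m * x2 + k):
            lo = x1
        else:
            hi = x2
    x = (lo + hi) / 2
    return F(x, m * x + k)

def side_of_line(F, m, k, eps=1e-4):
    """Return +1 if the global maximum of the concave F lies above the line
    y = m*x + k, -1 if below, 0 if (numerically) on it, by comparing the
    maxima of F restricted to the line and to two shifted copies."""
    on = max_on_line(F, m, k)
    up = max_on_line(F, m, k + eps)    # shifted copy h_{+eps}
    down = max_on_line(F, m, k - eps)  # shifted copy h_{-eps}
    if up > on:
        return 1
    if down > on:
        return -1
    return 0

# Concave paraboloid with its maximum at (1, 2).
F = lambda x, y: -((x - 1) ** 2) - (y - 2) ** 2
print(side_of_line(F, 0.0, 0.0))   # maximum lies above y = 0
print(side_of_line(F, 0.0, 5.0))   # maximum lies below y = 5
```

If neither shifted copy improves on the maximum over the line, concavity implies the global maximum lies on the line itself, which is what terminates the recursion.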

The total running time of the algorithm A_s^{(k)} is O(T_s^{k+1}). The details of this approach can be found in Agarwal et al. [1993c], Cohen and Megiddo [1993], Matousek [1993], and Norton et al. [1992]. If we also have a parallel algorithm A_p that evaluates F(λ_0) in time T_p using P processors, then the running time of A_s^{(k)} can be improved, as in the one-dimensional case, by executing A_p generically at λ*_g in each recursive step. A parallel step, however, requires resolving P independent comparisons. The goal is therefore to re-


solve, by invoking A_s^{(k−1)} a constant number of times, a fixed fraction of these P comparisons, where each comparison requires the location of λ*_g with respect to a (k − 1)-flat g′ ⊂ g. Cohen and Megiddo [1993] developed such a procedure, which yields a

d^{O(d²)} T_s (T_p log P)^d

time algorithm for computing λ*; see also Megiddo [1984]. Agarwala and Fernandez-Baca [1996] extended Cole's improvement of Megiddo's parametric searching to multidimensional parametric searching, which improves the running time of the Cohen–Megiddo algorithm in some cases by a polylogarithmic factor. Agarwal et al. [1993c] showed that these procedures can be simplified and improved, using (1/r)-cuttings, to d^{O(d)} T_s (T_p log P)^d.

Toledo [1993b] extended the preceding approach to resolving the signs of nonlinear polynomials, using Collins's [1975] cylindrical algebraic decomposition. We describe his algorithm for d = 2. That is, we want to compute the sign of a bivariate, constant-degree polynomial p at λ*. Let Z_p denote the set of roots of p. We compute Collins's cylindrical algebraic decomposition Π of R² so that the sign of p is invariant within each cell of Π [Arnon et al. 1984; Collins 1975]. Our aim is to determine the cell τ ∈ Π that contains λ*, thereby determining the sign of p at λ*.

The cells of Π are delimited by O(1) y-vertical lines, each passing through a self-intersection point of Z_p or through a point of vertical tangency of Z_p; see Figure 8. For each vertical line ℓ, we run the standard one-dimensional parametric-searching procedure to determine which side of ℓ contains λ*. If any of these substeps returns λ*, we are done. Otherwise, we obtain a vertical strip σ that contains λ*. We still have to search through the cells of Π within σ, which are stacked one above the other in the y-direction, to determine which of them contains λ*. We note that the number of roots of p along any vertical line ℓ: x = x_0 within σ is the same, that each root varies continuously with x_0, and that their relative y-order is the same for each vertical line. In other words, the roots of p in σ constitute a collection of disjoint, x-monotone arcs γ_1, . . . , γ_t whose endpoints lie on the boundary lines of σ. We can regard each γ_i as the graph of a univariate function γ_i(x).

Next, for each γ_i, we determine whether λ* lies below, above, or on γ_i. Let x* be the x-coordinate of λ*, and let ℓ be the vertical line x = x*. If we knew x*, we could have run A_s at each γ_i ∩ ℓ, and could have located λ* with respect to γ_i, as desired. Since we do not know x*, we execute the one-dimensional parametric-searching algorithm generically, on the line ℓ, with the intention of simulating it at the unknown point λ_i = γ_i ∩ ℓ. This time, performing a comparison involves computing the sign of some bivariate, constant-degree polynomial g at λ_i (we prefer to treat g as a bivariate polynomial, although we could

Fig. 8. (i) Roots of p; (ii) cylindrical algebraic decomposition of p; (iii) curves g = 0 and γ_1.


have eliminated one variable, by restricting λ to lie on γ_i). We compute the roots r_1, . . . , r_u of g that lie on γ_i, and set r_0 and r_{u+1} to be the left and right endpoints of γ_i, respectively. As previously, we compute the index j so that λ* lies in the vertical strip σ′ bounded between r_j and r_{j+1}. Notice that the sign of g is the same for all points on γ_i within the strip σ′, so we can now compute the sign of g at λ_i.

When the generic algorithm being simulated on λ_i terminates, it returns a constant-degree polynomial F_i(x, y), corresponding to the value of F at λ_i (i.e., F_i(λ_i) = F(λ_i)), and a vertical strip σ_i ⊆ σ that contains λ*. Let ρ_i(x) = F_i(x, γ_i(x)). Let γ_i⁺ (resp., γ_i⁻) be the copy of γ_i translated by an infinitesimally small amount in the (+y)-direction (resp., (−y)-direction); that is, γ_i⁺(x) = γ_i(x) + ε (resp., γ_i⁻(x) = γ_i(x) − ε), where ε > 0 is an infinitesimal. We next simulate the algorithm at λ_i⁺ = γ_i⁺ ∩ ℓ and λ_i⁻ = γ_i⁻ ∩ ℓ. We thus obtain two functions ρ_i⁺(x), ρ_i⁻(x) and two vertical strips σ_i⁺, σ_i⁻, and we replace σ_i by σ_i ∩ σ_i⁺ ∩ σ_i⁻. We need to evaluate the signs of ρ_i(x*) − ρ_i⁺(x*) and ρ_i(x*) − ρ_i⁻(x*) to determine the location of λ* with respect to γ_i (this is justified by the concavity of F). We compute the x-coordinates of the intersection points of (the graphs of) ρ_i, ρ_i⁺, ρ_i⁻ that lie inside σ_i. Let x_1 ≤ x_2 ≤ . . . ≤ x_s be these x-coordinates, and let x_0, x_{s+1} be the x-coordinates of the left and right boundaries of σ_i, respectively. By running A_s on the vertical lines x = x_j, for 1 ≤ j ≤ s, we determine the vertical strip W_i = [x_j, x_{j+1}] × R that contains λ*. Notice that the signs of the polynomials ρ_i(x) − ρ_i⁺(x) and ρ_i(x) − ρ_i⁻(x) are fixed for all x ∈ [x_j, x_{j+1}]. By evaluating ρ_i, ρ_i⁺, ρ_i⁻ at any x_0 ∈ [x_j, x_{j+1}], we can compute the signs of ρ_i(x*) − ρ_i⁺(x*) and of ρ_i(x*) − ρ_i⁻(x*).

Repeating this procedure for all the γ_i's, we can determine the cell of Π that contains λ*, and thus resolve the comparison involving p. We then resume the execution of the generic algorithm.

The execution of the one-dimensional procedure takes O(T_s²) steps, which implies that the generic simulation of the one-dimensional procedure requires O(T_s³) time. The total time spent in resolving the sign of p at λ* is therefore O(T_s³). Hence, the total running time of the two-dimensional algorithm is O(T_s⁴). As before, using a parallel version of the algorithm for the generic simulation reduces the running time considerably. In d dimensions, the running time of Toledo's original algorithm is O(T_s (T_p log P)^{2^d − 1}), which can be improved to T_s (T_p log P)^{O(d²)}, using the result by Chazelle et al. [1991] on vertical decomposition of arrangements of algebraic surfaces.

REFERENCES

AGARWAL, P. K. AND DESIKAN, P. K. 1997. An approximation algorithm for terrain simplification. In Proceedings of the Eighth ACM-SIAM Symposium on Discrete Algorithms, 139–147.

AGARWAL, P. K. AND MATOUSEK, J. 1993. Ray shooting and parametric search. SIAM J. Comput. 22, 794–806.

AGARWAL, P. K. AND MATOUSEK, J. 1994. On range searching with semialgebraic sets. Discrete Comput. Geom. 11, 393–418.

AGARWAL, P. K. AND MATOUSEK, J. 1995. Dynamic half-space range reporting and its applications. Algorithmica 13, 325–345.

AGARWAL, P. K. AND PROCOPIUC, C. M. 1998. Exact and approximation algorithms for clustering. In Proceedings of the Ninth ACM-SIAM Symposium on Discrete Algorithms, 658–667.

AGARWAL, P. K. AND SHARIR, M. 1991. Off-line dynamic maintenance of the width of a planar point set. Comput. Geom. Theor. Appl. 1, 65–78.

AGARWAL, P. K. AND SHARIR, M. 1994. Planar geometric location problems. Algorithmica 11, 185–195.

AGARWAL, P. K. AND SHARIR, M. 1996a. Efficient randomized algorithms for some geometric optimization problems. Discrete Comput. Geom. 16, 317–337.

AGARWAL, P. K. AND SHARIR, M. 1996b. Ray shooting amidst convex polygons in 2D. J. Alg. 21, 508–519.

AGARWAL, P. K. AND SHARIR, M. 1996c. Ray shooting amidst convex polyhedra and polyhedral terrains in three dimensions. SIAM J. Comput. 25, 100–116.

AGARWAL, P. K. AND SURI, S. 1994. Surface approximation and geometric partitions. In Proceedings of the Fifth ACM-SIAM Symposium on Discrete Algorithms, 24–33.


AGARWAL, P. K., AMENTA, N., AND SHARIR, M. 1998. Placement of one convex polygon inside another. Discrete Comput. Geom. 19, 95–104.

AGARWAL, P. K., ARONOV, B., AND SHARIR, M. 1997a. Computing envelopes in four dimensions with applications. SIAM J. Comput. 26, 1714–1732.

AGARWAL, P. K., ARONOV, B., AND SHARIR, M. 1997b. Line traversals of balls and smallest enclosing cylinders in three dimensions. In Proceedings of the Eighth ACM-SIAM Symposium on Discrete Algorithms, 483–492.

AGARWAL, P. K., ARONOV, B., AND SHARIR, M. 1997c. Motion planning for a convex polygon in a polygonal environment. Tech. Rep. CS-1997-17, Dept. of Computer Science, Duke University.

AGARWAL, P. K., ARONOV, B., SHARIR, M., AND SURI, S. 1993a. Selecting distances in the plane. Algorithmica 9, 495–514.

AGARWAL, P. K., EFRAT, A., AND SHARIR, M. 1995. Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications. In Proceedings of the Eleventh Annual ACM Symposium on Computational Geometry, 39–50.

AGARWAL, P. K., EFRAT, A., SHARIR, M., AND TOLEDO, S. 1993b. Computing a segment center for a planar point set. J. Alg. 15, 314–323.

AGARWAL, P. K., SHARIR, M., AND TOLEDO, S. 1993c. An efficient multi-dimensional searching technique and its applications. Tech. Rep. CS-1993-20, Dept. of Computer Science, Duke University.

AGARWAL, P. K., SHARIR, M., AND TOLEDO, S. 1994. Applications of parametric searching in geometric optimization. J. Alg. 17, 292–318.

AGARWAL, P. K., SHARIR, M., AND WELZL, E. 1997. The discrete 2-center problem. In Proceedings of the Thirteenth Annual ACM Symposium on Computational Geometry, 147–155.

AGARWALA, R. AND FERNANDEZ-BACA, D. 1996. Weighted multidimensional search and its applications to convex optimization. SIAM J. Comput. 25, 83–99.

AGGARWAL, A. AND KLAWE, M. M. 1987. Applications of generalized matrix searching to geometric algorithms. Discrete Appl. Math. 27, 3–23.

AGGARWAL, A. AND PARK, J. K. 1988. Notes on searching in multidimensional monotone arrays. In Proceedings of the 29th Annual IEEE Symposium on the Foundations of Computer Science, 497–512.

AGGARWAL, A., KLAWE, M. M., MORAN, S., SHOR, P. W., AND WILBER, R. 1987. Geometric applications of a matrix-searching algorithm. Algorithmica 2, 195–208.

AGGARWAL, A., KRAVETS, D., PARK, J. K., AND SEN, S. 1990. Parallel searching in generalized Monge arrays with applications. In Proceedings of the Second ACM Symposium on Parallel Algorithms and Architectures, 259–268.

AGRAWAL, R., GEHRKE, J., GUNOPULOS, D., AND RAGHAVAN, P. 1998. Automatic subspace clustering of high dimensional data for data mining applications. In Proceedings of the ACM SIGMOD Conference on Management of Data, 94–105.

AGRAWAL, R., GHOSH, A., IMIELINSKI, T., IYER, B., AND SWAMI, A. 1992. An interval classifier for database mining applications. In Proceedings of the Eighteenth International Conference on Very Large Databases.

AJTAI, M. AND MEGIDDO, N. 1996. A deterministic poly(log log n)-time n-processor algorithm for linear programming in fixed dimension. SIAM J. Comput. 25, 1171–1195.

AJTAI, M., KOMLOS, J., AND SZEMEREDI, E. 1983. Sorting in c log n parallel steps. Combinatorica 3, 1–19.

ALON, N. AND MEGIDDO, N. 1990. Parallel linear programming in fixed dimension almost surely in constant time. In Proceedings of the 31st Annual IEEE Symposium on the Foundations of Computer Science, 574–582.

ALON, N. AND SPENCER, J. 1993. The Probabilistic Method. Wiley, New York.

ALT, H., BEHRENDS, B., AND BLOMER, J. 1995. Approximate matching of polygonal shapes. Ann. Math. Artif. Intell. 13, 251–266.

AMATO, N. M., GOODRICH, M. T., AND RAMOS, E. A. 1994. Parallel algorithms for higher-dimensional convex hulls. In Proceedings of the 35th Annual IEEE Symposium on the Foundations of Computer Science, 683–694.

AMENTA, N. 1994a. Bounded boxes, Hausdorff distance, and a new proof of an interesting Helly theorem. In Proceedings of the Tenth Annual ACM Symposium on Computational Geometry, 340–347.

AMENTA, N. 1994b. Helly-type theorems and generalized linear programming. Discrete Comput. Geom. 12, 241–261.

ARNON, D. S., COLLINS, G. E., AND MCCALLUM, S. 1984. Cylindrical algebraic decomposition I: The basic algorithm. SIAM J. Comput. 13, 865–877.

ARORA, S. 1996. Polynomial time approximation schemes for Euclidean TSP and other geometric problems. In Proceedings of the 37th Annual IEEE Symposium on the Foundations of Computer Science, 2–11.

ARORA, S. 1997. Nearly linear time approximation schemes for the Euclidean TSP and other geometric problems. In Proceedings of the 38th Annual IEEE Symposium on the Foundations of Computer Science, 554–563.

ARORA, S., RAGHAVAN, P., AND RAO, S. 1998. Approximation schemes for Euclidean k-median and related problems. In Proceedings of the


30th Annual ACM Symposium on the Theory of Computation, 106–113.

BAR-ILAN, J., KORTSARZ, G., AND PELEG, D. 1993. How to allocate network centers. J. Alg. 15, 385–415.

BAR-YEHUDA, R., EFRAT, A., AND ITAI, A. 1993. A simple algorithm for maintaining the center of a planar point-set. In Proceedings of the Fifth Canadian Conference on Computational Geometry, 252–257.

BERMAN, M. 1989. Large sample bias in least squares estimators of a circular arc center and its radius. Comput. Vis. Graph. Image Process. 45, 126–128.

BERN, M. AND EPPSTEIN, D. 1997. Approximation algorithms for geometric problems. In Approximation Algorithms for NP-Hard Problems, D. S. Hochbaum, Ed., PWS Publishing, Boston, MA, 296–345.

BESPAMYATNIKH, S. 1998. An efficient algorithm for the three-dimensional diameter problem. In Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 137–146.

BHATTACHARYA, B. K. AND TOUSSAINT, G. 1991. Computing shortest transversals. Computing 46, 93–119.

BHATTACHARYA, B. K., CZYZOWICZ, J., EGYED, P., TOUSSAINT, G., STOJMENOVIC, I., AND URRUTIA, J. 1991a. Computing shortest transversals of sets. In Proceedings of the Seventh Annual ACM Symposium on Computational Geometry, 71–80.

BHATTACHARYA, B. K., JADHAV, S., MUKHOPADHYAY, A., AND ROBERT, J.-M. 1991b. Optimal algorithms for some smallest intersection radius problems. In Proceedings of the Seventh Annual ACM Symposium on Computational Geometry, 81–88.

BRINKHOFF, T. AND KRIEGEL, H.-P. 1994. The impact of global clustering on spatial database systems. In Proceedings of the Twentieth International Conference on Very Large Databases.

BRONNIMANN, H. AND CHAZELLE, B. 1994. Optimal slope selection via cuttings. In Proceedings of the Sixth Canadian Conference on Computational Geometry, 99–103.

BRONNIMANN, H. AND GOODRICH, M. T. 1995. Almost optimal set covers in finite VC-dimension. Discrete Comput. Geom. 14, 263–279.

BRONNIMANN, H., CHAZELLE, B., AND MATOUSEK, J. 1993. Product range spaces, sensitive sampling, and derandomization. In Proceedings of the 34th Annual IEEE Symposium on the Foundations of Computer Science, 400–409.

CAN, F. 1993. Incremental clustering for dynamic information retrieval. ACM Trans. Inf. Syst. 11, 143–164.

CANNY, J. AND REIF, J. H. 1987. New lower bound techniques for robot motion planning problems. In Proceedings of the 28th Annual IEEE Symposium on the Foundations of Computer Science, 49–60.

CHAN, T. M. 1996. Fixed-dimensional linear programming queries made easy. In Proceedings of the Twelfth Annual ACM Symposium on Computational Geometry, 284–290.

CHAN, T. M. 1998. Geometric applications of a randomized optimization technique. In Proceedings of the Fourteenth Annual ACM Symposium on Computational Geometry (to appear).

CHANDRASEKARAN, R. AND TAMIR, A. 1990. Algebraic optimization: The Fermat–Weber location problem. Math. Program. 46, 219–224.

CHARIKAR, M., CHEKURI, C., FEDER, T., AND MOTWANI, R. 1997. Incremental clustering and dynamic information retrieval. In Proceedings of the 29th Annual ACM Symposium on the Theory of Computation, 626–635.

CHAZELLE, B. 1993. Cutting hyperplanes for divide-and-conquer. Discrete Comput. Geom. 9, 145–158.

CHAZELLE, B. AND MATOUSEK, J. 1996. On linear-time deterministic algorithms for optimization problems in fixed dimensions. J. Alg. 21, 579–597.

CHAZELLE, B., EDELSBRUNNER, H., GUIBAS, L. J., AND SHARIR, M. 1991. A singly-exponential stratification scheme for real semi-algebraic varieties and its applications. Theor. Comput. Sci. 84, 77–105.

CHAZELLE, B., EDELSBRUNNER, H., GUIBAS, L. J., AND SHARIR, M. 1993. Diameter, width, closest line pair and parametric searching. Discrete Comput. Geom. 10, 183–196.

CHAZELLE, B., EDELSBRUNNER, H., GUIBAS, L. J., AND SHARIR, M. 1994. Algorithms for bichromatic line segment problems and polyhedral terrains. Algorithmica 11, 116–132.

CHAZELLE, B., EDELSBRUNNER, H., GUIBAS, L. J., SHARIR, M., AND STOLFI, J. 1996. Lines in space: Combinatorics and algorithms. Algorithmica 15, 428–447.

CHEW, L. P. AND KEDEM, K. 1992. Improvements on geometric pattern matching problems. In Proceedings of the Third Scandinavian Workshop on Algorithm Theory, LNCS 621, Springer-Verlag, 318–325.

CHEW, L. P. AND KEDEM, K. 1993. A convex polygon among polygonal obstacles: Placement and high-clearance motion. Comput. Geom. Theor. Appl. 3, 59–89.

CHEW, L. P., DOR, D., EFRAT, A., AND KEDEM, K. 1995. Geometric pattern matching in d-dimensional space. In Proceedings of the Second Annual European Symposium on Algorithms, LNCS 979, Springer-Verlag, 264–279.

CHEW, L. P., GOODRICH, M. T., HUTTENLOCHER, D. P., KEDEM, K., KLEINBERG, J. M., AND


KRAVETS, D. 1997. Geometric patternmatching under Euclidean motion. Comput.Geom. Theor. Appl. 7, 113–124.

CLARKSON, K. L. 1986. Linear programming in O(n 3^{d^2}) time. Inf. Process. Lett. 22, 21–24.

CLARKSON, K. L. 1992. Randomized geometric algorithms. In Computing in Euclidean Geometry, D.-Z. Du and F. K. Hwang, Eds., World Scientific, Singapore, 117–162.

CLARKSON, K. L. 1993. Algorithms for polytope covering and approximation. In Proceedings of the Third Workshop on Algorithms and Data Structures, LNCS 709, Springer-Verlag, 246–252.

CLARKSON, K. L. 1995. Las Vegas algorithms for linear and integer programming. J. ACM 42, 488–499.

CLARKSON, K. L. AND SHOR, P. W. 1989. Applications of random sampling in computational geometry, II. Discrete Comput. Geom. 4, 387–421.

CLARKSON, K. L., EPPSTEIN, D., MILLER, G. L., STURTIVANT, C., AND TENG, S.-H. 1996. Approximating center points with iterative Radon points. Int. J. Comput. Geom. Appl. 6, 357–377.

COHEN, E. AND MEGIDDO, N. 1993. Maximizing concave functions in fixed dimension. In Complexity in Numeric Computation, P. Pardalos, Ed., World Scientific, Singapore, 74–87.

COHEN, J., VARSHNEY, A., MANOCHA, D., TURK, G., WEBER, H., AGARWAL, P., BROOKS, F. P., JR., AND WRIGHT, W. V. 1996. Simplification envelopes. In Proceedings of SIGGRAPH ’96, 119–128.

COLE, R. 1987. Slowing down sorting networks to obtain faster sorting algorithms. J. ACM 34, 200–208.

COLE, R., SALOWE, J., STEIGER, W., AND SZEMEREDI, E. 1989. An optimal-time algorithm for slope selection. SIAM J. Comput. 18, 792–810.

COLE, R., SHARIR, M., AND YAP, C. K. 1987. On k-hulls and related problems. SIAM J. Comput. 16, 61–77.

COLLINS, G. E. 1975. Quantifier elimination for real closed fields by cylindrical algebraic decomposition. In Proceedings of the Second GI Conference on Automata Theory and Formal Languages, LNCS 33, Springer-Verlag, Berlin, 134–183.

CUTTING, D. R., KARGER, D. R., AND PEDERSEN, J. O. 1993. Constant interaction-time scatter/gather browsing of very large document collections. In Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 126–134.

CUTTING, D. R., KARGER, D. R., PEDERSEN, J. O., AND TUKEY, J. W. 1992. Scatter/gather: A cluster-based approach to browsing large document collections. In Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 318–329.

DANZER, L., GRUNBAUM, B., AND KLEE, V. 1963. Helly’s theorem and its relatives. In Convexity, Proceedings of the Symposium on Pure Mathematics, Vol. 7, American Mathematical Society, Providence, RI, 101–180.

DAS, G. AND JOSEPH, D. 1990. The complexity of minimum convex nested polyhedra. In Proceedings of the Second Canadian Conference on Computational Geometry, 296–301.

DATTA, A., LENHOF, H.-P., SCHWARZ, C., AND SMID, M. 1995. Static and dynamic algorithms for k-point clustering problems. J. Alg. 19, 474–503.

DE BERG, M., BOSE, J., BREMNER, D., RAMASWAMI, S., AND WILFONG, G. 1997. Computing constrained minimum-width annuli of point sets. In Proceedings of the Fifth Workshop on Algorithms and Data Structures, LNCS 1272, Springer-Verlag, 3–16.

DE BERG, M., DEVILLERS, O., VAN KREVELD, M., SCHWARZKOPF, O., AND TEILLAUD, M. 1996. Computing the maximum overlap of two convex polygons under translation. In Proceedings of the Seventh Annual International Symposium on Algorithms and Computation.

DE FLORIANI, L. 1984. A graph based approach to object feature recognition. In Proceedings of the Third Annual ACM Symposium on Computational Geometry, 100–109.

DEHAEMER, M. J., JR. AND ZYDA, M. J. 1991. Simplification of objects rendered by polygonal approximations. Comput. Graph. 15, 175–184.

DENG, X. 1990. An optimal parallel algorithm for linear programming in the plane. Inf. Process. Lett. 35, 213–217.

DILLENCOURT, M. B., MOUNT, D. M., AND NETANYAHU, N. S. 1992. A randomized algorithm for slope selection. Int. J. Comput. Geom. Appl. 2, 1–27.

DREZNER, Z. 1981. On a modified 1-center problem. Manage. Sci. 27, 838–851.

DREZNER, Z. 1984a. The p-centre problems—Heuristic and optimal algorithms. J. Oper. Res. Soc. 35, 741–748.

DREZNER, Z. 1984b. The planar two-center and two-median problem. Transp. Sci. 18, 351–361.

DREZNER, Z. 1987. On the rectangular p-center problem. Naval Res. Logist. Q. 34, 229–234.

DREZNER, Z. 1989. Conditional p-centre problems. Transp. Sci. 23, 51–53.

DREZNER, Z., ED. 1995. Facility Location. Springer-Verlag, New York.

DREZNER, Z., MEHREZ, A., AND WESOLOWSKY, G. O. 1992. The facility location problems with limited distances. Transp. Sci. 25, 183–187.

452 • P. K. Agarwal and M. Sharir

DUNCAN, C. A., GOODRICH, M. T., AND RAMOS, E. A. 1997. Efficient approximation and optimization algorithms for computational metrology. In Proceedings of the Eighth ACM-SIAM Symposium on Discrete Algorithms, 121–130.

DYER, M. E. 1984. Linear time algorithms for two- and three-variable linear programs. SIAM J. Comput. 13, 31–45.

DYER, M. E. 1986. On a multidimensional search technique and its application to the Euclidean one-centre problem. SIAM J. Comput. 15, 725–738.

DYER, M. E. 1992. A class of convex programs with applications to computational geometry. In Proceedings of the Eighth Annual ACM Symposium on Computational Geometry, 9–15.

DYER, M. E. 1995. A parallel algorithm for linear programming in fixed dimensions. In Proceedings of the Eleventh Annual ACM Symposium on Computational Geometry, 345–349.

DYER, M. E. AND FRIEZE, A. M. 1985. A simple heuristic for the p-centre problem. Oper. Res. Lett. 3, 285–288.

DYER, M. E. AND FRIEZE, A. M. 1989. A randomized algorithm for fixed-dimension linear programming. Math. Program. 44, 203–212.

EBARA, H., FUKUYAMA, N., NAKANO, H., AND NAKANISHI, Y. 1989. Roundness algorithms using the Voronoi diagrams. In Abstracts of the First Canadian Conference on Computational Geometry, 41.

ECKHOFF, J. 1993. Helly, Radon, and Caratheodory type theorems. In Handbook of Convex Geometry, P. M. Gruber and J. Wills, Eds., North-Holland, 389–448.

EDELSBRUNNER, H. 1985. Computing the extreme distances between two convex polygons. J. Alg. 6, 213–224.

EDELSBRUNNER, H. 1987. Algorithms in Combinatorial Geometry. Springer-Verlag, Heidelberg.

EDELSBRUNNER, H. AND WAUPOTITSCH, R. 1986. Computing a ham-sandwich cut in two dimensions. J. Symbol. Comput. 2, 171–178.

EFRAT, A. AND SHARIR, M. 1996. A near-linear algorithm for the planar segment center problem. Discrete Comput. Geom. 16, 239–257.

EFRAT, A., SHARIR, M., AND ZIV, A. 1994. Computing the smallest k-enclosing circle and related problems. Comput. Geom. Theor. Appl. 4, 119–136.

EISNER, M. AND SEVERANCE, D. 1976. Mathematical techniques for efficient record segmentation in large shared databases. J. ACM 23, 619–635.

EPPSTEIN, D. 1992. Dynamic three-dimensional linear programming. ORSA J. Comput. 4, 360–368.

EPPSTEIN, D. 1997. Faster construction of planar two-centers. In Proceedings of the Eighth ACM-SIAM Symposium on Discrete Algorithms, 1997.

EPPSTEIN, D. 1998. Fast hierarchical clustering and other applications of dynamic closest pairs. In Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 619–628.

EPPSTEIN, D. AND ERICKSON, J. 1994. Iterated nearest neighbors and finding minimal polytopes. Discrete Comput. Geom. 11, 321–350.

GRIMSON, W. E. L. 1990. Object Recognition by Computer: The Role of Geometric Constraints. MIT Press, Cambridge, MA.

FEDER, T. AND GREENE, D. H. 1988. Optimal algorithms for approximate clustering. In Proceedings of the Twentieth Annual ACM Symposium on the Theory of Computation, 434–444.

FINN, P., KAVRAKI, L. E., LATOMBE, J.-C., MOTWANI, R., SHELTON, C., VENKATASUBRAMANIAN, S., AND YAO, A. 1997. Rapid: Randomized pharmacophore identification for drug design. In Proceedings of the Thirteenth Annual ACM Symposium on Computational Geometry, 324–333.

FOLEY, J. D., VAN DAM, A., FEINER, S. K., AND HUGHES, J. F. 1990. Computer Graphics: Principles and Practice. Addison-Wesley, Reading, MA.

FOLLERT, F., SCHOMER, E., AND SELLEN, J. 1995. Subquadratic algorithms for the weighted maximum facility location problem. In Proceedings of the Seventh Canadian Conference on Computational Geometry, 1–6.

FOLLERT, F., SCHOMER, E., SELLEN, J., SMID, M., AND THIEL, C. 1995. Computing a largest empty anchored cylinder, and related problems. In Proceedings of the Fifteenth Conference on Foundations of Software Technology and Theoretical Computer Science, LNCS 1026, Springer-Verlag, 428–442.

FOWLER, R. J., PATERSON, M. S., AND TANIMOTO, S. L. 1981. Optimal packing and covering in the plane are NP-complete. Inf. Process. Lett. 12, 133–137.

FREDERICKSON, G. N. 1991. Optimal algorithms for tree partitioning. In Proceedings of the Second Annual ACM-SIAM Symposium on Discrete Algorithms, 168–177.

FREDERICKSON, G. N. AND JOHNSON, D. B. 1982. The complexity of selection and ranking in X + Y and matrices with sorted rows and columns. J. Comput. Syst. Sci. 24, 197–208.

FREDERICKSON, G. N. AND JOHNSON, D. B. 1983. Finding kth paths and p-centers by generating and searching good data structures. J. Alg. 4, 61–80.


FREDERICKSON, G. N. AND JOHNSON, D. B. 1984. Generalized selection and ranking: Sorted matrices. SIAM J. Comput. 13, 14–30.

GARCIA-LOPEZ, J. AND RAMOS, P. 1997. Fitting a set of points by a circle. In Proceedings of the Thirteenth Annual ACM Symposium on Computational Geometry, 139–146.

GARTNER, B. 1995. A subexponential algorithm for abstract optimization problems. SIAM J. Comput. 24, 1018–1035.

GARTNER, B. AND WELZL, E. 1996. Linear programming—randomization and abstract frameworks. In Proceedings of the Thirteenth Symposium on Theoretical Aspects of Computer Science, LNCS 1046, Springer-Verlag, 669–687.

GLOZMAN, A., KEDEM, K., AND SHPITALNIK, G. 1995. On some geometric selection and optimization problems via sorted matrices. In Proceedings of the Fourth Workshop on Algorithms and Data Structures, LNCS 955, Springer-Verlag, 26–37.

GOLDWASSER, M. 1995. A survey of linear programming in randomized subexponential time. SIGACT News 26, 96–104.

GONZALEZ, T. 1985. Clustering to minimize the maximum intercluster distance. Theor. Comput. Sci. 38, 293–306.

GONZALEZ, T. 1991. Covering a set of points in multidimensional space. Inf. Process. Lett. 40, 181–188.

GOODRICH, M. T. 1993. Geometric partitioning made easier, even in parallel. In Proceedings of the Ninth Annual ACM Symposium on Computational Geometry, 73–82.

GOODRICH, M. T. 1995. Efficient piecewise-linear function approximation using the uniform metric. Discrete Comput. Geom. 14, 445–462.

GOODRICH, M. T. AND RAMOS, E. A. 1997. Bounded-independence derandomization of geometric partitioning with applications to parallel fixed-dimensional linear programming. Discrete Comput. Geom. 18, 397–420.

GOODRICH, M. T., MITCHELL, J. S. B., AND ORLETSKY, M. W. 1994. Practical methods for approximate geometric pattern matching under rigid motion. In Proceedings of the Tenth Annual ACM Symposium on Computational Geometry, 103–112.

GRUNBAUM, B. 1956. A proof of Vazsonyi’s conjecture. Bull. Res. Council Isr., Sect. A, 6, 77–78.

GUPTA, P., JANARDAN, R., AND SMID, M. 1994. Fast algorithms for collision and proximity problems involving moving geometric objects. Rep. MPI-I-94-113, Max-Planck-Institut Inform., Saarbrucken, Germany.

GUSFIELD, D., BALASUBRAMANIAN, K., AND NAOR, D. 1994. Parametric optimization of sequence alignment. Algorithmica 12, 312–326.

HAN, K.-A. AND MYAENG, S.-H. 1996. Image organization and retrieval with automatically constructed feature vectors. In Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 157–165.

HASSIN, R. AND MEGIDDO, N. 1991. Approximation algorithms for hitting objects by straight lines. Discrete Appl. Math. 30, 29–42.

HECKBERT, P. S. AND GARLAND, M. 1995. Fast polygonal approximation of terrains and height fields. Tech. Rep. CMU-CS-95-181, Carnegie Mellon University.

HELLY, E. 1930. Uber Systeme von abgeschlossenen Mengen mit gemeinschaftlichen Punkten. Monatsh. Math. und Physik 37, 281–302.

HEPPES, A. 1956. Beweis einer Vermutung von A. Vazsonyi. Acta Math. Acad. Sci. Hungar. 7, 463–466.

HERSHBERGER, J. 1992. Minimizing the sum of diameters efficiently. Comput. Geom. Theor. Appl. 2, 111–118.

HERSHBERGER, J. 1993. A faster algorithm for the two-center decision problem. Inf. Process. Lett. 47, 23–29.

HERSHBERGER, J. AND SURI, S. 1991. Finding tailored partitions. J. Alg. 12, 431–463.

HERSHBERGER, J. AND SURI, S. 1993a. Efficient computation of Euclidean shortest paths in the plane. In Proceedings of the 34th Annual IEEE Symposium on the Foundations of Computer Science, 508–517.

HERSHBERGER, J. AND SURI, S. 1993b. Matrix searching with the shortest path metric. In Proceedings of the 25th Annual ACM Symposium on the Theory of Computation, 485–494.

HOCHBAUM, D. S. AND MAASS, W. 1985. Approximation schemes for covering and packing problems in image processing and VLSI. J. ACM 32, 130–136.

HOCHBAUM, D. S. AND MAASS, W. 1987. Fast approximation algorithms for a nonconvex covering problem. J. Alg. 8, 305–323.

HOCHBAUM, D. S. AND SHMOYS, D. 1985. A best possible heuristic for the k-center problem. Math. Oper. Res. 10, 180–184.

HOCHBAUM, D. S. AND SHMOYS, D. 1986. A unified approach to approximation algorithms for bottleneck problems. J. ACM 33, 533–550.

HOCKEN, R. J., RAJA, J., AND BABU, U. 1993. Sampling issues in coordinate metrology. Man. Rev. 6, 282–294.

HOPPE, H. 1997. View-dependent refinement of progressive meshes. In Proceedings of SIGGRAPH ’97, 189–198.

HOPPE, H., DEROSE, T., DUCHAMP, T., HALSTEAD, M., JIN, H., MCDONALD, J., SCHWEITZER, J., AND STUETZLE, W. 1994. Piecewise smooth surface reconstruction. In Proceedings of SIGGRAPH ’94, 295–302.


HOPPE, H., DEROSE, T., DUCHAMP, T., MCDONALD, J., AND STUETZLE, W. 1993. Mesh optimization. In Proceedings of SIGGRAPH ’93, 19–26.

HOULE, M. E. AND TOUSSAINT, G. T. 1988. Computing the width of a set. IEEE Trans. Patt. Anal. Mach. Intell. PAMI-10, 761–765.

HOULE, M. E., IMAI, H., IMAI, K., AND ROBERT, J.-M. 1989. Weighted orthogonal linear L∞-approximation and applications. In Proceedings of the First Workshop on Algorithms and Data Structures, LNCS 382, Springer-Verlag, 183–191.

HUTTENLOCHER, D. P. AND KEDEM, K. 1990. Computing the minimum Hausdorff distance for point sets under translation. In Proceedings of the Sixth Annual ACM Symposium on Computational Geometry, 340–349.

HUTTENLOCHER, D. P., KEDEM, K., AND SHARIR, M. 1993. The upper envelope of Voronoi surfaces and its applications. Discrete Comput. Geom. 9, 267–291.

HUTTENLOCHER, D. P., KLANDERMAN, G. A., AND RUCKLIDGE, W. J. 1993. Comparing images using the Hausdorff distance. IEEE Trans. Patt. Anal. Mach. Intell. 15, 850–863.

HWANG, R. Z., CHANG, R. C., AND LEE, R. C. T. 1993a. The generalized searching over separators strategy to solve some NP-hard problems in subexponential time. Algorithmica 9, 398–423.

HWANG, R. Z., LEE, R. C. T., AND CHANG, R. C. 1993b. The slab dividing approach to solve the Euclidean p-center problem. Algorithmica 9, 1–22.

IMAI, H., LEE, D., AND YANG, C. 1992. 1-segment center covering problems. ORSA J. Comput. 4, 426–434.

JADHAV, S. AND MUKHOPADHYAY, A. 1994. Computing a centerpoint of a finite planar set of points in linear time. Discrete Comput. Geom. 12, 291–312.

JADHAV, S., MUKHOPADHYAY, A., AND BHATTACHARYA, B. K. 1996. An optimal algorithm for the intersection radius of a set of convex polygons. J. Alg. 20, 244–267.

JAIN, A. K. AND DUBES, R. C. 1988. Algorithms for Clustering Data. Prentice-Hall, Englewood Cliffs, NJ.

JAROMCZYK, J. W. AND KOWALUK, M. 1994. An efficient algorithm for the Euclidean two-center problem. In Proceedings of the Tenth Annual ACM Symposium on Computational Geometry, 303–311.

JAROMCZYK, J. W. AND KOWALUK, M. 1995a. A geometric proof of the combinatorial bounds for the number of optimal solutions to the 2-center Euclidean problem. In Proceedings of the Seventh Canadian Conference on Computational Geometry, 19–24.

JAROMCZYK, J. W. AND KOWALUK, M. 1995b. The two-line center problem from a polar view: A new algorithm and data structure. In Proceedings of the Fourth Workshop on Algorithms and Data Structures, LNCS 955, Springer-Verlag, 13–25.

KALAI, G. 1992. A subexponential randomized simplex algorithm. In Proceedings of the 24th Annual ACM Symposium on the Theory of Computation, 475–482.

KARMARKAR, N. 1984. A new polynomial-time algorithm for linear programming. Combinatorica 4, 373–395.

KATZ, M. J. 1995. Improved algorithms in geometric optimization via expanders. In Proceedings of the Third Israel Symposium on Theory of Computing and Systems, 78–87.

KATZ, M. J. AND NIELSEN, F. 1996. On piercing sets of objects. In Proceedings of the Twelfth Annual ACM Symposium on Computational Geometry, 113–121.

KATZ, M. J. AND SHARIR, M. 1993. Optimal slope selection via expanders. Inf. Process. Lett. 47, 115–122.

KATZ, M. J. AND SHARIR, M. 1997. An expander-based approach to geometric optimization. SIAM J. Comput. 26, 1384–1408.

KAUFMAN, L. AND ROUSSEEUW, P. J. 1990. Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.

KHACHIYAN, L. G. 1980. Polynomial algorithm in linear programming. U.S.S.R. Comput. Math. Math. Phys. 20, 53–72.

KHULLER, S. AND SUSSMANN, Y. J. 1996. The capacitated k-center problem. In Proceedings of the Fourth Annual European Symposium on Algorithms, LNCS 1136, Springer-Verlag, 152–166.

KNUTH, D. E. 1973. Sorting and Searching. Addison-Wesley, Reading, MA.

KO, M. T. AND CHING, Y. T. 1992. Linear time algorithms for the weighted tailored 2-partition problem and the weighted rectilinear 2-center problem under L∞-distance. Discrete Appl. Math. 40, 397–410.

KO, M. T. AND LEE, R. C. T. 1991. On weighted rectilinear 2-center and 3-center problems. Inf. Sci. 54, 169–190.

KO, M. T., LEE, R. C. T., AND CHANG, J. S. 1990. An optimal approximation algorithm for the rectilinear m-center problem. Algorithmica 5, 341–352.

KORNEENKO, N. M. AND MARTINI, H. 1993. Hyperplane approximation and related topics. In New Trends in Discrete and Computational Geometry, J. Pach, Ed., Algorithms and Combinatorics, Springer-Verlag, Heidelberg, 135–161.

LANDAU, U. M. 1987. Estimation of circular arc and its radius. Comput. Vis. Graph. Image Process. 38, 317–326.

LE, V. B. AND LEE, D. T. 1991. Out-of-roundness problem revisited. IEEE Trans. Patt. Anal. Mach. Intell. PAMI-13, 217–223.

LEE, D. T. AND WU, Y. F. 1986. Geometric complexity of some location problems. Algorithmica 1, 193–211.

LEVEN, D. AND SHARIR, M. 1987. On the number of critical free contacts of a convex polygonal object moving in two-dimensional polygonal space. Discrete Comput. Geom. 2, 255–270.

LO, C.-Y. AND STEIGER, W. 1990. An optimal-time algorithm for ham-sandwich cuts in the plane. In Proceedings of the Second Canadian Conference on Computational Geometry, 5–9.

LO, C.-Y., MATOUSEK, J., AND STEIGER, W. L. 1994. Algorithms for ham-sandwich cuts. Discrete Comput. Geom. 11, 433–452.

LUEBKE, D. AND ERIKSON, C. 1997. View-dependent simplification of arbitrary polygonal environments. In Proceedings of SIGGRAPH ’97, 199–208.

MAASS, W. 1986. On the complexity of nonconvex covering. SIAM J. Comput. 15, 453–467.

MAGILLO, P. AND DE FLORIANI, L. 1996. Maintaining multiple levels of detail in the overlay of hierarchical subdivisions. In Proceedings of the Eighth Canadian Conference on Computational Geometry, 190–195.

MAKHOUL, J., ROUCOS, S., AND GISH, H. 1985. Vector quantization in speech coding. Proceedings of the IEEE 73, 1551–1588.

MATOUSEK, J. 1991a. Computing the center of planar point sets. In Computational Geometry: Papers from the DIMACS Special Year, J. E. Goodman, R. Pollack, and W. Steiger, Eds., American Mathematical Society, Providence, 221–230.

MATOUSEK, J. 1991b. Randomized optimal algorithm for slope selection. Inf. Process. Lett. 39, 183–187.

MATOUSEK, J. 1992. Efficient partition trees. Discrete Comput. Geom. 8, 315–334.

MATOUSEK, J. 1993. Linear optimization queries. J. Alg. 14, 432–448.

MATOUSEK, J. 1994. Lower bound for a subexponential optimization algorithm. Random Struct. Alg. 5, 591–607.

MATOUSEK, J. 1995a. On enclosing k points by a circle. Inf. Process. Lett. 53, 217–221.

MATOUSEK, J. 1995b. On geometric optimization with few violated constraints. Discrete Comput. Geom. 14, 365–384.

MATOUSEK, J. AND SCHWARZKOPF, O. 1996. A deterministic algorithm for the three-dimensional diameter problem. Comput. Geom. Theor. Appl. 6, 253–262.

MATOUSEK, J., MOUNT, D. M., AND NETANYAHU, N. S. 1993. Efficient randomized algorithms for the repeated median line estimator. In Proceedings of the Fourth ACM-SIAM Symposium on Discrete Algorithms, 74–82.

MATOUSEK, J., SHARIR, M., AND WELZL, E. 1996. A subexponential bound for linear programming. Algorithmica 16, 498–516.

MEGHINI, C. 1995. An image retrieval model based on classical logic. In Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 300–308.

MEGIDDO, N. 1979. Combinatorial optimization with rational objective functions. Math. Oper. Res. 4, 414–424.

MEGIDDO, N. 1983a. Applying parallel computation algorithms in the design of serial algorithms. J. ACM 30, 852–865.

MEGIDDO, N. 1983b. Linear-time algorithms for linear programming in R^3 and related problems. SIAM J. Comput. 12, 759–776.

MEGIDDO, N. 1983c. The weighted Euclidean 1-center problem. Math. Oper. Res. 8, 498–504.

MEGIDDO, N. 1984. Linear programming in linear time when the dimension is fixed. J. ACM 31, 114–127.

MEGIDDO, N. 1985. Partitioning with two lines in the plane. J. Alg. 6, 430–433.

MEGIDDO, N. 1989. On the ball spanned by balls. Discrete Comput. Geom. 4, 605–610.

MEGIDDO, N. 1990. On the complexity of some geometric problems in unbounded dimension. J. Symbol. Comput. 10, 327–334.

MEGIDDO, N. AND SUPOWIT, K. J. 1984. On the complexity of some common geometric location problems. SIAM J. Comput. 13, 182–196.

MEGIDDO, N. AND TAMIR, A. 1982. On the complexity of locating linear facilities in the plane. Oper. Res. Lett. 1, 194–197.

MEGIDDO, N. AND ZEMEL, E. 1986. A randomized O(n log n) algorithm for the weighted Euclidean 1-center problem. J. Alg. 7, 358–368.

MEHLHORN, K., SHERMER, T., AND YAP, C. 1997. A complete roundness classification procedure. In Proceedings of the Thirteenth Annual ACM Symposium on Computational Geometry, 129–138.

MITCHELL, J. S. B. 1993. Shortest paths among obstacles in the plane. In Proceedings of the Ninth Annual ACM Symposium on Computational Geometry, 308–317.

MITCHELL, J. S. B. 1996. Shortest paths and networks. Tech. Rep., University at Stony Brook.

MITCHELL, J. S. B. AND SURI, S. 1995. Separation and approximation of polyhedral objects. Comput. Geom. Theor. Appl. 5, 95–114.

MITCHELL, J. S. B., MOUNT, D. M., AND SURI, S. 1994. Query-sensitive ray shooting. In Proceedings of the Tenth Annual ACM Symposium on Computational Geometry, 359–368.

MOHABAN, S. AND SHARIR, M. 1997. Ray shooting amidst spheres in three dimensions and related problems. SIAM J. Comput. 26, 654–674.

MOTWANI, R. AND RAGHAVAN, P. 1995. Randomized Algorithms. Cambridge University Press, New York.

MULMULEY, K. 1994. Computational Geometry: An Introduction Through Randomized Algorithms. Prentice-Hall, Englewood Cliffs, NJ.

NAOR, N. AND SHARIR, M. 1990. Computing a point in the center of a point set in three dimensions. In Proceedings of the Second Canadian Conference on Computational Geometry, 10–13.

NORTON, C. H., PLOTKIN, S. A., AND TARDOS, E. 1992. Using separation algorithms in fixed dimensions. J. Alg. 13, 79–98.

PAPADIMITRIOU, C. H. 1981. Worst-case and probabilistic analysis of a geometric location problem. SIAM J. Comput. 10, 542–557.

PELLEGRINI, M. 1993. Ray shooting on triangles in 3-space. Algorithmica 9, 471–494.

PELLEGRINI, M. 1994. On collision-free placements of simplices and the closest pair of lines in 3-space. SIAM J. Comput. 23, 133–153.

PELLEGRINI, M. 1996. Repetitive hidden surface removal for polyhedra. J. Alg. 21, 80–101.

PLESNIK, J. 1987. A heuristic for the p-center problem in graphs. Discrete Appl. Math. 17, 263–268.

POST, M. J. 1984. Minimum spanning ellipsoids. In Proceedings of the Sixteenth Annual ACM Symposium on the Theory of Computation, 108–116.

RAMOS, E. 1997a. Construction of 1-d lower envelopes and applications. In Proceedings of the Thirteenth Annual ACM Symposium on Computational Geometry, 57–66.

RAMOS, E. 1997b. Intersection of unit-balls and diameter of a point set in R^3. Comput. Geom. Theor. Appl. 8, 57–65.

RAO, S. AND SMITH, W. D. 1998. Improved approximation schemes for geometric graphs via “spanners” and “banyans.” In Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computation (to appear).

REICHLING, M. 1988a. On the detection of a common intersection of k convex objects in the plane. Inf. Process. Lett. 29, 25–29.

REICHLING, M. 1988b. On the detection of a common intersection of k convex polyhedra. In Computational Geometry and its Applications, LNCS 333, Springer-Verlag, 180–186.

REIF, J. H. AND STORER, J. A. 1994. A single-exponential upper bound for finding shortest paths in three dimensions. J. ACM 41, 1013–1019.

ROOS, T. AND WIDMAYER, P. 1994. k-Violation linear programming. Inf. Process. Lett. 52, 109–114.

ROY, U. AND ZHANG, X. 1992. Establishment of a pair of concentric circles with the minimum radial separation for assessing roundness error. Comput. Aided Des. 24, 161–168.

SALOWE, J. 1989. L∞ interdistance selection by parametric search. Inf. Process. Lett. 30, 9–14.

SARNAK, N. AND TARJAN, R. E. 1986. Planar point location using persistent search trees. Commun. ACM 29, 669–679.

SCHOMER, E. AND THIEL, C. 1995. Efficient collision detection for moving polyhedra. In Proceedings of the Eleventh Annual ACM Symposium on Computational Geometry, 51–60.

SCHOMER, E., SELLEN, J., TEICHMANN, M., AND YAP, C. 1996. Efficient algorithms for the smallest enclosing cylinder problem. In Proceedings of the Eighth Canadian Conference on Computational Geometry, 264–269.

SCHROETER, P. AND BIGUN, J. 1995. Hierarchical image segmentation by multi-dimensional clustering and orientation-adaptive boundary refinement. Patt. Recogn. 28, 695–709.

SEIDEL, R. 1991. Small-dimensional linear programming and convex hulls made easy. Discrete Comput. Geom. 6, 423–434.

SEIDEL, R. 1993. Backwards analysis of randomized geometric algorithms. In New Trends in Discrete and Computational Geometry, J. Pach, Ed., Springer-Verlag, Heidelberg, 37–68.

SEN, S. 1996. Parallel multidimensional search using approximation algorithms: With applications to linear-programming and related problems. In Proceedings of the Eighth ACM Symposium on Parallel Algorithms and Architectures, 251–260.

SHAFER, J., AGRAWAL, R., AND MEHTA, M. 1996. Sprint: A scalable parallel classifier for data mining. In Proceedings of the 22nd International Conference on Very Large Databases.

SHAFER, L. AND STEIGER, W. 1993. Randomizing optimal geometric algorithms. In Proceedings of the Fifth Canadian Conference on Computational Geometry, 133–138.

SHARIR, M. 1997. A near-linear algorithm for the planar 2-center problem. Discrete Comput. Geom. 18, 125–134.

SHARIR, M. AND AGARWAL, P. K. 1995. Davenport–Schinzel Sequences and Their Geometric Applications. Cambridge University Press, New York.

SHARIR, M. AND TOLEDO, S. 1994. Extremal polygon containment problems. Comput. Geom. Theor. Appl. 4, 99–118.

SHARIR, M. AND WELZL, E. 1992. A combinatorial bound for linear programming and related problems. In Proceedings of the Ninth Symposium on Theoretical Aspects of Computer Science, LNCS 577, Springer-Verlag, 569–579.

SHARIR, M. AND WELZL, E. 1996. Rectilinear and polygonal p-piercing and p-center problems. In Proceedings of the Twelfth Annual ACM Symposium on Computational Geometry, 122–132.

SOMMERVILLE, D. M. Y. 1951. Analytical Geometry in Three Dimensions. Cambridge University Press, Cambridge, UK.

STEIN, A. AND WERMAN, M. 1992a. Finding the repeated median regression line. In Proceedings of the Third ACM-SIAM Symposium on Discrete Algorithms, 409–413.

STEIN, A. AND WERMAN, M. 1992b. Robust statistics in shape fitting. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 540–546.

SWANSON, D. T., LEE, D. T., AND WU, V. L. 1995. An optimal algorithm for roundness determination on convex polygons. Comput. Geom. Theor. Appl. 5, 225–235.

THOMAS, S. M. AND CHEN, Y. T. 1989. A simple approach for the estimation of circular arc and its radius. Comput. Vis. Graph. Image Process. 45, 362–370.

TOLEDO, S. 1991. Extremal polygon containment problems and other issues in parametric searching. M.S. Thesis, Dept. of Computer Science, Tel Aviv University, Tel Aviv.

TOLEDO, S. 1993a. Approximate parametric search. Inf. Process. Lett. 47, 1–4.

TOLEDO, S. 1993b. Maximizing non-linear concave functions in fixed dimension. In Complexity in Numerical Computations, P. M. Pardalos, Ed., World Scientific, Singapore, 429–447.

VALIANT, L. 1975. Parallelism in comparison problems. SIAM J. Comput. 4, 348–355.

VARADARAJAN, K. R. 1996. Approximating monotone polygonal curves using the uniform metric. In Proceedings of the Twelfth Annual ACM Symposium on Computational Geometry, 311–318.

VARADARAJAN, K. R. AND AGARWAL, P. K. 1995. Linear approximation of simple objects. In Proceedings of the Seventh Canadian Conference on Computational Geometry, 13–18.

VOELCKER, H. 1993. Current perspective on tolerancing and metrology. Man. Rev. 6, 258–268.

WELZL, E. 1991. Smallest enclosing disks (balls and ellipsoids). In New Results and New Trends in Computer Science, H. Maurer, Ed., LNCS 555, Springer-Verlag, 359–370.

WESOLOWSKY, G. 1993. The Weber problem: History and perspective. Loca. Sci. 1, 5–23.

WHANG, K. Y., SONG, J. W., CHANG, J. W., KIM, J. Y., CHO, W. S., PARK, C. M., AND SONG, I. Y. 1995. Octree-R: An adaptive octree for efficient ray tracing. IEEE Trans. Vis. Comput. Graph. 1, 343–349.

YAO, A. C. AND YAO, F. F. 1985. A general approach to D-dimensional geometric queries. In Proceedings of the Seventeenth Annual ACM Symposium on the Theory of Computation, 163–168.

YIANILOS, P. N. 1993. Data structures and algorithms for nearest neighbor search in general metric spaces. In Proceedings of the Fourth ACM-SIAM Symposium on Discrete Algorithms, 311–321.

ZEMEL, E. 1987. A linear time randomizing algorithm for searching ranked functions. Algorithmica 2, 81–90.

Received November 1996; revised April 1998; accepted June 1998
