Title Page

Distributed Computation of the knn Graph for

Large High-Dimensional Point Sets

Regular Paper – Journal

Corresponding Author

Lydia E. Kavraki

Rice University,

Department of Computer Science,

6100 Main Street MS132,

Houston, TX 77005-1892 USA

email: [email protected]

fax: (713) 348-5930

phone: (713) 348-5737

Authors

Erion Plaku and Lydia E. Kavraki

Rice University,

Department of Computer Science,

6100 Main Street MS132,

Houston, TX 77005-1892 USA

email: {plakue, kavraki}@cs.rice.edu

Distributed Computation of the knn Graph

for Large High-Dimensional Point Sets

Erion Plaku a, Lydia E. Kavraki a,∗

a Rice University, Department of Computer Science, Houston, Texas 77005, USA

Abstract

High-dimensional problems arising from robot motion planning, biology, data min-

ing, and geographic information systems often require the computation of k nearest

neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each

point to its k closest points. As the research in the above-mentioned fields progres-

sively addresses problems of unprecedented complexity, the demand for computing

knn graphs based on arbitrary distance metrics and large high-dimensional data

sets increases, exceeding resources available to a single machine. In this work we

efficiently distribute the computation of knn graphs on clusters of processors communicating via

message passing. Extensions to our distributed framework include the computa-

tion of graphs based on other proximity queries, such as approximate knn or range

queries. Our experiments show nearly linear speedup with over one hundred pro-

cessors and indicate that similar speedup can be obtained with several hundred

processors.

Key words: Nearest neighbors; Approximate nearest neighbors; knn graphs;

Range queries; Metric spaces; Robotics; Distributed and Parallel Algorithms

∗ Corresponding author. Email address: [email protected] (Lydia E. Kavraki).

Preprint submitted to J. Parallel Distrib. Comput. 18 October 2006

1 Introduction

The computation of proximity graphs for large high-dimensional data sets in

arbitrary metric spaces is often necessary for solving problems arising from

robot motion planning [14, 31, 35, 40, 45, 46, 56], biology [3, 17, 33, 39, 42, 50,

52], pattern recognition [21], data mining [18, 51], multimedia systems [13],

geographic information systems [34,43,49], and other research fields. Proximity

graphs are typically based on nearest neighbor relations. The nearest neighbor

or, in general, the k nearest neighbor (knn) graph of a data set is obtained by

connecting each point in the data set to its k closest points from the data set,

where a distance metric [11] defines closeness.

As an example, research in robot motion planning has in recent years focused

on the development of sampling-based motion planners motivated by the suc-

cess of the Probabilistic RoadMap (PRM) method [31] for solving problems

involving multiple and highly complex robots. Sampling-based motion plan-

ning algorithms [14, 31, 41, 45, 46] rely on an efficient sampling of the solution

space and construction of the knn graph for the sampled points. The knn

graph captures the connectivity of the solution space and is used to find paths

that allow robots to move from one point in the environment to another while

satisfying certain constraints such as avoiding collision with obstacles.

The use of knn graphs combined with probabilistic methods also has promise

in the study of protein folding [3, 17, 52]. Other applications of knn searches

and graphs in biology include classifying tumors based on gene expression

profiles [42], finding similar protein sequences from a large database [33, 39],

and docking of molecules for computer-assisted drug design [50].

In pattern recognition and data mining, nearest neighbors are commonly used

to classify objects based upon observable features [13, 18, 21, 51]. During the

classification process, certain features are extracted from the unknown object

and the unknown object is classified based on the features extracted from its

k nearest neighbors.

In geographic information systems [34,43,49], a commonly encountered query

is the computation of knn objects for points in space. Examples of queries

include finding “the nearest gas stations from my location” or “the five nearest

stars to the north star.”

As research in robot motion planning, biology, data mining, geographic infor-

mation systems, and other scientific fields progressively addresses problems of

unprecedented complexity, the demand for computing knn graphs based on

arbitrary distance metrics and large high-dimensional data sets increases, ex-

ceeding resources available to single machines [48]. In this paper, we address

the problem of computing the knn graph utilizing multiple processors com-

municating via message passing in a cluster system with no shared memory

and where the amount of memory available to each processor is limited. Our

model of computation is motivated by many scientific applications where the

computation of the knn graph is needed for massive data sets whose size is

dozens or even hundreds of gigabytes. Since such applications are also com-

putationally intensive and the knn graph constitutes only one stage of the

overall computation, it is important to fit as much of the data as possible in

the main memory of each available processor in order to reduce disk opera-

tions. The challenge lies in developing efficient distributed algorithms for the

computation of the knn graph, since nearest-neighbors computations depend

on all the points in the data set and not just on the points stored in the main

memory of a processor.

The contribution of our work is to develop a distributed decentralized frame-

work that efficiently computes the knn graph for extremely large data sets.

Our distributed framework utilizes efficient communication and computation

schemes to compute partial knn results, collect information from the partial

results to eliminate certain communication and computation costs, and gather

partial results to compute knn queries for each point in the data set. The par-

tial knn results are computed by constructing and querying sequential knn

data structures [8,23,26–28,55] stored in the main memory of each processor.

Our distributed framework works with any sequential knn data structure and

is also capable of taking advantage of specific implementations of sequential

knn data structures to improve the overall efficiency of the distribution.

Our distributed framework supports the efficient computation by hundreds

of processors of very large knn graphs consisting of millions of points with

hundreds of dimensions and arbitrary distance metrics. Our experiments show

nearly linear speedup on one hundred and forty processors and indicate that

similar speedup can be obtained on several hundred processors.

Our distributed framework is general and can be extended in many ways.

One possible extension is the computation of graphs based on approximate

knn queries [4, 16, 32, 36, 38, 47, 54]. In an approximate knn query, instead of

computing exactly the k closest points to a query point, it suffices to compute

k points that are within a (1+ε) hypersphere from the k-th closest point to the

query point. Approximate knn queries provide a trade-off between efficiency

of computation and quality of neighbors computed for each point. Another

possible extension is the computation of graphs based on range queries [5,9,19].

In a range query, we are interested in computing all the points in the data set

that are within some predefined distance from a query point. Range queries

are another type of proximity queries with a wide applicability.

The rest of the paper is organized as follows. In section 2 we review related

work. In section 3 we formally define the problem of computing the knn graph,

describe the distributed model of computation, and then present a distributed

algorithm for computing knn graphs. We also discuss how to extend our dis-

tributed framework to compute graphs based on approximate knn and range

queries. In section 4 we describe the experimental setup, the data sets for test-

ing the efficiency of the distribution, and the results obtained. We conclude in

section 5 with a discussion on the distributed algorithm.

2 Related Work

The computation of the knn graph of a data set S is based on the computation

of the k nearest neighbors for each point s ∈ S. The computation of the k

nearest neighbors for a point s ∈ S is referred to as a knn query and s is

referred to as a query point. In this section we review work related to the

sequential and distributed computation of knn queries and knn graphs.

Motivated by challenging problems in many research fields, a large amount

of work has been devoted to the development of efficient algorithms for the

computation of knn queries [4,8,15,23,26–28,36,55]. The computation of knn

queries usually proceeds in two stages. During a preprocessing stage, a data

structure TS is constructed to support the computation of knn queries from a

given a data set S. During the query stage, searches are performed on TS to

compute the k nearest neighbors of a query point s, denoted TS(s, k). The same

data structure TS can be used for the computation of the k nearest neighbors

from the data set S to any query point; TS is modified only when points are

added to or removed from S. In knn literature, references to the knn data

structure encompass both the structure of the data, TS, and the algorithms

to query the data structure to obtain the k nearest neighbors, TS(s, k).

Researchers have developed many efficient data structures for the computation

of knn queries based on Euclidean distance metrics [4,15,23,26,36]. However,

many challenging applications stemming from robot motion planning, biology,

data mining, geographic information systems, and other fields, require the

computation of knn queries based on arbitrary distance metrics. Although

a challenging problem, progress has been made towards the development of

efficient data structures supporting knn queries based on arbitrary distance

metrics [8, 28, 55]. During the preprocessing stage, usually a metric tree [53]

is constructed hierarchically as the data structure TS. One or more points

are selected from the data set and associated with the root node. Then the

distances from the points associated with the root node to the remaining points

in the data set are computed and used to partition the remaining points in the

data set into several smaller sets. Each set in the partition is associated with a

branch extending from the root node. The extension process continues until the

cardinality of each set in the partition is smaller than some predefined constant.

For example, in the construction of vantage-point trees [55], the median sphere

centered at the point associated with the root is used to partition the data

set into points inside and outside the sphere, while in the construction of

GNAT [8], the data set is clustered using an approximate k-centers algorithm

and each center is associated with a branch extending from the root. Figure 1

illustrates the construction of a metric tree for GNAT. During the query stage,

precomputed distances, lower and upper bounds on the distances between

points associated with the nodes of the tree, triangle inequality, and other

Fig. 1. The hierarchical structure of the metric tree in GNAT [8]. Illustrated are the points of the data set and the spheres, represented as circles, associated with each node of the tree.

properties and heuristics are used to prune certain branches of the tree in

order to improve the efficiency of computing knn queries.

In addition to the development of more efficient sequential knn algorithms,

distributed algorithms that take full advantage of all the available resources

provide a viable alternative for the computation of knn queries. Research

on distributed knn algorithms mostly focuses on the development of algo-

rithms for computing knn queries based on Euclidean distance metrics. The

work in [43] addresses parallel processing of knn queries in declustered spatial

data in 2-dimensional Euclidean spaces for multiple processors communicat-

ing through a network. The work is based on the parallelization of nearest

neighbor search using the R-tree data structure [26] and could be generalized

to d-dimensional Euclidean spaces. The work in [29] shows how to compute

knn queries for 3-dimensional Euclidean point sets for a shared-memory grid

environment utilizing general grid middleware for data-intensive problems.

In [49], knn queries are computed efficiently in an integration middleware

that provides federated access to numerous loosely coupled, autonomous data

sources connected through the internet.

There is also research on distributed algorithms for computing knn graphs,

but the focus again is on knn graphs based on Euclidean distance metrics.

In [20], an algorithm is presented for the computation of the knn graph for

a 2-dimensional Euclidean point set for coarse grained multicomputers that

runs in time Θ(n log n/p + t(n, p)), where n is the number of points in the

data set, p is the number of processors, and t(n, p) is the time for a global-sort

operation. The work in [12] develops an optimal algorithm for the computation

of the knn graph of a point set in d-dimensional Euclidean space based on the

well-separated pair decomposition of a point set that requires O(log n) time

with O(n) processors on a standard CREW PRAM model.

Recent progress in the development of parallel and distributed databases and

data mining has spawned a large amount of research in the development of

efficient distributed algorithms for the computation of knn queries [44, 57].

In [1], the PN-tree is developed as an efficient data structure for parallel and

distributed multidimensional indexing and can be used for the computation

of knn queries for certain distance metrics. The computation of knn queries

using the PN-tree data structure depends on clustering of points into nodes

and projections of nodes over each dimension, which is not generally possi-

ble for arbitrary distance metrics. In [7], a parallel algorithm is developed for

computing multiple knn queries based on arbitrary distance metrics from a

database. Experimental results on Euclidean data sets and up to 16 servers

show close to linear speedup. We note that the focus of parallel and distributed

databases and data mining is on applications that are usually I/O intensive

and require the computation of one or several knn queries. The focus of the

work in this paper is different. This paper targets applications that are com-

putationally intensive, require the computation of the knn graph, and in which

the knn graph constitutes only one stage of the entire compu-

tation. Our motivation comes from our research in robot motion planning and

biology [14,22,30,37,40,45,46], where the knn graph is computed to assist in

the computation of paths for robots or exploration of high-dimensional state

spaces to analyze important properties of biological processes.

3 Distributed Computation of the knn Graph

Our distributed algorithm for the computation of the knn graph utilizes se-

quential knn data structures, such as those surveyed in section 2, for the

partial computation of knn queries and efficient communication and compu-

tation schemes for computing the k nearest neighbors of each point in the

data set. The description of the distributed algorithm is general and applica-

ble to any knn data structure. We initially present the distributed algorithm

by considering the knn data structure as a black-box and later discuss how to

take advantage of the specific implementations of the knn data structures to

improve the efficiency of the distributed algorithm for the computation of the

knn graph. At the end of this section, we also discuss possible extensions to the

distributed framework including computation of graphs based on approximate

knn and range queries.

3.1 Problem Definition

Let the pair (S, ρ) define a metric space [11], where S is a set of points and

ρ : S × S → R≥0 is a distance metric that for any x, y, z ∈ S satisfies the

following properties: (i) ρ(x, y) ≥ 0 and ρ(x, y) = 0 iff x = y (positivity);

(ii) ρ(x, y) = ρ(y, x) (symmetry); and (iii) ρ(x, y) + ρ(y, z) ≥ ρ(x, z) (triangle

inequality). Let S = {s1, s2, . . . , sn} ⊂ S be a collection of n points defining

the data set. Let k ∈ N be an integer representing the number of nearest

neighbors. The k nearest neighbors (knn), denoted by NS(si, k), of a point

si ∈ S are defined as the k closest points to si from S − {si} according to the

metric ρ. The knn graph of S, G(S, k) = (V, E), is an undirected graph where

V = {v1, . . . , vn}, with vi corresponding to si, and E = {(vi, vj) ∈ V × V : si ∈

NS(sj, k) ∨ sj ∈ NS(si, k)}.
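To make the definition concrete, the following short C++ sketch builds the undirected edge set of G(S, k) from precomputed neighbor lists; it is a minimal illustration of ours, not code from the implementation described later, and the names buildEdges and neighborLists are purely illustrative.

// A minimal sketch (ours): the edge (vi, vj) is added whenever si is a
// neighbor of sj or sj is a neighbor of si, so the graph is undirected.
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

std::set<std::pair<int, int>>
buildEdges(const std::vector<std::vector<int>>& neighborLists)  // neighborLists[i] holds the indices of N_S(s_i, k)
{
    std::set<std::pair<int, int>> edges;
    for (int i = 0; i < static_cast<int>(neighborLists.size()); ++i)
        for (int j : neighborLists[i])
            edges.insert({std::min(i, j), std::max(i, j)});  // store each undirected edge once
    return edges;
}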

3.2 Model of Computation

Let P = {P1, P2, . . . , Pp} be the set of all the available processors for the

computation of G(S, k). Let S1, S2, . . . , Sp be a partition of S, i.e., S = S1 ∪

S2 ∪ · · · ∪ Sp and Si ∩ Sj = ∅ for all i, j ∈ {1, 2, . . . , p} and i ≠ j. Let TSi be

a data structure that computes knn queries for the set Si, i.e., querying TSi

with s ∈ S and k ∈ N, denoted TSi(s, k), produces NSi(s, k), where NSi(s, k)

denotes the k closest points to s from Si − {s}.

In our model of computation, processors communicate via message passing and

there is no-shared memory available. We assume restrictions on the memory

available to each processor Pi ∈ P , similar to [20]. Processor Pi is capable

of storing the set Si, the data structure TSi, a small number of query points

Ci ⊂ S − Si, knn query results for |Si| + |Ci| points, and a small amount of

bookkeeping information. In addition, processor Pi is equipped with a small

communication buffer to exchange messages, query points, and results with

other processors.

Our model of computation is particularly well-suited for many scientific ap-

plications in different research fields such as robot motion planning, biological

applications, etc., which often generate hundreds of gigabytes of data and re-

quire the computation of the knn graph G(S, k) for such extremely large data

sets S. In addition to storage requirements, such applications are also compu-

tationally intensive and the computation of the knn graph G(S, k) constitutes

only one stage of the overall computation. Therefore, effective distribution

schemes, as the one we propose in this work, should use most of the main

memory of each processor to store as much of the data set S as possible.

3.3 Local Nearest Neighbor Queries

Each processor Pi ∈ P constructs a knn data structure TSi for the computa-

tion of knn queries NSi(s, k) for any point s ∈ S. The objective of the knn

data structure TSi is the efficient computation of knn queries NSi(s, k) based

on arbitrary distance metrics for any point s ∈ S. Several examples of efficient

knn data structures can be found in [4,8,15,23,28,36,55]. The distributed al-

gorithm considers the knn data structure TSi as a black-box that it can query to

obtain NSi(s, k) for any point s ∈ S. In this way, the distributed algorithm is

capable of computing the knn graph utilizing any knn data structure TSi. In

addition, the distributed algorithm is also capable of taking advantage of spe-

cific implementations of the knn data structures, as we discuss in section 3.5.7.

It is important to note that processor Pi cannot query the knn data structure

TSi to obtain NS(s, k), since only the portion Si of the data set S is available to

processor Pi. In our distributed algorithm we show how to compute NS(s, k)

for all s ∈ S efficiently.
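To make the black-box assumption concrete, a minimal C++ sketch of the interface assumed of each local data structure TSi is shown below; the names KnnStructure, build, and query are illustrative and do not correspond to any particular library or to the code used in the experiments.

// A minimal sketch (ours) of the black-box role played by T_{S_i}: the
// concrete structure (GNAT, kd-tree, vantage-point tree, ...) is hidden
// behind build() and query().
#include <vector>

using Point = std::vector<double>;

struct Neighbor {
    int    index;     // index of the neighbor within the local set S_i
    double distance;  // rho(s, neighbor)
};

class KnnStructure {                 // plays the role of T_{S_i}
public:
    virtual ~KnnStructure() = default;
    // Build the structure from the locally stored points S_i.
    virtual void build(const std::vector<Point>& Si) = 0;
    // Return N_{S_i}(s, k), the k closest points to s from S_i - {s}.
    virtual std::vector<Neighbor> query(const Point& s, int k) const = 0;
};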

3.4 Data and Control Flow

Before relating the details of the distributed algorithm, we discuss data and

control flow dependencies. The computation of the knn query NS(si, k) for

a point si ∈ Si requires the combination of results obtained from querying

the knn data structures TS1, . . . , TSp associated with the processors P1, . . . , Pp,

respectively, since S is partitioned into S1, . . . , Sp. Under our model of

computation, each processor Pi is capable of storing only one knn data struc-

ture, i.e., TSi, due to the limited amount of memory available to processor Pi.

Furthermore, each processor Pi is equipped with only a small communication

buffer which makes the communication of the data structure TSi from Pi to

other processors prohibitively computationally expensive. Therefore, the only

viable alternative for computing the knn query NS(si, k) is for processor Pi to

send the point si to other processors which in turn query their associated knn

data structures and send back the results to Pi.

An efficient distribution of the computation of the knn graph G(S, k) can be

achieved by maximizing useful computation. For each processor Pi ∈ P , the

useful computation consists of (i) the computation of knn for points owned by

processor Pi and (ii) the computation of knn for points owned by other pro-

cessors. In order to maximize the useful computation by reducing idle times,

it is important that each processor Pi handles requests from other processors

as quickly as possible [25, 58].

The computation of knn queries for points owned by processor Pi requires

no communication, while the computation of knn queries for points owned

by other processors requires communication. Each processor Pi has a limited

cache Ci, thus only a small number of points from other processors can be

stored in Ci. Following general guidelines set forth in [25, 58] for efficient dis-

tribution algorithms, in order to accommodate as many points from other

processors as possible, it is important that processor Pi empties its cache Ci

quickly. Hence, before computing any knn queries for points it owns, processor

Pi first computes any knn queries pending in the cache Ci. Furthermore, in or-

der to minimize the idle or waiting time of other processors, processor Pi also

handles any communication requests made by other processors before com-

puting any knn queries for the points it owns. In this way, processor Pi gives

higher priority to requests from other processors and computes knn queries

for points it owns only when there are no pending requests from other pro-

cessors. The overall effect of this scheme is shorter idle and waiting times for

each processor which translates into better utilization of resources for useful

computations.

We have designed a decentralized architecture for our distributed implemen-

tation of the computation of the knn graph G(S, k). We utilize only asyn-

chronous communication between different processors in order to minimize

idle and waiting times. Each processor Pi is responsible for the computations

of knn queries utilizing the data structure TSi, posting of requests to other

processors for the computation of knn queries for points it owns, and ensuring

a timely response to requests made by other processors.

3.5 Distributed Computation

The distributed knn algorithm DKNNG proceeds through several stages, as il-

lustrated in Algorithm 1. After initializing the cache Ci to be empty and

constructing the knn data structure TSi associated with the points in Si (lines

1–2), each processor Pi enters a loop until the computation of the knn graph

G(S, k) is complete (lines 3–24). Inside the loop, each processor Pi tests if

there are pending computations in the cache Ci (lines 4–7), if the cache Ci

needs to be filled (lines 8–9), if there are pending requests from other proces-

sors (lines 10–20), or if it should compute knn queries from the points Si it

owns (lines 21–24). As the computation proceeds, certain issues arise including

how processor Pi processes queries pending in the cache, removes and adds

queries to the cache, communicates with other processors to fill its cache and

DKNNG Algorithm: Distributed Computation of the knn Graph

Input: Si ⊂ S = {s1, s2, . . . , sn}, points; k, number of nearest neighbors
Output: NS(si, k) for each si ∈ Si
Computation by processor Pi. All communications are done asynchronously.

1:  initialize cache Ci ← ∅
2:  construct knn data structure TSi
3:  while computing G(S, k) is not over do
4:    if cache Ci is not empty then
5:      select and remove one point s from Ci
6:      compute query TSi(s, k)
7:      send results to owner of s
8:    if cache Ci is not full then
9:      post request to fill Ci
10:   if received "cache is not full" then
11:     P′ ← processors posting the request
12:     select points from Si to send to P′
13:     send the selected points to P′
14:     request results from P′
15:   if received points then
16:     if cache Ci is full then
17:       create room in Ci
18:     add received points to Ci
19:   if received results then
20:     update results
21:   if no pending requests then
22:     select one point s from Si
23:     compute query TSi(s, k)
24:     update results
25: end while

Algorithm 1. High-level description of the distributed computation of the knn graph. The pseudocode is applicable to each processor Pi ∈ P. The computation of the knn graph proceeds in stages and at the end of the computation each processor Pi stores the knn results NS(si, k) for each point si ∈ Si.

the cache of other processors with query points. Processor Pi considers the dif-

ferent stages in an order that attempts to minimize the idle and waiting times

of other processors by handling pending requests from other processors [25,58]

as quickly as possible before it computes knn queries for the points it owns.

We now describe each stage of DKNNG in more detail.

3.5.1 Cache is Not Empty

Processor Pi selects one point s from the cache Ci and computes its nearest

neighbors utilizing the knn data structure TSi. The selection of the point s

can be done in a variety of ways. One possibility is to apply the first-in first-

out principle and select from the cache Ci the point s that has been in the

cache Ci for the longest time. Processing the points in the order of addition

to the cache has the benefit of guaranteeing that each point will eventually be

processed and thus avoids the starvation problem. On the other hand, since

other processors will possibly send to processor Pi several points at a time,

processing of points in the cache Ci from processor P ′′ will start only after all

the points in the cache Ci from processor P ′ have been processed (assuming

that points from processor P ′ were added to the cache Ci before the points

from processor P′′).

In order to ensure a timely response to the requests from other processors, we

introduce randomization to the selection process. A weight ws is associated

with each point s in the cache Ci. The weight ws is directly proportional

to the time the point s has been in the cache Ci and inversely proportional

to the number of points in the cache Ci owned by the processor that owns

the point s. The constant of proportionality can be defined to also include a

measure of the state of computation for each processor and the importance

of the computation of the nearest neighbors for the point s to the processor

that owns the point s. A probability ps is computed as ps = ws / ∑s′∈Ci ws′,

and a point s is selected with probability ps from the cache Ci. The weight ws

of a point s increases the longer s waits in the cache Ci, since ws is directly

proportional to the waiting time of s. Therefore, as s waits in the cache,

the probability ps increases and thus the likelihood that s will be the next

point selected from Ci increases as well. In order to guarantee that there is no

starvation and to ensure that points in the cache do not wait for long periods

of time, we use a first-in first-out strategy to select points that have been in

the cache for longer than a predefined maximum amount of time.
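A minimal C++ sketch of this randomized selection is shown below; it assumes the weights ws have already been updated as described above and simply draws an index with probability ps = ws / ∑ ws′. The names CachedPoint and selectCachedPoint are ours and purely illustrative.

// A minimal sketch (ours) of the randomized selection of the next cached point.
#include <random>
#include <vector>

struct CachedPoint {
    double weight;   // w_s: grows with waiting time, shrinks with the number
                     // of cached points from the same owner
    // ... query point, owner rank, arrival time, etc.
};

// Returns the index of the cached point selected with probability
// p_s = w_s / (sum of all weights); the cache is assumed non-empty.
int selectCachedPoint(const std::vector<CachedPoint>& cache, std::mt19937& rng)
{
    std::vector<double> weights;
    weights.reserve(cache.size());
    for (const CachedPoint& c : cache)
        weights.push_back(c.weight);
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    return pick(rng);
}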

3.5.2 Cache is Not Full

The objective of processor Pi is to maximize the useful computation [25, 58].

Consequently, it is important that there are always some points in the cache

Ci, so that processor Pi can spend useful computation time computing knn

queries on behalf of other processors. In this way, processor Pi postpones

the computation of knn queries for its points as much as possible, since these

computations require no communication and can be done at any time.

Processor Pi requests that other processors send it several points in order

to fill its cache Ci. In order to avoid communicating the same request over and

over again, processor Pi sends the request only to those processors that have

responded to a previous similar request. Processor Pi maintains a flag fj for

each processor Pj ∈ P − {Pi} that indicates if processor Pj has responded

to a “cache is not full” request from processor Pi. Processor Pi sends the

request to processor Pj only if the flag fj is set to true. After posting the

request to processor Pj, processor Pi sets the flag fj to false. The flag fj is

set again to true when processor Pi receives a response from processor Pj.

Initially, processor Pi assumes that every other processor Pj has responded to

its request to fill the cache Ci.
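A minimal C++ sketch of this flag-based bookkeeping is given below; the class and method names are ours and purely illustrative.

// A minimal sketch (ours) of the "cache is not full" request bookkeeping: the
// flag f_j records whether processor P_j has responded to the previous fill
// request, so the same request is never outstanding twice.
#include <vector>

class FillRequestTracker {
public:
    FillRequestTracker(int numProcessors, int myRank)
        : responded_(numProcessors, true), myRank_(myRank) {}  // initially every P_j counts as having responded

    // Returns the ranks to which a new fill request should be posted now,
    // and marks them as awaiting a response.
    std::vector<int> processorsToAsk() {
        std::vector<int> targets;
        for (int j = 0; j < static_cast<int>(responded_.size()); ++j) {
            if (j != myRank_ && responded_[j]) {
                responded_[j] = false;   // request posted, wait for P_j
                targets.push_back(j);
            }
        }
        return targets;
    }

    // Called when points (a response) arrive from processor P_j.
    void responseReceived(int j) { responded_[j] = true; }

private:
    std::vector<bool> responded_;  // the flag f_j for each processor
    int myRank_;
};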

When processor Pj receives a request from Pi to fill the cache Ci, processor

Pj selects one or more points uniformly at random from Sj and sends the

selected points to processor Pi. The selected points are never again considered

by processor Pj for filling the cache Ci of processor Pi. In order to ensure

that the same point is never sent to a processor more than once, processor Pj

associates with each point sj ∈ Sj and each processor Pi ∈ P − {Pj} a single

bit bsj,Pi that indicates if point sj has been sent to processor Pi. A point sj is

considered for selection iff bsj,Pi is set to false. The bit bsj,Pi is set to true

when the point sj is sent to processor Pi.

The bookkeeping information requires the additional storage of (|P | − 1) +

(|P | − 1)|Sj| bits by each processor Pj ∈ P . This additional amount of book-

keeping is very small compared to the size of the data Sj. As an example, if

|P | = 100, Sj is 2GB large, each point sj ∈ Sj has 200 dimensions, and each

dimension is represented by a 64-bit double, the amount of bookkeeping is

15.84MB or equivalently 0.77% of the size of Sj.

3.5.3 Received Points

Processor Pi receives points from other processors only as a response to the

request of Pi to fill its cache Ci. If there is room in the cache Ci, the received

points are added to the cache Ci. Otherwise, processor Pi selects one point

from the cache Ci, using the selection process discussed in section 3.5.1, and

computes the knn query associated with it. This process continues until all

the received points are added to the cache Ci.

3.5.4 Received Results

Processor Pi associates with each point si ∈ Si the combined results of the

knn queries computed by Pi and other processors. Let NS(si, k) be the cur-

rent knn results associated with the point si. Initially, NS(si, k) is empty.

Let NSj(si, k) be the knn results computed by processor Pj utilizing the knn

data structure TSj. The set NS(si, k) is updated by considering all the points

s ∈ NSj(si, k) and adding to NS(si, k) the point s iff |NS(si, k)| < k or

ρ(s, si) < maxs′∈NS(si,k) ρ(s′, si). In other words, processor Pi merges the knn

results computed by processor Pj with the current results. In the merging

process, only the k closest neighbors are associated with the result NS(si, k).
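A minimal C++ sketch of this merging step is shown below; it keeps only the k closest neighbors after appending the received partial results. The names are ours and purely illustrative, and for large k a bounded heap could replace the sort.

// A minimal sketch (ours) of the result-merging step: the partial results
// N_{S_j}(s_i, k) received from another processor are merged into the current
// results N_S(s_i, k), keeping only the k closest neighbors.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Neighbor {
    int    index;     // global index of the neighbor point
    double distance;  // rho(neighbor, s_i)
};

void mergeResults(std::vector<Neighbor>& current,        // N_S(s_i, k)
                  const std::vector<Neighbor>& received,  // N_{S_j}(s_i, k)
                  std::size_t k)
{
    current.insert(current.end(), received.begin(), received.end());
    std::sort(current.begin(), current.end(),
              [](const Neighbor& a, const Neighbor& b) {
                  return a.distance < b.distance;
              });
    if (current.size() > k)
        current.resize(k);   // keep only the k closest neighbors
}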

3.5.5 No Pending Requests

If there are no pending requests, processor Pi has computed all the knn queries

associated with the points in the cache Ci and has requested from other proces-

sors to fill its cache Ci. Furthermore, processor Pi has handled all the requests

from other processors and, thus, it is free to compute knn queries from the

points Si it owns. Therefore, processor Pi selects one of the points si ∈ Si,

which has not been selected before, and computes the knn query NSi(si, k)

utilizing the data structure TSi. The computed results NSi(si, k) are merged

with the current results NS(si, k), as described in section 3.5.4.

Processor Pi uses the computations of the knn queries for points it owns to

avoid possible idle or waiting times when it has handled all the requests from

other processors. Doing such computations only when it is necessary increases

the effective utilization of processors [25, 58] and consequently results in an

efficient distribution of the computation of the knn graph G(S, k).

3.5.6 Termination Criteria

The computation of the knn graph G(S, k) is complete when NS(s, k) has been

computed for each s ∈ S. Since S is partitioned into S1, . . ., Sp, it suffices if for

each Si and each si ∈ Si, the computation of NS(si, k) is complete. Observe

that the computation of NS(si, k) is complete iff processor Pi, which owns the

point si, has combined the knn query results NS1(si, k), . . . , NSp(si, k), i.e.,

each processor Pj has computed the knn queries for the point si utilizing the

knn data structure TSj and processor Pi has gathered and combined all the

computed results into NS(si, k).

When the computation of NS(si, k) is complete for all si ∈ Si, processor Pi

notifies all the other processors that the computation of knn queries for Si is

complete. Processor Pi meets the termination criteria when the computation

of knn queries for Si is complete and when processor Pi receives from all the

other processors Pj ∈ P − {Pi} the notification that the computation of knn

queries for Sj is complete.

Processor Pi detects the completion of the computation of NS(si, k) for all si ∈

Si by maintaining a counter. The counter is initially set to 0 and incremented

each time processor Pi computes a query for a point it owns or when processor

Pi receives results from another processor. When the counter reaches the value

|P ||Si|, the computation of NS(si, k) is complete for each point si ∈ Si, since

each processor has computed knn queries using each point si ∈ Si.
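A minimal C++ sketch of this termination test is given below; the names are ours and purely illustrative.

// A minimal sketch (ours) of the termination test: the counter reaches
// |P| * |S_i| once every processor (including P_i itself) has contributed a
// result for every point owned by P_i; full termination additionally requires
// a "done" notification from every other processor.
struct TerminationState {
    long long resultsCounted = 0;      // incremented per local query or received result
    int       doneNotifications = 0;   // "S_j complete" messages received

    bool ownPointsComplete(int numProcessors, long long numOwnedPoints) const {
        return resultsCounted ==
               static_cast<long long>(numProcessors) * numOwnedPoints;
    }
    bool canTerminate(int numProcessors, long long numOwnedPoints) const {
        return ownPointsComplete(numProcessors, numOwnedPoints) &&
               doneNotifications == numProcessors - 1;
    }
};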

3.5.7 Exploiting Properties of the knn Data Structure

Each processor Pi utilizes the knn data structure TSi for the computation of

knn queries. As presented in Algorithm 1 (line 6), DKNNG considers each knn

data structure TSi as a black-box. However, DKNNG not only works with any

knn data structure, but it can easily take advantage of special properties of

the knn data structure TSi to improve the overall performance.

The general idea consists of utilizing information gathered during the compu-

tation of knn queries by processor Pi or other processors to prune future com-

putations of knn queries. Past computations could provide information that

can be used to eliminate from consideration points in a data set that are too far

from a query point to be its nearest neighbors. Such an idea has been success-

fully used by many efficient nearest neighbors algorithms [4,8,15,23,28,36,55].

The overall effect is that the computation of knn queries by other processors

for some of the points owned by processor Pi may become unnecessary and

thus the efficiency of DKNNG is improved since certain communication and com-

Fig. 2. Querying the GNAT [8] data structure. The empty circle drawn with a thick black perimeter represents the hypersphere B(si) centered at si. During the computation of the query TSi(si, k), there is no need to consider points inside the circles with the solid lines, since these circles do not intersect B(si).

putation costs are eliminated.

Pruning local searches: For the clarity of exposition, we illustrate how to take

advantage of the knn data structures by considering GNAT [8] and kd-trees [23]

as the data structure TSi. The description, however, is applicable to many knn

data structures [4, 15, 28, 36, 55].

GNAT is a hierarchical data structure based on metric trees [53] that supports

the efficient computation of nearest neighbor searches. At the root level, the set

Si is partitioned into smaller sets using an approximate k-centers algorithm

and each center is associated with a branch extending from the root. The

construction process continues recursively until the cardinality of each set in the

partition is smaller than some predefined constant. An illustration is provided

in Figure 1.

GNAT is a suitable choice since it is efficient for large data sets and supports the

computation of knn queries for arbitrary metric spaces, which is important in

many research areas such as robot motion planning, biological applications,

etc. An important property of GNAT is that the efficiency for the computation

of knn for a point si ∈ S can be improved by limiting the search only to

points that are inside a small hypersphere B(si) centered at si, as illustrated in

Figure 2. We compute the radius rB(si) of B(si) as the largest distance from si

to a point in NS(si, k). When NS(si, k) contains fewer than k points, rB(si) is set to ∞. As described

in section 3.5.5, NS(si, k) contains the partial results of the knn query for the

point si. The computation of the knn query NSj(si, k) by processor Pj can

be significantly improved by sending to it, in addition to si, the radius rB(si):

a point sj ∈ Sj cannot be one of the k closest neighbors to si if it lies

outside B(si), since there are already at least k points inside B(si).
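A minimal C++ sketch of how the pruning radius rB(si) can be obtained from the partial results before si is sent to another processor is given below; the names are ours and purely illustrative.

// A minimal sketch (ours) of deriving the pruning radius r_B(s_i) from the
// partial results N_S(s_i, k).
#include <algorithm>
#include <limits>
#include <vector>

struct Neighbor {
    int    index;
    double distance;  // rho(neighbor, s_i)
};

double pruningRadius(const std::vector<Neighbor>& partialResults, std::size_t k)
{
    // With fewer than k partial neighbors, no point can be ruled out yet.
    if (partialResults.size() < k)
        return std::numeric_limits<double>::infinity();
    double radius = 0.0;
    for (const Neighbor& nb : partialResults)
        radius = std::max(radius, nb.distance);  // largest distance in N_S(s_i, k)
    return radius;
}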

Other knn data structures, such as kd-trees [23] could be exploited in a sim-

ilar way. A kd-tree constructs a hierarchical representation of the data set

based on axis-aligned hyperplane decompositions. At each level, an axis and

a hyperplane orthogonal to that axis are chosen, and the data set is split into two

sets depending on which side of the hyperplane each data point lies. The

hyperplane is typically chosen to be the median of the points with respect to

the coordinates of the chosen axis. The process is repeated recursively until

the cardinality of the remaining data set is smaller than some predefined con-

stant. The efficiency of the knn search can be improved by pruning subtrees

whose associated region does not intersect B(si), since points in

these subtrees cannot be closer to si than points already in B(si). Therefore,

the radius rB(si) can be used by processor Pj to prune certain branches during

the computation of NSj(si, k).

Reducing communication and computation costs: The idea of pruning the knn

search by limiting it to points inside a small hypersphere around the query

point can be further utilized in other parts of DKNNG to improve its efficiency.

During initialization, each processor Pi selects several points Ci ⊂ Si as center

points and associates each point si ∈ Si with the closest center ci ∈ Ci.

The computation of the centers can be done in a variety of ways including

algorithms similar to k-means [6] or approximate k-centers [8]. The radius

Fig. 3. Additional pruning of knn searches. The hypersphere B(si) is represented by the empty circle with the thick black perimeter, while the hyperspheres B(cj) around the centers are represented by the circles with the points inside. Processor Pi can avoid sending the point si to processor Pj, since B(si) does not intersect any of the hyperspheres B(cj). Any point sj ∈ Sj is at least a distance rB(si) away from si and thus cannot be in NS(si, k), since NS(si, k) already contains k points that are inside B(si).

of the hypersphere B(ci) is computed as the largest distance from ci ∈ Ci

to a point associated with ci. Processor Pi then sends ci and rB(ci) for each

ci ∈ Ci to all other processors. Consider now the computation of NSj(si, k) by

processor Pj for some point si ∈ Si. A point sj ∈ NSj(si, k) will be merged with

NS(si, k) iff ρ(sj, si) < rB(si). Since sj ∈ Sj is contained inside B(cj) for some

cj ∈ Cj, we know that sj cannot be merged with NS(si, k) if B(si) ∩ B(cj) = ∅.

Hence, processor Pi can avoid sending to processor Pj any point si ∈ Si such

that B(si) ∩ B(cj) = ∅ for all cj ∈ Cj, since none of the points in NSj(si, k) will

be merged with NS(si, k). Such pruning further reduces the communication

and computation costs of DKNNG. The idea is illustrated in Figure 3.
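A minimal C++ sketch of this pruning test is shown below; it relies on the triangle inequality, under which ρ(si, cj) > rB(si) + rB(cj) guarantees that B(si) and B(cj) are disjoint. The names Center and mustSendTo are ours and purely illustrative.

// A minimal sketch (ours) of the send-pruning test: processor P_i can skip
// sending s_i to P_j when B(s_i) is provably disjoint from every B(c_j).
#include <vector>

using Point = std::vector<double>;

struct Center {
    Point  point;    // c_j, a center point advertised by processor P_j
    double radius;   // r_B(c_j)
};

// rho is the distance metric; rBsi is the current pruning radius of s_i.
template <typename Metric>
bool mustSendTo(const Point& si, double rBsi,
                const std::vector<Center>& centersOfPj, const Metric& rho)
{
    for (const Center& c : centersOfPj)
        if (rho(si, c.point) <= rBsi + c.radius)
            return true;   // B(s_i) may intersect B(c_j): s_i must be sent
    return false;          // all hyperspheres provably disjoint: skip sending s_i
}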

3.5.8 Improving the Distribution by Preprocessing the Data Set

The efficiency of DKNNG can be further improved by partitioning the set S

into sets S1, . . ., Sp, such that the partitioning improves the pruning of the

knn search as discussed in section 3.5.7. One possibility is to partition S into

p clusters of roughly the same size and assign each cluster to one processor.

The clusters can be again computed using algorithms similar to k-means [6]

or approximate k-centers [8]. The partition of the set S into clusters increases

the likelihood that B(ci) ∩ B(cj) = ∅ for ci ∈ Ci and cj ∈ Cj, since points in

Si belong to a different cluster than points in Sj. Consequently, the likelihood

that, for si ∈ Si, B(si) ∩ B(cj) = ∅ for all cj ∈ Cj, also increases. In such

cases, the computation of the knn query NSj(si, k) becomes unnecessary, which

reduces the overall computational and communication costs.

3.6 Extensions to DKNNG

Although DKNNG is presented for the computation of the knn graph, the pro-

posed framework can be extended and generalized in several ways. We now

discuss how to apply and extend our distributed framework to compute graphs

based on approximate knn queries and range queries.

3.6.1 Computation of Graphs Based on Approximate knn Queries

The computation of graphs based on approximate knn queries is a viable

alternative to the computation of graphs based on knn queries for many ap-

plications in robot motion planning, biology, and other fields, especially for

large high-dimensional data sets. In an approximate knn query, instead of

computing exactly the k closest points to a query point, it suffices to compute

k points that are within a (1 + ε) hypersphere from the k-th closest point to

the query point. As the number of points and the dimension of each point

in the data set increases, the computation of knn queries becomes computa-

tionally expensive and can be made more efficient by considering approximate

knn queries. Approximate knn queries provide a trade-off between efficiency

of computation and quality of neighbors computed for each point.

The DKNNG algorithm of Algorithm 1 easily supports the computation of graphs

based on approximate knn queries. Since each data structure TSi is considered

as a black-box, it suffices to use data structures TSi that support the com-

putation of approximate knn queries instead of knn queries. Furthermore,

efficient approximate nearest-neighbors algorithms exploit past computations

to prune future searches in a similar fashion as exact nearest-neighbors algo-

rithms [4,16,36]. Therefore, the ways of exploiting the knn data structures to

improve the efficiency of DKNNG, as discussed in section 3.5.7, are applicable

to approximate knn data structures as well.

3.6.2 Computation of Graphs Based on Range Queries

Another extension is the computation of range queries instead of knn queries.

In a range query, we are interested in computing all the points s ∈ S that are

within some predefined distance ε from a query point si ∈ S, i.e., RS(si, ε) =

{s ∈ S : ρ(s, si) ≤ ε}. Thus, we would like to compute RS(si, ε) for all the

points si ∈ S. This is easily achieved by using data structures that support

range queries instead of data structures that support knn queries. As in the

case of the knn search, a large amount of work has been devoted to the devel-

opment of efficient algorithms for the computation of range queries (see [19]

for extensive references). One potential problem with range queries that could

affect the performance of DKNNG is that the communication of the range re-

sults RSj(si, ε) from processor Pj to processor Pi could be less efficient than

the communication of NSj(si, k), since it is possible that |RSj

(si, ε)| > k. On

the other hand, the update of range results is more efficient than the update

of knn results, since processor Pi updates range results by simply appending

RSj(si, ε) to RS(si, ε). Furthermore, in cases where the application we are

interested in does not require processor Pi to store the results of RS(si, ε), i.e.,

the results RS(si, ε) can be stored in any of the available processors, then the

communication of the range results RSj(si, ε) from processor Pj to processor

Pi is unnecessary. Unlike in the case of the computation of knn queries where

results computed from other processors can be used to reduce the radius of

the hypersphere centered at the query point, in the case of range queries,

the radius of the hypersphere centered at the query point remains fixed to

ε throughout the distributed computation. Therefore, processor Pj can avoid

communicating the range results RSj(si, ε) to processor Pi and keep the results

stored in its memory. In cases where storing RSj(si, ε) exceeds the memory ca-

pacity available to processor Pj, processor Pj writes the range results

RSj(si, ε) to a file.
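As a concrete illustration of the range query itself, the following minimal C++ sketch computes RS(si, ε) over the locally stored points by a linear scan; in practice one of the range query data structures cited above would replace the scan, and the names are ours and purely illustrative.

// A minimal sketch (ours) of R_S(s_i, eps) = { s : rho(s, s_i) <= eps }
// computed by a linear scan over the locally stored points.
#include <vector>

using Point = std::vector<double>;

template <typename Metric>
std::vector<int> rangeQuery(const std::vector<Point>& localPoints,  // the set S_j stored on this processor
                            const Point& query, double eps, const Metric& rho)
{
    std::vector<int> inRange;
    for (int idx = 0; idx < static_cast<int>(localPoints.size()); ++idx)
        if (rho(localPoints[idx], query) <= eps)
            inRange.push_back(idx);   // index of a point within distance eps of the query
    return inRange;
}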

4 Experiments and Results

Our experiments were designed to evaluate the performance of DKNNG com-

pared to the sequential implementation.

4.1 Hardware and Software Setup

The implementation of DKNNG was carried out in ANSI C/C++ using the MPICH

implementation of the MPI standard for communication. Code development and

initial experiments were carried out on the Rice Terascale Cluster, the PBC Cluster,

and the Rice Cray XD1 Cluster ADA. The experiments reported in this paper

were run on the Rice Terascale Cluster, a 1 TeraFLOP Linux cluster based on

Intel® Itanium® 2 processors. Each node has two 64-bit processors running

at 900MHz with 32KB/256KB/1.5MB of L1/L2/L3 cache, and 2GB of RAM per

processor. The nodes are connected by a Gigabit Ethernet network. For the

DKNNG experiments, we used two processors per node.

4.2 Data Sets

In this section, we describe the data sets we used to test the performance of

DKNNG, how these data sets were generated, and the distance metrics we used

in the computation of the knn graph.

4.2.1 Number of Points and Dimensions

We tested the performance of DKNNG on data sets of varying number of points

and dimension. We used data sets with 100000, 250000, and 500000 points

and 90, 105, 174, 203, 258, and 301 dimensions. The 18 data sets obtained

by combining all possible variations in the number of points and dimension

varied in size from 83MB to 1.3GB. In addition, we also used data sets with

1000000, 2000000, and 3000000 points and 504 and 1001 dimensions. The

additional 6 data sets obtained by combining all possible variations in the

number of points and dimensions varied in size from 3.75GB to 22.37GB.

4.2.2 Generation of Data Sets

Each data set was obtained from motion planning benchmarks, since motion

planning is our main area of research. The objective of motion planning is to

compute collision-free paths consisting of rotations and translations for robots

comprised of a collection of polyhedra moving amongst several polyhedral

obstacles. An example of a path obtained by a motion planning algorithm

is shown in Figure 4(a). In our benchmarks, the robots consisted of objects

described by a large number of polygons, such as 3-dimensional renderings of

the letters of the alphabet, bent cylinders, and the bunny, as illustrated in

Figure 4(b), and the obstacles consisted of the walls of a 3-dimensional maze,

as illustrated in Figure 4(c). Floor and ceiling are removed from Figure 4(c)

to show the maze.

Fig. 4. A motion planning benchmark. The objective of the motion planning is to compute collision-free paths consisting of rotations and translations for robots comprised of a collection of polyhedra moving amongst several polyhedral obstacles in 3-dimensional environments. Rendering of objects is not on the same scale. (a) A path where the robot goes from one side of a wall to the other side of the wall by wiggling its way through a small hole. Illustrated are several consecutive poses of the robot as it follows the path. The arrows indicate the direction of motion from one pose to the other. (b) Several different robots varying from 3-dimensional renderings of the letters of the alphabet to a collection of bent cylinders and complex geometric models such as the bunny. (c) A 3-dimensional maze representing the environment where the robots are allowed to move.

Efficient motion planning algorithms, such as [24, 31, 41, 45, 46], construct the

knn graph using points representing configurations of the robots. The config-

uration of a robot is a 7-dimensional point parameterized by a 3-dimensional

position and an orientation represented by a 4-dimensional quaternion [2].

(We note that the robot has only 6 degrees of freedom, three for translation

and three for rotation. The fourth parameter in the parameterization of the

rotation is redundant, since unit quaternions suffice for the representation of

rotations.) The configuration of ` robots is a 7`-dimensional point obtained by

concatenating the configurations of each robot. The distance between two con-

figurations of a single robot is defined as the geodesic distance in SE(3) [10].

For multiple robots, the distance between two configurations is defined by

summing the SE(3) distances between the corresponding configuration pro-

(a)
points      100000, 250000, 500000
dimension   90    105   174   203   258   301
point type  emb   cfg   emb   cfg   emb   cfg
distance    euc   geo   euc   geo   euc   geo

(b)
points      1000000, 2000000, 3000000
dimension   504   1001
point type  cfg   cfg
distance    geo   geo

Table 1. Data sets used for the computation of the knn graph. The geodesic (geo) and Euclidean (euc) distances are used for the computation of knn graphs on configuration (cfg) and embedding (emb) points, respectively. (a) For each dimension, data sets were generated with 100000, 250000, and 500000 points each. (b) For each dimension, data sets were generated with 1000000, 2000000, and 3000000 points each.

jections for each robot [14]. The computation of the geodesic distance is an

expensive operation as it requires the evaluation of several sin, cos, and square

root functions. Alternatively, in order to improve on the efficiency of the com-

putation, the knn graph can be based on the Euclidean distance defined for

embeddings of configurations [14]. The knn graph based on embedding points

provides a trade-off between efficiency of computation and quality of neigh-

bors computed for each point. An embedding of a single robot configuration is

a 6-dimensional point obtained by rotating and translating two 3-dimensional

points selected from the geometric description of the robot as specified by the

configuration and concatenating the transformed points.
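To make the distance computations concrete, the sketch below shows one common way to realize a geodesic-style SE(3) distance, its sum over multiple robots, and the Euclidean distance on embedding points. It is an illustrative sketch only: the function names and the equal weighting of the translational and rotational terms are assumptions made here, and the exact definitions used in our experiments are those of [10, 14].

    import math

    def se3_distance(c1, c2, w_rot=1.0):
        # A configuration is (x, y, z, qw, qx, qy, qz) with a unit quaternion.
        dt = math.dist(c1[:3], c2[:3])                  # translational part
        dot = abs(sum(a * b for a, b in zip(c1[3:], c2[3:])))
        dot = min(1.0, dot)                             # guard against round-off
        dr = 2.0 * math.acos(dot)                       # rotational geodesic (angle)
        return dt + w_rot * dr                          # weighting is an assumption

    def multi_robot_distance(q1, q2, num_robots):
        # q1, q2 are 7*num_robots-dimensional points; sum the per-robot distances.
        return sum(se3_distance(q1[7*i:7*i+7], q2[7*i:7*i+7])
                   for i in range(num_robots))

    def embedding_distance(e1, e2):
        # Euclidean distance between embedding points (6 coordinates per robot).
        return math.dist(e1, e2)

The rotational term illustrates why the geodesic distance is comparatively expensive: each evaluation involves trigonometric and square-root operations, whereas the embedding distance is a single Euclidean norm.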

Data sets generated and used by PRM: The data sets generated by the motion

planning algorithm PRM [31] are described in Table 1(a). PRM constructs a

knn graph by sampling configurations of the robot uniformly at random and

connecting each configuration to its k closest neighbors by a simple path. The

algorithm guarantees that the sampled configurations and the connections

between configurations avoid collisions with the obstacles. In order for PRM

to find paths for multiple robots in environments with many obstacles or

narrow passages, millions of configurations are typically sampled. As described


in Table 1(a), we used data sets consisting either of configuration points or

embedding points. All data sets are generated by using PRM [31] to solve motion

planning problems in the maze environment of Figure 4. The number of robots

was set to 15, 29, and 43 to obtain configuration points of 105, 203, and 301

dimensions and embedding points of 90, 174, and 258 dimensions, respectively.

The robots consisted of 3-dimensional renderings of the letters of the alphabet,

where the i-th robot is the (i mod 26)-th letter of the alphabet.
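For context, the following is a rough sketch of the roadmap-construction loop that produces such data sets. The helpers sample_configuration, is_collision_free, local_path_is_free, and knn_graph are hypothetical placeholders; the actual planner [31] handles sampling, collision checking, and path connection in considerably more detail.

    def build_prm_roadmap(num_samples, k, knn_graph):
        # Sample collision-free configurations uniformly at random.
        nodes = []
        while len(nodes) < num_samples:
            q = sample_configuration()
            if is_collision_free(q):
                nodes.append(q)
        # Connect each configuration to its k closest neighbors by a simple path,
        # keeping only the connections that avoid collisions with the obstacles.
        edges = []
        for q, neighbors in knn_graph(nodes, k):
            for q_near in neighbors:
                if local_path_is_free(q, q_near):
                    edges.append((q, q_near))
        return nodes, edges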

Large data sets: We generated large data sets consisting of 1000000, 2000000,

and 3000000 points of 504 and 1001 dimensions by sampling configurations

uniformly at random, as described in Table 1(b). We did not use these large

data sets for planning because (i) the scope of this paper is the distributed

computation of the knn graph and not planning, and (ii) time limitations, since planning involves collision checking of configurations and paths, which for the data sets of Table 1(b) would require several weeks of computation.

4.3 Efficiency of DKNNG

To measure the efficiency of DKNNG, we ran the distributed code on various

data sets, as described in section 4.2, using different numbers of processors.

4.3.1 Description of Experiments

Number of nearest neighbors: We tested the performance of the sequential and

DKNNG algorithms for various values of k ∈ {15, 45, 150}. We found very little

variation in the running times of either the sequential algorithm or DKNNG across these values of k; thus, in all our experiments we report only results obtained for k = 15.

Measuring the computation time of the sequential algorithm: The computation

time required by the sequential algorithm is estimated by randomly selecting


500 points from the data set, computing the associated knn queries and cal-

culating the average time required to compute one knn query. The total time

is then obtained by multiplying the average time to compute one knn query

by the number of points in the data set and adding to it the time it takes to

construct the knn data structure. Our experiments with data sets of 100000

and 250000 points and 90, 105, and 174 dimensions showed little variation

(less than 60 seconds) between estimated sequential time and actual sequen-

tial time.
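A minimal sketch of this estimation procedure is given below; the sequential knn data structure and query routine are passed in as placeholders.

    import random, time

    def estimate_sequential_time(points, k, build_knn_structure, knn_query,
                                 n_samples=500):
        # build_knn_structure and knn_query stand in for the sequential knn code.
        start = time.time()
        tree = build_knn_structure(points)
        build_time = time.time() - start
        sample = random.sample(points, n_samples)
        start = time.time()
        for p in sample:
            knn_query(tree, p, k)
        avg_query_time = (time.time() - start) / n_samples
        # Extrapolate the average per-query time to all points and add the
        # time required to construct the knn data structure.
        return build_time + avg_query_time * len(points)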

Each data set can be stored in a single machine: The sizes of the data sets

described in Table 1(a) vary from 83MB to 1.3GB. These sizes were chosen

to make a fair comparison between DKNNG and the sequential algorithm by

ensuring that each data set fits in the main memory of a single machine (see section 4.1 for a description of the hardware we used). The efficiency of DKNNG would be even higher if a data set did not fit in the main memory of a single machine, as is often the case in emerging applications in robot motion

planning [40, 45, 46] and biology [3, 33, 39, 42, 50, 52], since the performance of

the sequential implementation would deteriorate due to frequent disk access.

For each of the 18 data sets of Table 1(a), we ran DKNNG on 64, 80, and 100

processors. Table 2 contains a summary of the results. For each experiment,

we report the computation time required by the sequential version and the

efficiency obtained with p processors. The efficiency is calculated as t1/(tp · p),

where t1 is the time required by a single processor to compute the knn graph

and tp is the time required by DKNNG to compute the knn graph when run on

p processors.
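In code form, the efficiency measure reduces to the following; the baseline is the single-processor run for Table 2 and, as explained below for the large data sets, the 20-processor run for Table 3.

    # Efficiency relative to a baseline run: p_base = 1 (Table 2, sequential
    # time t1) or p_base = 20 (Table 3, where the 20-processor run is the
    # baseline).
    def efficiency(t_base, p_base, t_p, p):
        return (p_base * t_base) / (t_p * p)

    # Example (Table 2, d = 90, n = 100000): with t1 = 108.52 minutes and an
    # efficiency of 0.889 on p = 100 processors, the parallel running time is
    # roughly t_p = 108.52 / (0.889 * 100), i.e., about 1.22 minutes.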

                 [emb, euc, d = 90, k = 15]        [cfg, geo, d = 105, k = 15]
   p       100000     250000     500000      100000      250000       500000
   1      108.52m   1076.35m   5396.78m    1875.72m   12016.02m    47301.52m
  64        0.922      0.970      0.986       0.992       0.996        0.998
  80        0.905      0.958      0.982       0.990       0.995        0.998
 100        0.889      0.958      0.976       0.989       0.994        0.997

                 [emb, euc, d = 174, k = 15]       [cfg, geo, d = 203, k = 15]
   p       100000     250000     500000      100000      250000       500000
   1      197.14m   1838.97m   8128.69m    3666.77m   22602.67m    92797.77m
  64        0.913      0.969      0.986       0.992       0.996        0.998
  80        0.876      0.958      0.981       0.990       0.996        0.998
 100        0.851      0.944      0.974       0.988       0.995        0.998

                 [emb, euc, d = 258, k = 15]       [cfg, geo, d = 301, k = 15]
   p       100000     250000     500000      100000      250000       500000
   1      264.37m   2014.33m   8610.76m    5416.69m   34381.19m   141771.51m
  64        0.889      0.958      0.979       0.992       0.997        0.998
  80        0.853      0.946      0.973       0.990       0.996        0.998
 100        0.813      0.931      0.966       0.987       0.996        0.998

Table 2. Efficiency of DKNNG on the data sets of Table 1(a). The heading of each table indicates the type of the points, the distance metric, the dimension of each point, and the number k of nearest neighbors (corresponding to a column in Table 1(a)). For each table, experiments were run with 100000, 250000, and 500000 points and on 1, 64, 80, and 100 processors. The row for 1 processor indicates the running time in minutes of the sequential algorithm. For experiments with 64, 80, and 100 processors, we indicate the efficiency of DKNNG computed as t1/(tp · p), where tp is the running time of DKNNG on p processors.

Storing each data set requires several machines: We also tested the performance of DKNNG on the data sets of Table 1(b). The purpose of these data sets,

which vary in size from 3.75GB to 22.37GB, is to test the efficiency of DKNNG for

large data sets, where several machines are required just to store the data. We

ran DKNNG using 20, 100, 120, and 140 processors. We based the calculations


of the efficiency on the running time for 20 processors, since each processor is capable of storing 1/20-th of the data set in its main memory. This again ensures a fair computation of the efficiency as the number of processors is increased, since the performance of DKNNG on 20 processors does not suffer from frequent disk access. The efficiency is calculated as 20t20/(tp · p), where t20 is the running time of DKNNG on 20 processors and tp is the running time of DKNNG on p processors, for p = 100, 120, 140. The results are presented in Table 3.

                 [cfg, geo, d = 504, k = 15]    [cfg, geo, d = 1001, k = 15]
                  1000000  2000000  3000000     1000000  2000000  3000000
  eff[20, 100]      1.000    0.999    1.000       0.999    0.999    0.999
  eff[20, 120]      1.000    0.999    1.000       0.999    0.999    0.999
  eff[20, 140]      1.000    0.999    1.000       0.998    0.997    0.999

Table 3. Efficiency of DKNNG on the large data sets of Table 1(b). The heading of each table indicates the type of the points, the distance metric, the dimension of each point, and the number k of nearest neighbors (corresponding to a column in Table 1(b)). For each table, experiments were run with 1000000, 2000000, and 3000000 points and on 20, 100, 120, and 140 processors. We report the efficiency of DKNNG on 100, 120, and 140 processors in rows eff[20, 100], eff[20, 120], and eff[20, 140], respectively. The efficiency is calculated based on the running time of DKNNG on 20 processors, i.e., 20t20/(tp · p), where t20 is the running time of DKNNG on 20 processors and tp is the running time of DKNNG on p processors, for p = 100, 120, 140.

4.3.2 Results

The overall efficiency of DKNNG is reasonably high on all our benchmarks. From

the results in Table 2, the efficiency on 64 processors ranges from 0.889 to 0.998

with an average of 0.974 and median of 0.989. When the number of processors

is increased to 80, the efficiency ranges from 0.853 to 0.998 with an average

of 0.966 and median of 0.986. The efficiency of DKNNG still remains high even


on 100 processors, where the efficiency ranges from 0.813 to 0.998 with an

average of 0.958 and median of 0.982.

For each dimension, we observe that the efficiency of DKNNG increases as the

number of points in the data set increases. As an example, for d = 90 and

p = 100, the efficiency of DKNNG is 0.889 for 100000 points, 0.958 for 250000

points, and 0.976 for 500000 points. One important factor that contributes to

the increase in efficiency is that each processor Pi now has more points in Si and thus more opportunities to fill in possible idle or waiting times by computing knn queries for points si ∈ Si that it owns.

Our experiments also indicate a high efficiency of DKNNG even when different

metrics are used. Even though the computation of the Euclidean metric is

much faster than the computation of the geodesic metric in SE(3) (summed over multiple robots),

the efficiency of DKNNG remains high in both cases.

For a fixed number of points, we observe that the performance of DKNNG in-

creases as the cost of computing the distance metric increases. For a fixed

number of points and a fixed distance metric, we observe that the perfor-

mance of DKNNG decreases as the dimension increases. The increase in the cost

of computing the distance metric increases the useful computation time since

more time is spent computing queries. The increase in the dimension increases

communication costs since more data is communicated over the network when

sending queries from one processor to another. Hence, the performance of DKNNG increases or decreases depending on whether the increase in useful computation or the increase in communication costs dominates.

As an example, when using the Euclidean metric, the efficiency of DKNNG for

n = 250000 and p = 100 is 0.958 for d = 90, 0.944 for d = 174, and 0.931 for

d = 258. When using a computationally more expensive metric, such as the


geodesic metric, the efficiency of DKNNG for n = 250000 and p = 100 is 0.994 for d = 105, 0.995 for d = 203, and 0.996 for d = 301.

Fig. 5. Performance of the DKNNG algorithm. (a) Comparison of the ideal efficiency with the worst, average, median, and the best efficiency obtained for all the data sets of Table 1(a). (b) Characterization of the useful computation for p = 100 for the data set with d = 105 and n = 500000 by showing what percentage of the time is spent computing knn queries.

Figure 5(a) compares the ideal efficiency to the worst, average, median, and

the best efficiency obtained for all the data sets of Table 1(a) when DKNNG is

run on 64, 80, and 100 processors. Figure 5(a) indicates a nearly ideal efficiency

for the DKNNG algorithm.

Figure 5(b) presents logged data for the data set with d = 105 and n = 500000,

when DKNNG is run on 100 processors, showing the percentage of time spent

computing knn queries. The plot in Figure 5(b) is characteristic of the behavior

of DKNNG on the other data sets as well. Figure 5(b) indicates that most of the

time, ranging from 99.35% to 99.77%, is spent doing useful computation, i.e.,

computation of knn queries on behalf of other processors and computation

of knn queries for points owned by the processor, which results in a highly

efficient distributed algorithm.

The results in Table 3 indicate that DKNNG achieves high efficiency for large

data sets. As discussed earlier, increasing the number of points and the cost

of computing the distance metric increases the useful computation time since


more time is spent computing knn queries, resulting in an almost ideal effi-

ciency for DKNNG.

5 Discussion

Our work is motivated by increasingly high-dimensional problems arising from

motion planning [14,31,35,40,45,46], biological applications [3,33,39,42,50,52],

pattern recognition [21], data mining [18, 51], multimedia systems [13], geo-

graphic information systems [34, 43], and many other research fields, which

often require efficient computations of knn graphs based on arbitrary distance

metrics. This paper presents an algorithm for efficiently distributing the com-

putation of such graphs using a decentralized approach.

Our distributed framework is general and can be extended in many ways.

Possible extensions include the computation of graphs based on other types of

proximity queries, such as approximate knn or range queries, as discussed in

section 3.6. In addition, our distributed framework allows for the exploitation

of certain properties of the data structures used for the computation of knn

queries, approximate knn queries, or range queries, to improve the overall

efficiency of the DKNNG algorithm.

The efficiency of DKNNG derives in part from a careful prioritization and han-

dling of requests between processors and a systematic exploitation of prop-

erties of knn data structures. Throughout the distributed computation, each

processor uses computations that do not require communications to fill in idle

times and gives a higher priority to computation requests from other proces-

sors in order to provide a timely response. Information collected during the

computation of knn queries by one processor is shared with other processors


which use it to prune the computations of their knn queries. The information

sharing in some cases makes the computation of certain knn queries com-

pletely unnecessary and thus reduces communication and computation costs.
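A highly simplified sketch of this scheduling policy is given below. The helpers (receive_message, answer_local_query, update_pruning_bounds, pruned, share_info, and so on) are hypothetical and merely stand in for the message-passing and knn machinery described earlier in the paper; they are not the actual DKNNG interface.

    def processor_loop(own_points, k):
        pending = list(own_points)                 # knn queries for locally owned points
        while not computation_finished():
            msg = receive_message(blocking=False)  # e.g., an MPI-style probe/receive
            if msg is not None and msg.kind == "query_request":
                # Requests from other processors get priority so replies are timely.
                reply(msg.sender, answer_local_query(msg.point, k))
            elif msg is not None and msg.kind == "pruning_info":
                update_pruning_bounds(msg.data)    # may render some queries unnecessary
            elif pending:
                # Fill idle time with knn queries for points this processor owns.
                q = pending.pop()
                if not pruned(q):
                    share_info(answer_local_query(q, k))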

Our experimental results suggest that our distributed framework supports the

efficient computation, by hundreds of processors, of very large knn graphs consisting of millions of points with hundreds of dimensions and arbitrary distance

metrics.

We intend to integrate DKNNG with our motion planning algorithms [45,46] and

use it for the solution of robot motion planning problems of unprecedented

complexity. While recent motion planners can successfully solve robot motion

planning problems with tens of dimensions, our objective is to solve robot

motion planning problems consisting of thousands of dimensions. Other chal-

lenging applications stemming from biology, pattern recognition, data mining,

fraud detection, and geographic information systems that rely on the computation

of knn graphs could benefit similarly from the use of our distributed DKNNG

framework.

Acknowledgements

The authors would like to thank Cristian Coarfa for helpful discussions and

comments. Work on this paper has been supported in part by NSF 0205671,

NSF 0308237, ATP 003604-0010-2003, NIH GM078988, and a Sloan Fellowship

to LK. Experiments reported in this paper have been obtained on equipment

supported by EIA 0216467, NSF CNS 0454333, and NSF CNS 0421109 in

partnership with Rice University, AMD, and Cray.


References

[1] M. H. Ali, A. A. Saad, M. A. Ismail, The PN-tree: A parallel and distributed

multidimensional index, Distributed and Parallel Databases 17 (2) (2005) 111–

133.

[2] S. L. Altmann, Rotations, Quaternions, and Double Groups, Clarendon Press,

Oxford, England, 1986.

[3] M. S. Apaydin, D. L. Brutlag, C. Guestrin, D. Hsu, J.-C. Latombe, Stochastic

roadmap simulation: An efficient representation and algorithm for analyzing

molecular motion, Journal of Computational Biology 10 (3–4) (2003) 257–281.

[4] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, A. Y. Wu, An optimal algorithm for approximate nearest

neighbor searching in fixed dimensions, Journal of the ACM 45 (6) (1998) 891–

923.

[5] M. J. Atallah, S. Prabhakar, (Almost) Optimal parallel block access for range

queries, Information Sciences 157 (1-2) (2003) 21–31.

[6] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University

Press, Oxford, England, 1995.

[7] B. Braunmuller, M. Ester, H.-P. Kriegel, J. Sander, Multiple similarity queries:

a basic DBMS operation for mining in metric databases, IEEE Transactions on

Knowledge and Data Engineering 13 (1) (2001) 79–95.

[8] S. Brin, Near neighbor search in large metric spaces, in: International

Conference on Very Large Data Bases, San Francisco, California, 1995, pp.

574–584.

[9] F. Buccafurri, G. Lax, Fast range query estimation by n-level tree histograms, Data & Knowledge Engineering 51 (2) (2004) 257–275.


[10] F. Bullo, R. M. Murray, Proportional derivative (PD) control on the Euclidean

group, in: European Control Conference, Vol. 2, Rome, Italy, 1995, pp. 1091–

1097.

[11] D. Burago, Y. Burago, S. Ivanov, A Course in Metric Geometry,

American Mathematical Society, 2001.

[12] P. B. Callahan, Optimal parallel all-nearest-neighbors using the well-separated

pair decomposition, in: IEEE Symposium on Foundations of Computer Science,

1993, pp. 332–340.

[13] M. Cheng, S. Cheung, T. Tse, Towards the application of classification

techniques to test and identify faults in multimedia systems, in: IEEE

International Conference on Quality Software, Los Alamitos, California, 2004,

pp. 32–40.

[14] H. Choset, K. M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. E.

Kavraki, S. Thrun, Principles of Robot Motion: Theory, Algorithms, and

Implementations, MIT Press, Cambridge, MA, 2005.

[15] B. Cui, B. C. Ooi, J. Su, K.-L. Tan, Contorting high-dimensional data

for efficient main memory knn processing, in: ACM SIGMOD International

Conference on Management of Data, San Diego, California, 2003, pp. 479–490.

[16] B. Cui, H. T. Shen, J. Shen, K.-L. Tan, Exploring bit-difference for

approximate knn search in high-dimensional databases, in: Australasian

Database Conference, Newcastle, Australia, 2005, pp. 165–174.

[17] P. Das, M. Moll, H. Stamati, L. E. Kavraki, C. Clementi, Low-dimensional

free energy landscapes of protein folding reactions by nonlinear dimensionality

reduction, Proceedings of the National Academy of Science USA 103 (26) (2006)

9885–9890.


[18] B. V. Dasarathy, Nearest-neighbor approaches, in: W. Klosgen, J. M. Zytkow,

J. Zyt (Eds.), Handbook of Data Mining and Knowledge Discovery, Oxford

University Press, New York, New York, 2002, pp. 88–298.

[19] M. de Berg, M. van Kreveld, M. H. Overmars, Computational Geometry:

Algorithms and Applications, Springer, Berlin, 1997.

[20] F. Dehne, A. Rau-Chaplin, Scalable parallel computational geometry for coarse

grained multicomputers, International Journal of Computational Geometry &

Applications 6 (3) (1996) 379–400.

[21] R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, John Wiley & Sons,

New York, New York, 2000.

[22] P. W. Finn, L. E. Kavraki, Computational approaches to drug design,

Algorithmica 25 (1999) 347–371.

[23] J. H. Friedman, J. L. Bentley, R. A. Finkel, An algorithm for finding best

matches in logarithmic expected time, ACM Transactions on Mathematical

Software 3 (3) (1977) 209–226.

[24] R. Geraerts, M. Overmars, A comparative study of probabilistic roadmap

planners, in: J.-D. Boissonnat, J. Burdick, K. Goldberg, S. Hutchinson (Eds.),

Algorithmic Foundations of Robotics V, Springer-Verlag, 2002, pp. 43–58.

[25] A. Grama, G. Karypis, V. Kumar, A. Gupta, An Introduction to Parallel

Computing: Design and Analysis of Algorithms, Addison Wesley, Boston, MA,

2003.

[26] A. Guttman, R-trees: A dynamic index structure for spatial searching, in:

ACM SIGMOD International Conference on Management of Data, Boston,

Massachusetts, 1984, pp. 47–54.

[27] P. Indyk, Nearest neighbors in high-dimensional spaces, in: J. E. Goodman,


J. O’Rourke (Eds.), Handbook of Discrete and Computational Geometry, CRC

Press, 2004, pp. 877–892.

[28] H. V. Jagadish, B. C. Ooi, K.-L. Tan, C. Yu, R. Zhang, iDistance: An adaptive

B+-tree based indexing method for nearest neighbor search, ACM Transactions

on Database Systems 30 (2) (2005) 364–397.

[29] R. Jin, G. Yang, G. Agrawal, Shared memory parallelization of data

mining algorithms: Techniques, programming interface, and performance, IEEE

Transactions on Knowledge and Data Engineering 17 (1) (2005) 71–89.

[30] L. E. Kavraki, Geometry and the discovery of new ligands, in: J.-P. Laumond,

M. H. Overmars (Eds.), Algorithms for Robotic Motion and Manipulation, A.

K. Peters, 1997, pp. 435–448.

[31] L. E. Kavraki, P. Svestka, J.-C. Latombe, M. H. Overmars, Probabilistic

roadmaps for path planning in high-dimensional configuration spaces, IEEE

Transactions on Robotics and Automation 12 (4) (1996) 566–580.

[32] F. Korn, B.-U. Pagel, C. Faloutsos, On the ‘dimensionality curse’ and the ‘self-

similarity blessing’, IEEE Transactions on Knowledge and Data Engineering

13 (1) (2001) 96–111.

[33] H.-P. Kriegel, M. Pfeifle, S. Schonauer, Similarity search in biological and

engineering databases, IEEE Data Engineering Bulletin 27 (4) (2004) 37–44.

[34] W.-S. Ku, R. Zimmermann, H. Wang, C.-N. Wan, Adaptive nearest neighbor

queries in travel time networks, in: ACM International Workshop on Geographic

Information Systems, Bremen, Germany, 2005, pp. 210–219.

[35] J. Kuffner, K. Nishiwaki, S. Kagami, M. Inaba, H. Inoue, Motion planning for

humanoid robots, in: D. Paolo, R. Chatila (Eds.), International Symposium of

Robotics Research, Vol. 15 of Springer Tracts in Advanced Robotics, Springer

Verlag, Siena, Italy, 2005, pp. 365–374.


[36] E. Kushilevitz, R. Ostrovsky, Y. Rabani, Efficient search for approximate

nearest neighbor in high dimensional spaces, SIAM Journal on Computing 30 (2)

(2000) 457–474.

[37] S. M. LaValle, P. W. Finn, L. E. Kavraki, J.-C. Latombe, Efficient

database screening for rational drug design using pharmacophore-constrained

conformational search, in: International Conference on Computational

Molecular Biology, Lyon, France, 1999, pp. 250–260.

[38] T. Liu, A. W. Moore, A. Gray, K. Yang, An investigation of practical

approximate nearest neighbor algorithms, in: L. K. Saul, Y. Weiss, L. Bottou

(Eds.), Advances in Neural Information Processing Systems, MIT Press,

Cambridge, MA, 2005, pp. 825–832.

[39] S. McGinnis, T. L. Madden, BLAST: at the core of a powerful and diverse set

of sequence analysis tools, Nucleic Acids Research 32 (2004) W20–W25.

[40] M. Moll, L. E. Kavraki, Path planning for variable resolution minimal-energy

curves of constant length, in: IEEE International Conference on Robotics and

Automation, Barcelona, Spain, 2005, pp. 2143–2147.

[41] M. Morales, S. Rodriguez, N. Amato, Improving the connectivity of PRM

roadmaps, in: IEEE International Conference on Robotics and Automation,

Taipei, Taiwan, 2003, pp. 4427–4432.

[42] M. Paik, Y. Yang, Combining nearest neighbor classifiers versus cross-validation

selection, Statistical Applications in Genetics and Molecular Biology 3 (1)

(2004) article 12.

[43] A. Papadopoulos, Y. Manolopoulos, Parallel processing of nearest neighbor

queries in declustered spatial data, in: ACM International Workshop on

Advances in Geographic Information Systems, Rockville, Maryland, 1996, pp.

35–43.


[44] A. N. Papadopoulos, Y. Manolopoulos, Nearest Neighbor Search: A Database

Perspective, Series in Computer Science, Springer Verlag, Berlin, Germany,

2005.

[45] E. Plaku, K. E. Bekris, B. Y. Chen, A. M. Ladd, L. E. Kavraki, Sampling-based

roadmap of trees for parallel motion planning, IEEE Transactions on Robotics

21 (4) (2005) 597–608.

[46] E. Plaku, L. E. Kavraki, Distributed sampling-based roadmap of trees for large-

scale motion planning, in: IEEE International Conference on Robotics and

Automation, Barcelona, Spain, 2005, pp. 3879–3884.

[47] E. Plaku, L. E. Kavraki, Quantitative analysis of nearest-neighbors search in

high-dimensional sampling-based motion planning, in: Workshop on the Algorithmic Foundations of Robotics, New York, NY, 2006, in press.

[48] A. Qasem, K. Kennedy, J. Mellor-Crummey, Automatic tuning of whole

applications using direct search and a performance-based transformation

system, The Journal of Supercomputing 36 (2) (2006) 183–196.

[49] T. Schwarz, M. Iofcea, M. Grossmann, N. Honle, D. Nicklas, B. Mitschang, On

efficiently processing nearest neighbor queries in a loosely coupled set of data

sources, in: ACM International Workshop on Geographic Information Systems,

Washington, DC, 2004, pp. 184–193.

[50] B. K. Shoichet, D. L. Bodian, I. D. Kuntz, Molecular docking using shape

descriptors, Journal of Computational Chemistry 13 (3) (1992) 380–397.

[51] P.-N. Tan, M. Steinbach, V. Kumar, Introduction to Data Mining, Addison-

Wesley, Boston, Massachusetts, 2005.

[52] S. Thomas, G. Song, N. M. Amato, Protein folding by motion planning, Physical

Biology 2 (2005) S148–S155.


[53] J. K. Uhlmann, Satisfying general proximity/similarity queries with metric

trees, Information Processing Letters 40 (4) (1991) 175–179.

[54] R. Weber, K. Bohm, Trading quality for time with nearest-neighbor search, in:

C. Zaniolo, P. Lockemann, M. Scholl, T. Grust (Eds.), Inter. Conf. on Advances

in Database Technology, Vol. 1777, 2000, pp. 21–35.

[55] P. N. Yianilos, Data structures and algorithms for nearest neighbor search

in general metric spaces, in: ACM-SIAM Symposium on Discrete Algorithms,

Austin, Texas, 1993, pp. 311–321.

[56] M. Yim, K. Roufas, D. Duff, Y. Zhang, C. Eldershaw, S. Homans, Modular

reconfigurable robots in space applications, Autonomous Robots 14 (2–3) (2003)

225–237.

[57] M. J. Zaki, C.-T. Ho (Eds.), Large-Scale Parallel Data Mining, Vol. 1759 of

Lecture Notes in Computer Science, Springer Verlag, New York, 2000.

[58] A. Y. Zomaya (Ed.), Parallel and Distributed Computing Handbook, McGraw-

Hill Professional, New York, NY, 1995.
