
Algorithmica (1988) 3:289-291

© 1988 Springer-Verlag New York Inc.

Editor's Foreword
Special Issue on Parallel and Distributed Computing, Part II

Jeffrey Scott Vitter¹

Recent years have seen phenomenal advances in the technologies of parallel computers and distributed computer systems. To the algorithm designer, both types of systems offer the potential of tremendous computational power. The other side of the coin is that in order to realize that potential, several challenging problems must often be solved. This special issue of Algorithmica is devoted to such problems arising in the areas of parallel and distributed computing. The quality and quantity of submitted papers were extremely high; so high, in fact, that this special issue is presented in two installments. This is Part II of the special double issue; the reader is referred to Volume 3, Number 1 for the first part.

The first three papers of this sequel present efficient parallel algorithms for several important combinatorial problems in the class NC. The model of computation used in each case is the parallel random-access machine (PRAM). In the first paper, "Parallel Computational Geometry," Alok Aggarwal, Bernard Chazelle, Leo Guibas, Colm Ó Dúnlaing, and Chee Yap give fast parallel algorithms for a wide range of construction problems in computational geometry. The problems include constructing convex hulls and Voronoi diagrams, triangulating a simple polygon, finding the minimal-area triangle enclosing a convex polygon, and data structures for multidimensional queries. For inputs of size n, the algorithms run in time T = O(log^k n), for some k ≤ 3, on a concurrent-read exclusive-write (CREW) PRAM with P = n processors. For some of the algorithms, the product P · T matches the lower bound for the execution time in the sequential RAM model, and hence in those cases the algorithms are optimal; put another way, the corresponding problems are in the class PC* defined by Vitter and Simons in their May 1986 paper in IEEE Transactions on Computers.
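To convey the flavor of these results, here is a minimal sketch (my own illustration, not taken from the paper) of the divide-and-conquer structure that underlies many parallel convex-hull algorithms. The two recursive calls are independent, so on a CREW PRAM they can be assigned to disjoint groups of processors; with a suitably parallelized merge step this yields polylogarithmic running time. The sketch below is sequential Python and computes only the upper hull.

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b; >= 0 means no right turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def upper_hull(points):
        # points must be sorted by x-coordinate (ties broken by y).
        if len(points) <= 2:
            return list(points)
        mid = len(points) // 2
        left = upper_hull(points[:mid])      # independent subproblem
        right = upper_hull(points[mid:])     # independent subproblem
        # Merge the two hulls: scan the combined chain and discard any
        # point that fails to make a strict right turn.
        merged = []
        for p in left + right:
            while len(merged) >= 2 and cross(merged[-2], merged[-1], p) >= 0:
                merged.pop()
            merged.append(p)
        return merged

    pts = sorted([(0, 0), (1, 2), (2, 4), (3, 1), (4, 3), (5, 0)])
    print(upper_hull(pts))   # [(0, 0), (2, 4), (4, 3), (5, 0)]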

The next paper, "The Accelerated Centroid Decomposition Technique for Optimal Parallel Tree Evaluation in Logarithmic Time," by Richard Cole and Uzi Vishkin, makes two basic contributions: it introduces an interesting tree- contraction technique called accelerated centroid decomposition (ACD) that leads to optimal parallel algorithms for tree evaluation and other problems on trees. The algorithms run in O(log n) time using n/log n processors on the exclusive-read exclusive-write (EREW) PRAM model, the weakest of the PRAM variants. The second contribution is more methodological in nature. Based on their experiences, the authors isolate four techniques (list ranking, the Euler tour technique, centroid decomposition, and ACD) as basic building blocks for con- structing optimal parallel algorithms for tree problems.

¹ Department of Computer Science, Brown University, Box 1910, Providence, RI 02912, USA.


In the final paper on parallel algorithms, "Parallel Construction of a Suffix Tree with Applications," Alberto Apostolico, Costas Iliopoulos, Gad Landau, Baruch Schieber, and Uzi Vishkin give a fast parallel construction of the suffix tree of a string. The suffix tree is a digital search tree storing all the suffixes of the string. For each 0 < ε ≤ 1, the algorithm runs in O((1/ε) log n) time and uses n processors and O(n^{1+ε}) space on a concurrent-read concurrent-write (CRCW) PRAM. Suffix trees were introduced originally in connection with file-transfer techniques, and they have since gained wide use in string-processing applications. The authors show how to extend their suffix-tree construction to get fast parallel algorithms for several string-related problems, such as online string matching, finding longest repeated substrings, and approximate string matching.
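As a small illustration of one such application (again my sketch, not the authors' CRCW algorithm), the longest repeated substring of a string can be read off a suffix trie, the uncompressed cousin of the suffix tree: any node with two or more children spells a substring that occurs at least twice, and the deepest such node spells the longest one. The naive construction below takes O(n^2) time and space.

    def build_suffix_trie(s):
        # Insert every suffix of s into a trie of nested dicts.
        s += "$"                                  # unique terminator
        root = {}
        for i in range(len(s)):
            node = root
            for ch in s[i:]:
                node = node.setdefault(ch, {})
        return root

    def longest_repeated(s):
        best = ""
        def walk(node, path):
            nonlocal best
            # A branching node spells a substring occurring at least twice.
            if len(node) >= 2 and len(path) > len(best):
                best = path
            for ch, child in node.items():
                if ch != "$":
                    walk(child, path + ch)
        walk(build_suffix_trie(s), "")
        return best

    print(longest_repeated("banana"))   # ana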

The next four papers deal with the design and analysis of protocols in distributed systems. In "Distributed Match-Making," Sape Mullender and Paul Vitányi systematically investigate the problem of how mobile processes in store-and-forward communication networks without central control can locate one another. This problem, generically labeled "distributed match-making," is central to applications such as name servers, mutual exclusion, and replicated data management. The authors derive an interesting worst-case tradeoff; in general terms, the more distributed an algorithm is, the more messages it must send. The authors give algorithms that meet the tradeoff for several network topologies and degrees of distributedness; for completely distributed algorithms, for example, the optimal bound on the average number of messages to match a pair of processes is 2√n.
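A concrete way to see where a √n-type bound can come from (my toy example; the paper treats the problem far more generally) is the classic grid rendezvous scheme: arrange the n nodes in a √n × √n grid, let a server advertise its address to every node in its row, and let a client query every node in its column. Row and column always intersect in exactly one node, at a cost of about 2√n messages per match.

    def post_set(server, side):
        # Nodes that store the server's address: the server's whole row.
        row = server // side
        return {row * side + c for c in range(side)}

    def query_set(client, side):
        # Nodes the client asks: the client's whole column.
        col = client % side
        return {r * side + col for r in range(side)}

    n, side = 16, 4                       # a 4 x 4 grid of 16 nodes
    server, client = 6, 13
    meeting = post_set(server, side) & query_set(client, side)
    print(meeting)                        # {5}: exactly one rendezvous node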

The fifth paper, "A Technique for Constructing Highly Available Services," by Rivka Ladin, Barbara Liskov, and Liuba Shrira, presents a practical data-replication technique for constructing a highly available fault-tolerant service for use in distributed systems. The method exploits the semantics of applications to improve performance. Each update and query operation involves only a single replica; the replicas propagate update information in background mode. The operations are ordered in an arbitrary way, but clients can control the order explicitly, if desired, via timestamps. Updates that are not explicitly ordered must satisfy certain semantic conditions. These conditions determine which applications can use the technique, and for many applications the conditions are met.
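The following sketch (names and details are mine, not the paper's actual design) conveys the basic shape of such a scheme: each operation touches a single replica, updates are tagged with timestamps, and replicas merge one another's logs lazily in the background, so that queries issued after gossip see the same timestamp-ordered history everywhere.

    class Replica:
        def __init__(self, rid):
            self.rid = rid
            self.clock = 0
            self.log = []                 # entries: ((clock, replica_id), update)

        def update(self, op):
            # A client talks to this replica only; no synchronous coordination.
            self.clock += 1
            self.log.append(((self.clock, self.rid), op))

        def gossip_from(self, other):
            # Background propagation: merge the other replica's log into ours.
            merged = {entry[0]: entry for entry in self.log + other.log}
            self.log = sorted(merged.values())
            self.clock = max([self.clock] + [stamp[0] for stamp, _ in self.log])

        def query(self):
            # Replay updates in timestamp order to derive the current state.
            return [op for _, op in sorted(self.log)]

    a, b = Replica("A"), Replica("B")
    a.update("add alice"); b.update("add bob")
    a.gossip_from(b); b.gossip_from(a)
    print(a.query() == b.query())         # True: the replicas converge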

In the sixth paper, "On the Analysis of Cooperation and Antagonism in Networks of Communicating Processes," Paris Kanellakis and Scott Smolka introduce an interesting algebraic technique for the static analysis of finite-state protocols. It uses the particular structure of the given network of finite-state processes (FSPs) to determine whether there is potential deadlock or livelock in the network. Only the "possibilities" relevant to deadlock or livelock are examined, not the exponentially large search space of FSP interactions. The "possibilities" operator is commutative and associative, which allows the composition of FSPs to be examined efficiently. The authors show for acyclic FSPs connected in a tree-like fashion that cooperative properties such as deadlock are NP-complete, and antagonistic properties such as livelock are PSPACE-complete. But if the FSPs are trees, the algebraic method yields an efficient polynomial-time solution.
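For contrast, here is the brute-force baseline (my sketch; the paper's contribution is precisely to avoid this product construction) for two FSPs that must synchronize on every action: explore the reachable states of the product and flag any non-final state with no enabled joint move.

    from collections import deque

    def deadlocks(p, q, start, finals):
        # p, q map each state to {action: next_state}; the two processes
        # synchronize: a joint step needs an action enabled in both.
        seen, frontier, found = {start}, deque([start]), []
        while frontier:
            s, t = frontier.popleft()
            moves = [(p[s][a], q[t][a]) for a in p[s] if a in q[t]]
            if not moves and (s, t) not in finals:
                found.append((s, t))      # reachable deadlock
            for nxt in moves:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return found

    # Each process insists on a different first action, so neither can move.
    p = {"s0": {"a": "s1"}, "s1": {}}
    q = {"t0": {"b": "t1"}, "t1": {}}
    print(deadlocks(p, q, ("s0", "t0"), finals={("s1", "t1")}))  # [('s0', 't0')]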


Last but certainly not least, Foto Afrati, Christos Papadimitriou, and George Papageorgiou take a unique approach to protocol design and verification for distributed systems in their paper "The Synthesis of Communication Protocols." Past approaches to the problem have typically consisted of designing complex (and sometimes incorrect!) protocols, and as a result subsequent verification has been computationally expensive. The authors illustrate this point by showing that the verification problem is PSPACE-complete. The alternative proposed by the authors is to generate a correct protocol automatically from the specifications, so that there is no need for verification. They give a polynomial-time algorithm for determining whether a given specification can be met by a system of FSPs, and if so the system can be constructed (though it may be exponentially large). The authors conclude with a discussion of optimization techniques.

As editor of this special double issue, I would like to conclude by thanking all the authors, both in this issue and in Part I, for their outstanding contributions. Thanks go as well to the many referees for their very helpful comments and suggestions.