

Cybersquare

The Emergence of Cellular Computing

Moshe Sipper
Swiss Federal Institute of Technology, Lausanne

The von Neumann architecture—which is based upon the principle of one complex processor that sequentially performs a single complex task at a given moment—has dominated computing technology for the past 50 years. Recently, however, researchers have begun exploring alternative computational systems based on entirely different principles.

Although emerging from disparate domains, the work behind these systems shares a common computational philosophy, which I call cellular computing. This philosophy promises to provide new means for doing computation more efficiently—in terms of speed, cost, power dissipation, information storage, and solution quality. Simultaneously, cellular computing offers the potential of addressing much larger problem instances than previously possible, at least for some application domains.

THREE PRINCIPLES

At its heart, cellular computing consists of three principles: simplicity, vast parallelism, and locality.

Simplicity

The basic processor used as the fundamental unit of cellular computing—the cell—is simple: Although a current, general-purpose processor can perform quite complicated tasks, a cell by itself can do very little. Formally, this notion can be captured by, say, the difference between a universal Turing machine and a finite-state machine. In practice, our field’s 50 years of experience in building computing machines gives us a good notion of what “simple” means. For example, an AND gate is simple while a Pentium processor is not.

Vast parallelism

Most parallel computers contain no more than a few dozen processors. In the parallel computing domain, the term massively parallel usually describes those few machines that consist of several thousand or, at most, tens of thousands of processors.

Cellular computing involves parallelism on a much larger scale, with the number of cells often measured by the exponential notation 10^x. To distinguish this huge number of processors from that involved in classical parallel computing, I use the term vast parallelism. This quantitative difference leads to novel qualitative properties, as Philip Anderson notes in an article on many-body physics whose title, “More Is Different,” befits cellular computing as well.1

Von Neumann’s complex-processor architecture is the ruling yin of computer technology, but cellular computing promises to be the new yang.

For a more detailed exploration of the differences between cellular and parallel computing, see the “Cellular versus Parallel Computing” sidebar.

Locality

Cellular computing is also distinguished by its local connectivity pattern between cells. All interactions take place on a purely local basis: A cell can only communicate with a few other cells, most or all of which are physically close by. Further, the connection lines usually carry only a small amount of information. One implication of this principle is that no one cell has a global view of the entire system—there is no central controller.

A different paradigm

Combining these three principles results in the definition cellular computing = simplicity + vast parallelism + locality. Because the three principles are highly interrelated, attaining vast parallelism, for example, is facilitated by the cells’ simplicity and local connectivity.

Changing any single term in the equation results in a different computational paradigm, as Figure 1’s “computing cube” shows. In it, each of the three properties has been placed along one axis. So, for example, foregoing the simplicity property results in the distributed computing paradigm. Cellular computing has been placed further along the parallelism axis to emphasize the “vastness” aspect.

The “Cellular Computing Examples” sidebar shows how cellular computing can be applied to six real-world computing tasks. To illustrate cellular computing’s key concepts, I refer to these examples throughout this article.

PROPERTIES OF CELLULAR MODELS

The properties of cellular computing models are highly flexible and can be tailored to specific tasks. Each characteristic that follows presents a choice from several alternatives; the ensemble of these choices results in a specific cellular model.

Cellular versus Parallel Computing

You could argue that the concept of cellular computing is not new at all, but is simply a synonym for parallel computing. Yet the two domains are quite disparate in terms of the models employed and issues studied. Parallel computing has traditionally dealt with a small number of powerful processors, addressing issues such as scheduling, concurrency, message passing, and synchronization. The only conceivable area of intersection between parallel and cellular computing concerns the few so-called “massively parallel machines” built and studied by parallel computing practitioners.1

We see that decades of parallel computing research have not produced the expected results—parallel machines are not ubiquitous, and most programmers continue to use sequential programming languages. I believe one reason for this lack of success is the domain’s ambitious goal, at least at the outset, of supplanting the serial computing paradigm. However, as parallel computing pioneer Michael J. Flynn recently remarked, “Human reasoning ... is basically a sequential process, although its implementation may be parallel.” He noted that “we significantly underestimated the difficulty in achieving the performance speedup expected from parallel processors.”2

Commenting on the reasons parallel computing has not entirely lived up to its promise, Flynn said: “It is difficult to find large and consistent degrees of parallelism within a single program, because partitioning is too difficult. It is difficult to find parallel algorithms or tools to decompose an existing algorithm into many concurrent tasks.” He concluded that “If parallel processors are going to solve this and other problems, the problems’ representations should be designed for these machines .... We need to represent problems in a cellular form ....”

Parallel computing teaches cellular computing practitioners that they should not aim for an all-encompassing, general-purpose paradigm that will supplant the sequential one. Rather, we should find those niches where such models can excel. Several clear proofs-of-concept already exist that demonstrate how cellular computing can efficiently solve difficult problems.

Consider a high-speed bullet train that arrives at its destination, only to allow its passengers to disembark through but a single port. This is clearly a waste of the train’s parallel exit system, which consists of multiple ports dispersed throughout the train. This metaphor, which I dub the slow bullet train, illustrates an important point about parallel systems: their potential (ill-) use in a highly sequential manner. Arthur W. Burks, who completed von Neumann’s work on self-reproducing cellular automata,3 notes: “Thus von Neumann’s cellular structure allows for an indefinite amount of parallelism. But in designing his self-reproducing automaton, von Neumann did not make much use of the potential parallelism of his cellular structure. Rather, his self-reproducing automaton works like a serial digital computer ....” This is perhaps the quintessential example of a slow bullet train: embedding a sequential universal Turing machine within the highly parallel cellular-automaton model.

For most cellular computing models, you can prove computation universality by embedding some form of serial universal machine. This proves that in theory the model is at least as powerful as any other universal system. However, in practice such a construction defeats the purpose of cellular computing by completely degenerating the parallelism aspect. Thus, on the whole, we want to avoid slowing the fast train.

References

1. W.D. Hillis, The Connection Machine, MIT Press, Cambridge, Mass., 1985.
2. M.J. Flynn, “Parallel Processors Were the Future ... and May Yet Be,” Computer, Dec. 1996, pp. 151-152.
3. J. von Neumann, Theory of Self-Reproducing Automata, edited and completed by A.W. Burks, University of Illinois Press, Urbana, 1966.


Cell type

Because the cell is the basic unit of computation, we must decide first the exact form it will take:

• Discrete cells take on a value from a discrete, finite range of possibilities. These values are often called states. The number of possible states may be small or large.
• Continuous cells take on an analog value from a given continuous range. The state in this case is analog.

Cell definition

The cell’s dynamic behavior over time is defined as a function of its neighbors’ values, to which it has access. Any of the following techniques may be used.

• Exhaustive enumeration lists the cell’s next state for each of the possible neighborhood configurations of states. This technique is usually used for discrete models with a small number of states per cell (see the sketch following this list).
• Parameterized function describes the cell’s next state as a function of its neighbors. You can distinguish between linear and nonlinear cell functions.
• Program computes the cell’s next state given the neighboring values.
• Behavioral rules specify the cell’s behavior in different situations, thus resulting in a specification of its dynamic behavior. These rules may be drawn from, for example, those of molecular biology or quantum physics.

Cell mobility

Cells may either be immobile or mobile. An immobile cell changes only in value. A mobile cell actually moves within a given environment. The self-replicating loops of the satisfiability problem example in the “Cellular Computing Examples” sidebar can be considered mobile at the loop level but immobile at the cellular-automaton level, where only their state values change in time.

Connectivity scheme

Because cells interact with their neighbors, a connectivity scheme must be specified:

• Regular grid. A regular grid involves an n-dimensional array (where n = 1, 2, 3 is used in practice) of a given geometry: rectangular, triangular, hexagonal, and so on. Regularity implies that all cells have the same connectivity pattern. In the cellular neural network example, the grid has a two-dimensional, rectangular geometry, with each cell connected to its eight immediate neighbors, whose state values it can access. When finite grids are involved, you must specify the boundary conditions. Usually, you either assign fixed boundary cells that do not change over time or consider the grid to be wrapped around itself in a torus-like fashion (see the sketch after this list).

Figure 1. A graphical representation of the equation simple + vastly parallel + local = cellular computing. The “computing cube” places one property along each axis: simple versus complex, serial versus parallel, and local versus global. Within it sit cellular computing, partially connected neural networks, fully connected neural networks, distributed computing, shared-memory parallel computing, the general-purpose serial architecture, and finite-state machines.


• Graph. This is essentially a nonregular grid that can take the form of a directed or undirected graph.

As with the notion of cell simplicity, we resort to arguments from practice for our definition. A fully connected grid, where each cell is connected to every other cell, also comprises a possible connectivity scheme, but one that falls outside the realm of cellular computing.
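To make the torus-like boundary choice concrete, here is a minimal sketch of wrap-around neighbor lookup on a two-dimensional rectangular grid, with each cell reading its eight immediate neighbors as in the cellular neural network example. The function name and sample grid are illustrative assumptions.

```python
# Toroidal (wrap-around) boundary conditions on a regular 2D grid;
# each cell sees its eight immediate neighbors.

def neighbors(grid, row, col):
    """Return the eight neighboring values of cell (row, col) on a torus."""
    rows, cols = len(grid), len(grid[0])
    return [grid[(row + dr) % rows][(col + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

grid = [[0, 1, 0],
        [1, 0, 1],
        [0, 1, 1]]
print(neighbors(grid, 0, 0))   # indices wrap to the opposite edges
```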

Cellular topology or underlying environment

The connectivity scheme applies to the cellular topology itself: The cells are what make up the grid or graph.

Alternatively, you can create an environment for the cells. In this case, the topology need not necessarily be rendered explicitly, as in graph form, but may be given implicitly via a physical or artificial environment that induces spatial contact through random or directed motion. In the DNA-computing example, the physics of molecular motion in a test tube provides an implicit environment, and thus there is no rigid topological structure.

Marco Dorigo and Luca Maria Gambardella2 provide an example of an explicit underlying environment. Simple mobile cells, called ants, explore in parallel a given graph to solve the traveling salesman problem. In this case, the mobile cells are the ants, and the graph represents not the connectivity of the cells themselves but rather their underlying environment.

Connection lines

As with cells, connection lines are simple, in the sense that the information content transmitted over them is small. A connection might involve, say, merely transmitting the state values of the neighboring cells.

Temporal dynamics

Cellular computing involves systems that change in time at both the cellular and system level. These temporal changes involve two separate choices:

• Synchrony versus asynchrony. Synchronous temporal dynamics describe cells that advance in discrete steps, all changing their values simultaneously. Asynchrony means that no such synchronous updating schedule exists. This is not necessarily a black-and-white choice. For example, you may have a melange of both strategies in which all cells within a block are updated simultaneously, while the blocks themselves are updated asynchronously (see the sketch following this list).
• Discrete versus continuous. Discrete time describes system behavior in terms of discrete temporal events, be they synchronous or asynchronous. Continuous time means that no such discrete division exists. The cell’s behavior may be specified in this latter case by differential operators.
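A minimal sketch of that mixed strategy, with an arbitrary toy rule and block size as illustrative assumptions: cells within a block read their neighbors and update together, while the blocks themselves are visited in random order.

```python
import random

def local_rule(left, center, right):
    return (left + center + right) % 2            # an arbitrary toy rule

def block_async_step(cells, block_size=4):
    """Visit blocks in random (asynchronous) order; within each block,
    all cells are updated simultaneously from the current values."""
    n = len(cells)
    blocks = list(range(0, n, block_size))
    random.shuffle(blocks)
    for start in blocks:
        updated = [local_rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
                   for i in range(start, min(start + block_size, n))]
        cells[start:start + block_size] = updated
    return cells

print(block_async_step([random.randint(0, 1) for _ in range(16)]))
```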

Uniformity

This property refers to the degree of regularity in cellular computing’s other properties.

• Cells. Are all cells of the same type? Are they identically defined so that they execute the same function? For example, my colleagues and I studied the pseudorandom numbers example with nonuniform cellular automata.3 In this model, different cells may execute different transition functions, as opposed to the original uniform model, in which all cells execute the same function. Recent work indicates that, for a certain class of cellular automata, nonuniformity presents definite computational advantages.4

Cellular Computing Examples

These six examples span a wide range of cellular-computing properties and illustrate the paradigm’s key concepts.

Generating pseudorandom numbers with cellular automata

Cellular automata may be the quintessential example of cellular computing, as well as the first to appear on the scene. Conceived in the late 1940s by Stanislaw Ulam and John von Neumann, cellular automata model a dynamic system in which space and time are discrete.1

A cellular automaton consists of an array of cells, each of which can be in one of a finite number of possible states. The cells are updated synchronously in discrete time steps, according to a local, identical interaction rule. The state of a cell at the next time step is determined by the current states of a surrounding neighborhood of cells. This transition is often specified in the form of a rule table, which delineates the cell’s next state for each possible neighborhood configuration. (The cellular array is n-dimensional, where n = 1, 2, 3 is used in practice.)

Since their inception, cellular automata have been used as a formal model for studying phenomena in several fields, including physics, biology, and computer science. In recent years, interest in using cellular automata as actual computing devices has grown.2 Pseudorandom number generation is one example of this problem type. Many applications require random numbers, yet finding good random-number generators is difficult. In the past decade, cellular automata have been used to generate random numbers, with their efficacy and viability demonstrated by applying standard tests of randomness.2

Cellular adder

In a recent work, Simon Benjamin and Neil Johnson3 presented a cellular automaton that can perform binary addition: Given two binary numbers encoded as the initial configuration of cellular states, the grid converges in time toward a final configuration that is their sum. This work suggests a possible wireless nanometer-scale realization of the adding cellular automaton that uses coupled quantum dots. As Benjamin and Johnson point out, the device is a nanometer-scale classical computer rather than a true quantum computer. Because it need not maintain wave function coherence, it is far less delicate than a quantum computer.



• Connectivity. Depending on how a cell connects to its neighbors, the connectivity scheme can be either a regular or nonregular grid.

Determinism versus nondeterminism

In a deterministic model, for any given input the system always goes through the same trajectory of states, ending with the same output. For a nondeterministic system, the same input may result in different trajectories, and possibly different outputs. Nondeterminism may be inherent to the system’s functional definition, as with probabilistic and fuzzy systems, or it may result from faults.

OPERATIONAL AND BEHAVIORAL ASPECTS

To understand how a cellular-computing system functions as a whole, you must first grasp seven operational and behavioral aspects of cellular computing.

Programming

Because cellular computing differs from the single-sequential-complex processor model, new programming techniques are required. These techniques can be divided broadly into two categories:

• Direct programming. Given a problem to solve, the programmer completely specifies the system at the outset. For example, in cellular automata and cellular neural networks this approach implies not only specifying the cell type, connectivity, and so forth, but also delineating the precise cellular function for each cell. Thus, when the system receives an input that is an instance of the current problem, it computes a correct output. Direct programming does not imply a completely handcrafted system, however, because synthesis tools can be used; rather, it means that the programmer derives a complete specification at the outset. As we shall see, direct programming becomes difficult when global problems are involved.

• Adaptive methods. For many problems, the programmer cannot fully specify the system at the outset. In these cases, the programmer only partially specifies the system, which is then subjected to an adaptive process—such as learning, evolution, or self-organization—to produce the desired functionality. For example, nonuniform cellular automata have been evolved using an evolutionary algorithm known as cellular programming.3 In this case, the programmer determines the basic structure—grid dimensionality, connectivity pattern, and so on—at the outset, then evolves the cellular-transition functions. The connectivity scheme can also be evolved.3 Both learning methods and evolutionary algorithms have been applied to cellular neural networks to find the cellular-function parameters needed to solve a given problem. Evolutionary algorithms have also been applied in DNA computing, for example, to find good encodings of nucleotide sequences.5 (A toy version of such an adaptive loop is sketched after this list.)


Solving the contour-extraction problem with cellular neural networks

We can regard a cellular neural network as a cellular automaton in which cellular states are analog rather than discrete, and the time dynamics are either discrete or continuous.4 The past decade’s study has resulted in a lore of both theory and practice, including potentially major application areas such as image processing. For example, in the contour-extraction problem, the network is presented with a gray-scale image, and it extracts contours that resemble edges (resulting from large changes in gray-level intensities). This operation, oft-used as a preprocessing stage in pattern recognition, is one example of the many image-processing problems solved by cellular neural networks.

Solving the directed Hamiltonian path problem by DNA computing

Programmers have considered using natural or artificial molecules as basic computational elements—cells—for some time. Leonard Adleman5 recently gave the decisive proof-of-concept by using molecular-biology tools to solve an instance of the directed Hamiltonian path problem: Given an arbitrary directed graph, you must find whether there exists a path between two given vertices that passes through each vertex exactly once.

Adleman used oligonucleotides, short chains of usually up to 20 nucleotides, to encode vertices and edges of a given graph. Next, he placed multiple copies of the oligonucleotides in a real test tube; these oligonucleotides then randomly linked with each other, forming molecules that represent paths through the graph. Adleman then applied molecular-biology procedures to sift through the plethora of candidate DNA-molecule solutions, thus solving the problem—that is, finding out whether a Hamiltonian path exists or not.

The extremely small cell size in this form of cellular computing gives rise to vast parallelism on an entirely new scale. Adleman estimated that such molecular computers could be profoundly faster, more energy efficient, and able to store much more information than current-day supercomputers, at least with respect to certain problem classes.


The cellular properties described earlier relate to characteristics of the basic system and can be considered as the first-order dynamics. Programming, especially via adaptive methods, can be considered a form of high-order dynamics, where some of these basic properties change in time.

Local versus global problems

A local problem involves computing a property that can be expressed in purely local terms, as a function of the local cellular neighborhood. Such local operators abound in the domain of low-level image processing, for example, where we define various filters and noise-reducing operations in terms of the cell’s nearest neighbors.

I define a global problem in this context as one that involves computation of a nonlocal property.

The difference between global and local problems is not a hard distinction but a matter of degree. Some properties may be only slightly nonlocal, requiring but a small extension beyond the cell’s local neighborhood, while other properties may be highly global, in the sense that the entire ensemble of cells must be considered to compute a correct output. Although this distinction between global and local problems is by no means formal, it does provide an idea of the different problem classes that exist. The majority example typifies a global problem in which, to compute a correct output, you must consider the entire grid.

Finding local interaction rules to solve global problems poses a considerable challenge for the system designer. Especially when addressing global problems, it can be difficult to design highly local systems to exhibit a specific behavior. Such problems are prime candidates for adaptive programming methods.

The distinction between local and global can be quite subtle. For example, in the majority example you might consider the naive solution, whereby every cell computes the local majority of its neighboring cells. This does not, however, solve the problem at all, since it does not result in computation of the global majority over the entire grid. You can easily solve the majority problem using, for example, a fully connected neural network: A single neuron connected to all input bits, with its threshold properly set, comprises a solution. Thus, in this case, full connectivity is a form of global information.
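The contrast is easy to state in code. In the minimal sketch below (the grid and names are illustrative), a single threshold neuron connected to all input bits computes the global majority, while the naive local rule merely smooths neighborhoods.

```python
def local_majority_step(cells):
    """The naive local rule: each cell adopts the majority of its
    three-cell neighborhood. This does NOT compute global majority."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def threshold_neuron(cells):
    """One neuron connected to all bits, threshold n/2: full
    connectivity acts as a form of global information."""
    return 1 if sum(cells) > len(cells) / 2 else 0

cells = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0]    # six 1s versus five 0s
print(threshold_neuron(cells))               # 1: the global majority
print(local_majority_step(cells))            # only local smoothing
```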

The issue of local versus global problems relates to emergent computation, the appearance of global information-processing capabilities that are neither explicitly represented in the system’s elementary components nor in their local interconnections. Currently, emergent computation is ill-defined, with a rigorous, accepted definition still lacking.

Input/output

The I/O question involves two issues. First, as with any problem-solving paradigm, you must specify the set of legal inputs and the desired outputs for the problem at hand. Second, with regard to implementation, you must consider the representation of inputs and outputs in the cellular model, how inputs are presented to the system, and how the output is read out. For example, in the cellular automata solution to the majority example, the input is considered to be the initial configuration of states, and the output is read off the final configuration, in a particular manner.3 In the cellular adder example, some researchers6 present the cellular automaton’s possible implementation as a nanometer-scale device, specifically addressing the I/O issue.

You could argue that DNA molecules violate the cell-simplicity principle. However, while such molecules may exhibit complex behavior from the biologist’s viewpoint, they can be treated as simple elements from the computational point of view. As Richard Lipton notes, “Our model of how DNA behaves is simple and idealized. It ignores many complex known effects but is an excellent first-order approximation.”6 In DNA computing we typically treat the basic cell, or DNA molecule, as a simple elemental unit on which a few basic operations can be performed in the test tube.6 This parallels several other instances in computer science where irrelevant low-level details are abstracted away. For example, we usually regard the transistor as a simple switch, considering immaterial the complex atomic and subatomic physical phenomena that underlie its operation.

Solving the satisfiability problem with self-replicating loops

Since the publication of von Neumann’s seminal work in the late 1940s,7 the study of artificial self-replicating structures has produced a plethora of results. Although much of this research has concentrated on theoretical issues, recent studies have raised the possibility of using such self-replicating machines to perform computations.8

Hui-Hsien Chou and James Reggia’s work,9 for example, shows that self-replicating structures can be used to solve the NP-complete problem known as satisfiability. Given a Boolean predicate like

(x1 ∨ x2 ∨ ¬x3) ∧ (x1 ∨ ¬x2 ∨ ¬x3),

you must find the assignment of Boolean values to the binary variables x1, x2, and x3 that satisfies the predicate by making it evaluate to True, if such an assignment exists. Chou and Reggia embedded within a two-dimensional, cellular-automaton “universe” simple self-replicating structures in the form of loops. Each loop represents one possible satisfiability solution to the problem; when the loop replicates it gives rise to “daughter” loops, representing different candidate solutions, which go on to self-replicate in their turn. Under a form of artificial selection, replicants representing promising solutions proliferate while those representing failed solutions are lost.
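A loose sketch of this replicate-and-select idea, abstracted away from the cellular-automaton mechanics: candidate assignments stand in for loops, each replication introduces variation, and the example’s predicate acts as the selection criterion. The representation and loop structure are illustrative assumptions, not the actual mechanism of the cited work.

```python
import random

def predicate(x1, x2, x3):
    # The example's Boolean predicate.
    return (x1 or x2 or not x3) and (x1 or not x2 or not x3)

population = {tuple(random.randint(0, 1) for _ in range(3))}  # one initial loop
solution = None
while solution is None:
    daughters = set()
    for cand in population:
        j = random.randrange(3)          # replication with variation:
        daughters.add(tuple(b ^ (i == j) for i, b in enumerate(cand)))
    population |= daughters              # daughters join the universe
    satisfiers = [c for c in population if predicate(*c)]
    if satisfiers:                       # selection: keep a satisfying
        solution = satisfiers[0]         # replicant once one emerges
print('satisfying assignment (x1, x2, x3):', solution)
```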

This work can be considered a form of DNA computing in a cellular automaton, using self-replicating loops in a vastly parallel fashion. A molecular implementation of this approach might be had by using recently created synthetic self-replicators. Lipton6 presented a DNA-computing solution to the satisfiability problem that’s similar to Adleman’s method, noting that “biological computations could potentially have vastly more parallelism than conventional ones.”



Implementation

While many experiments with cellular-computing systems are done via software simulation, a primary goal is to construct actual machines based on these novel principles. Only then will the full power of the paradigm be realized.

One major implementation advantage involves the low degree of connectivity. As noted by Danny Hillis, “As switching components become smaller and less expensive, we begin to notice that most of our costs are in wires, most of our space is filled with wires, and most of our time is spent transmitting from one end of the wire to the other .... Most of the wires must be short. There is no room for anything else.”7 Another aspect that potentially facilitates implementation is the cells’ simplicity.

To date, several novel implementations of “cellware” have been demonstrated, including

• cellular automata implemented as special-purpose digital hardware;
• evolving, nonuniform cellular automata implemented using configurable processors;
• the cellular neural network analog chip;
• molecular computing carried out in actual test tubes; and
• quantum-dot cellular automata, where logic states are encoded not as voltages, as with conventional digital architectures, but rather by the positions of individual electrons.

In cellular computing, implementation issues are often entwined with those concerning the underlying model. That is, you don’t design a theoretical model and then consider its implementation; rather, you must address these two issues simultaneously. Choosing the basic properties of the model has immediate consequences on implementation, and vice versa—often you start with implementation constraints that have implications where the underlying model is concerned. Indeed, this interplay between model properties and system aspects is characteristic of cellular computing in general.

Scalability

Cellular computing offers a paradigm potentially more scalable than classical models, thanks to local connectivity, the absence of a central processor that must communicate with every single cell, and the simplicity of the cells themselves. Thus, prima facie, adding cells should not pose a major problem. In reality, however, this issue is not trivial at either the model or implementation level. As I note elsewhere,3 simple scaling, which involves a straightforward augmentation in resources such as cells and connections, does not necessarily bring about task scaling, defined as the maintenance of at least the same performance level for the problem at hand.

Robustness

A system’s robustness derives from its ability to function in the face of faults. When cells function incorrectly, communication links fail, or some other mishap occurs, we want the system to continue functioning, or at least exhibit graceful degradation.

The attributes of cellular systems give rise, in many cases, to increased resilience. Local connectivity allows for easier fault containment by confining the fault to one region, preventing its easy spread throughout the system, while the vast number of extant cells enables a large part of the system to remain operational.


Solving the majority problem with cellular automata

In this problem, a binary-state cellular automaton should classify an arbitrary initial configuration of states (the input) by whether there is a majority of 0s or 1s. Since majority is a global configuration property (the 1s can be distributed throughout the entire grid) whereas the cellular automaton is highly local (that is, no single cell can directly compute the answer by itself), the problem is not trivial. Several researchers have studied it over the past decade, applying evolutionary computation techniques to evolve solutions.2

References

1. J. von Neumann, Theory of Self-Reproducing Automata, edited and completed by A.W. Burks, University of Illinois Press, Urbana, 1966.
2. M. Sipper, Evolution of Parallel Cellular Machines: The Cellular Programming Approach, Springer-Verlag, Heidelberg, 1997.
3. S.C. Benjamin and N.F. Johnson, “A Possible Nanometer-Scale Computing Device Based on an Adding Cellular Automaton,” Applied Physics Letters, 1997, pp. 2,321-2,323.
4. L.O. Chua and T. Roska, “The CNN Paradigm,” IEEE Trans. Circuits and Systems, Mar. 1993, pp. 147-156.
5. L.M. Adleman, “Molecular Computation of Solutions to Combinatorial Problems,” Science, 1994, pp. 1,021-1,024.
6. R.J. Lipton, “DNA Solution of Hard Computational Problems,” Science, 1995, pp. 542-545.
7. M. Sipper et al., eds., special issue on Artificial Self-Replication, Artificial Life, 1998.
8. M. Sipper et al., “A Phylogenetic, Ontogenetic, and Epigenetic View of Bio-Inspired Hardware Systems,” IEEE Trans. Evolutionary Computation, 1997, pp. 83-97.
9. H.-H. Chou and J.A. Reggia, “Problem Solving During Artificial Selection of Self-Replicating Loops,” Physica D, 1998, pp. 293-312.


Hierarchy

Hierarchical decomposition is ubiquitous in both natural and artificial systems.

In computer science there is the programming hierarchy: high-level language, intermediate code, assembly language, machine language, and the transistor level. Such a hierarchy facilitates the end-user’s task enormously.

In nature we find organizational hierarchies, such as the molecule-cell-organ infrastructure, as well as processing hierarchies, such as the human visual system, which begins with low-level image processing in the retina and ends with high-level operations—such as face recognition—performed in the cortical regions.

Such hierarchical forms also have their place in cellular computing, being either fixed at the outset or emerging through adaptive programming. As an example, consider self-replicating loops, where we can distinguish between two levels: the lower, cellular-automaton level and the higher, self-replicating loops level. These levels can be considered separately, and you can even imagine a different low-level implementation using synthetic molecules. Thus, we can consider the two levels a programming hierarchy. Furthermore, the levels in this example exhibit different behaviors: The cellular-automaton level consists of immobile cells that operate in synchronous mode, while the loop level consists of mobile cells—the loops—that can be considered to operate asynchronously.

Cellular computing has attracted increasing research interest recently. Work in this field has produced exciting results that hold prospects for a bright future. Yet several questions must be answered before cellular computing can become a mainstream paradigm. What classes of computational tasks are most suited to it? Can these be formally defined, or informal guidelines established? How do we match the specific properties and behaviors of a given model to a suitable class of problems?

Once we derive a baseline from the answers to these questions, we can extend cellular computing’s programming methodologies and possibly introduce novel ones. Evolution, learning, and self-organization have been shown viable. More recently, the use of ontogeny—the developmental process of a multicellular organism—has also been explored.8

What specific application areas invite a cellular-computing approach? Research has raised several possibilities:

• Image processing. Applying cellular computers to perform image-processing tasks arises as a natural consequence of their architecture. For example, in a two-dimensional grid, a cell (or group of cells) can correspond to an image pixel, with the machine’s dynamics designed to perform a desired image-processing task. Research has shown that cellular image processors can attain high performance and exhibit fast operation times for several problems.

• Fast solutions to NP-complete problems. Even if only a few such problems can be dealt with, doing so may still prove highly worthwhile. NP-completeness implies that a large number of hard problems can be efficiently solved, given an efficient solution to any one of them. The list of NP-complete problems includes hundreds of cases from several domains, such as graph theory, network design, logic, program optimization, and scheduling, to mention but a few.

• Generating long sequences of high-quality random numbers. This capability is of prime import in domains such as computational physics and computational chemistry. Cellular computers may prove a good solution to this problem.

• Nanoscale calculating machines. Cellular computing’s ability to perform arithmetic operations raises the possibility of implementing rapid calculating machines on an incredibly small scale. These devices could exceed current models’ speed and memory capacity by many orders of magnitude.

• Novel implementation platforms. Such platforms include reconfigurable digital and analog processors, molecular devices, and nanomachines.

For cellular computing to become viable, it must prove its scalability. This issue is still in doubt, as Lipton noted when he observed that Adleman “solved the HPP [Hamiltonian Path Problem] with brute force: He designed a biological system that ‘tries’ all possible tours of the given cities.”9 Yet even if you use 10^23 parallel computers, Lipton warns, you “cannot try all tours for a problem with 100 cities. The brute force algorithm is simply too inefficient.”10

Scalability also relates to the hierarchy question. For example, when either artificial or natural systems are scaled up by orders of magnitude, a hierarchical structure is usually imposed, be it programming, organizational, processing, or some other form.

Lipton10 addresses the issues of implementation and robustness as well, citing the errors generated by less-than-perfect operation as a possible barrier to building DNA computers. This imperfection motivated Russell Deaton and colleagues5 to apply evolutionary techniques that search for better DNA encodings, thus reducing the errors during DNA computation.



In my own work,3 I’ve studied the effects of random faults on the behavior of some evolved cellular automata, showing that they exhibit graceful degradation in performance and can tolerate a certain level of faults.

Many natural systems exhibit striking problem-solving capacities, while adhering to the principles of cellular computing. There may be a two-way street: We can seek inspiration in natural processes, creating novel cellular methodologies; at the same time, research into cellular computing, though mainly an engineering activity, may increase our understanding of computation in nature. Indeed, as we work to shape this new paradigm, it is encouraging to consider that nature is cellular computing’s ultimate proof-of-concept. ❖

Acknowledgments

I thank Mathieu Capcarrère, Daniel Mange, Eduardo Sanchez, Marco Tomassini, and the anonymous reviewers for their many helpful remarks and suggestions. This work was supported in part by grant 2000-049349.96 from the Swiss National Science Foundation.

References

1. P.W. Anderson, “More Is Different,” Science, 1972, pp. 393-396.
2. M. Dorigo and L.M. Gambardella, “Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem,” IEEE Trans. Evolutionary Computation, 1997, pp. 53-66.
3. M. Sipper, Evolution of Parallel Cellular Machines: The Cellular Programming Approach, Springer-Verlag, Heidelberg, 1997.
4. M. Sipper, “Computing with Cellular Automata: Three Cases for Nonuniformity,” Physical Review E, 1998, pp. 3,589-3,592.
5. R. Deaton et al., “A DNA Based Implementation of an Evolutionary Search for Good Encodings for DNA Computation,” Proc. 1997 IEEE Int’l Conf. Evolutionary Computation (ICEC 97), IEEE Press, Piscataway, N.J., 1997, pp. 267-271.
6. S.C. Benjamin and N.F. Johnson, “A Possible Nanometer-Scale Computing Device Based on an Adding Cellular Automaton,” Applied Physics Letters, 1997, pp. 2,321-2,323.
7. W.D. Hillis, The Connection Machine, MIT Press, Cambridge, Mass., 1985.
8. M. Sipper et al., “A Phylogenetic, Ontogenetic, and Epigenetic View of Bio-Inspired Hardware Systems,” IEEE Trans. Evolutionary Computation, 1997, pp. 83-97.
9. L.M. Adleman, “Molecular Computation of Solutions to Combinatorial Problems,” Science, 1994, pp. 1,021-1,024.
10. R.J. Lipton, “DNA Solution of Hard Computational Problems,” Science, 1995, pp. 542-545.

Moshe Sipper is a senior researcher in the Logic Systems Laboratory at the Swiss Federal Institute of Technology, Lausanne. His chief interests involve the application of biological principles to artificial systems, including evolutionary computation, cellular computing, bio-inspired systems, evolving hardware, complex adaptive systems, artificial life, and neural networks. Sipper has published close to 70 papers in these areas and the book Evolution of Parallel Cellular Machines: The Cellular Programming Approach (Springer-Verlag, 1997). Sipper received a BA in computer science from the Technion-Israel Institute of Technology and an MSc and a PhD in computer science from Tel Aviv University.

Contact Sipper at [email protected].
