
Automatica, Vol. 5, pp. 41-49. Pergamon Press, 1969. Printed in Great Britain.

The DDA Integrator as the Iterative Module of a Variable Structure Process Control Computer*

J. HATVANY†

Trends in computer organization, control technology and large-scale integration point towards highly reliable process control computers with a fixed GP part and a variable structure part composed of an array of DDA integrators.

(Although this paper does not emphasize automatic control or computer control, it has been included in this automatic control journal because it discusses the organization and control of computers and their use in control systems, particularly in process control. Furthermore, it includes an interesting survey of the subject, and, as indicated in the previous article by B. Gaines and R. Shemer, this subject played a significant role at the IFAC Budapest Symposium where it was presented.--editor)

Summary--The present trends in computer organization, control technology and large-scale integration are evaluated to deduce the features of a new computer for process control purposes, and to propose an actual design.

The proposed computer will have a variable structure organization, with a fixed part, a GP computer, and a variable part composed of an array of DDA integrators, acting as processor elements. The latter can well be implemented as LSI circuits, to form iterative modules. The topology of the computer is of the partially interconnectible, decentralized type.

The machine lends itself particularly well to the application of advanced reliability strategies. The problem domain which can be solved is relatively broad, and although the proposed organization does not comply strictly with the criteria of Shannon or Holland, its practical facilities for control applications are not impaired, while implementation and programming are rendered easier. An appropriate autocode for programming the computer is based on the differential calculus, and it would be particularly suitable for the application of Newell's problem-solving techniques.

1. INTRODUCTION

CONTROL engineering has, like many others, swiftly become a computer-based technique. And the users, who have been eagerly waiting for neat and settled product patterns to evolve in the computer industry, have always tried to persuade themselves that this was actually taking place and that the momentary labels on their pigeonholes were there to stay. Their wishful thinking has, however, continued to receive rude jolts, such as the break-through of symbolic programming, the advent of the second and third generation of computer hardware, of the $10k computer class,

* This paper is a revised version of the one presented at the IFAC Symposium held in Budapest, Hungary in April 1968; received 24 May 1968 and in revised form 13 August 1968. Recommended for possible publication by associate editor W. Nelson.

† Research Institute for Automation of the Hungarian Academy of Sciences, Budapest.

of time-sharing and the on-line console. And now all the old issues have been stirred up again, more fundamentally than ever before, and the new ones of Large Scale Integration are once more forcing a radical rethinking of the whole field. At the 1967 Spring Computer Conference in the USA, debate at the session on the best approach to large computing capability "exceeded the propriety that should be accorded to public dialogue on controversial technical questions" [1]. The designers of the new super machines themselves qualify preliminary specifications with remarks like: "We may find we have to back off from that a little bit", or else predict that "The ultimate data rate will be well beyond the design goal" [2].

The issues that have again become bones of contention are: special purpose versus general purpose, sequential versus parallel structures, conventional versus DDA or hybrid processors, and very large time-shared computers versus a proliferation of small decentralized ones. The first of these, GP versus specialized, is perhaps the most interesting--the availability of small, relatively fast and, by previous standards, very cheap computers would appear to indicate that the philosophy of using mass-produced general purpose computers everywhere, and letting only the software be specialized, would triumph. Yet the signs are clearly discernible that this is not so. The cheap GP computers are, it seems, creating a big new market, and that market is big enough to allow even specialized hardware to be mass-produced--also, with large-scale integration and the computer-aided synthesis and assembly of equipment, the very concept of mass production has changed. And for the larger units the "multi-special-purpose computer" [3] is the new solution.

It may thus be seen that no universal panacea is emerging. On the contrary, there is a continually increasing differentiation according to the demand each product is intended to satisfy and the techniques which it uses. The particular new requirements, to whose new means of satisfaction this paper is a contribution, are those posed by modern process control techniques and by large-scale integration. The new process control approach, using feed-forward and on-line optimalizing methods, requires the efficient, highly reliable, real-time generation not of primitive transfer functions, but of solutions to the entire systems of differential equations that describe the process under control [4]. The arithmetic-type processors and conventional sequential structures of present-day GP computers are, as the reference also notes, particularly ill-suited to this task. Large-scale integration, the other new feature, requires that the computer be made up of a minimum variety of LSI circuits and that the structure of each circuit should need the minimum number of external connections [5]. These, then, are the main criteria to be met by an LSI process control computer.

2. TRENDS AND DEDUCTIONS

One of the main trends in computer architecture today is towards parallel, and indeed highly parallel, processing. A recent survey [6] cites upwards of twenty different projects in hand, and a meeting of the most outstanding American computer designers convened to discuss computer organization [7] resulted in the submission of eight papers, of which five were concerned with parallel processor structures. The trend towards multi-processor, parallel computers is unmistakable. The reasons are twofold. The first--and herein lies the justification for the very large parallel systems being developed--is that the rate of computing can, with component circuits of a given speed, best be increased by abandoning the step-by-step, sequential operations of the traditional mode in favour of performing several algorithmic steps at once. Of course, in principle it is immaterial whether several types of operation are being performed simultaneously or whether the same operation is performed at once on several different sets of data. The speed requirements which these computers are to satisfy often arise, moreover, from problems in meteorological forecasting, nuclear reactor simulation, the inversion of large matrices, etc., which possess an intrinsic parallelism in their structure. The second reason for the trend towards multi-processors is one of hardware, namely that parallel structures are necessarily made up of iterative modules. These may, of course, in extreme cases vary from a set of four or five gates in UNGER'S pattern-recognition computer [6] to a complete sub-computer with a 2k core memory in Illiac 4 [2], but either case offers the obvious advantage of being able to utilize the same circuitry a considerable number of times.

In a control computer, it is evidently the latter feature that will be the more important. HOLLAND, in a restatement of his iterative circuit philosophy [8], has particularly stressed the advantages of using identical modules in a large-scale integration context: set-up costs can be spread over a large number of identical units; production, inventory and repair can all be centred on a single integrated component; short leads and simple inter-unit connection procedures can be used. From the operating point of view, the most important feature for a control environment is that, "Because all units are identical and because programs can be executed simultaneously, diagnostic procedures can continually sample modules, with a negligible loss in efficiency--if faulty or failing modules are located, it is possible to program around them until replacement".

Another persistently advancing trend, closely linked to the nature of LSI fabrication techniques with their requirement for the minimum number of external connections, is that of using trains of pulses to convey information from one unit of a computer to the next. This means of internal information exchange offers as its main advantages a minimum number of leads and soldered connections, and a very high imperviousness to noise. The latter property can be particularly enhanced by letting the pulse-trains represent quantities by stochastic parameters [9-11]. The trade-off in speed and accuracy required in exchange for the high noise tolerance and LSI compatibility of pulse-train techniques--whether based on average or on absolute pulse rates and counts--is usually well within the acceptable limits for process control applications.
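As an illustration of the stochastic representation cited from Refs. [9-11], a quantity in [0, 1] can be encoded as the probability that a pulse appears in a given clock interval, whereupon multiplication reduces to a pulse-wise AND of two independent trains. The following minimal Python sketch (ours; all names hypothetical) shows the principle and why it tolerates noise: corrupting a single pulse shifts the decoded value by only 1/n.

import random

def to_pulses(value, n):
    # Encode a value in [0, 1] as a stochastic pulse train of length n:
    # each clock interval carries a pulse with probability equal to the value.
    return [1 if random.random() < value else 0 for _ in range(n)]

def from_pulses(pulses):
    # Decode by measuring the average pulse rate.
    return sum(pulses) / len(pulses)

def multiply(a_pulses, b_pulses):
    # For independent stochastic trains, a pulse-wise AND multiplies the values.
    return [a & b for a, b in zip(a_pulses, b_pulses)]

random.seed(1)
n = 10000
product = multiply(to_pulses(0.6, n), to_pulses(0.5, n))
print(from_pulses(product))   # approximately 0.30

The accuracy of the decoded value improves only as the square root of the train length, which is precisely the speed/accuracy trade-off referred to above.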

It may thus be seen that the two major trends of parallel structure and pulse-train information conveyance are not only closely linked to the new hardware environment of LSI, but also offer particular advantages in control applications. In one feature, however, the requirements formulated at the end of the previous section differ to some extent from those at which the main stream of the present trend is directed. This point of divergence is that whereas the large parallel processor systems are--as has been pointed out above--mainly aimed at increasing the speed of computation in general, the need in process control is for high efficiency in one field of computation in particular--the continuous solution of systems of differential equations. This involves primarily the process of integration.

The most efficient digital means of integration is the use of Digital Differential Analyzer techniques. Structurally, the DDA has developed from a totally interconnectible, distributed topology [12] of the type shown in Fig. 1 and exemplified by the TRICE class [13], towards highly centralized topologies, Fig. 2, such as the CORSAIR [14].

Fig. 1. Totally interconnectible distributed topology.

Fig. 2. Centralized topology.

This has, in fact, meant that the DDA has gradually shifted from being an analogue-type computer with operational units linked to perform a given task, towards resembling the conventional von NEUMANN structure of a digital computer, differing only in its special arithmetic hardware and the unusual organization of its operative store. While this trend has offered the advantages of more conventional design techniques and hardware, leading to easier manufacture and maintenance by conventionally oriented groups, it has also involved a loss of the advantages of parallelism offered by the previous distributed topology. The resultant drawbacks have, however, been to some extent compensated by the greater speed of the circuits used, and also by the fact that the quasi-conventionally structured DDA has made it rather easy to implement its control--the programming, interconnections, scaling, I/O etc.--by means of a fully conventional computer unit that can share much of the hardware of the DDA. These features have been clearly evident in the military DDA-type computers of recent years.

An entirely new situation in this respect has been created by the appearance on the market of LSI circuits enabling complete DDA integrators to be composed of two, or at the most three, flatpacks--the terminals of one commercially available DDA adder element, which together with a dual shift register in one other flatpack constitutes a complete integrator [15], are identified in Fig. 3. Obvious advantage has here been taken of the fact that it is the DDA integrator which, of all conceivable digital units, possesses the highest concentration of processing power per number of external connections. It may in all certainty be predicted that this new development will lead to a reversion, at a higher level, to the distributed, interconnectible processor approach of an earlier stage, allowing all the speed and reliability qualities of parallel DDA processing now to be combined with the cited advantages of iterative circuit hardware.

Fig. 3. DDA adder element.
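To make the integrator's operation concrete, the following Python sketch models the classical incremental DDA integrator that such an adder element and dual shift register implement: a Y register accumulates the dy increments, and on each dx pulse Y is added into a remainder register R, whose overflow or underflow is emitted as the output increment dz. This is a minimal rectangular-rule model of the general technique, not a description of the particular circuit of Ref. [15].

class DDAIntegrator:
    # Incremental (rectangular-rule) DDA integrator with an n-bit R register.

    def __init__(self, n_bits, y0=0):
        self.modulus = 1 << n_bits   # capacity of the remainder register
        self.y = y0                  # Y register: current value of the integrand
        self.r = 0                   # R register: running remainder

    def step(self, dy, dx):
        # dy and dx are incremental inputs, each -1, 0 or +1 per machine cycle.
        self.y += dy
        if dx == 0:
            return 0
        # Add Y into R; the carry out of R is the output increment dz, so the
        # stream of dz pulses approximates the integral of y with respect to x.
        self.r += self.y * dx
        if self.r >= self.modulus:
            self.r -= self.modulus
            return 1
        if self.r < 0:
            self.r += self.modulus
            return -1
        return 0

With an n-bit register the dz rate is y/2^n pulses per dx pulse, so the scaling is fixed by the register length; Section 5 returns to how such elements are chained to solve whole equations.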

A synthesis of the trends that have been considered allows the following tentative conclusions to be proposed with respect to the next generation of process control computers:

a. The computer will have a parallel organization of iterative modules, to give it higher speed, easier manufacture, better maintainability and higher operating reliability.

b. The computer will, where possible, use pulse-trains for internal information interchange, because of the fewer leads and interconnections, the higher reliability and the better diagnostic facilities* they afford.

c. The computer will utilize DDA techniques because of their greater processing efficiency, particularly with respect to systems of differential equations.

An attempt will now be made to outline some design considerations for such a computer.

3. DESIGN CONSIDERATIONS

Since DDA techniques depend for their programming mainly on the interconnections made between the individual integrators, the parallel structure adopted will have to differ considerably from that used in the case of processor elements whose mathematical operations are under central instruction control, with the interconnection paths between processors influencing the program sequence rather than the operation performed, e.g. in the SOLOMON computer [16]. The structure need not, moreover, aim at achieving either maximum speed or maximum programming facility at the expense of exorbitant hardware costs, as is the case with the highly parallel, globally controlled machines proposed primarily to increase computing speeds [17].

* This aspect is treated in a paper by P. Kardos, which was also presented at the IFAC Symposium in Budapest, Hungary.


Nor, on the other hand, does the classical form of HOLLAND'S fully distributed control network [18], or its extension to three-dimensional [19] or N-cube [20] structures, appear appropriate, for the DDA integrator is ill-suited to decision making, and the building of the paths which actually determine the operations being executed and the functions being generated requires a considerable amount of checking, involving arithmetical and logical computations, to ensure stability, correct scaling, initial conditions, etc.

The solution should thus be sought in arrangements that combine the advantages of global and distributed control, somewhat along the lines of ESTRIN'S "restructurable" or "variable structure" computer [21]. The distinguishing feature of this mode of organization is that the computer actually consists of two main parts: a fixed part, which is a general purpose computer, and a variable part comprised of special-purpose digital substructures. In the present context this latter part would be a planar configuration such as that of Fig. 4, or possibly a higher-order one, of DDA integrator processing elements, restructurable under the control of the fixed part. This principle is thus an intermediate, compromise solution between total global control on the one hand and fully distributed control on the other.

Fig. 4. Two-dimensional configuration.

A possible structure of the processing element in the variable part of the computer could be of the kind shown in Fig. 5. The left-hand side of the diagram contains the computing parts of the element, which in the present state-of-the-art could be implemented in three LSI flat-packs. The function of the internal switching matrix is to interconnect these computing parts to yield the particular mode of operation required. Some of the modes, achievable simply by changing interconnections, in which the units shown could be operated are: integrator with biased or unbiased remainder; two independent reversible counters; two independent serial adders; "soft" or "hard" servo; scaler; two independent storage units. The function of the external switching matrix is to interconnect complete processing elements, to form the several special-purpose computing structures of the variable part of the computer. The current instructions for both the external and the internal switching matrix are held in the "address register", which receives them from the fixed GP computer part. A minimal model of such mode switching is sketched below.

Fig. 5. Contents of processing element (Y register, DDA adder, R register, servo adder, address register, internal switching matrix, external switching matrix).
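As a purely illustrative model of the internal switching (the mode list is from the text above; the interface and names are hypothetical), the same pair of registers and adders can be routed into different modes under control of the address register:

from enum import Enum

class Mode(Enum):
    INTEGRATOR = 1           # biased or unbiased remainder
    REVERSIBLE_COUNTERS = 2  # two independent up/down counters
    SERIAL_ADDERS = 3        # two independent serial adders
    SERVO = 4                # "soft" or "hard" servo
    SCALER = 5
    STORAGE = 6              # two independent storage units

class ProcessingElement:
    def __init__(self):
        self.y = 0        # Y register
        self.r = 0        # R register
        self.mode = None  # internal switching instruction, held in the
                          # address register and set by the fixed GP part

    def step(self, a=0, b=0):
        # One machine cycle with incremental inputs a and b (-1, 0 or +1).
        if self.mode is Mode.REVERSIBLE_COUNTERS:
            self.y += a           # counter 1
            self.r += b           # counter 2
        elif self.mode is Mode.INTEGRATOR:
            self.y += a           # dy input
            self.r += self.y * b  # dx input; the overflow of R yields dz
        # ... the remaining modes are routed analogously
        return self.y, self.r

pe = ProcessingElement()
pe.mode = Mode.REVERSIBLE_COUNTERS   # reconfigured without new hardware
print(pe.step(a=1, b=-1))            # (1, -1)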

The implementation of the possible internal switching arrangements will be evident from Figs. 3 and 5 and Ref. [15], and the number of interconnections and possible patterns is low enough to be easily achieved through LSI techniques. With regard to external switching, the usual restriction applied to iterative circuit systems--of being able to communicate directly only with the four immediately neighbouring processing elements and of reaching others only by path-building techniques--would in the present case be highly wasteful of hardware and restrictive of the computing powers of the array. On the other hand, electronic facilities to connect any processing element directly to any other (total interconnectibility) would involve prohibitively complex external switching matrices and interconnection hardware. One solution would be that adopted in the case of the UCLA Restructurable Computer [21], where restructuring is effected by manually replacing one flexible printed wiring harness by another. The wiring harnesses are, of course, computer designed and automatically made under numerical control. This method, while saving considerably on electronic switching and interconnection costs, and permitting any processing element to be connected to any other, at the same time severely limits the flexibility of the computing system--a drawback which may well be tolerated in many control applications, since the process to be controlled is represented by equations requiring only infrequent change.

Another approach to external switching is to incorporate only partial interconnectibility in the topological structure. One attractive scheme, for instance, is to permit total interconnectibility between the processing elements of any one column in Fig. 4, i.e. within each subset B_p, where

    B_p = \{ PE_{0,p}, \ldots, PE_{n,p} \}, \qquad 0 \le p \le m,

and only restricted interconnectibility between the subsets B_0, \ldots, B_m. An implementation of this concept is schematically shown in Fig. 6.

Fig. 6. Partially interconnectible topology. (Legend: initial conditions and internal switching instructions; interconnections within a subset; instructions to external switching matrix of PE; interconnections between subsets; instructions to subset switching matrix; external switching matrix of PE; subset switching matrix.)

It may be seen that each of the processing elements can be interconnected through its own external switching matrix to any other processing element belonging to its own subset, and that each subset may be connected through its subset switching matrix to any other subset. In multi-dimensional structures this hierarchy can be continued. It will be noted that in such an arrangement the central control unit's links to its satellites are themselves also hierarchical.
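The connectivity rule of Fig. 6 can be stated compactly: PE(i, p) and PE(j, q) may be linked directly only when p = q, i.e. within one subset; otherwise traffic must pass through the subset switching matrices. A small Python sketch of this hierarchy (our illustration, with hypothetical names):

def directly_connectable(pe_a, pe_b):
    # A PE, identified as (row, column), may link directly only to a PE
    # in the same column, i.e. within its own subset B_p.
    return pe_a[1] == pe_b[1]

def route(pe_a, pe_b):
    # Intra-subset links use only the PEs' own external switching matrices;
    # inter-subset links additionally pass through the two subset matrices.
    if directly_connectable(pe_a, pe_b):
        return ["external matrix of source PE", "external matrix of target PE"]
    return ["external matrix of source PE",
            "subset matrix B%d" % pe_a[1],
            "subset matrix B%d" % pe_b[1],
            "external matrix of target PE"]

print(route((0, 2), (5, 2)))   # stays inside subset B2
print(route((0, 2), (5, 4)))   # crosses from B2 to B4 via the subset matrices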

The advantage of the structure presented in Fig. 6 is that each of the columns of, say, a planar array can then be used to perform a more or less independent algorithm, e.g. the solution of one particular differential equation, while the subset interconnection facilities can be used to implement redundancy-based checks and certain types of cross-connection. The interconnection and switching facilities required are evidently far less elaborate than in the case of total interconnectibility. The extent of the inter-subset links available will, however, have to be carefully matched to the nature of the problems to be solved. In the case of a linear system of differential equations, for instance, the last row of the Cauchy normal form requires interconnection to all the previous rows, and each superior row to one less, with the rows here materializing as columns of integrators; subset interconnections may therefore in some cases require hardware devices--wiring harnesses--even if interconnections within the subset are electronically switched, as the worked example below illustrates. Certainly, the limitation of inter-subset communication imposes restrictions on the theoretical problem-solving domain of the system; it is felt, however, that in the case of a control computer the trade-off may well be a favourable one.
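To see where this graded coupling comes from, consider (our worked example, not the paper's) a single nth-order linear equation brought to Cauchy normal form by the substitutions x_1 = y, x_2 = y', \ldots, x_n = y^{(n-1)}:

    \dot{x}_1 = x_2, \qquad \dot{x}_2 = x_3, \qquad \ldots, \qquad \dot{x}_{n-1} = x_n,

    \dot{x}_n = -a_0 x_1 - a_1 x_2 - \cdots - a_{n-1} x_n .

With each row realized as one column of integrators, each of the first n - 1 rows needs a link to a single neighbouring column only, while the last row must receive links from all n columns--exactly the asymmetric inter-subset wiring requirement described above.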

As regards the fixed-structure GP computer at the centre of the proposed network, in process control applications it is required to be reliable rather than fast or sophisticated. The tasks of the fixed computer include the calculation of the structures to be implemented in the variable part, the supply of initial data, scaling, the issuing of external and internal switching instructions, the automatic restructuring of the variable part if it has developed a fault, alarm surveillance, and the computing of arithmetical and logical algorithms. It will be noted that the variable part of the computer can--once its structure has been established and initial data supplied--continue to operate indefinitely without further intervention from the fixed part, until such time as its structure or operating constants require to be changed. This highly significant feature implies, of course, that the fixed part could, if required, be operated off-line, or else time-shared between a number of variable parts, or have its functions discharged by a superior-level hierarchical computer. Since, as has been pointed out, any one variable part will not, in a process-oriented system, require the fixed part to be fast or sophisticated, a GP computer that does possess these properties would indeed be ill-employed if it did not share its facilities in some such way. Input and output arrangements--as far as the process under control is concerned--do not require the services of the fixed computer part: the nth row of processing elements of Fig. 4 can receive inputs in the form of pulse trains or coded signals, and the 0th row can emit the same, e.g. towards digital-type actuators or convertors. A normal organizational form of the columns of processor elements in a planar array would be that shown in Fig. 7. The information flow is from the lower rectangle towards the upper. Each of the vertical lines represents a column of processor elements, and those which intersect at top and bottom are identically operating parallel subsets of processors. In other words, the columns, which may be regarded as the meridians in the Lambert projection of a sphere, form redundant parallel processing channels grouped for the optimum value of a given reliability object function. The horizontal equatorial line represents the inter-subset connections; the blocks joining the apices at the N and S poles respectively contain the input and output switching and interface circuitry, and in the latter case also the voting arrangements for qualitative evaluation of the parallel columns (subsets). As far as human input and output and numerical evaluation are concerned, these will, of course, have to be carried out by the fixed part, but need not be continuous.

Fig. 7. Configuration restructurable by columns.
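The voting arrangement at the output pole can be pictured as a simple majority vote over the identically structured columns; the following Python sketch is our illustration of the idea, not the circuit itself:

from collections import Counter

def vote(column_outputs):
    # Majority-vote the outputs of identically programmed redundant columns
    # and report the dissenting column indices, which the fixed GP part
    # could then restructure or program around.
    majority, _ = Counter(column_outputs).most_common(1)[0]
    dissenters = [i for i, v in enumerate(column_outputs) if v != majority]
    return majority, dissenters

print(vote([42, 42, 41, 42]))   # (42, [2]): column 2 flagged as faulty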

4. SOME SYSTEM ASPECTS

Let us briefly consider into what categories the computer organization that has been outlined can be fitted. Whether it can be called highly parallel is a matter of terminology--COMFORT [22] defines a highly parallel machine as "any multiprocessor of order 3 or more" [op. cit., p. 128]. This means that to qualify, a machine must have more than 100 processing elements, and in a process control computer of the type proposed this would evidently be the rule.

Whether it would be a truly iterative circuit computer is more disputable. HOLLAND [8] states that the class of iterative circuit computers is the set of all devices specified by the admissible substitution instances of the quintuple (A, A^0, X, J, P) which he defines.* Since the organization we have discussed is not a distributed-control one of the kind known as HOLLAND machines, its variable structure part does not fully comply with the above stipulation, in that the requirements related to the path-building feature are not fulfilled. Those, however, which specify the permissible geometrical configurations, the standard neighbourhoods of the processing elements, their storage register capacity and instruction set, are fully valid. So is the evident statement that the processing elements of the variable array are, in physical fact, iterative circuits.

That the machine proposed complies with SHANNON'S conditions for a universally programmable DDA [23] can again not be unequivocally stated, since there is a limitation on the interconnectibility of the processor elements, which in themselves, however, satisfy the functional requirements of Shannon, since they can be freely used as integrators, adders, function units or scalers. What Shannon states, of course, is that "the most general system of ordinary differential equations

    f_k\bigl(x;\; y_1, y_1', \ldots, y_1^{(m)};\; y_2, y_2', \ldots, y_2^{(m)};\; \ldots;\; y_n, y_n', \ldots, y_n^{(m)}\bigr) = 0, \qquad k = 1, 2, \ldots, n

of the mth order in n dependent variables can be solved on a differential analyzer using only a finite number of integrators and adders, providing the functions f_k are combinations of non-hypertranscendental functions of the variables". Implicit in his theorem [op. cit., No. XI] is the assumption that the "finite number of integrators and adders" are totally interconnectible. Since this condition is, in the case of the proposed computer, met only within each subset, there will evidently have to be a practical trade-off between the (theoretically unlimited) subset size and inter-subset interconnectibility on the one hand, and the order of solvable problems on the other. The actual parameters will depend on the concrete need.

As shown, a practical control computer based on the design considerations deduced from current trends, while offering numerous advantages in the specific field for which it is intended, does not fit precisely into the theoretical systems categories so far set up, and does not continue that part of the pre-LSI trend which strove at all costs strictly to implement these categories, e.g. by "reducing cell logic to its most rudimentary form" [24], or by using the iterative array only for storage and communication [25]. Nor does it comply with ESTRIN'S clear topological classes [12], since it is not fully distributed, centralized, decentralized or totally interconnectible--working outwards from the centre we find a decentralized network with mutually interconnectible sub-centres, followed by totally interconnectible networks connected to these sub-centres at the peripheries. The systems philosophy followed here has been the more practical, and also more limited, one of satisfying one particular set of extant needs with one particular emergent technique in a manner which, though theoretically hybrid--as most engineering realities usually are--is feasible at this moment, and carries promise of being economically competitive in a matter of one or two years. The latter property, however, requires not only low-cost hardware design, but also powerful and competitive programming facilities.

* Briefly: A determines the underlying geometry of the array, A^0 the standard neighbourhood of the modules in it, X the storage register capacity of the module, J the instruction set at module level, and P the path-building capabilities of the modules.

5. PROGRAMMING

The four cardinal properties of the proposed computer from the programming point of view are its special facilities in the differential calculus, the suitability of the incremental techniques it uses for implementing on-line optimizing schemes, the possibility of using advanced reliability methods, and the applicability of a number of existing and evolving programming techniques.

The familiar thirteen theorems of SHANNON [23], the wealth of examples compiled by FORBES [26], and the considerable theoretical literature of DDA principles, e.g. [27], have made it clear that though trigonometric, algebraic and even logical problems are all amenable to treatment by the DDA, the fundamental field of its operation is the calculus, since the most intrinsic mathematical operation it performs is the process of integration. This operation, however, possesses two more properties: it is the most frequently required operation in solving the systems of equations which represent the behaviour of the vast majority of continuous industrial processes, and it is the least efficiently performed operation in a conventional-type GP computer. In a system composed of DDA integrators and their adjuncts, on the other hand, programming the solution of differential, or partial differential, equations is performed primarily in terms of integration, with all the advantages in speed, simplicity and checking facilities which this entails. Since, moreover, information between the processing elements of the proposed scheme is conveyed incrementally, and this can, by simple switching, be made to apply to scaling factors and other constants as well as to the variables, the introduction of optimizing or adaptive features into control systems is also rendered rather easy. Unit steps along any gradient in a hill-climbing set-up, for instance, can be effected by channelling the properly scaled outputs of criterion-evaluating integrators to vary, e.g., the scaling factors of the process-model integrators concerned. For more complex schemes involving composite object-functions, the fixed GP part of the computer can of course also be used for evaluation, while the necessary change can be just as directly and simply effected.
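A minimal sketch of such a hill-climbing loop (ours; the criterion and step size are hypothetical): the criterion-evaluating element emits +1 or -1 increments, which are channelled back to nudge a scaling factor of the process model.

def hill_climb(criterion, k0, step=0.01, iterations=200):
    # Adjust a scaling factor k by unit incremental steps along the sign of
    # the criterion gradient, as a DDA loop would do with +/-1 pulses.
    k = k0
    for _ in range(iterations):
        increment = 1 if criterion(k + step) > criterion(k) else -1
        k += increment * step
    return k

# Hypothetical composite objective with its maximum at k = 0.7.
print(round(hill_climb(lambda k: -(k - 0.7) ** 2, k0=0.2), 2))   # near 0.7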

Perhaps the most important feature demanded of a control computer is great reliability. In this respect the proposed system is advantageous not only because its hardware can be built with a minimum number of leads and soldered connections in relation to its computing power, but also because it is highly amenable to the application of the redundancy techniques proposed in the recent literature [28]. For one thing, the two main parts of the system, fixed and variable--operating independently of each other as they do--can be used periodically to check each other, and a good stand-by emergency program can be devised for either to take over the most essential functions of the other in case of a breakdown. It is relatively easy to incorporate checks in the program of the fixed GP part which will, in the event of a failure, shut it down and prevent it issuing erroneous outputs. Consequently the failure of the fixed GP part will in the worst case lead merely to the continuation of the work of the variable part in unaltered form, and if it is presumed that the algorithm it was computing before the failure was a correct one, there will be no catastrophic effect on a steady-state process. The functions of limit surveillance will, however, have to continue to be exercised in respect of the most important parameters, and these must, upon failure of the fixed part, be taken over by the variable part.

A failure of the variable part could have graver consequences, and a means must therefore also be found here immediately to discover the occurrence of such a condition. This is available in the type of configuration shown in Fig. 7. The planar array of Fig. 4 can in fact be thought of as the plane Lambert projection of a sphere, of which the vertical columns of the array are the meridians. Since, moreover, there is no constraint on the sequence of the lateral interconnections of the columns, the sphere can be imagined as a four-dimensional spheroid, i.e. there is no constraint to prevent the meridians changing places with one another. This means that if the GP computer part should decide to change the redundancy set-up by, say, transferring one of the four identical processor columns in the figure to one of the groups of two identical ones, it may do so without any constraint, transferring and reconfiguring any given column to become part of any other group of identical, redundant columns. The reliability safeguards of the program will therefore form sets of identically structured parallel columns connected to each other at nodal points, of which one will be the common input, the other a suitably chosen voting circuit. Evidently, provision can be made for the most important parts of the program to be checked by the most elaborate redundancy technique, and vice versa. By this means failures are immediately detected, and an appropriate restructuring can take place without their having any ulterior effect on the process. In the event of a total failure of the variable part, the fixed part can be programmed to continue some rudimentary control activities. It will be noted, moreover, that the system is highly flexible with respect to the particular redundancy scheme used, so that new techniques such as the proposed segmentation of redundant parts [29] can easily be implemented.

The actual programming of a computer of the type suggested can thus be seen to involve three main jobs: the mapping of a mathematically formulated set of operations into a set of permissible computing structures, the determination of scaling factors and stability, and the computing of an appropriate reliability strategy. Of these, none presents anything like the problems arising in either the big parallel computers of the SOLOMON type [6] or the path-building iterative computers of HOLLAND [22]. Nor is the complex assignment procedure of the UCLA Variable Structure system [30] necessary, since no cost optimum need be found with respect to the assignment of a computing phase to the fixed or the variable part of the computer.

The first task, the mapping on to an appropriate structure, may be performed by any of the familiar explicit deterministic methods used in programming analogue computers or their digital simulations, in which case the formulation of the problem in the notation of the calculus evidently offers a convenient autocode, since the basic equation-solving structure is, in the DDA context, uniquely determined by the number of differentiations and integrations to be performed from one term to the next. A sketch of such a mapping is given below. The scaling, stability, etc. checks can then be regarded as constraints on the above process, and the reliability strategy is computed separately at the end, combined with an optimal hardware-utilization program.
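As an illustration of this mapping (our sketch, reusing the incremental integrator model from Section 2): the equation y'' = -y calls for exactly two integrators, the output increments of each feeding the input of the other, with one sign inversion closing the loop.

import math

class DDAIntegrator:
    # Minimal rectangular-rule DDA integrator; dx is implicitly one per cycle.
    def __init__(self, n_bits, y0=0):
        self.modulus = 1 << n_bits
        self.y = y0
        self.r = 0

    def step(self, dy):
        self.y += dy
        self.r += self.y
        if self.r >= self.modulus:
            self.r -= self.modulus
            return 1
        if self.r < 0:
            self.r += self.modulus
            return -1
        return 0

N = 10
scale = 1 << N                  # one machine step corresponds to dt = 2**-N
a = DDAIntegrator(N, y0=scale)  # holds y'; y'(0) = 1.0
b = DDAIntegrator(N, y0=0)      # holds y;  y(0)  = 0.0

dz_a = dz_b = 0
for _ in range(int(2 * math.pi * scale)):      # integrate over one full period
    dz_a, dz_b = a.step(-dz_b), b.step(dz_a)   # dy' = -y dx, dy = y' dx

print(b.y / scale, a.y / scale)   # y and y' after 2*pi: near 0.0 and 1.0

The structure is read off directly from the calculus notation--one integrator per integration, one interconnection per term--which is what makes the differential calculus itself a natural autocode for the machine.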

Another, and very promising, approach to programming the proposed computer would be along the lines set out by NEWELL [31]. Although the artificial intelligence techniques suggested in the cited paper are oriented towards a HOLLAND machine, the principles enunciated are equally applicable to other highly parallel structures, and indeed to any automaton whose programming is primarily structural. The basic features of the proposal are that the machine should possess a fully problem-oriented external language completely independent of the internal structure of the computer, an associative store in which problem patterns become associated with the computing structures able to solve them, and heuristic search methods to achieve actual problem solution. Since the structuring of DDA units to solve systems of process equations is a fairly well-defined class of not very complex problem-solving activities, while the facility of stating the problem in terms divorced from the computer and its technology is especially important in the industrial control environment, the proposed computer may well be a field where NEWELL'S philosophy could be applied with relative ease, to achieve highly important results.

6. CONCLUSIONS

An attempt has been made to outline the main features of a process control computer incorporating parallel processing, pulse-train information transfer, LSI-feasible hardware, high reliability and relatively easy programming, and to show that such a machine would be able to implement some of the most modern control and cybernetic trends. While no rigorous check of the ideas proposed has been carried out, they are considered plausible, since they do not include any absolutely novel theoretical feature--the novelty of the approach lies rather in the sacrifice of "pure" solutions on the altar of technical expediency. This, on the other hand, may also be interpreted as the general prelude to industrial application, and the swiftly developing LSI market will undoubtedly lead to some efforts along the lines suggested.

Acknowledgement--The study of iterative circuit techniques was suggested to the author by L. Bajáki. Many of the ideas proposed have matured in discussions with him and other members of the author's Institute.

REFERENCES

[1] Session on Large Computers Raises Tempers at SJCC. Datamation, No. 6, p. 97 (1967).

[2] W. B. RILEY: Illiac 4, world's fastest computer, won't be slowed by criticism. Electronics 40 (10), 141 (1967).

[3] D. P. ADAMS: In Proc. 1962 Workshop on Computer Organization (Edited by A. A. BARNUM and M. A. KNAPP), p. 89. Spartan Books, Washington (1963).

[4] H. H. ROSENBROCK: The status of applications of control theory and technology to industrial process systems. Instrument Practice, Jan., p. 44 (1965).

[5] M. G. SMITH: Systems considerations for large-scale integration. WESCON Techn. Papers, Los Angeles (1966).

[6] J. C. MURTHA: Highly parallel information processing systems. Adv. Computers 7, 2 (1966).

[7] A. A. BARNUM and M. A. KNAPP (Editors): Proc. 1962 Workshop on Computer Organization. Spartan Books, Washington (1963).

[8] J. H. HOLLAND: Iterative circuit computers, characterization and résumé of advantages and disadvantages. Proc. Symp. Microelectronics and Large Systems, Washington, p. 171 (1964).

[9] B. R. GAINES: Stochastic computer thrives on noise. Electronics 40, No. 18, p. 72 (1967).

[10] B. R. GAINES and P. L. JOYCE: Phase computers. Vth Int. Congr. AICA, Lausanne (1967).

[11] S. T. RIBEIRO: Random-pulse machines. IEEE Trans. Electronic Computers 16, 261 (1967).

[12] G. ESTRIN: Microelements in processor networks. Proc. Symp. Microelectronics and Large Systems, Washington, p. 157 (1964).

[13] R. E. BRADLEY and J. F. GENNA: Design of a one-megacycle iteration rate DDA. Proc. 1962 Spring Joint Computer Conf., p. 353.

[14] P. L. OWEN, M. F. PARTRIDGE and T. R. H. SIZER: CORSAIR--a digital differential analyzer. Electron. Engng 32, No. 394, p. 740 (1960).

[15] J. D. CALLAN: MTOS integrated digital differential analyzer. General Instrument Corp. Microelectronics Application Notes (1967).

[16] D. SLOTNICK, W. C. BROCK and R. C. McREYNOLDS: The SOLOMON computer. Proc. 1962 Fall Joint Computer Conf., p. 97.

[17] J. S. SQUIRE: Programming and design considerations of a highly parallel computer. Proc. 1963 Spring Joint Computer Conf., p. 395.

[18] J. H. HOLLAND: Iterative circuit computers. Proc. 1960 Western Joint Computer Conf., p. 259.

[19] R. GONZALES: A multilayer iterative circuit computer. IEEE Trans. Electron. Computers 12, 781 (1963).

[20] H. L. GARNER and J. S. SQUIRE: Iterative circuit computers. Proc. 1962 Workshop on Computer Organization, p. 156 (Edited by A. A. BARNUM and M. A. KNAPP). Spartan Books, Washington (1963).

[21] G. ESTRIN, B. BUSSEL, R. TURN and J. BIBB: Parallel processing in a restructurable computer system. IEEE Trans. Electron. Computers 12, 747 (1963).

[22] W. T. COMFORT: Highly parallel machines. Proc. 1962 Workshop on Computer Organization, p. 126 (Edited by A. A. BARNUM and M. A. KNAPP). Spartan Books, Washington (1963).

[23] C. E. SHANNON: Mathematical theory of the differential analyzer. J. Math. Phys. XVIII, 337 (1939).

[24] J. K. HAWKINS and C. J. MUNSEY: A two-dimensional iterative network computing technique and mechanizations. Proc. 1962 Workshop on Computer Organization, p. 93 (Edited by A. A. BARNUM and M. A. KNAPP). Spartan Books, Washington (1963).

[25] W. T. COMFORT: A modified Holland machine. Proc. 1963 Fall Joint Computer Conf., p. 481.

[26] G. F. FORBES: Digital Differential Analyzers. Author, Pacoima (1957).

[27] A. V. KALYAYEV: Vvedenie v teoriyu cifrovykh integratorov. Naukova Dumka, Kiev (1964).

[28] W. H. PIERCE: Failure-Tolerant Computer Design. Academic Press, New York (1965).

[29] N. DEO: Partial versus total redundancy. Electronics Letters 3, No. 1, p. 2 (1967).

[30] G. ESTRIN and R. TURN: Automatic assignment of computation in a variable structure computer system. IEEE Trans. Electronic Computers 12, 755 (1963).

[31] A. NEWELL: On programming a highly parallel machine to be an intelligent technician. Proc. 1960 Western Joint Computer Conf., p. 267.


Mamm~a oc06crmo ner~o no~aeTcg npnMeHeHmo CO- Bp~MeHttbIX cTpaTerm~ Ha~ex~ocm. O6nacT~ penmM~x 3a~a~ c p a B ~ H O ttmpoxa H XOTff i/pe~(.rio~KeHitalI oprarm3at0~ He cooTseTc~ycr CTpOrO xpnTepHma lllcnnona ~ m Xoananna, ee rrpax~0~ecxaa np~Mem~MOCT~ K ynpa~e- Hn~o He yMes~meHa a ocy~ecTBHelme nnporpaMMnpo~HHe o6ner~e~L Honxon~um~ ~ nporpaMMnposamuo m,r~n- CYIHT~JI~HO~ ManIHH~ aBTOKO~ OCHOBaH Ha ~Hq~pepel~ HaJ~HOM B~HCJIeH]~I H OCO6CHHO npIIMeHM If HcnoJI~3OBa- HmO TeXH~K p e m e m ~ 3a~aq H ~ o B e ~ .