
1802 PROCEEDINGS OF THE IEEE, VOL. 54, NO. 12, DECEMBER, 1966

Real-Time Digital Simulation for Systems Control


A. P. SAGE, SENIOR MEMBER, IEEE, AND S. L. SMITH, MEMBER, IEEE

Abstract-This paper discusses techniques employed in the discrete modeling of physical systems for digital simulation and control applications. Traditional numerical integration techniques provide accurate means of model making but prove too slow for real-time simulation of complex systems or systems with fast response. For rapid digital simulation, a simplified discrete approximation is sought for the linear integro-differential operators of a continuous system. This discrete operator, a digitized transfer function, yields difference equations hopefully permitting real-time approximation of continuous system performance on a digital computer. Determination of the discrete operator is the essential goal of each of the simulation schemes described herein, though differing initial assumptions and approximations alter the resulting forms. After a brief review of these approaches to simulation, techniques for improved approximations for linear system transforms and for discrete parameter optimization and identification are developed. The optimum discrete transfer function which minimizes the sum of error squared between a linear continuous system output and a linear discrete system output is obtained. By adjusting gain parameters in the discrete transfer function, the simulation result is shown to be improved for various inputs and system nonlinearities. Application of standard variational methods to optimize the desired parameters leads to a two-point nonlinear boundary-value problem which is resolved via the techniques of quasilinearization and differential approximation. The procedure for application of the various simulation methods is summarized, and the effectiveness of the methods is shown by the simulation of a second-order, nonlinear system for various inputs and sample intervals.

INTRODUCTION

REAL-TIME digital simulation of dynamic systems has been receiving increasing attention due to recent developments in the speed of digital hardware and advances in computational methods. The traditional techniques of numerical analysis for the solution of differential equations have proved too inefficient in the time required for computation when applied to the equations of complex systems. Over approximately the past 20 years several methods for discrete approximation of continuous systems have been proposed and have been employed to obtain computational algorithms for digital simulation. One of the earliest techniques, Tustin's method, has probably been best accepted for simulation work and is presently quite widely used. Most recently the digital simulation method introduced by Fowler has provided a significant advance in the possibilities for system simulation, especially for nonlinear systems.

In this paper some of the more significant approaches developed for discrete approximation of continuous systems are reviewed. The procedure for application of each technique is briefly summarized, and a new approach to the discrete approximation procedure is presented. The "optimum" discrete approximation to a linear transfer function is developed, based on a proposed criterion for closely approximating continuous system performance. The response of the digitized system is tailored for changing inputs and sampling intervals by adjusting certain parameters in the discrete transfer functions via the method of quasilinearization. After the methods for digital simulation are discussed, these techniques are employed to simulate a second-order nonlinear system. An experimental study is made of the simulation sensitivity to changes in sampling interval size, and detailed data are presented on the error of each simulation approach.

Manuscript received April 27, 1966; revised August 22, 1966. This work was supported by the National Aeronautics and Space Administration under Grant NsG-542.

The authors are with the Department of Electrical Engineering, University of Florida, Gainesville, Fla.

The approach taken in the discussion of simulation techniques is that of considering a linear transfer function as an operator. The result of a procedure for discretization of a continuous system transfer function is then a pulse transfer function which will hopefully perform the same operation on an input signal. Such a discrete operation can be implemented on a digital computer by expressing the pulse transfer function in its difference equation form. The resulting equations provide recursive relationships for an efficient digital computational formulation of a continuous system and hence provide the most probable avenue to real-time digital simulation.

An important class of digital simulation techniques exists in the simulation languages or digital analog simulators. These techniques emphasize convenience for the programmer at the expense of computation time. For a simulation intended for temporary study, with no emphasis on real-time operation, convenience and speed in programming are a decided advantage even at the expense of computation time. Despite the significance of simulation languages, they will not be discussed here since the present emphasis is on real-time computation techniques.

METHODS FOR DISCRETE APPROXIMATION

Runge-Kutta Integration

The differential equations describing system dynamics may be solved by any of the standard numerical integration methods; however, these methods cannot generally satisfy the requirement for real-time computation and simulation. In the comparison of discrete methods for approximating continuous system response, such integration schemes do provide a convenient means of obtaining results for the continuous system, especially in the nonlinear case.


TABLE I
EXAMPLES OF DISCRETE FORMS FOR INTEGRATING OPERATORS

Method of approximation | 1/s | 1/s^2 | 1/s^3
Tustin | T(1 + z^-1) / [2(1 - z^-1)] | T^2(1 + z^-1)^2 / [4(1 - z^-1)^2] | T^3(1 + z^-1)^3 / [8(1 - z^-1)^3]
Madwed-Truxal | T(1 + z^-1) / [2(1 - z^-1)] | T^2(1 + 4z^-1 + z^-2) / [6(1 - z^-1)^2] | T^3(1 + 11z^-1 + 11z^-2 + z^-3) / [24(1 - z^-1)^3]
Boxer-Thaler | T(1 + z^-1) / [2(1 - z^-1)] | T^2(1 + 10z^-1 + z^-2) / [12(1 - z^-1)^2] | T^3 z^-1(1 + z^-1) / [2(1 - z^-1)^3]

One of the best known approaches to integrating differential equations is the Runge-Kutta method, for which there are many modifications [1]-[3]. A commonly employed formulation is shown here. Given a differential equation

dx/dt = f(t, x),   (1)

where T is the increment in t, the sampling interval, and x is an n-vector, there is now computed a set of coefficients a_i, where

a_1 = Tf(nT, x(nT)),
a_2 = Tf((n + 1/2)T, x(nT) + a_1/2),
a_3 = Tf((n + 1/2)T, x(nT) + a_2/2),
a_4 = Tf((n + 1)T, x(nT) + a_3).

The new value of x is then computed from

x((n + 1)T) = x(nT) + (1/6)(a_1 + 2a_2 + 2a_3 + a_4).   (2)

This formulation is that of a fourth-order Runge-Kutta method, having a truncation error proportional to T^5. Selection of a sufficiently small increment in the independent variable produces a solution of the desired accuracy; however, the very small increment size sometimes required for a suitable result and the calculation of a complete new set of coefficients at every iteration combine to frustrate any attempt to utilize such a technique for real-time simulation in most applications.
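As a point of reference for the discrete methods that follow, the sketch below gives a minimal implementation of the Runge-Kutta step (1)-(2); it is an illustration only (the function f, the step size T, and the example system are placeholders, not taken from the paper).

import numpy as np

def rk4_step(f, t, x, T):
    # One fourth-order Runge-Kutta step of size T for dx/dt = f(t, x), per (1)-(2).
    a1 = T * f(t, x)
    a2 = T * f(t + T / 2, x + a1 / 2)
    a3 = T * f(t + T / 2, x + a2 / 2)
    a4 = T * f(t + T, x + a3)
    return x + (a1 + 2 * a2 + 2 * a3 + a4) / 6

# Example: integrate dx/dt = -x from x(0) = 1 with T = 0.1 second.
x, t, T = np.array([1.0]), 0.0, 0.1
for _ in range(50):
    x = rk4_step(lambda t, x: -x, t, x, T)
    t += T

Because all four coefficients must be recomputed at every step, the work per sample is roughly four evaluations of f, which is the cost that makes the method unattractive for real-time use.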

z-Transform Method

Knowledge of the z-transform analysis of linear sampled-data systems is fundamental to all the approaches to digital simulation. This topic is well presented in many places [4]-[6] and only features immediately applicable are mentioned here. The transform variable z is taken as z = e^{sT} and represents a unit time advance operator as used in the analysis methods to be discussed. z-Transforms of continuous transfer functions are often readily obtained from available transform tables. For more complex problems digital computer programs are available to aid in performing the z-transform analysis [7]. Application of z-transform concepts to the discrete approximation of continuous systems can be made in two different ways. Substitution may be made for the integration operators s^{-n} of a transfer function, or the pulse transfer function may be obtained for the complete continuous transfer function. Use of the integrator or s^{-n} substitution method has been discussed by Fryer and Shultz [8]. Multiplication of the z-transform of s^{-n} by the sampling interval T is required for use of the transform expression as an integration operator; this has been done in Table I. It is noted that application of Blum's technique [9] to discretization of a transfer function yields the z-transform pulse transfer function multiplied by T; hence, for integration operators the Blum approximation and the z-transform approximation are the same. Further use of Blum's technique is not made here, but results for this method and others are reported in [8].

Fig. 1. Linear sampled-data system.

The pulse transfer function resulting from the z-transform operation may be implemented as a difference equation relating the input and output variables of the system under study. Consider a system represented by Fig. 1, where the input r(t) and output c(t) are sampled; hence, the pulse transfer function may be represented as G(z) = C(z)/R(z), or

G(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_n z^{-n}}{1 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_q z^{-q}}.   (3)

Knowing that z^{-n}F(z) = [f(t - nT)]^*, where [ ]^* represents the z-transform of the function within the brackets and F(z) = [f(t)]^*, leads to a difference equation relating r(kT) and c(kT) when G(z) is replaced by C(z)/R(z) and the inverse transform is obtained for the resulting expression. This difference equation may now be written as a recursion formula for c(kT),


c(kT) = a_0 r(kT) + a_1 r((k - 1)T) + \cdots + a_n r((k - n)T) - b_1 c((k - 1)T) - \cdots - b_q c((k - q)T),   (4)

which is the desired relationship for digital computer implementation of the discrete approximation resulting from the z-transform method as well as for other methods to be discussed. For a system with transfer function G(s) and input R(s), the sampled system output is given by [G(s)R(s)]^*. When the expression obtained from [G(s)R(s)]^*/[R(s)]^* is realizable, the G(z) so defined yields an exact representation for the continuous system.
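The recursion (4) is straightforward to mechanize; the sketch below is an illustrative implementation (the coefficient lists a and b and the trapezoidal-integrator example are supplied here for demonstration and are not taken from the paper).

def pulse_transfer_step(a, b, r_hist, c_hist):
    # Evaluate c(kT) from (4).  a holds a0..an, b holds b1..bq,
    # r_hist holds r(kT), r((k-1)T), ... and c_hist holds c((k-1)T), c((k-2)T), ...
    c = sum(ai * ri for ai, ri in zip(a, r_hist))
    c -= sum(bi * ci for bi, ci in zip(b, c_hist))
    return c

# Trapezoidal integrator G(z) = T(1 + z^-1)/(2(1 - z^-1)) driven by a unit step, T = 0.1:
T = 0.1
a, b = [T / 2, T / 2], [-1.0]
r_hist, c_hist, out = [0.0, 0.0], [0.0], []
for k in range(5):
    r_hist = [1.0] + r_hist[:-1]
    c = pulse_transfer_step(a, b, r_hist, c_hist)
    c_hist = [c] + c_hist[:-1]
    out.append(c)
# out grows by roughly T per sample, approximating the integral of the step.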

Tustin Method

Originally presented as a general approach to linear system analysis through the representation of time functions in terms of sequences of numbers [10], the practical application of the Tustin method may be reduced to the use of Tustin's definition of the differentiating and integrating operators. A linear transfer function G(s) expressed as a ratio of polynomials in s is readily digitized by substituting for s^{-n} the Tustin operator expressed as

s^{-n} = \left[\frac{T(1 + z^{-1})}{2(1 - z^{-1})}\right]^n,

where T is the sampling interval. It should be noted that this corresponds to repeated usage of the trapezoidal rule.
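For a complete rational G(s), the substitution can also be carried out numerically. The sketch below is an illustration only (it assumes SciPy is available and uses the minor-loop transfer function G1(s) = 6/(s + 6) of the later example); it applies the bilinear (Tustin) transformation and returns the coefficients of the corresponding pulse transfer function.

from scipy import signal

T = 0.1                                   # sampling interval, seconds
num, den = [6.0], [1.0, 6.0]              # G1(s) = 6/(s + 6)
numz, denz = signal.bilinear(num, den, fs=1.0 / T)
# numz and denz give G1(z) as a ratio of polynomials in z^-1,
# ready to be written as a difference equation in the form (4).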

Madwed-Truxal Method

Madwed extended the Tustin time series approach to system analysis and developed higher order integrating operators of increased accuracy in his comprehensive treatment [11] of this technique. The complex notation developed by Madwed for the polygonal approximation of time functions was clarified by Truxal [12] in his formulation of the numerical convolution operation for system analysis using z-transform notation. The first-order integration operator of Madwed is the trapezoidal rule encountered in the Tustin method; however, higher order Madwed integration operators assume different forms, as shown by the examples of Table I. To digitize a transfer function using this approach, the linear transfer function G(s) is expressed as a ratio of polynomials in s^{-1} and the appropriate integrating operator substituted for s^{-n} to obtain G(z).

Boxer-Thaler Method

The Boxer-Thaler method was presented as a technique for numerical inversion of Laplace transforms [13], [14]. The procedure for application of this approach follows that of the previous methods in that substitutions are made for the complex variable s, but the integrating operators employed are the "z-forms" developed by Boxer and Thaler. These special forms were developed in the frequency domain, in contrast to the time domain development of Tustin and Madwed. It was noted that a polynomial approximation for s^{-1} = T/\ln z could be obtained by expanding \ln z in a rapidly convergent series and then expressing the operator s^{-1} as

s^{-1} = \frac{T}{\ln z} = \frac{T}{2(u + u^3/3 + u^5/5 + \cdots)},   (5)

where

u = \frac{1 - z^{-1}}{1 + z^{-1}}.

From this expression there results, by synthetic division,

s^{-1} = \frac{T}{2}\left(u^{-1} - \frac{u}{3} - \frac{4u^3}{45} - \cdots\right),   (6)

which leads to z-forms for s^{-n} when both sides of the above expression are raised to the nth power and the constant term and principal part of the resulting series retained. Table I contains such z-forms for several orders of integrating operators. A linear system transfer function may be discretized by expressing it as a ratio of polynomials in s^{-1} and substituting the appropriate z-forms.
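The expansion (6) is easy to verify symbolically; the short sketch below (an illustration assuming SymPy, not part of the paper) expands T/ln z in powers of u and recovers the coefficients used in the z-forms.

import sympy as sp

T, u = sp.symbols('T u', positive=True)
z = (1 + u) / (1 - u)          # inverse of u = (1 - z^-1)/(1 + z^-1)
print(sp.series(T / sp.log(z), u, 0, 4))
# The leading terms are T/(2u) - T*u/6 - 2*T*u**3/45 - ...,
# i.e. (T/2)(1/u - u/3 - 4u^3/45 - ...), in agreement with (6).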

Anderson-Ball-Voss Method

This technique was presented [15] as an approach to discretization of linear differential equations. If the input to a system can be approximated by a polynomial in time, this approximation is substituted into the system differential equation for the input and the analytical solution determined. For linear differential equations of the form

A_n \frac{d^n y}{dt^n} + A_{n-1} \frac{d^{n-1} y}{dt^{n-1}} + \cdots + A_0 y = x(t),   (7)

the input x(t) is approximated by a low-order polynomial h(t), permitting the solution y(t) to be written as

y(t) = \sum_{i=1}^{n} c_i e^{\alpha_i t} + H(t),

where the \alpha_i are assumed to be distinct, and the c_i and the coefficients of H(t) are to be determined. A recursion formula for y(t) can be obtained by writing the solution as

y((n + 1)T) = q_1 y(nT) + q_2 y((n - 1)T) + \cdots + q_j y((n - j + 1)T) + \beta_1 x((n + 1)T) + \beta_2 x(nT) + \cdots + \beta_{k+1} x((n - k + 1)T),   (8)

where j is the order of the differential equation, and k is the degree of the input approximation polynomial. The coefficients for the terms of y(nT) are evaluated from the differential-equation coefficients and the h(t) polynomial coefficients.

IBM Method

The approach to digital simulation developed by Fowler [16] employs root-locus techniques in conjunction with the z-transforms for the continuous system under consideration. For a nonlinear system such as in Fig. 2, the z-transforms of the individual linear transfer functions are first determined. The nonlinearity is replaced by a representative gain, and the poles of the resulting closed-loop pulse transfer function

\frac{G_1(z)G_2(z)}{1 + G_1(z)G_2(z)H(z)}   (9)

are made equal to the poles of the z-transform of the closed-loop continuous linearized system

G(z) = \left[\frac{G_1(s)G_2(s)}{1 + G_1(s)G_2(s)H(s)}\right]^*   (10)

by adjusting gain parameters in the forward and feedback paths. This may be accomplished by equating the denominator terms of (9) and (10) and solving for the unknown gain parameters. Consider a system such as in Fig. 2 where

G_1(s) = \frac{6}{s + 6} \quad \text{and} \quad G_2(s) = \frac{25}{6s}

have z-transforms

G_1(z) = \frac{6}{1 - e^{-6T}z^{-1}} \quad \text{and} \quad G_2(z) = \frac{25}{6(1 - z^{-1})}.

The expression for G_2(z) is multiplied by the sampling interval T to obtain an integration operator. The numerator of G_1(z) is replaced by a gain parameter K to be adjusted for the correct closed-loop transfer function condition stated above. The expressions for the discrete system operators of Fig. 3 now appear as

F_1(z) = \frac{K}{1 - e^{-6T}z^{-1}} \quad \text{and} \quad F_2(z) = \frac{25T}{6(1 - z^{-1})},

where the change in notation reflects the changed character of the transfer functions. Taking H(z) = z^{-1} in (9), F_1(z) and F_2(z) yield the closed-loop expression

\frac{25KT/6}{1 + z^{-1}(25KT/6 - 1 - e^{-6T}) + z^{-2}e^{-6T}}.   (11)

Equation (10) becomes

G(z) = \frac{(25/4)e^{-3T}\sin(4T)\,z^{-1}}{1 - 2e^{-3T}\cos(4T)\,z^{-1} + e^{-6T}z^{-2}}.   (12)

Fig. 2. General nonlinear system configuration.

Fig. 3. Configuration for the IBM discrete approximation.

The gain parameter K of F_1(z) is now sought to make the roots of the denominators of (11) and (12) equal; hence, K is chosen so that

\frac{25KT}{6} - 1 - e^{-6T} = -2e^{-3T}\cos 4T,

yielding

K = \frac{6}{25T}\left(1 + e^{-6T} - 2e^{-3T}\cos 4T\right).   (13)

Additional requirements of steady-state gain and system input approximation are met by determining an input pulse transfer function so that the product of this transfer function and the closed-loop expression (9) equals the z-transform of the product of the desired input approximation or data hold and the linearized closed-loop transfer function. In the present example a zero-order data hold is employed and the approximation is desired to give a half-sample-period lead. For more compact presentation the result is shown for T = 0.1 second. The desired input transfer function F_i(z) may be obtained by dividing the numerator of (14) by the numerator of (11), using K as determined above, with the result

F_i(z) = A + Bz^{-1} + Cz^{-2} = 0.1504 + 0.7469z^{-1} + 0.1027z^{-2}.   (15)

With the results of (13) and (15), information is complete for the discrete approximation, which may be expressed as a set of difference equations for digital computer implementation. The input transfer function may sometimes be simplified or neglected, as indicated by Fowler [16], Hurt [17], and others [7]. Use of the z-transform approximation requires introduction of a delay in the feedback loop to obtain a realizable formulation for any single closed loop. It appears that such arbitrary inclusions of delay may in some cases degrade the simulation performance. The approach to discrete approximation discussed in the following section provides a means of avoiding this problem.
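As a small numerical illustration of the pole-matching step, the sketch below (not from the paper) evaluates the gain of (13) for a given sampling interval.

import math

def ibm_gain(T):
    # K from (13): matches the poles of the discrete loop (11) to those of (12).
    return (6.0 / (25.0 * T)) * (1.0 + math.exp(-6.0 * T)
                                 - 2.0 * math.exp(-3.0 * T) * math.cos(4.0 * T))

print(ibm_gain(0.1))   # approximately 0.442 for T = 0.1 second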

Optimum Digital Simulation

Discrete approximation of a continuous linear system may be approached by a direct effort to minimize the error between the output of the discrete system and the sampled output of the continuous system, as shown by Sage and Burt [18], [19].


Fig. 4. System representation for formulation of the optimization problem.

The signal comparison is made as illustrated in Fig. 4, where r(t) is the input, I(s) is the ideal (continuous) operation, H(z) is the unknown pulse transfer function, and F(z) is the fixed portion of the system. The resulting error sequence is

e(nT) = c_i(nT) - c_d(nT),   (16)

where c_i(nT) is the ideal output after sampling, and c_d(nT) is the actual discrete system output. Taking the z-transform of e(nT) yields

E(z) = [R(s)I(s)]^* - R(z)F(z)H(z),   (17)

where [R(s)I(s)]^* is the z-transform of R(s)I(s). Since the desired approximation is sought to minimize the error above, the criterion for optimization is chosen as the sum of error squared, which may be expressed as

\sum_{n=0}^{\infty} e^2(nT) = \frac{1}{2\pi j}\oint_{\Gamma} E(z)E(z^{-1})z^{-1}\,dz,   (18)

where the contour of integration \Gamma is the unit circle. Substituting the expression for E(z) into the integral yields

\sum_{n=0}^{\infty} e^2(nT) = \frac{1}{2\pi j}\oint_{\Gamma} [A(z) - R(z)F(z)H(z)][A(z^{-1}) - R(z^{-1})F(z^{-1})H(z^{-1})]z^{-1}\,dz,   (19)

where

A(z) = [R(s)I(s)]^*.

The sum of error squared may be minimized by applying the calculus of variations to the integral of (19), yielding the result [18]

H_0(z) = \frac{1}{\left[R(z)F(z)R(z^{-1})F(z^{-1})\right]_+}\left\{\frac{A(z)R(z^{-1})F(z^{-1})}{\left[R(z)F(z)R(z^{-1})F(z^{-1})\right]_-}\right\}_{P.R.},   (20)

where the symbol P.R. refers to the physically realizable portion of the term within the braces, and the + and - subscripts refer to the conventional spectrum factorization operation denoting extraction of the multiplicative term containing poles and zeros either inside (+) or outside (-) the unit circle.

In the digital simulation of closed-loop systems, use of the techniques discussed earlier requires the introduction of a delay in the feedback path to conveniently implement the closed-loop approximation, particularly in the nonlinear case. The need for the delay can be eliminated if transfer functions are approximated in a manner such that computation of the output requires knowledge of only previous values of the output variable. This is achieved in the approach under discussion by taking the fixed portion of the system as

F(z) = z^{-n}.   (21)

If F(z) is taken to be z^{-1}, the pulse transfer function resulting from this approximation technique is determined to give the least sum of error squared when the present value of the dependent variable is not known. Pulse transfer functions with delay are termed closed-loop realizable and those without delay open-loop realizable. Simulation of a single loop requires only one closed-loop realizable pulse transfer function. Some typical optimum pulse transfer functions are shown in Table II. A system may be discretized with this method by reducing the continuous system to phase-variable form and making repeated application of the desired optimum digital integrator.

Consider the integration operator G(s) = 1/s, where the closed-loop realizable discrete approximation is desired for a ramp input. Then n = 1 and R(s) = 1/s^2, so that

R(z)R(z^{-1})F(z)F(z^{-1}) = \frac{T^2}{(1 - z^{-1})^2(1 - z)^2},

which with (20) yields

H_0(z) = \frac{(1 - z^{-1})^2}{T}\left\{\frac{T^2 z(1 + z^{-1})}{2(1 - z^{-1})^3}\right\}_{P.R.}.

Expansion of the term within the braces (apart from the factor T^2/2) yields

\frac{z + 1}{1 - 3z^{-1} + 3z^{-2} - z^{-3}},

and subtracting z from this expression results in

\frac{4 - 3z^{-1} + z^{-2}}{(1 - z^{-1})^3},

which is physically realizable. The optimum closed-loop discrete approximation for the integrator becomes

z^{-1}H_0(z) = \frac{(T/2)\,z^{-1}(4 - 3z^{-1} + z^{-2})}{1 - z^{-1}}.

Sage and Burt [18] have presented a study of the integrator forms. An alternate approach is to obtain the optimum discrete approximation for a complete transfer function, such as G_1(s) of Fig. 2.


TABLE II
EXAMPLES OF OPTIMUM DISCRETE OPERATORS
[Optimum pulse transfer functions z^{-n}H_0(z), for n = 0 and n = 1, tabulated for G(s) = 1/s, 1/s^2, and a/(s + a) with inputs R(s) = 1/s and R(s) = 1/s^2.]

Having the pulse transfer functions for the linear portions of a system, the desired recursive relationships for digital simulation can be obtained.
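As an illustration of how the closed-loop realizable integrator above is used, the sketch below (illustrative only, not code from the paper) realizes z^{-1}H_0(z) as the difference equation y((n + 1)T) = y(nT) + (T/2)[4x(nT) - 3x((n - 1)T) + x((n - 2)T)], which needs only past samples of the integrand, and exercises it on the ramp input for which it was optimized.

def optimum_integrator(x, T):
    # Three-point extrapolation integrator: uses only past input samples,
    # so no feedback-path delay is required when it is placed inside a loop.
    y = [0.0]
    for n in range(len(x) - 1):
        xn = x[n]
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        y.append(y[-1] + (T / 2) * (4 * xn - 3 * xn1 + xn2))
    return y

# Ramp x(t) = t sampled at T = 0.1 second; the exact integral is t^2/2.
T = 0.1
x = [n * T for n in range(20)]
y = optimum_integrator(x, T)
# After the first sample, y[n] equals (n*T)**2 / 2 exactly for this causal ramp input.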

Quasilinearization

For a general nonlinear system, the linear approximation becomes an inadequate representation, and the approximation can be improved by taking the nonlinearity into consideration. This may be achieved by including an adjustable gain parameter in selected pulse transfer functions determined for the linearized system. It is then desired to find values for the gain parameters so that the actual digital system state variables y_d(nT) approach the continuous system state variables y_c(nT) for a given input to the system. Knowing the continuous system response for a given input, a set of gain parameters b are to be adjusted to minimize

J = \frac{1}{2}\sum_{K=0}^{N-1} \| y_c(KT) - y_d(KT) \|_R^2   (22)

subject to the constraints

y_d((n + 1)T) = f(y_d(nT), b),   y_d(0) = y_c(0),   (23a)

b((n + 1)T) = b(nT),   (23b)

where y_d(nT) and y_c(nT) are m-vectors, b(nT) is a p-vector, R is an m x m positive semidefinite weighting matrix, and \|y\|_R^2 = y'Ry, with y' being the transpose of y.

The minimization of J is accomplished via variational calculus procedures, using the Lagrange multiplier formulation for discrete systems [20]. Adjoining the system constraints to J, one obtains the augmented cost function

J^* = \sum_{K=0}^{N-1}\Big\{\tfrac{1}{2}\| y_d(KT) - y_c(KT) \|^2 + \lambda_y'((K + 1)T)\big[y_d((K + 1)T) - f(y_d(KT), b)\big] + \lambda_b'((K + 1)T)\big[b((K + 1)T) - b(KT)\big]\Big\},

where \lambda' denotes the transpose of a vector \lambda, and R = I, the identity matrix. Considering first-order variations in the usual variational minimization approach, equations for the adjoint variables are

\lambda_y(nT) = [\nabla_y f'(y_d(nT), b)]\lambda_y((n + 1)T) + y_c(nT) - y_d(nT)   (24a)

and

\lambda_b(nT) = [\nabla_b f'(y_d(nT), b)]\lambda_y((n + 1)T) + \lambda_b((n + 1)T),   (24b)

with boundary conditions

\lambda_y(NT) = \lambda_b(NT) = \lambda_b(0) = 0,

where \nabla_z f'(z) = [\partial f_j/\partial z_i]. Solution of (23) and (24) with the boundary conditions poses a two-point boundary-value problem which may be solved by the quasilinearization approach [21], [22]. Define a 2(m + p) vector

x'(nT) = [y_d'(nT), b', \lambda_y'(nT), \lambda_b'(nT)],


which describes (23) and (24), and which may be written as

x((n + 1)T) = g[x(nT)],   (25)

with boundary conditions

\langle c_i(jT), x(jT) \rangle = d_i(jT),   j = 0, N;   i = 1, 2, \ldots, (m + p),   (26)

where c and x are 2(m + p)-dimensional vectors and \langle\,,\,\rangle denotes the inner product. An initial estimate of the parameters b permits (25) to be solved, yielding an initial trajectory x^0(nT). The (q + 1)st approximation is then obtained from the qth by

x^{q+1}((n + 1)T) = g[x^q(nT)] + [\nabla_x g'[x^q(nT)]]'[x^{q+1}(nT) - x^q(nT)],   (27)

where [\nabla_x g']' is the Jacobian matrix having as its ijth element the partial derivative \partial g_i/\partial x_j. Equation (27) is linear in the (q + 1)st approximation, and the solution may be formed by obtaining the homogeneous and particular solutions and determining the necessary constants according to the boundary conditions of (26). Let \Phi^{q+1}(nT) be the fundamental matrix of

\Phi^{q+1}((n + 1)T) = [\nabla_x g'[x^q(nT)]]'\Phi^{q+1}(nT),

with

\Phi^{q+1}(0) = I,

the identity matrix, and P^{q+1}(nT) be the particular solution of

P^{q+1}((n + 1)T) = g[x^q(nT)] + [\nabla_x g'[x^q(nT)]]'[P^{q+1}(nT) - x^q(nT)],

where

P^{q+1}(0) = 0.

The solution for (27) is then

x^{q+1}(nT) = \Phi^{q+1}(nT)V^{q+1} + P^{q+1}(nT),   (28)

where the constant vector V^{q+1} is found from the boundary conditions by solving

\langle c_i(jT), \Phi^{q+1}(jT)V^{q+1} + P^{q+1}(jT) \rangle = d_i(jT),   j = 0, N;   i = 1, 2, \ldots, (m + p).   (29)

The process is terminated by satisfaction of some desired criterion, for example, a test for the change in the gain parameters between iterations, or a test for the change in the magnitude of the criterion of (22) between iterations.
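A compact sketch of the quasilinearization iteration (27)-(29) for a discrete two-point boundary-value problem x((n + 1)T) = g(x(nT)) is given below; it is illustrative only, and the function names, the Jacobian routine, and the boundary-condition format are assumptions of this sketch rather than definitions from the paper.

import numpy as np

def quasilinearize(g, jac, x_init, conds, iters=10):
    # x_init : initial trajectory guess, shape (N+1, d)
    # jac    : function returning the d x d Jacobian of g at a point
    # conds  : list of (j, c, d_val) meaning <c, x(jT)> = d_val with j in {0, N};
    #          exactly d conditions are expected, as in (26).
    x = np.array(x_init, dtype=float)
    N, d = x.shape[0] - 1, x.shape[1]
    for _ in range(iters):
        # Fundamental matrix Phi and particular solution p of the linearized recursion (27)
        Phi = np.zeros((N + 1, d, d))
        p = np.zeros((N + 1, d))
        Phi[0] = np.eye(d)
        for n in range(N):
            J = jac(x[n])
            Phi[n + 1] = J @ Phi[n]
            p[n + 1] = g(x[n]) + J @ (p[n] - x[n])
        # Solve (29) for the constant vector V from the boundary conditions
        A = np.array([c @ Phi[j] for j, c, _ in conds])
        rhs = np.array([d_val - c @ p[j] for j, c, d_val in conds])
        V = np.linalg.solve(A, rhs)
        x = np.einsum('nij,j->ni', Phi, V) + p    # x(nT) = Phi(nT) V + p(nT), per (28)
    return x

In the simulation-gain application, g collects the state equations (23), the adjoint equations (24), and the constancy of b, and the iteration is stopped when the gain parameters change negligibly between passes.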

Estimation of the initial values for the set of parameters b in the method of quasilinearization may require some careful consideration to assure convergence of the process. The region of convergence determined by parameter values appears dependent upon the system in question, the number of parameters to be adjusted, and the sampling interval T. The method of differential approximation, also discussed by Bellman [21], may be employed to obtain initial parameter estimates. Consider again (23a) for the state variables of the discrete approximation

y_d((n + 1)T) = f(y_d(nT), b),

where the parameters b are to be determined so that y_d(nT) closely approximates y_c(nT), the continuous system state. If some suitable values of parameters b_0 can be found so that

y_c((n + 1)T) = f(y_c(nT), b_0),

this set of parameters with the initial condition y_d(0) = y_c(0) will make y_d(nT) identical with y_c(nT). Such a set of parameters may not exist; however, b may be determined to make

y_c((n + 1)T) - f(y_c(nT), b)   (30)

as near zero as possible. The set of parameters may then be chosen so that

\sum_{n=0}^{N-1} \| y_c((n + 1)T) - f(y_c(nT), b) \|^2   (31)

is minimized with respect to b. The minimization may be accomplished by equating to zero the partial derivatives of (31) with respect to the components of b, yielding the equations

\sum_{n=0}^{N-1} [\nabla_b f'(y_c(nT), b)]\big[y_c((n + 1)T) - f(y_c(nT), b)\big] = 0,   (32)

where \nabla_b is the operator previously defined, and 0 is the p-dimensional null vector. The solution of these p equations in the components of b provides the desired initial estimates for the gain parameters.
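When the model f is linear in the gain parameters b, the conditions (32) reduce to an ordinary least-squares problem. The sketch below is an illustration under that assumption (the basis functions and the decaying-exponential test signal are hypothetical, not from the paper).

import numpy as np

def initial_gains(yc, basis):
    # Differential-approximation estimate of b for a scalar model
    # y((n+1)T) = sum_k b_k * basis_k(y(nT)), fitted to the continuous-system samples yc.
    Phi = np.array([[phi(y) for phi in basis] for y in yc[:-1]])
    target = np.asarray(yc[1:])
    b, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return b

# Example: fit y((n+1)T) = b1*y(nT) + b2*y(nT)**3 to samples of y(t) = 10 exp(-3t), T = 0.1.
t = np.arange(0.0, 2.0, 0.1)
yc = 10.0 * np.exp(-3.0 * t)
b = initial_gains(yc, [lambda y: y, lambda y: y ** 3])
# b[0] comes out near exp(-0.3) and b[1] near zero for this purely linear signal,
# giving a reasonable starting point for the quasilinearization iterations.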

APPLICATION AND COMPARISON OF METHODS

The methods discussed for digital simulation will now be applied to the simulation of a second-order nonlinear system. Comparison of the performance of the methods will be made by obtaining the discrete system response to a step input for several different sampling intervals and by determining the sum of error squared between the discrete system response and that of the continuous system for each case. The system considered here is the second-order nonlinear system shown in Fig. 5.

Since the first-order integration operator for the Tustin, Madwed-Truxal, and Boxer-Thaler methods is the trapezoidal rule, the discrete approximations resulting from application of these techniques to the present example are identical. Employing the trapezoidal rule integration operator to carry out the integrator substitution as discussed earlier, one obtains the discrete system, which may be implemented on a digital computer as a set of difference equations if a unit time delay is assumed in the feedback path to make the closed-loop approximation realizable.

Fig. 5. Nonlinear system for the applications example.

Insertion of the feedback path delay permits the following equations to be written for what will here be termed the Tustin approximation:

x_1((n + 1)T) = x_1(nT) + \frac{25T}{12}\big[x_2(nT) + 0.01x_2^3(nT) + x_2((n + 1)T) + 0.01x_2^3((n + 1)T)\big]

x_2((n + 1)T) = \frac{1 - 3T}{1 + 3T}x_2(nT) + \frac{3T}{1 + 3T}\big[r((n + 1)T) - x_1(nT) + r(nT) - x_1((n - 1)T)\big],

where the state variable notation corresponds to that of Fig. 5.

In developing the recursion formulas for the Anderson-Ball-Voss method, the linear operators of the system are discretized by assuming a linear time function to approximate the input to each operator. Again introducing a feedback path time delay, the difference equations are

x_1((n + 1)T) = x_1(nT) + \frac{25T}{24}\big[x_2((n + 1)T) + 0.01x_2^3((n + 1)T) + 4x_2(nT) + 0.04x_2^3(nT) + x_2((n - 1)T) + 0.01x_2^3((n - 1)T)\big]

x_2((n + 1)T) = e^{-6T}x_2(nT) + \Big[\tfrac{1}{2} - \tfrac{1}{6T}\Big](1 - e^{-6T})\big[r((n + 1)T) - x_1(nT)\big] + (1 - e^{-6T})\big[r(nT) - x_1((n - 1)T)\big] - \Big[\tfrac{1}{2} - \tfrac{1}{6T}\Big](1 - e^{-6T})\big[r((n - 1)T) - x_1((n - 2)T)\big].

Development of the required discrete operators for the IBM method was accomplished earlier in discussing the basic aspects of this approach. The results are applicable here if the nominal gain assumed for the nonlinearity is made unity for the linearized approximation. The recursion relationships for the discretized nonlinear system, when expressed for a general sampling interval T, are

x_1((n + 1)T) = x_1(nT) + \frac{25T}{6}\big[x_2((n + 1)T) + 0.01x_2^3((n + 1)T)\big]

x_2((n + 1)T) = e^{-6T}x_2(nT) + \frac{6}{25T}\big[C_1 r((n + 1)T) + C_2 r(nT) + C_3 r((n - 1)T) - C_4 x_1(nT)\big]

where

C_1 = 1 - e^{-1.5T}(\cos 2T + 0.75\sin 2T)
C_2 = e^{-1.5T}(\cos 2T + 0.75\sin 2T) + e^{-4.5T}(\cos 2T - 0.75\sin 2T) - 2e^{-3T}\cos 4T
C_3 = e^{-6T} - e^{-4.5T}(\cos 2T - 0.75\sin 2T)
C_4 = 1 + e^{-6T} - 2e^{-3T}\cos 4T.

Difference equations for the optimum discrete approximation for this example may be written immediately by utilizing the three-point extrapolation rule for integration developed previously and obtaining the minor loop approximation from Table II. The resulting equations take the form

x_1((n + 1)T) = x_1(nT) + \frac{25T}{12}\big[4x_2(nT) + 0.04x_2^3(nT) - 3x_2((n - 1)T) - 0.03x_2^3((n - 1)T) + x_2((n - 2)T) + 0.01x_2^3((n - 2)T)\big]

x_2((n + 1)T) = e^{-6T}x_2(nT) + \frac{1}{6T}\big(6T - 1 + e^{-6T}\big)\big[r((n + 1)T) - x_1((n + 1)T)\big] + \frac{1}{6T}\big(1 - e^{-6T} - 6Te^{-6T}\big)\big[r(nT) - x_1(nT)\big].

To formulate a basis for determination of the error in the discrete approximations, the solutions to the continuous system differential equations were also generated by means of a fourth-order Runge-Kutta method. Figure 6 illustrates the results obtained when the difference equations were solved for a number of different sampling intervals and the sum of error squared between the discrete system response and the continuous system response was computed for an observation period of 5 seconds and an input step of magnitude 10.
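The error measure plotted in Fig. 6 is simply the sum of squared differences between each discrete response and the Runge-Kutta reference at the common sample instants; a minimal sketch (illustrative only, with hypothetical array names) is shown below.

import numpy as np

def sum_squared_error(c_discrete, c_reference):
    # Sum of error squared between a discrete simulation and a reference solution
    # sampled at the same instants.
    e = np.asarray(c_discrete) - np.asarray(c_reference)
    return float(np.sum(e * e))

# Typical use: for each sampling interval T, run a discrete approximation and the
# Runge-Kutta reference over a 5-second observation period with a step of magnitude 10,
# then compare sum_squared_error(x1_discrete, x1_reference) across methods.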

For the input taken here the error criterion shows the decided advantage of the optimum discrete approximation at small sampling intervals, and also makes apparent the advantages of the IBM method for both low error and reduced sensitivity to changes in the sampling interval. A simulation of the example system in which the transient period of the response is of importance would employ a sampling interval not greater than 0.1 second, in order to adequately define state changes. In this range of sampling periods, the optimum discrete approximation offers advantages of simplicity of derivation as well as low error.

The quasilinearization approach to adjustment of simulation gain for error reduction was employed for the optimum approximation, with the result shown in Fig. 7. Perhaps the most significant features of the adjusted approximation are the extended region of stability achieved and the reduced sensitivity to change in sampling period. The quasilinearization process itself fails to converge for very large sampling intervals. Adjustment of two gain parameters in the optimum approximation required solution of eight simultaneous difference equations and resolution of the two-point boundary-value problem, for which quasilinearization was employed.


Fig. 6. Error analysis of example system with input step of magnitude 10.

Fig. 7. Error analysis showing results of quasilinearization on optimum approximation and IBM method with and without input approximation.

Convergence on the gain values was achieved within 3 to 5 iterations of the procedure. Rapid convergence for the gain parameters is generally assured by starting the process at small sampling intervals, for which the parameters are near unity in value. The final parameter values for one interval then become the initial estimates for the next interval and aid in accelerating the convergence.

Fig. 8. Variation of gain parameters of optimum discrete approximation (b1, integrator; b2, minor loop) as determined by the quasilinearization procedure.

Fig. 9. Example system response for input step of magnitude 10 and sampling interval of 0.1 second (Tustin, IBM, optimum with b_i = 1, and optimum with quasilinearization gains).

Figure 8 illustrates the modification of the gain parameters determined by the quasilinearization procedure. Additional evidence of the results achieved by means of this technique is found in Figs. 9 and 10, where the system step response is shown for the Tustin, IBM, and optimum approximations.

A final comparison of the IBM and optimum approximation methods is made via the sine wave response of the discrete systems previously formulated for step inputs. The results are presented in Figs. 11 and 12 for the response of the simulations to an input sine wave of peak amplitude 10 at the damped natural frequency of the continuous system.


Fig. 10. Optimum discrete simulation step response (optimum with b_i = 1 and optimum with quasilinearization gains), illustrating corrective action of quasilinearization gain adjustment.

Fig. 11. Error analysis for input of 10 sin 2t.

Fig. 12. Sine wave response for IBM (x) and optimum (o) discrete simulations.

For this case the IBM method offers a clear advantage in low error, although the sensitivity to sampling interval size is now approximately that of the optimum approximation. For both step and sine wave inputs, the superiority of the IBM method at increasing sample intervals is due to the presence of the input approximation. It is clear that use of input approximations for the optimum simulation would greatly improve its performance at larger sample periods. Performance of the IBM method without the input approximation is shown in Figs. 7 and 11.

CONCLUSIONS

A detailed study of the relative effectiveness of different approaches to digital simulation has revealed the improvement possible in simulation accuracy through use of recently developed methods for discrete approximation of continuous systems. While recognizing the specific nature of the example chosen for comparison of the simulation techniques, the results appear consistent with other reported work. Of the methods considered here, the IBM method possesses the desirable characteristics of low simulation error and reduced sensitivity of simulation error to changes in sampling interval for certain types of inputs. The "optimum" discrete approximation technique offers a means of achieving very low simulation error for small sampling intervals and relative simplicity of derivation of the discretized system. Reduction in simulation error and extended simulation stability for increased sampling intervals is possible through modification of simulation gains via the method of quasilinearization and by introducing input approximations.

A basic limitation to the use of the quasilinearization technique for parameter identification in the discrete model is the possible instability of the identification procedure itself for large sample intervals. Concise and meaningful measures for the regions of convergence of the process for arbitrary sampling interval and initial parameter estimate are not now available, but would be of great value to the simulation designer.


REFERENCES

[1] R. W. Hamming, Numerical Methods for Scientists and Engineers. New York: McGraw-Hill, 1962.
[2] F. B. Hildebrand, Introduction to Numerical Analysis. New York: McGraw-Hill, 1956.
[3] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations. New York: Wiley, 1962.
[4] J. R. Ragazzini and G. F. Franklin, Sampled-Data Control Systems. New York: McGraw-Hill, 1958.
[5] J. T. Tou, Digital and Sampled-Data Control Systems. New York: McGraw-Hill, 1959.
[6] E. I. Jury, Theory and Application of the z-Transform Method. New York: Wiley, 1963.
[7] "Numerical techniques for real-time digital flight simulation," IBM Manual E20-0029-1, 1964.
[8] W. D. Fryer and W. C. Schultz, "A survey of methods for digital simulation of control systems," Cornell Aeronautical Lab., Buffalo, N.Y., Rept. XA-1681-F-1, July 1964.
[9] M. Blum, "Recursion formulas for growing memory digital filters," IRE Trans. on Information Theory, vol. IT-4, pp. 24-30, March 1958.
[10] A. Tustin, "A method of analysing the behaviour of linear systems in terms of time series," J. IEE, vol. 94, pt. II-A, May 1947.
[11] A. Madwed, "Number series method of solving linear and nonlinear differential equations," M.I.T. Instrumentation Lab., Cambridge, Mass., Rept. 6445-T-26, April 1950.
[12] J. G. Truxal, "Numerical analysis for network design," IRE Trans. on Circuit Theory, vol. CT-1, pp. 49-60, September 1954.
[13] R. Boxer and S. Thaler, "A simplified method of solving linear and nonlinear systems," Proc. IRE, vol. 44, pp. 89-101, January 1956.
[14] R. Boxer, "A note on numerical transform calculus," Proc. IRE, vol. 45, pp. 1401-1406, October 1957.
[15] W. H. Anderson, R. B. Ball, and J. R. Voss, "A numerical method for solving differential equations on digital computers," J. ACM, vol. 7, pp. 61-68, January 1960.
[16] M. C. Fowler, "A new numerical method for simulation," Simulation, vol. 4, pp. 324-330, May 1965.
[17] J. M. Hurt, "New difference equation technique for solving nonlinear differential equations," 1964 AFIPS Conf. Proc., vol. 25, pp. 169-179.
[18] A. P. Sage and R. W. Burt, "Optimum design and error analysis of digital integrators for discrete system simulation," 1965 AFIPS Conf. Proc., vol. 27, pt. 1, pp. 903-914.
[19] A. P. Sage, "A technique for the real-time digital simulation of nonlinear control processes," 1966 Proc. IEEE Region 3 Conf.
[20] J. T. Tou, Modern Control Theory. New York: McGraw-Hill, 1964.
[21] R. Bellman and R. Kalaba, Quasilinearization and Nonlinear Boundary-Value Problems. New York: American Elsevier, 1965.
[22] R. McGill and P. Kenneth, "Solution of variational problems by means of a generalized Newton-Raphson operator," AIAA J., vol. 2, pp. 1761-1766, October 1964.
[23] W. Hurewicz, "Filters and servo systems with pulsed data," in Theory of Servomechanisms, M.I.T. Rad. Lab. Ser., vol. 25. New York: McGraw-Hill, 1947.
[24] J. E. Gibson, Nonlinear Automatic Control. New York: McGraw-Hill, 1963, pp. 94-159.
[25] J. M. Salzer, "Frequency analysis of digital computers operating in real time," Proc. IRE, vol. 42, pp. 457-466, February 1954.
[26] H. Freeman, Discrete-Time Systems. New York: Wiley, 1965.
[27] R. E. Kalman and J. E. Bertram, "A unified approach to the theory of sampling systems," J. Franklin Inst., vol. 267, pp. 405-436, May 1959.
[28] R. K. Adams, "Digital computer analysis of closed-loop systems using the number series approach," Trans. AIEE (Applications and Industry), vol. 80, pt. II, pp. 370-378, January 1961.
[29] M. Cuenod, Méthodes de calcul à l'aide de suites. Lausanne: Imprimerie de la Concorde, 1955.
[30] J. M. McCormick and M. G. Salvadori, Numerical Methods in Fortran. Englewood Cliffs, N.J.: Prentice-Hall, 1964.
