
Learning issues in model reference based fuzzy control

D.S. Reay and M.W. Dunnigan

Indexing terms: Fuzzy model reference learning control, Control behaviour, Proportional plus integral action, Nonlinear problems

Abstract: The question of learning in fuzzy model reference learning control (FMRLC) is investigated and it is proposed that the extent and importance of learning may potentially be less than has been claimed previously. An alternative view of FMRLC, incorporating proportional plus integral action in place of the updating mechanism of the fuzzy logic controller, and which has similarities with a form of linear high-gain robust control, is presented. The two approaches are applied to two nonlinear problems, discussed previously in the literature, and the results compared. It is concluded that although FMRLC, as applied to these two examples, is an effective form of control, it does not recall previously learned control behaviour as claimed.

1 Introduction

In recent years, fuzzy control has grown in popularity. Its use has been proposed in applications that extend beyond those in which it is sought to replace human operators, having first extracted from them linguistic expressions of their expert knowledge. Fuzzy control is now often considered for applications in which human operators have never been used, e.g. servomechanisms. In such applications, it is the facility of fuzzy systems to synthesise nonlinear control laws, or functions, or to act as function approximators, rather than their correspondence to linguistically expressed expert knowledge, that is exploited. Nonlinear control has the potential to outperform linear methods. However, the issue arises of how to design nonlinear fuzzy control laws. Granted that in applications where human operators have never been involved, expert control design knowledge (or conventional control theory) may be used in setting up a rulebase, another option is to construct learning, or adaptive, fuzzy systems to synthesise suitable rulebases automatically.

© IEE, 1997
IEE Proceedings online no. 19971438
Paper first received 8th July 1996 and in revised form 6th May 1997
The authors are with the Department of Computing and Electrical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, UK

2 Model reference based fuzzy control

Fuzzy control systems based on a model reference architecture, for which adaptive and/or learning properties are claimed, have been reported by a number of researchers [1-3]. These share the common architecture shown in Fig. 1. The principal components of this system are a reference model, a primary, or direct, fuzzy logic controller, and an adaptation mechanism.

Fig. 1 Model reference based fuzzy control

The reference model embodies the desired performance characteristics of the overall system. Typically, this is a first-order or well-damped second-order linear system, although it could alternatively be nonlinear.
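For illustration only (the particular form and parameter value below are an assumption, not taken from [1-3]), a well-damped second-order reference model might be

\[ \frac{Y_m(s)}{Y_r(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \qquad \zeta \approx 0.9 \]

so that ym(t) defines a smooth, essentially non-oscillatory target trajectory for the plant output to follow.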

The primary, or direct, fuzzy logic controller (FLC) might be implemented using fuzzy logic, neural networks or a combination of these or other technologies, as investigated by Nie and Linkens [4-7]. The common, and most important, feature of the direct fuzzy controllers used in the examples cited in this paper is that they implement an adjustable nonlinear mapping between inputs (usually including, but not limited to, linear functions of the error, e(t)) and output (the control action, u(t)).

The role of the adaptation mechanism is to adjust the characteristic of the FLC in response to the error, ye(t), between the outputs of the reference model and the plant, in order to minimise that error in some sense, i.e. with respect to some norm.

The adaptation mechanisms presented in [1, 2] may further be subdivided into an inverse plant model, designed to give an indication of the required correction, Δu(t), to the control signal u(t), and an updating algorithm in order to effect that correction via the direct fuzzy logic controller.

Behmenberg and Schwarz [3] present a taxonomy of inverse plant models. An inverse plant model might be implemented in any one of a variety of different forms. Layne and Passino [2], for example, use a fuzzy system. Nie [1] does not refer to an inverse model but simply to adaptation gains.

In each of these cases, however, the effective inverse plant model is of a gradient type and the iterative nature of the updating algorithm is relied upon for convergence. For a nonlinear plant, y(t) (and possibly other signals) might be required as additional inputs to a nonlinear inverse plant model.

The updating algorithm modifies the FLC characteristic such that its output u(t), corresponding to a particular input, is altered by, or in the sense of, the correction value Δu(t). The means by which the updating algorithm is derived differs in the approaches of [1-3] in relation to the timescales over which adaptation takes place. Whereas Nie bases his adaptation on the technique of iterative learning [8], in which adaptation takes place over several repetitions of an action, Layne demonstrates faster convergence than this in a number of applications [2, 10]. Each author claims that learning is achieved.

3 Learning in model reference based fuzzy control

In this Section the learning properties of two schemes that share the model reference architecture described in the previous Section, but differ in their respective adaptation mechanisms, are reviewed briefly.

3.1 Iterative learning

Nie and Linkens [5, 6] base their approach on the iterative learning method, described by Arimoto et al. [8], which concerns the control of repetitive actions. Arimoto iteratively modified a stored sequence of control values u(kT), covering a finite period of time, as the result of repetitive trials, such that u(kT) tended to um(kT), the (unknown a priori) control sequence yielding the desired response ym(kT).

Whereas Arimoto considered that the sequence u(kT) would be replayed sequentially from memory as a feedforward term, in Nie's approach previously learned control values are referenced not by time but by values of the inputs to the FLC, e(kT) and its discrete time derivative, c(kT). This allows for a control sequence other than exactly that recorded during the final iteration of learning to be output by the controller. However, the motivation for this approach and its intended benefits are not discussed in any depth by Nie. Effectively, Nie's FLC can interpolate between training examples, i.e. samples from the sequence u(kT), and thereby affords the possibility of output values other than exactly those of the samples with which it was trained.

However, the FLC output will be zero if the pairs of values of e(kT) and c(kT) input subsequently are not close to those experienced during training.

Subsequently, Nie and Linkens developed an active online learning scheme [4] very similar to FMRLC. Even then, however, they concentrated on learning control sequences for specific step responses iteratively, i.e. they did not claim that a global control law is learned, and their scheme requires several repetitions of the same trajectory in order to learn.

3.2 Fuzzy model reference learning control (FMRLC)

Rather than memorising a particular control sequence after a number of repetitions, FMRLC (Fig. 2) is presented as learning a more global control function with faster convergence. The FLC employed in [2, 9-16] has been described many times. Adaptation is effected by adjusting the centre values of the output fuzzy sets.
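A minimal sketch of an update of this kind is given below, assuming a centre-of-gravity style FLC whose output-set centres are shifted in proportion to the firing strengths of the rules that contributed to the previous control action; the names and the exact form are illustrative and are not Layne and Passino's notation.

    def update_centres(centres, firing_strengths, delta_u):
        """Shift the centres of the output fuzzy sets of the rules that were
        active, in proportion to their firing strengths, by the correction
        delta_u produced by the inverse plant model (illustrative sketch)."""
        for i, mu in enumerate(firing_strengths):
            if mu > 0.0:
                centres[i] += mu * delta_u
        return centres

Because only the rules that were active, i.e. a local region of the rulebase, are modified, knowledge stored elsewhere in the rulebase is left untouched; this locality is central to the arguments about recall discussed later in this paper.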

         Yc-5  Yc-4  Yc-3  Yc-2  Yc-1  Yc0   Yc1   Yc2   Yc3   Yc4   Yc5
Ye-5      1.0   1.0   1.0   1.0   1.0   1.0   0.8   0.6   0.4   0.2   0.0
Ye-4      1.0   1.0   1.0   1.0   1.0   0.8   0.6   0.4   0.2   0.0  -0.2
Ye-3      1.0   1.0   1.0   1.0   0.8   0.6   0.4   0.2   0.0  -0.2  -0.4
Ye-2      1.0   1.0   1.0   0.8   0.6   0.4   0.2   0.0  -0.2  -0.4  -0.6
Ye-1      1.0   1.0   0.8   0.6   0.4   0.2   0.0  -0.2  -0.4  -0.6  -0.8
Ye0       1.0   0.8   0.6   0.4   0.2   0.0  -0.2  -0.4  -0.6  -0.8  -1.0
Ye1       0.8   0.6   0.4   0.2   0.0  -0.2  -0.4  -0.6  -0.8  -1.0  -1.0
Ye2       0.6   0.4   0.2   0.0  -0.2  -0.4  -0.6  -0.8  -1.0  -1.0  -1.0
Ye3       0.4   0.2   0.0  -0.2  -0.4  -0.6  -0.8  -1.0  -1.0  -1.0  -1.0
Ye4       0.2   0.0  -0.2  -0.4  -0.6  -0.8  -1.0  -1.0  -1.0  -1.0  -1.0
Ye5       0.0  -0.2  -0.4  -0.6  -0.8  -1.0  -1.0  -1.0  -1.0  -1.0  -1.0

Fig. 3 Rulebase for fuzzy inverse model (after [2]); rows are indexed by the error sets Ye and columns by the change-in-error sets Yc
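The banded structure of the rulebase in Fig. 3 is simply a saturated linear function of the row and column indices. The sketch below regenerates the table; the index convention (-5 to 5 for the error and change-in-error sets) is an interpretation of the figure rather than notation taken from [2].

    def inverse_model_rulebase():
        """Regenerate the rule consequent centres of Fig. 3:
        entry(j, k) = -sat((j + k) / 5), clipped to [-1, 1],
        for j, k = -5, ..., 5."""
        sat = lambda x: max(-1.0, min(1.0, x))
        return [[-sat((j + k) / 5.0) for k in range(-5, 6)]
                for j in range(-5, 6)]

The fact that the whole rulebase collapses to a clipped linear function of its two inputs is what makes the proportional plus derivative reading of the inverse model, illustrated in Fig. 5, plausible.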

The inverse model used by Layne and others [2, 9-11, 13] is implemented similarly to the FLC, i.e. as a fuzzy system, but in this case the rulebase is pre-specified and is not subsequently modified. Supervised tuning of the inverse model has been implemented by Kwong et al. [17], although this feature is not fundamental to the operation of FMRLC.

Fig. 2 Fuzzy model reference learning control (FMRLC)



A fuzzy inverse model of this form could be nonlinear. However, the rulebase and input fuzzy set membership functions presented by Layne and others [9-15], and reproduced in Figs. 3 and 4, are essentially linear over the normal operating range and very similar to proportional and derivative action gains applied to the error signal ye(t), as illustrated in Fig. 5, where gye, gyc and gp are the input and output scaling gains of the fuzzy inverse plant model. A nonlinear inverse model has been implemented in [17], although this feature is not fundamental to the operation of FMRLC.
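Within the linear region of Figs. 3 and 4, the fuzzy inverse model output can therefore be approximated by a fixed linear combination of the model-following error and its rate of change. As a sketch of the reading illustrated in Fig. 5 (an approximation, not an exact identity),

\[ \Delta u(kT) \approx g_p \left[\, g_{ye}\, y_e(kT) + g_{yc}\, y_c(kT) \,\right] \]

where yc(kT) denotes the scaled change in ye(kT) from one sample to the next.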

Fig. 4 Input fuzzy set membership functions used in inverse model (sets Yc-5 to Yc5 on the normalised universe of discourse -1.0 to 1.0)

Fig. 5 Alternative inverse plant models (assuming rulebase of Fig. 3 and membership functions of Fig. 4)

4 FMRLC viewed as linear high-gain robust control

The application of FMRLC to a variety of different problems [2, 9-16] has been reported. Fast convergence, synthesis of nonlinear control functions and the ability to recall control actions corresponding to different operating conditions learned previously are claimed in a number of different applications, and these qualities are attributed to the learning properties of the system. In this Section, it is proposed that the learning properties of FMRLC in these examples have been overemphasised and that the effectiveness of FMRLC may be viewed from an alternative perspective. It is proposed that FMRLC is very similar to a nonadaptive, high-gain robust linear control scheme having no learning capabilities.

An alternative interpretation of FMRLC is that the adaptation mechanism forms the primary feedback loop around the plant and its action is simply to force the plant output y(kT) to track the reference model output ym(kT). This feedback path comprises the inverse plant model and the updating algorithm. The FLC may be replaced by an alternative implementation of the signal path from Δu(kT) (nominally an adjustment rather than an input to the FLC) to u(kT).

If the inputs to the FLC were constant then the action of Layne's updating algorithm would be to integrate Δu(kT), storing it in the centre values of a number of output fuzzy sets. This integral action would be greatest as the system approached steady state. If, on the other hand, the FLC inputs varied rapidly with respect to the input fuzzy membership functions, integration of Δu(kT) would be minimised and the function relating u(kT) to Δu(kT) would resemble a simple gain. In general, the degree of integral action applied to Δu(kT) will be determined by the variation of the inputs relative to the input fuzzy membership functions, in a manner that depends on a number of factors including the sampling rate T, the characteristics (both width and shape) of the input fuzzy membership functions and the normalising gains ge and gc.

A simple approximation to this, however, is provided by a fixed linear proportional plus integral (PI) characteristic. If this is adopted, it follows that the control signal u(kT) no longer depends directly on the signal e(kT) and its derivative c(kT), i.e. the inputs to the FLC. The FMRLC architecture may therefore be modified as shown in Fig. 6. There can be no long-term learning involved in this control scheme.
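A minimal sketch of the fixed linear controller of Fig. 6 is given below: discrete-time proportional plus derivative action standing in for the fuzzy inverse model, followed by proportional plus integral action standing in for the FLC updating algorithm. The class and gain names are illustrative assumptions and do not correspond to the tuning used in the simulations of Section 5.

    class LinearFmrlcInterpretation:
        """PD action (in place of the fuzzy inverse model) followed by
        PI action (in place of the FLC updating algorithm), as in Fig. 6."""

        def __init__(self, kpe, kde, kp, ki, T):
            self.kpe, self.kde = kpe, kde   # PD gains of the 'inverse model'
            self.kp, self.ki = kp, ki       # PI gains replacing the update law
            self.T = T                      # sampling period
            self.prev_ye = 0.0
            self.integral = 0.0

        def step(self, ym, y):
            ye = ym - y                               # model-following error
            yc = (ye - self.prev_ye) / self.T         # discrete derivative
            self.prev_ye = ye
            delta_u = self.kpe * ye + self.kde * yc   # PD 'inverse model' output
            self.integral += delta_u * self.T         # accumulate delta_u
            return self.kp * delta_u + self.ki * self.integral  # PI action

With the integral term dominant near steady state and the proportional term dominant during fast transients, this reproduces the qualitative behaviour attributed above to the updating algorithm.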

In this system, the reference model acts as a prefilter on the reference input yr(kT). This interpretation of FMRLC is similar to the two-degree-of-freedom, high-gain robust controller architecture described by Åström and Wittenmark [18]. The combination of proportional plus derivative action in the inverse model, followed by proportional plus integral action replacing the updating algorithm, is equivalent to PID control.

5 Simulation results

Layne and Passino [10] and Kwong and Passino [11] found that the performance of FMRLC compared favourably with that of conventional MRAC schemes based on linear models of a nonlinear plant. In this paper, FMRLC is compared with the form of robust high-gain linear control described in the previous Section. The examples appear to demonstrate that the degree of learning claimed for FMRLC by the aforementioned authors is not achieved in the references cited. The case studies concern nonlinear, time-varying plants for which Layne and Kwong considered it difficult to design conventional controllers.

Fig. 6 Alternative linear interpretation of FMRLC


5.1 Rocket velocity control

The following example is taken from [2]. The rocket is a time-varying nonlinear process. The simplified model of the rocket dynamics used in the simulations is given by

\[ \frac{dy(t)}{dt} = \frac{m\,u(t) - \tfrac{1}{2}\rho_a C_d A\, y^2(t)}{M - mt} - g\left(\frac{R}{R + a(t)}\right)^{2} \]

where y(t) is the rocket velocity at time t, a(t) is the altitude of the rocket (above sea level), u(t) is the velocity of the exhaust gases, M = 15000 kg is the initial mass of the rocket and fuel, m = 100 kg/s is the exhaust gas mass flow rate, A = 1.0 m^2 is the maximum cross-sectional area of the rocket, g = 9.81 m/s^2 is the acceleration due to gravity at sea level, R = 6.37 × 10^6 m is the radius of the earth, ρa = 1.21 kg/m^3 is the density of air and Cd = 0.3 is the drag coefficient of the rocket.
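A minimal simulation sketch of the rocket model as reconstructed above is given below, using Euler integration. The air density is held at its sea-level value and the altitude is obtained by integrating the velocity; both are simplifying assumptions made here for illustration, and this is not the authors' S-function implementation.

    # Parameters quoted in Section 5.1
    M = 15000.0    # initial mass of rocket and fuel, kg
    mdot = 100.0   # exhaust gas mass flow rate, kg/s
    A = 1.0        # maximum cross-sectional area, m^2
    g = 9.81       # acceleration due to gravity at sea level, m/s^2
    R = 6.37e6     # radius of the earth, m
    rho_a = 1.21   # density of air, kg/m^3 (held constant here)
    Cd = 0.3       # drag coefficient

    def rocket_step(y, a, u, t, dt):
        """One Euler step of the rocket model: y velocity (m/s), a altitude (m),
        u exhaust gas velocity (the control input), t elapsed time (s)."""
        dydt = ((mdot * u - 0.5 * rho_a * Cd * A * y ** 2) / (M - mdot * t)
                - g * (R / (R + a)) ** 2)
        return y + dt * dydt, a + dt * y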

Exhaust gas velocity, u(t), is considered to be the control input, varied by means of the cross-sectional area of the rocket motor nozzle. Plots of simulated rocket velocity, y(t), and exhaust gas velocity, u(t), were presented in [2].

The system was simulated using MATLAB. The scaling of the input and output variables, the universes of discourse, membership functions and sampling rate used in the FLC and inverse plant model were similar to those in [2]. The rocket dynamics, fuzzy inverse plant model and FLC were implemented using S-functions.

It was found that several of the parameters of the FLC and inverse plant model, e.g. the number of input fuzzy sets and the input scaling gains ge and gc, could be adjusted, and the performance altered, apparently without causing instability. This, effectively, is acknowledged by Layne when describing a method for tuning FMRLC by trial and error. Equally, and again as indicated by Layne, unwanted oscillations could be induced by parameter variation. Nie and Linkens too found that increasing the adaptation gains could cause a system to oscillate [7].

It proved difficult to reproduce Layne's results exactly. The reasons for this may include slight differences in the implementation of the FLC. However, in so far as the simulated rocket velocity y(t) was made to track the output of the reference model ym(kT) closely, the effectiveness of FMRLC was demonstrated. The control signal u(kT) shown in Fig. 7 is very similar to that reported in [2]. For each successive step change in rocket velocity, the corresponding control signal transient is smaller in magnitude and tends towards a higher value. This is consistent with the nonlinear, time-varying dynamics of the rocket.

Figs. 8 and 9 show the results of simulation of the rocket velocity control problem using linear interpretations of FMRLC. In the case of the results shown in Fig. 8, the input to the FLC, e(kT), was held at zero, effectively removing the negative feedback connection in the primary closed-loop system and reducing the role of the FLC to that of integrating Δu(kT). In the case of the results shown in Fig. 9, the inverse plant model was replaced by discrete-time proportional plus derivative action and the updating mechanism used in the FLC was replaced by discrete-time proportional plus integral action, as shown in Fig. 6. The rocket velocity may be controlled without the need to learn or adapt.


Fig. 7 Simulation results for FMRLC control of rocket velocity

Fig. 8 Simulation results for FMRLC control of rocket velocity (primary feedback removed)



Fig. 9 Simulation results for high-gain linear control of rocket velocity

Fig. 10 Simulation results for FMRLC control of rocket velocity (learning disabled after one previous simulation)


In the application described above, FMRLC appears to have a limited ability to learn. The FLC, having inputs e(kT) and c(kT), has no way of discriminating between step changes in rocket velocity that occur at different altitudes or at different times. In other words, it cannot discriminate between different operating conditions for this time-varying nonlinear plant and, for this reason, it cannot learn how to control the plant at different operating points for future recall as suggested in [2]. As a consequence of the local generalisation properties of the FLC, different parts of the rulebase are filled in for different values of the input variables. However, during successive step changes in rocket velocity, the information learned and stored by the FLC is overwritten repeatedly. At the end of the simulation, the control function, or sequence, corresponding to the first step change in velocity has been forgotten. These effects are illustrated in Fig. 10, which shows the results of a subsequent simulation in which learning was disabled. The fact that some information is stored in the FLC is evident from the nonzero control signal u(kT), but the FLC does not act as a satisfactory controller.

Furthermore, the simulation results suggest that FMRLC does not rely on learning properties in order to achieve effective control.

Fig. 11 Magnetic ball suspension system (after [11])

5.2 Magnetic ball suspension system

A further example that illustrates limitations to the extent to which FMRLC may learn concerns the control of the magnetic suspension of a steel ball. This example was investigated by Kwong and Passino [11]. The nonlinear system, shown in Fig. 11, is described by

\[ M\,\frac{d^2 y(t)}{dt^2} = Mg - \frac{i^2(t)}{y(t)}, \qquad v(t) = R\,i(t) + L\,\frac{di(t)}{dt} \]

where y(t) is the ball position in metres, M = 0.1 kg is the ball mass, g = 9.8 m/s^2 is the acceleration due to gravity, R = 50 Ω is the winding resistance, L = 0.5 H is the winding inductance, v(t) is the input voltage and i(t) is the winding current.
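A minimal sketch of the continuous-time model as reconstructed above is given below. The i^2/y form of the electromagnetic force term is the usual textbook form and should be read as an assumption here; this is not Kwong and Passino's implementation.

    # Parameters quoted in Section 5.2
    M = 0.1    # ball mass, kg
    g = 9.8    # acceleration due to gravity, m/s^2
    R = 50.0   # winding resistance, ohm
    L = 0.5    # winding inductance, H

    def ball_derivatives(y, ydot, i, v):
        """State derivatives: y ball position (m), ydot its rate of change,
        i winding current (A), v applied voltage (the control input)."""
        yddot = g - i ** 2 / (M * y)   # from M*y'' = M*g - i^2/y
        didt = (v - R * i) / L         # from v = R*i + L*di/dt
        return ydot, yddot, didt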

Kwong demonstrated successful ball position control using FMRLC for a square wave reference input but went on to illustrate some apparent shortcomings of the scheme. FMRLC failed to control the position of the ball for a reference input comprising the sum of two sinusoids, i.e. yr(t) = 0.05(sin(t) + sin(10t)).

Kwong claimed that different parts of the rulebase of his controller were filled in based on different operating conditions of the system, and therefore that it both adapted to new situations and remembered how it had adapted to past situations.



In fact, his results for the square wave input sequence demonstrate that long-term learning in FMRLC is not possible in his approach to this application.

Fig. 12 shows the results of a simulation of the magnetic ball suspension system using a linear interpretation of Kwong's FMRLC design, using the same square wave input sequence as in [11]. The inverse plant model was replaced by discrete-time proportional plus derivative action and the FLC updating algorithm was replaced by fixed discrete-time proportional plus integral action.

Fig. 12 Simulation results for high-gain linear control of ball position (step input sequence)

The control signal v(kT) is very similar to that reported in [11]. As in [11], for successive step changes in demanded ball position, at times t = 2s and t = 4s, v(kT) follows a different trajectory even though the inputs to the FLC follow similar trajectories. In particular, just prior to the steps at t = 4s and t = 6s, the inputs to the FLC, e(kT) and c(kT), are both approximately equal to zero.


However, the output at t = 4s (-19 V) is not the same as at t = 6s (-16 V). The FLC does not, at t = 6s, remember its output at t = 4s.

Given that similar trajectories in e(t) and c(t) are followed in the intervals 2s < t < 4s and 4s < t < 6s, but that the control outputs over those time intervals are quite different, the control output for the earlier of the two periods must have been overwritten or forgotten. At t = 6s, the FLC can at best remember only how to control the ball under the circumstances experienced during the period 4s < t < 6s, and may remember even less, i.e. the centre values of the output fuzzy sets may reflect previous control outputs only for a shorter time period.

Given the apparent instability of the FMRLC scheme for a different input signal, Kwong investigated a number of schemes whereby the performance of FMRLC in this application could be improved. These concerned adapting and offsetting the normalisation gains at the inputs to the FLC. Paradoxically, these schemes actually depend on overwriting previously stored information efficiently, making long-term learning impossible.

Fig. 13 shows the results for the sum of two sinusoids input. The high-gain linear control system performs as well as, if not better than, the modified FMRLC presented by Kwong.

Fig. 13 Simulation results for high-gain linear control of ball position (sum of sinusoids input sequence)

It is concluded from the simulation results that the learning properties of FMRLC are not as claimed in this application. FMRLC cannot recall previously learned control actions corresponding to different operating conditions if, as in the examples presented by Layne and Kwong, the FLC input variables cannot discriminate between those different operating conditions.



A robust high-gain linear control scheme, derived from FMRLC but having no learning capabilities, has been demonstrated, leading again to the conclusion that learning capabilities are not essential to the effectiveness of FMRLC and suggesting that the performance of FMRLC may be understood in terms of the adaptation mechanism acting as a high-gain feedback controller forcing the plant output y(t) to follow the output of the reference model ym(t).

6 Conclusions

FMRLC has previously been demonstrated (at least in simulation) to be an effective control method with apparent applicability to a range of nonlinear problems. It addresses the question of how to design a rulebase automatically where, for whatever reason, expert knowledge for that purpose is not available. However, a number of issues concerning FMRLC do not appear to have been resolved fully. Its design is essentially ad hoc, relying on trial and error. Even though a systematic design procedure has been suggested [2], there are no known proofs of its stability. Indeed, Kwong and Passino [11] demonstrated that different input signals can lead to instability in FMRLC. They suggest modifications to the basic scheme but, again, these do not guarantee stability. More recently, the stability properties of a closely related control scheme have been investigated [19].

This paper has explored the hypothesis that the learning capabilities of FMRLC may be less well developed than has been claimed. It has been demonstrated that, in application examples reported previously in the literature, FMRLC does not learn in the manner previously claimed. However, its effectiveness is not disputed. Instead, an alternative interpretation of the control scheme with no learning capabilities, namely a linear high-gain robust controller incorporating PI action in place of the FLC updating algorithm, has been proposed. On the basis of its simulated performance, it is concluded that this is a valid substitution and that it may be appropriate to view FMRLC as a robust system rather than as a learning system.


7 References


1 NIE, J., and LINKENS, D.A.: 'FCMAC: a fuzzified cerebellar model articulation controller with self-organising capacity', Automatica, 1994, 30, (4), pp. 655-664

2 LAYNE, J., and PASSINO, K.M.: 'Fuzzy model reference learning control', J. Intell. Fuzzy Syst., 1996, 4, (1), pp. 33-47

3 BEHMENBERG, C., and SCHWARZ, H.: 'Model reference adaptive systems with fuzzy logic controllers'. 2nd IEEE conference on Control applications, Vancouver, BC, 1993, pp. 171-176

4 LINKENS, D.A., and NIE, J.: 'Back-propagation neural network based fuzzy controller with a self-learning teacher', Int. J. Control, 1994, 60, (1), pp. 17-39

5 LINKENS, D.A., and NIE, J.: 'Constructing rule-bases for multivariable fuzzy control by self-learning. Part 1: System structure and learning algorithms', Int. J. Syst. Sci., 1993, 24, pp. 111-127

6 LINKENS, D.A., and NIE, J.: 'Constructing rule-bases for multivariable fuzzy control by self-learning. Part 2: Rule-base formulation and blood pressure control application', Int. J. Syst. Sci., 1993, 24, pp. 129-167

7 NIE, J., and LINKENS, D.A.: 'Fuzzy neural control: principles, algorithms and applications' (Prentice-Hall, 1995)

8 ARIMOTO, S., KAWAMURA, S., and MIYAZAKI, F.: 'Bettering operation of robots by learning', J. Robotic Syst., 1984, 1, pp. 123-140

9 LAYNE, J., PASSINO, K.M., and YURKOVICH, S.: 'Fuzzy learning control for antiskid braking systems', IEEE Trans. Control Syst. Technol., 1993, 1, (2), pp. 122-129

10 LAYNE, J., and PASSINO, K.M.: 'Fuzzy model reference learning control for cargo ship steering', IEEE Control Syst. Mag., December 1993, 13, (6), pp. 23-34

11 KWONG, W.A., and PASSINO, K.M.: 'Dynamically focused fuzzy learning control', IEEE Trans. Syst. Man Cybern. B, 1996, 26, (1), pp. 57-74

12 TANG, Y., and XU, L.: 'Fuzzy logic application for intelligent control of a variable speed drive', IEEE Trans. Energy Convers., 1994, 9, (4), pp. 679-685

13 LENNON, W.K., and PASSINO, K.M.: 'Intelligent control for brake systems'. Proc. 1995 IEEE ISIC, Monterey, CA, 1995, pp. 499-504

14 PASSINO, K.M.: 'Intelligent control for autonomous systems', IEEE Spectrum, June 1995, 32, (6), pp. 55-62

15 MOUDGAL, V.G., KWONG, W.A., PASSINO, K.M., and YURKOVICH, S.: 'Fuzzy learning control for a flexible-link robot'. Proc. ACC, Baltimore, MD, 1994, pp. 563-567

16 KWONG, W.A., PASSINO, K.M., and YURKOVICH, S.: 'Fuzzy learning systems for aircraft control law reconfiguration'. Proc. 1994 IEEE ISIC, Columbus, OH, 1994, pp. 333-338

17 KWONG, W.A., PASSINO, K.M., LAUKONEN, E.G., and YURKOVICH, S.: 'Expert supervision of fuzzy learning systems for fault tolerant aircraft control', Proc. IEEE, 1995, 83, (3), pp. 466-483

18 ÅSTRÖM, K.J., and WITTENMARK, B.: 'Adaptive control' (Addison-Wesley, 1989)

19 SPOONER, J.T., ORDONEZ, R., and PASSINO, K.M.: 'Stable direct adaptive control of a class of discrete time nonlinear systems'. Proc. 13th IFAC Triennial World Congress, San Francisco, CA, 1996, pp. 343-348
