(Slide transcript of "Nonlinear System Identification"; uploaded by anurag-shah, 11 Nov 2015)

Nonlinear System Identification

System Description

Identification of nonlinear systems using black-box methods has gained considerable attention in recent decades. This is mainly because the theory of linear system identification is already well developed, owing to the relative simplicity of linear models, but also because in several cases linear system identification may not give satisfactory results when the system has highly nonlinear dynamics. In some nonlinear systems, a linear model can still be sufficient for the purpose of control. In other cases, where the dynamics of the system vary considerably between operating points, this may not be possible. An example where linear system identification fails is flight control, where gain scheduling is often used to overcome the problem.

Nonlinear model structures

The most common model structures used for nonlinear system identification are discussed below.

Discrete-time nonlinear difference equations

NARX and NARMAX are the nonlinear equivalents of the ARX (AutoRegressive with eXogenous inputs) and ARMAX (AutoRegressive Moving Average with eXogenous inputs) model structures. The ARMAX model structure is as follows:

y(t) + a1 y(t-1) + ... + a_ny y(t-ny) = b1 u(t-d) + ... + b_nu u(t-d-nu+1) + e(t) + c1 e(t-1) + ... + c_ne e(t-ne)

The ARMAX model structure is very general: any linear finite-order system with a stationary disturbance and rational spectral density can be described using it. By introducing nonlinearities into the model structure, the ARMAX is generalized to a nonlinear ARMAX structure, denoted NARMAX. For a single-input single-output system, the structure is

y(t) = F[y(t-1), ..., y(t-ny), u(t-d), ..., u(t-d-nu+1), e(t-1), ..., e(t-ne)] + e(t)

F(.) is an arbitrary nonlinear function; it can, for instance, be chosen as a polynomial of the arguments. ny, nu and ne are the orders of each signal, and d is the time delay of the input. ARX and NARX can be regarded as special cases of the ARMAX and NARMAX model structures, respectively.

Block-oriented nonlinear models

To describe nonlinear systems, block-oriented nonlinear models can sometimes be used. Such models divide the system into linear and nonlinear subsystems, i.e. blocks. The most common block-oriented models include:

1) Hammerstein models are in many cases a good approximation of nonlinear systems. The model structure consists of a static nonlinearity followed by a linear transfer function. The input and the output are measurable, but the intermediate signal is not accessible. The parameters of the model are then identified, for instance, by separating the linear and nonlinear parts and identifying them iteratively. Examples where Hammerstein models have proven useful are the identification of electrically stimulated muscles and of power amplifiers.
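Because the input and output are measurable but the intermediate signal is not, one common trick is to expand the static nonlinearity in a polynomial basis, which makes the whole Hammerstein model linear in its parameters. A sketch with a hypothetical plant (numpy only; the basis and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
u = rng.uniform(-1, 1, N)

# Hypothetical Hammerstein plant: static nonlinearity w = u + 0.5 u^3,
# followed by the linear filter y(t) = 0.8 y(t-1) + 0.4 w(t-1)
w = u + 0.5 * u**3
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t-1] + 0.4 * w[t-1] + 0.005 * rng.standard_normal()

# Linear-in-parameters regression: y(t) on y(t-1), u(t-1), u(t-1)^3
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # should be close to [0.8, 0.4, 0.2] (0.4*1 and 0.4*0.5 for the basis terms)
```

The price of this trick is over-parameterization for richer bases, since products of the linear and nonlinear coefficients are estimated jointly; iterative separation, as described above, recovers the two blocks individually.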

Block diagram of the Hammerstein model structure

2) Wiener models are similar to Hammerstein models. This model structure consists of a linear transfer function followed by a static nonlinearity at the output of the system. The output nonlinearity might, for instance, represent sensor nonlinearity, but also other nonlinearities in the system that take effect at the output. This model structure has also proven useful in several practical applications, such as chemical processes, and it is extensively used in biomedical applications.

3) Wiener-Hammerstein models are combinations of the Hammerstein and Wiener models, in which the system is described by two linear models enclosing a static nonlinear function.

4) Hammerstein-Wiener models represent the other combination of the Wiener and Hammerstein models. In the Hammerstein-Wiener structure, nonlinearities appear at both the input and the output, and the dynamics are described by a linear transfer function. Identification of this model structure is performed using the prediction error method.

Block diagram of the Hammerstein-Wiener model structure

Identification methods

Recursive and non-recursive identification

System identification can be done in a recursive or non-recursive manner. Recursive identification refers to identification algorithms that use information from previous steps, whereas non-recursive methods use all available data at once.

Recursive algorithms can therefore be used on-line and can compensate for changes in the dynamics of the process. In chemical processes, for instance, where the dynamics of the system can change significantly with temperature, recursive identification can be very useful.

Nonlinear System Identification

Nonlinear system identification is the task of determining or estimating a system's input-output relationship based on (possibly noisy) output measurements,

y(t) = f[u(t), u(t-1), ..., y(t-1), y(t-2), ...] + e(t)

where the error e(t) is noise, disturbance, or another source of error in the measurement process.
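The recursive idea can be sketched with the classic recursive least-squares (RLS) update for a linear-in-parameters model; the first-order ARX system below is a hypothetical example:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta = theta + K * (y - phi @ theta)   # correct with the prediction error
    P = (P - np.outer(K, phi @ P)) / lam    # covariance update
    return theta, P

# Track a first-order ARX system y(t) = a*y(t-1) + b*u(t-1) + noise
rng = np.random.default_rng(1)
theta = np.zeros(2)
P = 1000 * np.eye(2)
y_prev, a, b = 0.0, 0.7, 1.2
for t in range(300):
    u = rng.uniform(-1, 1)
    y = a * y_prev + b * u + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, np.array([y_prev, u]), y)
    y_prev = y

print(theta)  # approaches [0.7, 1.2]
```

The forgetting factor lam < 1 discounts old data, which is what lets the estimate follow slow parameter drift on-line.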

Structure Selection

NARX model

NARMAX model

Volterra series

Polynomial model
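For the polynomial NARX structure in the list above, identification reduces to ordinary least squares once the regressor set is fixed. A sketch with a hypothetical simulated system:

```python
import numpy as np

# Hypothetical nonlinear plant used to generate data:
# y(t) = 0.5 y(t-1) - 0.3 y(t-2) + 0.8 u(t-1) + 0.2 y(t-1) u(t-1) + noise
rng = np.random.default_rng(0)
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = (0.5 * y[t-1] - 0.3 * y[t-2] + 0.8 * u[t-1]
            + 0.2 * y[t-1] * u[t-1] + 0.01 * rng.standard_normal())

# Candidate polynomial regressors of past outputs and inputs
def regressors(y, u, t):
    return np.array([y[t-1], y[t-2], u[t-1], y[t-1] * u[t-1], u[t-1]**2])

Phi = np.array([regressors(y, u, t) for t in range(2, N)])
target = y[2:]

# Ordinary least squares estimate of the polynomial coefficients
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(theta)  # should recover approximately [0.5, -0.3, 0.8, 0.2, 0.0]
```

Note that the u(t-1)^2 term is a spurious regressor here; its estimated coefficient falling near zero is one informal signal used during structure selection.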

Optimization Algorithms

Steepest descent

Conjugate gradient

Levenberg-Marquardt

Gauss-Newton
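For a flavour of these optimizers, below is a minimal Gauss-Newton iteration fitting a hypothetical exponential model y = a*exp(b*x); the other algorithms in the list differ mainly in how the step is computed (e.g. Levenberg-Marquardt adds damping to the normal equations):

```python
import numpy as np

# Synthetic data from the model y = a * exp(b * x) with a = 2.0, b = -0.5
x = np.linspace(0, 4, 50)
y = 2.0 * np.exp(-0.5 * x)

def residual(p):
    a, b = p
    return a * np.exp(b * x) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])   # dr/da, dr/db

p = np.array([1.5, -0.6])                    # initial guess near the solution
for _ in range(20):
    J, r = jacobian(p), residual(p)
    # Gauss-Newton step: solve the normal equations (J^T J) dp = -J^T r
    dp = np.linalg.solve(J.T @ J, -J.T @ r)
    p = p + dp

print(p)
```

On this zero-residual problem Gauss-Newton converges rapidly from a good start; far from the solution, or with large residuals, the Levenberg-Marquardt damping term is what keeps the step well-behaved.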

Neural Network based System Identification

Why NN? Its universal approximation property.

Steps involved: first, the nonlinear model structure is chosen; then a training algorithm modifies the weights of the neural network to minimize the error criterion.

Mathematical Model of the Neuron (Linear Synaptic Operation)

Synaptic operation: a linear mapping from many inputs to one, v = w^T x, where x0 = 1 is the bias input and w0 the bias weight.

Somatic operation: a nonlinear one-to-one mapping y = f(v).

A Simple Way to Explain the Adaptation Rule

The error e = yd - y (y: neural output, yd: desired output) drives the weight update: the change in each weight is proportional to the learning rate, the error, and the corresponding neural input.

The conventional neural models are highly simplified models of the biological neurons.

They consider only the linear summation of the weighted neural inputs.

Using these models, many neural morphologies, usually referred to as multilayered feedforward neural networks (MFNNs), have been reported in the literature. Some potential applications are: pattern recognition, system identification, function approximation, and neuro-control systems.
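A single sigmoid neuron trained with the adaptation rule above (with the activation derivative f'(v) included, i.e. the delta rule) can be sketched as follows; the OR function is used as a hypothetical target:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)   # w[0] acts as the bias weight (for x0 = 1)

# Hypothetical target: the OR function
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])  # first column is the bias input x0 = 1
d = np.array([0, 1, 1, 1])          # desired outputs yd

eta = 2.0                            # learning rate
for _ in range(2000):
    for xk, yd in zip(X, d):
        v = w @ xk                   # synaptic operation: v = w^T x
        y = sigmoid(v)               # somatic operation:  y = f(v)
        e = yd - y                   # error
        w += eta * e * y * (1 - y) * xk   # delta rule: dw = eta * e * f'(v) * x

pred = (sigmoid(X @ w) > 0.5).astype(int)
print(pred)
```

A single neuron suffices here because OR is linearly separable; the multilayered networks discussed next are needed precisely when the target mapping is not.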

Neural Network based System Identification

Error criterion: mean squared error between target and actual outputs, averaged over all samples.

Optimization algorithms: steepest descent, conjugate gradient, Levenberg-Marquardt, Gauss-Newton.

Concepts of Neural Network Control

The Artificial Neural Network (ANN), inspired by the human brain, has been applied to almost all fields of engineering, including control theory and applications. As far as control is concerned, we mainly use the universal function approximation property of an ANN. By virtue of this property, we can convert the nonlinear function learning process into a parametric learning process: learn a set of unknown parameters which are either linear or nonlinear in the ANN.

New problem: consider the system

where the nonlinear function is unknown. All we know is that it is smooth and well defined in a function space over a compact set. The control task is to force the state x to follow a given reference trajectory xr asymptotically.

Three-Layer MLP

Numerous types of ANN have been proposed, such as the MLP network, RBF network, B-spline network, Hopfield network, wavelet network, etc.

A 3 layer MLP with sigmoid nonlinear activation functions can approximate a smooth nonlinear function in a compact set with arbitrary precision.

Function Approximation by ANN

The two main characteristics associated with the MLP are its universal approximation ability and back-propagation learning. There are three milestones in ANN history. In the 1950s, single-layer perceptrons were proposed to mimic the nervous system; however, their linear nature made it difficult to generalize to nonlinear systems. The MLP was then proposed, but it was unclear how to carry out parametric learning in such a highly nonlinear structure. The learning problem was solved in 1986 by the back-propagation learning algorithm, presented in the well-known book Parallel Distributed Processing by Rumelhart, Hinton and Williams. In 1991, three proofs were given to show the function approximation property of the MLP. The third milestone is of particular importance to control researchers, as they can use ANNs to address nonlinear problems without worrying about the theoretical foundation.

Back-Propagation Algorithm (BPA)

The BPA is a gradient descent method aiming at minimizing a quadratic error criterion.

Let us derive the parametric learning law (BPA). First consider the output layer weights. By using the chain rule, we obtain the gradient of the error criterion with respect to each output-layer weight, and hence the steepest-descent update for that weight.

The convergence property can be derived accordingly. Next consider the hidden layer weights. By using the chain rule, and viewing the output layer weights as already-tuned (fixed) parameters, we obtain the corresponding update, in which the output-layer error terms are propagated back through the hidden-layer nonlinearity.
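The two chain-rule updates can be written out concretely for a one-hidden-layer MLP with sigmoid hidden units and a linear output; the layer sizes, learning rate, and target function y = sin(x) below are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: fit y = sin(x) on [-pi, pi] with a 1-8-1 MLP
x = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
d = np.sin(x)

W1 = rng.normal(scale=1.0, size=(1, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)   # linear output layer

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

eta = 0.3
losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(x @ W1 + b1)                 # hidden activations f(v_j)
    y = h @ W2 + b2                          # network output
    e = y - d
    losses.append(float(np.mean(e**2)))
    # backward pass: chain rule through each layer
    delta2 = 2 * e / len(x)                  # dE/dy for the quadratic criterion
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # back-propagated through f'(v)
    W2 -= eta * h.T @ delta2; b2 -= eta * delta2.sum(0)
    W1 -= eta * x.T @ delta1; b1 -= eta * delta1.sum(0)

print(losses[0], losses[-1])  # the MSE should drop substantially
```

The hidden-layer update is where "viewing the output weights as fixed" appears: delta2 is pushed back through W2 and multiplied by the sigmoid derivative h*(1-h) before touching W1.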

Back Propagation Algorithm(BPA)37It would be difficult to use the BP algorithm as an on-line learning mechanism, due to several reasons. First, it is easy to to stuck at local minima because of the highly nonlinear relationship in the parametric space arising from the inclusion of the hidden layer. Second, the computation is time-demanding, especially when the number of hidden layers is large. Third, it is discrete in nature, thus is sensitive to step-size(learning gain ). Above all, BP algorithm belongs to the category of supervised learning Neural Network Control38The NN-based control structure presents the following advantages in comparison with the traditional ones: the controller design does not rely on a mathematical model; the drive is completely self-commissioned and does not require any tuning procedure; the on-line training of the NN-based speed controller makes the drive insensitive to parameter variations; the transient state position tracking error is reduced due to the presence of the NN-based position controller in place of the P position controller. ANN Control System Configuration39

Direct NN control directly learns the controller parameters.

1. Direct NN control

2. Indirect NN control: learns the plant parameters, which are used to construct the controller. It is difficult to analyze the closed-loop stability of indirect NNC.

NNs

NNs are able to emulate any unknown nonlinear system when presented with a suitable set of input-output patterns generated by the plant to be modeled. The learning capabilities of NNs are fully utilized only if the weights and biases are updated on-line. This adaptive property of NN-based control makes the drive system robust to load disturbances and parameter variations.

NEURAL NETWORKS FOR DYNAMICAL SYSTEMS

The feedforward NNWs discussed in the previous section can give only a static input-output nonlinear (or linear) mapping; i.e., the output remains fixed for a fixed input at any instant. In many applications, a NNW is required to be dynamic; that is, it should be able to emulate a dynamic system with temporal behavior, such as identification of a machine model, or control and estimation of flux, speed, etc. Such a network has a storage property like that of a capacitor or inductor.

Recurrent Network

A recurrent neural network (RNN) normally uses feedback from the output layer to an earlier layer, and is often called a feedback network. Fig. 5(a) shows the general structure of a two-layer real-time RNN. The feedback network with time delays shown there can emulate a dynamical system: the output depends not only on the present input but also on prior inputs, giving the network temporal behavior. If, for example, the input is a step function, the response will reverberate in the time domain until a steady-state condition is reached at the output. The network can emulate the nonlinear differential equations that characterize a nonlinear dynamical system. Of course, if the transfer functions of the neurons are linear, it will represent a linear system. Such a network can be trained by the dynamic back-propagation algorithm.

Fig. 5. (a) Structure of a real-time recurrent network (RNN).

Fig. 5. (b) Block diagram for training.

The desired time-domain output from the reference dynamical system (plant) can be used step-by-step to force the ANN (or RNN) output to track it, by tuning the weights dynamically, sample-by-sample. As an example, consider a one-input one-output RNN which is desired to emulate a series nonlinear R-L-C circuit [the plant in Fig. 5(b)].

A step voltage signal x(k) is applied to the plant and the RNN simultaneously. The current response y(k+1) of the plant is the target signal, and it is used to tune the RNN weights. The RNN will then emulate the R-L-C circuit model. On the same principle, the RNN can emulate a complex dynamical system. An example application in a power electronic system, adaptive flux estimation using the EKF algorithm, will be discussed later.
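The series R-L-C plant of Fig. 5(b) can be discretized to generate the step-response target data used for tuning the RNN weights. The component values and the forward-Euler discretization below are assumptions for illustration:

```python
import numpy as np

# Hypothetical series R-L-C circuit: L di/dt + R i + vC = x(t), C dvC/dt = i
R, L, C = 1.0, 0.1, 0.05      # assumed component values (underdamped)
Ts = 0.001                    # sampling period

def plant_step_response(n_steps, amplitude=1.0):
    """Current response y(k) of the series R-L-C circuit to a step voltage x(k)."""
    i, vC = 0.0, 0.0
    y = []
    for _ in range(n_steps):
        x = amplitude                        # step input voltage
        di = (x - R * i - vC) / L            # inductor equation
        dv = i / C                           # capacitor equation
        i, vC = i + Ts * di, vC + Ts * dv    # forward-Euler update
        y.append(i)
    return np.array(y)

# y(k) is the sample-by-sample target the RNN is trained to reproduce
# when driven by the same step input x(k).
y = plant_step_response(5000)
print(y.max(), y[-1])
```

With these values the response is a damped oscillation that decays to zero current at steady state, which is exactly the kind of temporal behavior a static feedforward network cannot capture.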

Fig. 6. Time-delayed neural network with tapped delay line.

Fig. 7. Neural network with time-delayed input and output.

Static NNW With Time-Delayed Input and Feedback

The structure of the NNW is shown in Fig. 7, where there are time-delayed inputs as well as time-delayed outputs fed back as input signals. In this case, the NNW is required to emulate the dynamical system given by

y(k+1) = f[y(k), ..., y(k-n), x(k), ..., x(k-m)]

The NNW can be trained from the input-output temporal data of the plant by the dynamic back-propagation method. As mentioned before, the training data can be generated experimentally from the plant, or from simulation results if a mathematical plant model is available. If the plant parameters vary, the NNW model generated by offline training is no longer valid; in such a case, online training of the NNW with adaptive weights is essential. It should be mentioned that the structure of the NNW depends on the nature of the dynamical system to be emulated.

Fig. 8. Training of the inverse dynamic model of a plant.

C. Neural Network Control of a Dynamical System

The control of a dynamical system, such as an induction motor vector drive, by an AI technique is normally called intelligent control. Although all branches of AI have been used for intelligent control, only NNW-based control is covered in this section.

Inverse Dynamics-Based Adaptive Control

The identification of the forward dynamical model of a plant has been discussed so far. It is also possible to identify the inverse plant model by training. In this case, the plant response data y(k) is impressed as the input of the NNW, and its calculated output is compared with the plant input, which is the target data. The resulting error trains the network, as shown, until it falls to an acceptable minimum value. After satisfactory training and testing, the NNW represents the inverse dynamical model of the plant. This NNW-based inverse model can be placed in series as a controller with the actual plant so that the plant forward dynamics is totally cancelled.

Fig. 9. Inverse dynamic model-based adaptive control of a plant.

Model Reference Adaptive Control (MRAC)

The plant output is desired to track the dynamic response of the reference model. The reference model can be represented by dynamical equations and solved in real time by a DSP. The error signal between the actual plant output and the reference model output trains the NNW controller online so as to drive the tracking error to zero.

I. MRAC by neural network (direct method)

II. MRAC by neural network (indirect method)

The plant with the NN has a response identical to that of the reference model. The problem with the direct method is that the plant lies between the controller and the error, so there is no way to propagate the error backward through the controller by error back-propagation training. The problem is solved by the indirect method: an NN identification model is first generated to emulate the forward model of the plant. This model is then placed in series with the NN controller (instead of the actual plant) to track the reference model. The tuning of the NN controller is now convenient through the NN model.

Fig. 11. Training of a neural network for emulation of the actual controller.

Nonlinear ARMA (NARMAX)

ARMAX model:

If we assume the system to be noise-free, this reduces to an ARMA model.

A wide range of nonlinear systems can be represented as a NARMAX model (when the outputs are assumed to be error free: NARX).

Nonlinear System Modelling

Using NNs, we can develop an approximation to the nonlinear mapping of the NARX model. Any class of NNs capable of approximating nonlinear mappings can be used to learn the function f(.). According to Narendra & Parthasarathy, a nonlinear system can be identified as a member of the following models:

M1:

M2:

M3:

M4:

Identification Using Neural Networks

The most useful approaches to nonlinear system identification are based on NARMAX modelling.

There are two structures for this identification

Parallel configuration

Parallel-series configuration

The use of neural networks emerges as a feasible solution for nonlinear system identification: their universal approximation properties make them a useful tool for modeling nonlinear systems. Neural networks such as the MLP and RBF give successful results in combination with evolutionary algorithms.

Identification of a Nonlinear System using an MLPNN

Parallel-series NARX SI approach using an RBFNN

Block diagram representation of nonlinear system identification and control using artificial neural networks

Identification of nonlinear plants using neural networks

[Figure: nonlinear plant and neural network in parallel, with tapped delay lines (TDL) and a z^-1 delay; the difference between the two outputs drives the training.]

Indirect adaptive control using neural networks

[Figure: reference model, neural network identifier, neural network controller, and nonlinear plant, with TDL and z^-1 blocks.]

Case Study: Application to a DC Motor Problem

An ANN can be trained to emulate the unknown nonlinear plant dynamics by presenting a suitable set of input/output patterns generated by the plant. An ANN-based identification and control topology using the MRAC platform for trajectory control of a DC motor is presented.

DC Motor Model

The DC motor dynamics are given by the following two equations:

J dp(t)/dt = K ia(t) - D p(t) - TL(t)
La dia(t)/dt = Vt(t) - Ra ia(t) - K p(t)

where
p(t) - rotor speed
Vt(t) - terminal voltage
ia(t) - armature current
TL(t) - load torque
J - rotor inertia
K - torque and back-emf constant
D - damping constant
Ra - armature resistance
La - armature inductance

The load torque TL(t) can be expressed as TL(t) = mu [sign(p(t))] p²(t), where mu is a constant.

Discrete-Time DC Motor Model

The discrete-time speed equation governing the system dynamics is given by

p(k+1) = a1 p(k) + a2 p(k-1) + b1 [sign(p(k))] p²(k) + b2 [sign(p(k-1))] p²(k-1) + c Vt(k)

where a1, a2, b1 and b2 are constants determined by the motor parameters and the sampling period, c is the input gain, and the [sign(p)] p² terms come from the quadratic load torque.
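Writing the coefficients of the discrete speed equation as a1, a2, b1, b2 and c, it can be simulated directly; the numerical values below are purely illustrative assumptions:

```python
import numpy as np

# Illustrative coefficients for
# p(k+1) = a1 p(k) + a2 p(k-1) + b1 sign(p(k)) p(k)^2 + b2 sign(p(k-1)) p(k-1)^2 + c Vt(k)
a1, a2 = 0.9, 0.05
b1, b2 = -0.01, -0.005   # quadratic load-torque terms oppose the motion
c = 0.1

def step(p_k, p_km1, vt):
    """One step of the discrete-time speed equation."""
    return (a1 * p_k + a2 * p_km1
            + b1 * np.sign(p_k) * p_k**2
            + b2 * np.sign(p_km1) * p_km1**2
            + c * vt)

# Speed response to a constant terminal voltage Vt = 10
p = [0.0, 0.0]
for k in range(1, 200):
    p.append(step(p[k], p[k-1], vt=10.0))

print(p[-1])  # settles near 20/3 for these illustrative coefficients
```

The steady-state speed follows from setting p(k+1) = p(k) = p(k-1): the linear gain (1 - a1 - a2) and the quadratic load terms balance the applied voltage, which is precisely the nonlinearity the ANN identifier must capture.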

Identification of the DC Motor Model

The speed equation can be rearranged into the form

Vt(k) = g[p(k+1), p(k), p(k-1)]

where the function g[.] is given by

g[p(k+1), p(k), p(k-1)] = {p(k+1) - a1 p(k) - a2 p(k-1) - b1 [sign(p(k))] p²(k) - b2 [sign(p(k-1))] p²(k-1)} / c

with a1, a2, b1, b2 and c the model constants, and is assumed to be unknown. An ANN is trained to emulate the unknown function g[.]. However, as p(k+1) is not readily available, the voltage equation is shifted back one step:

Vt(k-1) = N[p(k), p(k-1), p(k-2)]

where N[.] denotes the ANN approximation of g[.].

The structure of the ANN identifier

[Figure: DC motor and neural network identifier with z^-1 delay elements.]

Trajectory Control of the DC Motor using the Trained ANN

The objective of the control system is to drive the motor so that its speed p(k) follows a pre-specified trajectory m(k). The following second-order reference model is chosen:

m(k+1) = 0.6 m(k) + 0.2 m(k-1) + r(k)

For a given desired sequence {m(k)}, the corresponding control sequence {r(k)} can be calculated using the above relation.

ANN structure showing both identification and control of the DC motor

[Figure: reference model, ANN identifier, ANN controller, and DC motor, with z^-1 delay elements.]
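Given the reference model above, the control sequence {r(k)} that reproduces a desired trajectory {m(k)} follows by simple rearrangement; the sinusoidal trajectory below is illustrative:

```python
import numpy as np

# Desired trajectory: a sinusoid sampled at k = 0, 1, ...
k = np.arange(100)
m = np.sin(0.1 * k)

# Reference model: m(k+1) = 0.6 m(k) + 0.2 m(k-1) + r(k)
# Rearranged:      r(k)   = m(k+1) - 0.6 m(k) - 0.2 m(k-1)
r = m[2:] - 0.6 * m[1:-1] - 0.2 * m[:-2]

# Check: driving the reference model with r reproduces the trajectory
m_hat = list(m[:2])
for rk in r:
    m_hat.append(0.6 * m_hat[-1] + 0.2 * m_hat[-2] + rk)
print(np.allclose(m_hat, m))  # True
```

In the control loop, this precomputed r(k) drives the reference model while the online-trained ANN controller forces the motor speed p(k) to follow m(k).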

Tracking performance for a sinusoidal reference track

NEURAL NETWORK CONTROL

B. Subudhi and S. S. Ge, "Sliding mode control and observer based slip ratio control of electric and hybrid electric vehicles," IEEE Trans. on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1617-1626, 2012.

Performance of ASMC and NN on a slippery road. (a) Braking torque. (b) Slip. (c) Estimation error. (d) Required voltage.

The speed at the (k+1)-th time step can be predicted from

p̂(k+1) = 0.6 p(k) + 0.2 p(k-1) + r(k)

This result can be fed to the ANN to estimate the control input at the k-th time step using

V̂t(k) = N[p̂(k+1), p(k), p(k-1)]

The matrix corresponds to the reference model coefficients [0.6 0.2].

THANK YOU