
Additional Lecture Notes for the course SC4026/EE4C04

Introduction modeling and control

Ton J.J. van den Boom

September 7, 2015

Delft University of Technology

Delft Center for Systems and Control
Delft University of Technology
Mekelweg 2, NL-2628 CD Delft, The Netherlands
Tel. (+31) 15 2784052, Fax (+31) 15 2786679
Email: [email protected]


Contents

Preface

1 Signals and Systems
  1.1 Signals
  1.2 Systems
  1.3 Exercises

2 Modeling of dynamical systems
  2.1 Domains of dynamical systems
  2.2 Examples of modeling
  2.3 Input-output models
  2.4 State systems
  2.5 Exercises

3 Analysis of first-order and second-order systems
  3.1 First-order systems
  3.2 Second-order systems
  3.3 Exercises

4 General system analysis
  4.1 Transfer functions
  4.2 Time responses
  4.3 Time response using Laplace transform
  4.4 Impulse response model: convolution
  4.5 Analysis of state systems
  4.6 Relation between various system descriptions
  4.7 Stability
  4.8 Exercises

5 Nonlinear dynamical systems
  5.1 Modeling of nonlinear dynamical systems
  5.2 Steady state behavior and linearization
  5.3 Exercises

6 An introduction to feedback control
  6.1 Block diagrams
  6.2 Control configurations
  6.3 Steady state tracking and system type
  6.4 PID control
  6.5 Exercises

Appendix A: The inverse of a matrix

Appendix B: Laplace transforms

Appendix C: Answers to exercises

Index

References


Preface

Engineers have always been interested in the dynamic phenomena they observe when studying physical systems. Mechanical engineers are interested in the interaction between forces and motion, while electrical engineers want to know more about the relation between current and voltage. To facilitate their study they utilize the concept of a system, a mathematical abstraction that is devised to serve as a model for a dynamic phenomenon. It represents the dynamic phenomenon in terms of mathematical relations among the input and the output of the system, usually called the signals of the system. A system is, of course, not limited to modeling only physical dynamic phenomena; the concept is equally applicable to abstract dynamic phenomena such as those encountered in economics or other social sciences.

The motions of the planets, the weather system, the stock market, a simple chemical reaction, and the oscillating air in a trumpet are all examples of dynamic systems, in which some phenomena evolve in time.

The main maxim of science is its ability to relate cause and effect. On the basis of the laws of gravity, for example, astronomical events such as eclipses and the appearances of comets can be predicted thousands of years in advance. Other natural phenomena, however, are so complex that they appear to be much more difficult to predict. Although the movements of the atmosphere (wind), for example, obey the laws of physics just as much as the movements of the planets do, long-term weather prediction is still rather problematic.

The objective of this course is to present an introduction to the modeling, analysis, and control of dynamical systems. In Chapter 1 we discuss some basic concepts of signals and systems. In Chapter 2 we study the modeling of linear dynamic systems in the domains of mechanical, electrical, electromechanical, and fluid/heat flow systems. Chapter 3 analyzes first-order and second-order systems. Chapter 4 continues the analysis of Chapter 3, but now for general (and higher-order) systems. In Chapter 5 we discuss the extension to nonlinear dynamical systems and introduce the concept of linearization. In Chapter 6 an introduction to feedback control is given.

Acknowledgements

The author would like to thank Peter Heuberger, Bart De Schutter, Nicolas Weiss, Martijn Leskens and Rufus Fraanje for the fruitful discussions on the topic of modeling and control and for their comments on the draft version of these lecture notes.


Chapter 1

Signals and Systems

A signal is an indicator of a phenomenon, arising in some environment, that may be described quantitatively. Examples of signals are mechanical signals such as forces, displacement, and velocity, or electrical signals such as voltages and currents in an electrical circuit.

Engineers and physicists have utilized the concept of a system to facilitate the study of the interaction between forces and matter for many years. A system is a mathematical abstraction that is devised to describe a part of the environment the properties of which we want to study. It describes the relation between certain phenomena that can occur in this environment. A dynamical system describes such relations with respect to time. A mechanical setup is a typical example of a dynamical system, because the forces, accelerations, velocities and positions that exist within the system are related in time.

We will now more formally define the terms signal and system.

1.1 Signals

In the context of this course, a signal is an information carrier of a (physical) phenomenon. Since any signal is always one out of a collection of many possible signals, signals may mathematically be represented as elements of a set, called the signal set. In this section we introduce various kinds of signals.

We use the symbol R to denote the set of all real numbers, C to denote the complex numbers, and Z to denote all integers. The symbol N is used to denote all nonnegative integers (the positive integers including zero). We will use R+ to denote all nonnegative real numbers.

The signals we are interested in are functions of a variable that is usually time. The domain of a signal is a subset T of the real line R and is called the time axis. The signal takes values in a set W, called the signal space. The formal definition of a signal is as follows.

Definition 1.1 (Signal) Let W be a set and suppose that T is a subset of the reals R. Then, any function x : T → W is called a signal with signal axis T and signal space W.

Example 1.1 (Signals)


(a) Mechanical signal. The velocity of a mass is a time signal with signal axis T = (−∞,∞) and signal space W = R.

(b) Electrical signal. The voltage across a capacitor is a time signal with signal axis T = [t0,∞) (measurement of the voltage starts at time t0) and signal space W = R.

(c) Hydraulic signal. The oil pressure difference across a fluid resistance in a hydraulic system is a signal.

(d) Thermal signal. The heat flow through a wall in a thermal system is a signal.

What is not a signal? For example, the value of a resistor in an electric circuit is usually constant, and therefore considered not to be a signal. The same holds for other constant system parameters, such as the mass in a mechanical system or the thermal capacity of a room in a heat-flow system.

Elementary signals

We now give some elementary signals.

The rectangular function: The rectangular function δT : R → R for a given scalar T > 0 is defined as

δT(t) = 1/T for 0 ≤ t ≤ T, and δT(t) = 0 elsewhere.    (1.1)

Figure 1.1: Rectangular function

The unit impulse function: The unit impulse function δ : R → R is defined as

δ(t) = lim_{T→0} δT(t)

where δT(t) as defined above is a block function with constant amplitude 1/T and duration T. For T → 0 the amplitude goes to infinity while the duration approaches zero; the area under the pulse remains equal to 1. This leads to the following definition of the unit impulse function:

δ(t) = 0 for t ≠ 0 ,   ∫_{−∞}^{∞} δ(t) dt = 1    (1.2)

Figure 1.2: Unit impulse function

Note that the amplitude for t = 0 is not defined, but the integral (area under the pulse) is equal to 1.

The unit impulse function is often also referred to as the Dirac delta function. An important property of the unit impulse function is that it can filter out values of a function f through integration:

∫_{−∞}^{∞} f(t) δ(t) dt = f(0)    (1.3)

and

∫_{−∞}^{∞} f(t) δ(t − τ) dt = f(τ)    (1.4)
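
The sifting properties (1.3)-(1.4) can also be checked symbolically. Below is a minimal sketch using sympy's DiracDelta; the test function f(t) = e^{−t²} is an arbitrary illustration choice, not taken from the notes:

    import sympy as sp

    t, tau = sp.symbols('t tau', real=True)
    f = sp.exp(-t**2)  # an arbitrary smooth test function

    # Sifting at t = 0: the integral of f(t)*delta(t) over the real line gives f(0)
    print(sp.integrate(f * sp.DiracDelta(t), (t, -sp.oo, sp.oo)))          # 1
    # Sifting at t = tau: the integral of f(t)*delta(t - tau) gives f(tau)
    print(sp.integrate(f * sp.DiracDelta(t - tau), (t, -sp.oo, sp.oo)))    # exp(-tau**2)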

Figure 1.3: Unit step function

The unit step function: The unit step function us : R → R is defined as:

us(t) = 0 for t < 0, and us(t) = 1 for t ≥ 0.    (1.5)


Note that the derivative of the unit step function is the unit impulse function, so δ(t) = d us(t)/d t.

The unit ramp function: The unit ramp function ur : R → R is defined to be a linearly increasing function of time with a unit increment:

ur(t) = 0 for t ≤ 0, and ur(t) = t for t > 0.    (1.6)

Figure 1.4: Unit ramp function

Note that the derivative of the unit ramp function is the unit step function, so us(t) = d ur(t)/d t.

Figure 1.5: Unit parabolic function

The unit parabolic function: The unit parabolic function up : R → R is defined to be a parabolically increasing function of time:

up(t) = 0 for t ≤ 0, and up(t) = t²/2 for t > 0.    (1.7)

Note that the derivative of the unit parabolic function is the unit ramp function, so ur(t) = d up(t)/d t.


Harmonic functions: The well-known sine function sin : R → R and cosine function cos : R → R can be written as exponentials

cos(ωt) = (e^{jωt} + e^{−jωt}) / 2
sin(ωt) = (e^{jωt} − e^{−jωt}) / (2j)    (1.8)

with j² = −1.

Figure 1.6: Harmonic functions

Damped harmonic functions: A damped harmonic function is a harmonic function multiplied by an exponential function:

e^{σt} cos(ωt) = (e^{(σ+jω)t} + e^{(σ−jω)t}) / 2
e^{σt} sin(ωt) = (e^{(σ+jω)t} − e^{(σ−jω)t}) / (2j)    (1.9)

Figure 1.7: Damped harmonic functions for σ < 0


The impulse, step, ramp, and parabolic functions are often called singularity functions.

Finally we introduce a single dot to denote the time derivative of a signal, so

ẏ(t) = d y(t)/d t

and we use a double dot to denote the second time derivative of a signal, so

ÿ(t) = d² y(t)/d t²
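
As an illustration, the elementary signals above are easy to implement numerically. The sketch below is in Python with NumPy, an assumed tooling choice since the notes themselves prescribe none:

    import numpy as np

    def rect(t, T):
        """Rectangular function delta_T(t): height 1/T on [0, T], zero elsewhere."""
        return np.where((t >= 0) & (t <= T), 1.0 / T, 0.0)

    def unit_step(t):
        """Unit step u_s(t)."""
        return np.where(t >= 0, 1.0, 0.0)

    def unit_ramp(t):
        """Unit ramp u_r(t) = t for t > 0."""
        return np.where(t > 0, t, 0.0)

    def unit_parabola(t):
        """Unit parabolic function u_p(t) = t^2/2 for t > 0."""
        return np.where(t > 0, 0.5 * t**2, 0.0)

    t = np.linspace(-1.0, 3.0, 401)
    # The step is the derivative of the ramp; a numerical derivative recovers it:
    approx_step = np.gradient(unit_ramp(t), t)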

1.2 Systems

In this section the concept of a system is defined.

Definition 1.2 (System) A system is a part separated from the environment by a real or imaginary boundary, that causes certain signals that exist within the boundary to be related.

Definition 1.3 (Dynamical system [1]) A dynamical system is a system whose behavior changes over time, often in response to external stimulation or forcing.

Definition 1.4 (Inputs [7]) An input is a system variable that is independently prescribed, or defined, by the system's environment. The value of the input is independent of the system behavior or response. Inputs define the external excitation of the system and can be quantities such as the external wind force acting on a tall building or the rainfall forming the input flow into a reservoir system. A system may have more than one input.

Definition 1.5 (Outputs [7]) An output is defined as any system variable of interest. It may be a variable measured at the interface with the environment or a variable that is internal to the system and does not directly interact with the environment.

Definition 1.6 (Input-output system) A system with inputs and outputs is called an input-output system. The outputs of this type of system depend on the initial state of the system and the external inputs.

Definition 1.7 (Autonomous system) A system without external inputs is called an autonomous system. The behavior of this type of system depends entirely on the initial state of the system.

Example 1.2 (Systems)

(a) Moving car. When the position of the throttle pedal (input) of a car is changed, the power developed by the motor will change and the forward speed (output) increases or decreases.

(b) Steered ship. When the steersman of a ship changes the position of the wheel (input) to a new position, the heading of the ship (output) changes because of hydrodynamic side forces acting on the newly positioned rudder.

(c) Mass-damper-spring system. A mechanical system with a number of masses that are connected to each other by springs and dampers and without any outside forces acting on it is an autonomous system.

(d) Wafer stage for lithography. An example of a more complex system is a wafer stepper, with a positioning mechanism that is used in chip manufacturing processes for accurate positioning (outputs of the system) of the silicon wafer on which the chips are to be produced. The wafer can be accurately moved (stepped) in three degrees of freedom (3DOF) by manipulating the currents of linear motors (inputs of the system).

(e) A municipal solid waste combustion plant. An example of a large-scale system is a municipal solid waste combustion plant in which household waste is incinerated to reduce the amount of waste and to produce energy. The input of the system is the amount of waste that is put into the oven; the output of the system is the amount of energy that is produced.

System properties

In the remainder of this section we will introduce some basic system properties.

Definition 1.8 (Dynamical vs memoryless) A system is said to be memoryless if its output at a given time depends only on the input at that same time.

For example the system

y(t) = ( 2u(t) − u²(t) )²

is memoryless, as the value y(t) at any particular time t only depends on the input u(t) at that time. A physical example of a memoryless system is a resistor in an electric circuit. Let i(t) be the input of the system and v(t) the output; then the input-output relationship is given by

v(t) = R i(t)

where R is the resistance.

An example of a system with memory is a capacitor, where the voltage v(t) is the integral of the current i(t), so

v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ


Definition 1.9 (Causality) A system is causal if the output at any time depends only on values of the input at the present time and in the past.

All physical systems in the real world are causal because they cannot anticipate the future. Nonetheless, there are important applications where causality is not required. For example, if we have recorded an audio signal in a computer's memory, we can process it later off-line. We can then use non-causal filtering by allowing a delay in the input signal, and as such implement a system that is theoretically non-causal. Note that all memoryless systems are causal, since the output responds only to the current value of the input.

Definition 1.10 (Linearity) Let y1 be the output of the system to the input u1, and let y2 be the output to the input u2. A system is said to be linear if and only if it satisfies the following properties:

1. the input u1(t) + u2(t) will give an output y1(t) + y2(t).

2. the input α u1(t) will give an output α y1(t) for any (complex) constant α.

An example of a linear system is an inductor, where the current i(t) is the integral of the voltage v(t), so

i(t) = (1/L) ∫_{−∞}^{t} v(τ) dτ

An example of a non-linear system is the following system with input u and output y:

y(t) = u²(t)
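
These two properties can be tested numerically on sampled signals. The sketch below is a minimal check with arbitrarily chosen test inputs (not taken from the notes); it verifies that the memoryless system y(t) = 0.1 u(t) satisfies superposition while y(t) = u²(t) does not:

    import numpy as np

    t = np.linspace(0.0, 1.0, 200)
    u1 = np.sin(2 * np.pi * t)        # arbitrary test input 1
    u2 = np.cos(6 * np.pi * t)        # arbitrary test input 2

    def satisfies_superposition(S):
        """Return True if S(u1 + u2) equals S(u1) + S(u2) on the test inputs."""
        return np.allclose(S(u1 + u2), S(u1) + S(u2))

    linear_system = lambda u: 0.1 * u        # y(t) = 0.1 u(t)
    nonlinear_system = lambda u: u**2        # y(t) = u^2(t)

    print(satisfies_superposition(linear_system))     # True
    print(satisfies_superposition(nonlinear_system))  # False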

Definition 1.11 (Time-invariance) A system is said to be time-invariant if its behavior and characteristics are fixed over time.

An input-output system is time-invariant if and only if a time-shift τ in the input leads to the same time-shift in the output:

u(t) ⇒ y(t), t ∈ R    implies    u(t − τ) ⇒ y(t − τ), t ∈ R

In other words, if the input signal u(t) produces an output y(t), then any time-shifted input u(t − τ) results in a time-shifted output y(t − τ). An example of a time-invariant system is a mass-spring system with a mass and a spring constant that do not vary in time.

Definition 1.12 (Stability) A system is said to be stable if a bounded input (i.e. an input whose magnitude does not grow without bound) gives a bounded output and therefore will not diverge.


An example of a stable biological system is a predator-prey model in which the populations of prey and predators are in balance: a small change in one of the populations will only lead to a temporary deviation from the equilibrium point and does not destabilize the system. A common example of an unstable system is someone pointing a microphone close to a speaker; a loud high-pitched tone results. An interesting example in the light of stability is a bicycle. If we consider the tilt position with respect to the vertical axis, the system's stability depends on the forward velocity of the bicycle. If the velocity is zero or very low, a small disturbance force on the handlebar will immediately cause instability and the bicycle will fall. However, if the forward velocity of the bicycle is high enough, the bicycle is stable, and a small disturbance force on the handlebar will not destabilize the bicycle.

Finally we mention the derivative property for linear time-invariant systems:

Property 1 (Derivative property for linear time-invariant systems)
Let y1 be the output of a linear time-invariant system to the input u1. Then the derivative of the input, u2(t) = d u1(t)/d t, will result in the derivative of the output, y2(t) = d y1(t)/d t. Furthermore, the integral of the input, u3(t) = ∫ u1(τ) dτ, will result in the integral of the output, y3(t) = ∫ y1(τ) dτ.

1.3 Exercises

Exercise 1. Signals

a) Show that the unit rectangular function can be written as a sum of scaled unit step functions.

Exercise 2. Plots of signals

Plot the signals a–c.

a) δT1(t) − 2 δT2(t − T1), for T1 = 1 and T2 = 2, t ∈ R.

b) ur(t)− ur(t− 1)− us(t− 4), for t ∈ R.

c) up(t) us(1− t) + us(t− 1), for t ∈ R.

Exercise 3. Derivative of signals

Compute the derivative of the signals a–c of Exercise 2.

Exercise 4. System properties

Are the systems a–d memoryless, linear, time-invariant and/or causal?

a) 6 y(t) = 4 u(t) + 3 e−t .


b) y(t) = 0.1 u(t) .

c) y(t) = 6 y(t− 2) + 3 u(t) + 2 .

d) y(t) = sin(u(t)).


Chapter 2

Modeling of dynamical systems

Real physical systems, which engineers must design, analyze, and understand, are usually very complex. We therefore formulate a conceptual model made up of basic building blocks. These blocks are idealizations of the essential physical phenomena occurring in real systems. An adequate model of a particular physical device or system will behave approximately like the real system, and the best system model is the simplest one which yields the information necessary for the engineering job. In this chapter we study the modeling of linear dynamic systems in various domains:

1. Mechanical domain

2. Electrical domain

3. Electromechanical domain

4. Fluid flow domain

5. Heat flow domain

For these domains we aim at deriving linear differential equations for physical systems.

2.1 Domains of dynamical systems

In this section we study some basic tools for the modeling of dynamical systems in various domains. For each domain we define the so-called basic signals, which describe the phenomena in that particular domain. Furthermore, for each domain the dynamical relations between the basic signals can be described by a set of basic elements, which can be seen as the building blocks of the system.

1. Mechanical systems
We can distinguish between translational and rotational mechanical systems. In translational mechanical systems we consider motions that are restricted to translation along a line. In rotational mechanical systems we consider motions that are restricted to rotation around an axis.


(a) Translational mechanical systems
Basic signals are force (f) and position (x), together with its derivatives velocity v = ẋ and acceleration a = ẍ.
There are three basic system elements: mass (m), damper (b), and spring (k). The relations between the basic signals and the basic elements for linear systems are summarized in Figure 2.1:

mass:    f = m ẍ
damper:  f = b (ẋ1 − ẋ2)
spring:  f = k (x1 − x2)

Figure 2.1: Translational mechanical systems

For an interconnected translational mechanical system we derive one differential equation for each mass in the system.

(b) Rotational mechanical systems
Basic signals are torque (τ) and angle (θ), together with its derivatives angular velocity ω = θ̇ and angular acceleration α = θ̈.
There are three basic system elements: inertia (J), rotational damper (b), and rotational spring (k). The relations between the basic signals and the basic elements for linear systems are summarized in Figure 2.2:

inertia:            τ = J θ̈
rotational damper:  τ = b (θ̇1 − θ̇2)
rotational spring:  τ = k (θ1 − θ2)

Figure 2.2: Rotational mechanical systems

For an interconnected rotational mechanical system we derive one differential equation for each inertia in the system.

2. Electrical systems
Basic signals are voltage (v) and current (i).
There are three basic system elements: resistor (R), capacitor (C), and inductor (L). The relations between the basic signals and the basic elements for linear systems are summarized in Figure 2.3:

resistor:   v2 − v1 = R i
capacitor:  i = C d(v2 − v1)/d t
inductor:   v2 − v1 = L d i/d t

Figure 2.3: Electrical systems

For an electrical network we derive one differential equation for each inductor in the system, and one differential equation for each capacitor in the system.


Remark 2.1 Note that in electrical systems we use d i/d t for the derivative of the current rather than a dot over the i. The two dots on top of each other may lead to confusion.

3. Electromechanical systems
Also in electromechanical systems we can distinguish between translational and rotational electromechanical systems. An example of a translational electromechanical system is a loudspeaker. An example of a rotational electromechanical system is a DC motor. Both systems contain an electrical part as well as a mechanical part. Both can be modeled separately using the basic elements discussed before. The parts are connected by means of a transducer. A transducer transforms energy from one physical domain into another, in this case electrical energy into mechanical energy and vice versa.

(a) Translational electromechanical systems
The basic signals in a translational electromechanical system are force (f), velocity (v = ẋ), voltage (e) and current (i). The basic system element is the transducer with transduction ratio or electromechanical coupling constant Kp (see Figure 2.4):

transducer:  f = −Kp i ,   e2 − e1 = Kp ẋ

Figure 2.4: Translational electromechanical system

For a translational electromechanical system we derive one differential equation for each inductor/capacitor in the electrical part of the system, and one differential equation for each mass in the mechanical part of the system.

(b) Rotational electromechanical systems
The basic signals in a rotational electromechanical system are torque (τ), angular velocity (ω = θ̇), voltage (e) and current (i). The basic system element is the transducer with transduction ratio or electromechanical coupling constant Kr (see Figure 2.5).

For a rotational electromechanical system we derive one differential equation for each inductor/capacitor in the electrical part of the system, and one differential equation for each inertia in the mechanical part of the system.

Remark 2.2 Note that in electromechanical systems we use e for voltage instead of v. We do this to avoid confusion with the velocity v.


transducer:  τ = −Kr i ,   e2 − e1 = Kr θ̇

Figure 2.5: Rotational electromechanical system

4. Heat flow systems
Basic signals are temperature (T) and heat energy flow (q).
There are two basic system elements: thermal capacitor (C) and thermal resistor (R). The relations between the basic signals and the basic elements for linear systems are summarized in Figure 2.6:

thermal capacitor:  Ṫ = (1/C) q
thermal resistor:   q = (1/R) (T2 − T1)

Figure 2.6: Heat flow systems

For an interconnected heat flow system we derive one differential equation for each capacitor in the system.

5. Fluid flow systems
Basic signals are fluid pressure (p) and fluid mass flow rate (w).
There are two basic system elements: fluid capacitor (C) and fluid resistor (R). The relations between the basic signals and the basic elements for linear systems are summarized in Figure 2.7:

fluid capacitor:  ṗ = (1/C) w
fluid resistor:   w = (1/R) (p1 − p2)

Figure 2.7: Fluid flow systems

For an interconnected fluid flow system we derive one differential equation for each fluid capacitor in the system.

Often a fluid capacitor appears in the form of a vessel that contains the fluid. The difference between the pressure p at the bottom of the vessel and the outside pressure p0 is related to the level h of the fluid in the vessel by the linear equation

p− p0 = ρ g h (2.1)

where ρ is the mass density of the fluid and g is the gravitational constant.

Remark 2.3 Note that in these lecture notes we only consider transducers that transform mechanical energy into electrical energy and vice versa. We could have discussed conversion between any combination of physical domains, e.g. thermo-mechanical systems or thermo-electrical systems, but for the sake of simplicity we have limited the discussion to the mechanical and electrical domains.

2.2 Examples of modeling

1. Translational mechanical systems

Figure 2.8: Example of a translational mechanical system

Given are 2 carts in the configuration of Figure 2.8. Cart 1 with mass m1 is connected to a wall by a linear spring with spring constant k1, and to cart 2 by a spring with spring constant k2 and a damper with damping constant b3. Cart 2 with mass m2 is driven by an external force fe. Friction with the ground causes a damping force with damping constant b1 for cart 1 and damping constant b2 for cart 2. Our task is to derive the differential equations for this system.

First we note that we will have two differential equations, one for each mass. Let us first concentrate on the first mass m1. Newton's law tells us that

m1 ẍ1 = Σi f1,i

where f1,i are all forces acting on mass m1, see Figure 2.9.

Figure 2.9: Forces acting on cart 1

We can distinguish four forces:

(a) The force due to the spring between cart 1 and the wall is equal to

f1,k1 = k1 (0 − x1) = −k1 x1

(b) The force due to the spring between cart 1 and cart 2 is equal to

f1,k2 = k2 (x2 − x1)

(c) The force due to the damper between cart 1 and cart 2 is equal to

f1,b3 = b3 (ẋ2 − ẋ1)

(d) The force due to the friction is equal to

f1,b1 = b1 (0 − ẋ1) = −b1 ẋ1

and so the first differential equation becomes

m1 ẍ1 = −b1 ẋ1 + b3 (ẋ2 − ẋ1) + k2 (x2 − x1) − k1 x1

For the second mass m2 we derive with Newton's law:

m2 ẍ2 = Σi f2,i

where f2,i are all forces acting on mass m2, see Figure 2.10.

We can distinguish four forces:


Figure 2.10: Forces acting on cart 2

(a) The force due to the spring between cart 2 and cart 1 is equal to

f2,k2 = k2 (x1 − x2)

(b) The force due to the damper between cart 2 and cart 1 is equal to

f2,b3 = b3 (ẋ1 − ẋ2)

(c) The force due to the friction is equal to

f2,b2 = b2 (0 − ẋ2) = −b2 ẋ2

(d) The external force fe.

and so the second differential equation becomes

m2 ẍ2 = −b2 ẋ2 + k2 (x1 − x2) + fe + b3 (ẋ1 − ẋ2)

So summarizing, the two differential equations describing the motion of the two-cart system are as follows:

m1 ẍ1 = −b1 ẋ1 + b3 (ẋ2 − ẋ1) + k2 (x2 − x1) − k1 x1
m2 ẍ2 = −b2 ẋ2 + b3 (ẋ1 − ẋ2) + k2 (x1 − x2) + fe    (2.2)
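
As an aside, equations (2.2) are easy to integrate numerically once parameter values are chosen. A minimal sketch with scipy; all numerical values below are arbitrary illustration choices, not taken from the notes:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Arbitrary example parameters
    m1, m2 = 1.0, 2.0
    k1, k2 = 5.0, 3.0
    b1, b2, b3 = 0.5, 0.5, 1.0
    fe = lambda t: 1.0  # constant external force (a unit step)

    def rhs(t, z):
        # State z = [x1, v1, x2, v2] with v = dx/dt
        x1, v1, x2, v2 = z
        a1 = (-b1*v1 + b3*(v2 - v1) + k2*(x2 - x1) - k1*x1) / m1
        a2 = (-b2*v2 + b3*(v1 - v2) + k2*(x1 - x2) + fe(t)) / m2
        return [v1, a1, v2, a2]

    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
    # For a constant force, x2 approaches fe*(1/k1 + 1/k2) ~ 0.53 with these values
    print(sol.y[2, -1])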

2. Rotational mechanical systems

Given is an inertia J in the configuration of Figure 2.11. The inertia is lined up with a rotational damper with damping constant b1 and a friction with damping constant b2. An external angular velocity θ̇1 is exciting the system. Our task is to derive the differential equations for this system.

Figure 2.11: Rotational mechanical systems

Using Newton's law for rotation we obtain:

J θ̈2 = τ1 + τ2

where τ1 and τ2 are all torques acting on inertia J, see Figure 2.11. For the first damper we find:

τ1 = b1 (θ̇1 − θ̇2)

For the second rotational damper we find:

τ2 = b2 (0 − θ̇2) = −b2 θ̇2

Combining these results gives us

J θ̈2 = τ1 + τ2 = b1 (θ̇1 − θ̇2) − b2 θ̇2 = b1 θ̇1 − (b1 + b2) θ̇2

and so we obtain the differential equation describing the dynamics between θ̇1 and θ̇2 as follows:

J θ̈2 = b1 θ̇1 − (b1 + b2) θ̇2

3. Electrical system

The electrical circuit of Figure 2.12 consists of an inductor, a resistor and a capacitor in series, where L is the inductance, R the resistance and C the capacitance. Often we assume one of the voltages to be zero; in this example we choose v4 = 0.

Figure 2.12: Example of an electrical system

First note that the current i is the same for all three elements (inductor, resistor and capacitor). For the inductor we find:

L d i/d t = v1 − v2

For the capacitor we find

d (v3 − v4)/d t = d v3/d t = (1/C) i

For the resistor we find

v2 − v3 = R i  =⇒  v2 = v3 + R i

Substitution into the first equation gives us:

L d i/d t = v1 − v3 − R i

and so we obtain the differential equations describing the dynamics of the electrical network as follows:

L d i/d t = v1 − v3 − R i
d v3/d t = (1/C) i    (2.3)

4. Translational electromechanical system

Consider the system of Figure 2.13. A current i through a coil causes a force ft on a permanent magnet, resulting in a displacement x. The voltages in the circuit are e1, e2, e3, and e4.

Figure 2.13: Translational electromechanical system

The electromechanical coupling constant is Kp. This system consists of two parts, the mechanical part and the electrical part.


We start with the electrical part. Note that the current i is the same through all electrical elements. For the inductor we find:

e1 − e2 = L d i/d t

For the resistor we find

e2 − e3 = R i

For the voltage difference at the input of the system we find:

e1 − e4 = (e1 − e2) + (e2 − e3) + (e3 − e4) = L d i/d t + R i + (e3 − e4)

Next we consider the mechanical part:

m d² x/d t² = ft + fs = ft − k x

The last step is to connect the two parts by the transducer, using the relations

ft = −Kp i
e3 − e4 = Kp d x/d t

Substitution gives us:

e1 − e4 = L d i/d t + R i + (e3 − e4) = L d i/d t + R i + Kp d x/d t
m d² x/d t² = ft − k x = −Kp i − k x

So summarizing, the two differential equations describing the dynamics of the translational electromechanical system are as follows:

L d i/d t = e1 − e4 − R i − Kp d x/d t
m d² x/d t² = −Kp i − k x    (2.4)


Figure 2.14: Rotational electromechanical system

5. Rotational electromechanical system

Figure 2.14 shows a dynamo, which converts mechanical energy into electrical energy. On the mechanical side we have the torque τ and angular velocity θ̇, which result in a current i and voltage e2 − e1 on the electrical side. We assume the dynamo has inertia J and electromechanical coupling constant Kr. On the electrical side the circuit is closed by a resistor R. This system consists of two parts, the mechanical part and the electrical part.

We start with the mechanical part. (First note that the angular velocities of the inertia and the transducer are the same.) There are two torques acting on inertia J, namely the external torque τext and the transducer torque −τt. Newton's law for rotational systems gives us:

J d² θ/d t² = τext − τt

Next we concentrate on the electrical part, where we find the relation

e1 − e2 = R i

The last step is to connect the two parts by the transducer, using the relations

τt = −Kr i
e2 − e1 = Kr d θ/d t

Substitution gives us:

J d² θ/d t² = τext − τt = τext + Kr i = τext + (Kr/R)(e1 − e2) = τext − (Kr²/R) d θ/d t


So summarizing, the differential equation describing the dynamics of the rotational electromechanical system is as follows:

J d² θ/d t² = τext − (Kr²/R) d θ/d t    (2.5)
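
Equation (2.5) is first order in the angular velocity ω = dθ/dt, so for a constant external torque the velocity settles at ω = R τext / Kr². A quick numerical check (the parameter values below are arbitrary illustration choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    J, Kr, R, tau_ext = 0.5, 2.0, 4.0, 1.0   # arbitrary example values

    # Equation (2.5) rewritten for omega = d(theta)/dt
    rhs = lambda t, omega: (tau_ext - Kr**2 / R * omega) / J

    sol = solve_ivp(rhs, (0.0, 5.0), [0.0], max_step=0.01)
    print(sol.y[0, -1], R * tau_ext / Kr**2)   # both are close to 1.0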

6. Water flow system

Given is a system with two water vessels as in Figure 2.15.

Figure 2.15: Example of a water flow system

Water runs into the left vessel from a source with mass flow win, from the left vessel through a restriction with restriction constant R1 into the right vessel with mass flow wmed, and through a restriction with restriction constant R2 out of the right vessel with mass flow wout. The pressures at the bottom of the water vessels are denoted as p1 and p2. The outside pressure is p0. The areas of the left and right vessels are A1 and A2, and the water levels are denoted by h1 and h2. Our task is to derive the differential equations for this system.

First we consider the left vessel. The net flow into the left vessel is w1 = win − wmed. The fluid capacitance of the left vessel is given by

C1 = A1/g

The change in pressure p1 is now given by:

ṗ1 = (1/C1) w1 = (g/A1) (win − wmed)

For the flow wmed we find:

wmed = (1/R1) (p1 − p2)

Substituting wmed into the previous equation we obtain:

ṗ1 = (g/A1) win − (g/(A1 R1)) (p1 − p2)
   = (g/A1) win − (g/(A1 R1)) (p1 − p0) + (g/(A1 R1)) (p2 − p0)

The derivation for the right vessel is similar. The net flow into the right vessel is w2 = wmed − wout. The fluid capacitance of the right vessel is given by

C2 = A2/g

The change in pressure p2 is now given by:

ṗ2 = (1/C2) w2 = (g/A2) (wmed − wout)

For the flow wout we find:

wout = (1/R2) (p2 − p0)

Substitution of wmed and wout into the previous equation yields

ṗ2 = (g/(A2 R1)) (p1 − p2) − (g/(A2 R2)) (p2 − p0)
   = (g/(A2 R1)) (p1 − p0) − (g/(A2 R1)) (p2 − p0) − (g/(A2 R2)) (p2 − p0)
   = (g/(A2 R1)) (p1 − p0) − (g(R1 + R2)/(A2 R1 R2)) (p2 − p0)

So summarizing, the two differential equations describing the dynamics of the two-vessel system are as follows:

ṗ1 = (g/A1) win − (g/(A1 R1)) (p1 − p0) + (g/(A1 R1)) (p2 − p0)
ṗ2 = (g/(A2 R1)) (p1 − p0) − (g(R1 + R2)/(A2 R1 R2)) (p2 − p0)

Now we can use the relation between the fluid levels h1, h2 and the pressures p1, p2:

p1 − p0 = ρ g h1 ,  p2 − p0 = ρ g h2.

This gives us

ṗ1 = ρ g ḣ1 ,  ṗ2 = ρ g ḣ2,

and we can rewrite the equations as

ḣ1 = (1/(ρ A1)) win − (g/(A1 R1)) h1 + (g/(A1 R1)) h2
ḣ2 = (g/(A2 R1)) h1 − (g(R1 + R2)/(A2 R1 R2)) h2.    (2.6)
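
A numerical sanity check of (2.6): for a constant inflow the levels settle where the outflow equals the inflow. A minimal sketch with scipy (all numbers are arbitrary illustration choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    A1, A2 = 1.0, 0.5        # vessel areas [m^2], arbitrary
    R1, R2 = 2.0e3, 1.0e3    # restriction constants, arbitrary
    rho, g = 1000.0, 9.81
    w_in = 2.0               # constant inflow [kg/s]

    def rhs(t, h):
        h1, h2 = h
        dh1 = w_in / (rho * A1) - g / (A1 * R1) * (h1 - h2)
        dh2 = g / (A2 * R1) * (h1 - h2) - g / (A2 * R2) * h2
        return [dh1, dh2]

    sol = solve_ivp(rhs, (0.0, 2000.0), [0.0, 0.0], max_step=1.0)
    # Steady-state levels follow from w_out = w_in:
    print(sol.y[:, -1])                                          # numerical result
    print((R1 + R2) * w_in / (rho * g), R2 * w_in / (rho * g))   # analytic h1, h2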

7. Heat flow system

In the system of Figure 2.16, a heat flow qin is entering a wall with temperature Tw and heat capacity cw. From the wall a heat flow qout is entering the room with temperature T0 and heat capacity c0. The thermal resistance from the wall to the room is equal to Rw. Our task is to derive the differential equations for this system.

Figure 2.16: Example of a heat flow system

First we consider the wall temperature. The net heat flow into the wall is q1 = qin − qout. The change in wall temperature Tw is now given by:

Ṫw = (1/cw) q1 = (1/cw) (qin − qout)

For the heat flow qout we find:

qout = (1/Rw) (Tw − T0)

Substitution of qout results in:

Ṫw = (1/cw) qin + (1/(cw Rw)) (T0 − Tw)

The derivation for the room temperature is similar. The net heat flow into the room is qout. The change in room temperature T0 is now given by:

Ṫ0 = (1/c0) qout

Substituting qout into this equation we obtain:

Ṫ0 = (1/(c0 Rw)) (Tw − T0)

So summarizing, the two differential equations describing the dynamics of the heat flow system are as follows:

Ṫw = (1/cw) qin + (1/(cw Rw)) (T0 − Tw)
Ṫ0 = (1/(c0 Rw)) (Tw − T0)    (2.7)

2.3 Input-output models

In the previous sections we showed that for a linear dynamical system we can derive a set of differential equations that describe the dynamics of the system. In many cases we obtain more than one differential equation, and the relation between the input variable and the output variable is not directly given, but involves one or more auxiliary variables. For example, the two differential equations of (2.2), describing the dynamics of a translational mechanical system, use three variables x1, x2, and fe. If we define the signal fe as the input of the system, and we consider the signal x2 as the output of the system, then x1 can be seen as an auxiliary variable. It would be nice if we could eliminate the variable x1 from the equations and have a direct relation between fe and x2, in a so-called input-output differential equation. For linear systems, this elimination can be done easily using the Laplace transformation.

Let x(t), t ≥ 0, be a signal; then the Laplace transform is defined as follows:

X(s) = L{x(t)} = ∫_{−∞}^{∞} x(τ) e^{−s τ} d τ    (2.8)

where X(s) is called the Laplace transform of x(t), or X(s) = L{x(t)}. The complex variable s ∈ C is called the Laplace variable. Table 2.1 gives the Laplace transforms of some common signals. A more extensive list of Laplace transforms is given in Appendix B.

The Laplace transformation is a linear operation, so it has the following property:

L{α f1(t) + β f2(t)} = α L{f1(t)} + β L{f2(t)} for α, β ∈ C    (2.9)

Furthermore, if the system is initially at rest at time t = 0 (which means that the output y(0) = 0 and all its derivatives are zero as well), then

L{dⁿ y(t)/d tⁿ} = sⁿ Y(s)    (2.10)


time function                 Laplace transform
Dirac pulse δ(t)              1
Unit step us(t)               1/s
Ramp ur(t)                    1/s²
Parabolic up(t)               1/s³
Exponential e^{−at} us(t)     1/(s + a)
Sinusoid sin(ωt) us(t)        ω/(s² + ω²)

Table 2.1: The Laplace transforms of some common signals
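
The entries of Table 2.1 can also be reproduced symbolically. A small sketch with sympy (an assumed tool choice, not prescribed by the notes):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    a, w = sp.symbols('a omega', positive=True)

    L = lambda f: sp.laplace_transform(f, t, s, noconds=True)

    print(L(sp.Heaviside(t)))    # 1/s        (unit step)
    print(L(t))                  # 1/s**2     (ramp)
    print(L(t**2 / 2))           # 1/s**3     (parabolic)
    print(L(sp.exp(-a * t)))     # 1/(a + s)  (exponential)
    print(L(sp.sin(w * t)))      # omega/(omega**2 + s**2)  (sinusoid)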

Now let us consider the linear differential equation

dⁿ y(t)/d tⁿ + a1 d^{n−1} y(t)/d t^{n−1} + ... + a_{n−1} d y(t)/d t + a_n y(t)
  = b0 d^m u(t)/d t^m + b1 d^{m−1} u(t)/d t^{m−1} + ... + b_{m−1} d u(t)/d t + b_m u(t)

Let U(s) = L{u(t)} and Y(s) = L{y(t)}. The Laplace transformation gives us

sⁿ Y(s) + a1 s^{n−1} Y(s) + ... + a_{n−1} s Y(s) + a_n Y(s)
  = b0 s^m U(s) + b1 s^{m−1} U(s) + ... + b_{m−1} s U(s) + b_m U(s)

and using the linearity property we obtain

(sⁿ + a1 s^{n−1} + ... + a_{n−1} s + a_n) Y(s) = (b0 s^m + b1 s^{m−1} + ... + b_{m−1} s + b_m) U(s)    (2.11)

In the following examples we will show how we can rewrite a system of differential equations into a single input-output differential equation.

1. Translational mechanical system

In (2.2) two differential equations were given, describing the dynamics of a translational mechanical system:

m1 ẍ1(t) = −b1 ẋ1(t) + b3 (ẋ2(t) − ẋ1(t)) − k1 x1(t) + k2 (x2(t) − x1(t))
m2 ẍ2(t) = −b2 ẋ2(t) + b3 (ẋ1(t) − ẋ2(t)) + k2 (x1(t) − x2(t)) + fe(t)


Our aim is to eliminate the variable x1 and to obtain an input-output differential equation with input variable fe and output variable x2. Let X1(s) = L{x1(t)}, X2(s) = L{x2(t)} and Fe(s) = L{fe(t)}. Equation (2.2) can now be written as

s² m1 X1(s) = −s b1 X1(s) + s b3 (X2(s) − X1(s)) − k1 X1(s) + k2 (X2(s) − X1(s))    (2.12)
s² m2 X2(s) = −s b2 X2(s) + s b3 (X1(s) − X2(s)) + k2 (X1(s) − X2(s)) + Fe(s)    (2.13)

which gives

(s² m1 + s (b1 + b3) + k1 + k2) X1(s) = (s b3 + k2) X2(s)    (2.14)
(s² m2 + s (b2 + b3) + k2) X2(s) = (s b3 + k2) X1(s) + Fe(s)    (2.15)

From equation (2.14) we obtain:

X1(s) = (s b3 + k2) / (s² m1 + s (b1 + b3) + k1 + k2) · X2(s)    (2.16)

Substitution of (2.16) into (2.15) gives us:

(s² m2 + s (b2 + b3) + k2) X2(s) = (s b3 + k2)(s b3 + k2) / (s² m1 + s (b1 + b3) + k1 + k2) · X2(s) + Fe(s)    (2.17)

or

(s² m1 + s (b1 + b3) + k1 + k2)(s² m2 + s (b2 + b3) + k2) X2(s)
  = (s b3 + k2)(s b3 + k2) X2(s) + (s² m1 + s (b1 + b3) + k1 + k2) Fe(s)    (2.18)

This leads to

(s⁴ m1 m2 + s³ (m1 b3 + m2 b3 + m1 b2 + m2 b1) + s² (m1 k2 + m2 k1 + m2 k2 + b1 b2 + b1 b3 + b2 b3)
  + s (k1 b3 + k1 b2 + k2 b2 + k2 b1) + k1 k2) X2(s) = (s² m1 + s (b1 + b3) + k1 + k2) Fe(s)    (2.19)

We can rewrite this as the input-output differential equation:

m1 m2 d⁴ x2(t)/d t⁴ + (m1 b3 + m2 b3 + m1 b2 + m2 b1) d³ x2(t)/d t³
  + (m1 k2 + m2 k1 + m2 k2 + b1 b2 + b1 b3 + b2 b3) d² x2(t)/d t²
  + (k1 b3 + k1 b2 + k2 b2 + k2 b1) d x2(t)/d t + k1 k2 x2(t)
  = m1 d² fe(t)/d t² + (b1 + b3) d fe(t)/d t + (k1 + k2) fe(t)    (2.20)


2. Electrical system

In (2.3) two differential equations were given, describing the dynamics of an electrical system:

L d i(t)/d t = v1(t) − v3(t) − R i(t)
d v3(t)/d t = (1/C) i(t)

Our aim is to eliminate the variable v3 and to obtain an input-output differential equation with input variable i and output variable v1. Let V1(s) = L{v1(t)}, V3(s) = L{v3(t)}, and I(s) = L{i(t)}. Equation (2.3) changes into

s L I(s) = V1(s) − V3(s) − R I(s)
s V3(s) = (1/C) I(s)

From the first equation we obtain

V3(s) = −s L I(s) + V1(s) − R I(s)

Substitution into the second equation gives us:

−s² L I(s) + s V1(s) − s R I(s) = (1/C) I(s)

or

s V1(s) = s² L I(s) + s R I(s) + (1/C) I(s)

We can rewrite this as the input-output differential equation:

d v1(t)/d t = L d² i(t)/d t² + R d i(t)/d t + (1/C) i(t)
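
The same elimination can be left to a computer algebra system. A minimal sympy sketch (assumed tooling) that removes V3(s) from the transformed equations of (2.3):

    import sympy as sp

    s, L, R, C = sp.symbols('s L R C', positive=True)
    V1, V3, I = sp.symbols('V1 V3 I')

    eq1 = sp.Eq(s * L * I, V1 - V3 - R * I)   # Laplace transform of L di/dt = v1 - v3 - R i
    eq2 = sp.Eq(s * V3, I / C)                # Laplace transform of dv3/dt = (1/C) i

    # Eliminate V3: solve eq1 for V3 and substitute into eq2
    V3_expr = sp.solve(eq1, V3)[0]
    io_relation = sp.simplify(eq2.subs(V3, V3_expr))
    print(io_relation)   # equivalent to  s*V1 = s**2*L*I + s*R*I + I/C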

3. Translational electromechanical system

In (2.4) two differential equations were given, describing the dynamics of a translational electromechanical system:

L d i(t)/d t = e1(t) − e4(t) − R i(t) − Kp d x(t)/d t
m d² x(t)/d t² = −Kp i(t) − k x(t)

Our aim is to eliminate the variable i and to obtain an input-output differential equation with input variable e1 − e4 and output variable x. Let E1(s) = L{e1(t)}, E4(s) = L{e4(t)}, X(s) = L{x(t)} and I(s) = L{i(t)}. Equation (2.4) changes into

s L I(s) = E1(s) − E4(s) − R I(s) − s Kp X(s)
s² m X(s) = −Kp I(s) − k X(s)


From the second equation we obtain:

I(s) = −(s² m/Kp) X(s) − (k/Kp) X(s)

Substitution into the first equation gives us:

s L (−(s² m/Kp) X(s) − (k/Kp) X(s)) = E1(s) − E4(s) + R ((s² m/Kp) X(s) + (k/Kp) X(s)) − s Kp X(s)

or

−s³ L m X(s) − s² R m X(s) + s (Kp² − L k) X(s) − R k X(s) = Kp (E1(s) − E4(s))

We can rewrite this as the input-output differential equation:

−L m d³ x(t)/d t³ − R m d² x(t)/d t² + (Kp² − L k) d x(t)/d t − R k x(t) = Kp (e1(t) − e4(t))

2.4 State systems

In this section we introduce the notion of state systems. We will define the concept of a state, and describe the behavior of a system using a state differential equation.

One of the triumphs of Newton's mechanics was the observation that the motion of the planets could be predicted based on the current positions and velocities of all planets. It was not necessary to know the past motion. In general, the state of a dynamical system is a collection of variables that completely characterizes the evolution of a system for the purpose of predicting the future evolution. For a system of planets the state simply consists of the positions and the velocities of the planets.

Definition 2.1 The state of a system is a collection of variables that summarize the past of a system for the purpose of predicting the future.

Example 2.1 Consider the translational mechanical system of Section 2.2. The differential equations were given by (2.2):

m1 p̈1(t) = −(b1 + b3) ṗ1(t) + b3 ṗ2(t) − (k1 + k2) p1(t) + k2 p2(t)
m2 p̈2(t) = b3 ṗ1(t) − (b2 + b3) ṗ2(t) + k2 p1(t) − k2 p2(t) + fe(t)

where we use the variables p1 and p2 (instead of x1 and x2) for the positions of mass 1 and 2. In the sequel we will use the variable x for the state vector. In mechanical systems the state consists of the velocities and the positions of each mass in the system. In this case we have two masses, and therefore two velocities and two positions. This gives us a state vector

x(t) = [x1(t), x2(t), x3(t), x4(t)]^T = [ṗ1(t), p1(t), ṗ2(t), p2(t)]^T


For the derivative of x1 we can write:

ẋ1(t) = p̈1(t)
      = −((b1 + b3)/m1) ṗ1(t) + (b3/m1) ṗ2(t) − ((k1 + k2)/m1) p1(t) + (k2/m1) p2(t)
      = −((b1 + b3)/m1) x1(t) + (b3/m1) x3(t) − ((k1 + k2)/m1) x2(t) + (k2/m1) x4(t)

The derivative of x2 is straightforward:

ẋ2(t) = ṗ1(t) = x1(t)

For the derivative of x3 we can write:

ẋ3(t) = p̈2(t)
      = (b3/m2) ṗ1(t) − ((b2 + b3)/m2) ṗ2(t) + (k2/m2) p1(t) − (k2/m2) p2(t) + (1/m2) fe(t)
      = (b3/m2) x1(t) − ((b2 + b3)/m2) x3(t) + (k2/m2) x2(t) − (k2/m2) x4(t) + (1/m2) fe(t)

and the derivative of x4 becomes

ẋ4(t) = ṗ2(t) = x3(t)

If we define u(t) = fe(t) as the input of the system, we have the equations

ẋ1(t) = −((b1 + b3)/m1) x1(t) + (b3/m1) x3(t) − ((k1 + k2)/m1) x2(t) + (k2/m1) x4(t)
ẋ2(t) = ṗ1(t) = x1(t)
ẋ3(t) = (b3/m2) x1(t) − ((b2 + b3)/m2) x3(t) + (k2/m2) x2(t) − (k2/m2) x4(t) + (1/m2) u(t)
ẋ4(t) = ṗ2(t) = x3(t)

In matrix notation this becomes:

        [ −(b1+b3)/m1   −(k1+k2)/m1    b3/m1         k2/m1  ]        [  0   ]
ẋ(t) =  [      1             0           0             0    ] x(t) + [  0   ] u(t)
        [   b3/m2          k2/m2     −(b2+b3)/m2    −k2/m2   ]        [ 1/m2 ]
        [      0             0           1             0    ]        [  0   ]

where the derivative ẋ(t) is taken elementwise, so

ẋ(t) = [d x1(t)/d t, d x2(t)/d t, d x3(t)/d t, d x4(t)/d t]^T = [ẋ1(t), ẋ2(t), ẋ3(t), ẋ4(t)]^T


By defining two matrices A and B:

    [ −(b1+b3)/m1   −(k1+k2)/m1    b3/m1         k2/m1  ]        [  0   ]
A = [      1             0           0             0    ]    B = [  0   ]
    [   b3/m2          k2/m2     −(b2+b3)/m2    −k2/m2   ]        [ 1/m2 ]
    [      0             0           1             0    ]        [  0   ]

we can write the system as

ẋ(t) = A x(t) + B u(t)

If we choose the first position to be the output of the system, so y(t) = p1(t) = x2(t), we obtain the output equation

y(t) = [0 1 0 0] x(t) + 0 · u(t) = C x(t) + D u(t)

where the matrices C and D are defined by

C = [0 1 0 0] ,  D = 0

We have seen in this example that the state variables were gathered in a vector x ∈ R^n, which is called the state vector. In general every linear time-invariant (LTI) state system (with one input and one output) can be represented by the system of first-order differential equations

ẋ1(t) = a11 x1(t) + a12 x2(t) + ... + a1n xn(t) + b1 u(t)
ẋ2(t) = a21 x1(t) + a22 x2(t) + ... + a2n xn(t) + b2 u(t)
  ...
ẋn(t) = an1 x1(t) + an2 x2(t) + ... + ann xn(t) + bn u(t)

y(t) = c1 x1(t) + c2 x2(t) + ... + cn xn(t) + d u(t)

These equations can be written in matrix form as

[ẋ1(t)]   [ a11 a12 ... a1n ] [x1(t)]   [b1]
[ẋ2(t)] = [ a21 a22 ... a2n ] [x2(t)] + [b2] u(t)
[ ...  ]   [ ...             ] [ ... ]   [..]
[ẋn(t)]   [ an1 an2 ... ann ] [xn(t)]   [bn]

y(t) = [c1 c2 ... cn] [x1(t), x2(t), ..., xn(t)]^T + d u(t)


which may be summarized as

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (2.21)

where A ∈ R^{n×n}, B ∈ R^{n×1}, C ∈ R^{1×n}, and D ∈ R are constant and the derivative ẋ(t) is taken elementwise, so

ẋ(t) = [d x1(t)/d t, d x2(t)/d t, ..., d xn(t)/d t]^T = [ẋ1(t), ẋ2(t), ..., ẋn(t)]^T

The input signal of the system is represented by u and the output signal by y.

Example 2.2 Consider the electrical system of Section 2.2, where we set v4 = 0. The differential equations were given by (2.3):

d i(t)/d t = (1/L) (v1(t) − v3(t)) − (R/L) i(t)
d v3(t)/d t = (1/C) i(t)

In general, in electrical systems the state vector will consist of the currents through the inductors and the voltages over the capacitors. In this particular case we have one inductor and one capacitor. This gives us a state vector

x(t) = [i(t), v3(t)]^T = [x1(t), x2(t)]^T

If we define u(t) = v1(t) as the input of the system, we can write the equations

ẋ1(t) = d i(t)/d t = −(R/L) x1(t) − (1/L) x2(t) + (1/L) u(t)
ẋ2(t) = d v3(t)/d t = (1/C) x1(t)

or in matrix notation:

ẋ(t) = [ −R/L   −1/L ] x(t) + [ 1/L ] u(t)
       [  1/C     0  ]        [  0  ]

If we choose the current as the output of the system, so y(t) = i(t), we obtain the output equation

y(t) = [1 0] x(t) + 0 · u(t)


This means that the matrices A, B, C and D for this example become

A = [ −R/L   −1/L ]    B = [ 1/L ]    C = [1 0]    D = 0
    [  1/C     0  ]        [  0  ]
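
With numerical values these matrices can be fed directly to standard tools. The sketch below builds the state-space model of Example 2.2 with scipy.signal and computes its step response; the component values are arbitrary illustration choices:

    import numpy as np
    from scipy import signal

    R, L, C = 1.0, 0.5, 0.1   # arbitrary example values: 1 Ohm, 0.5 H, 0.1 F

    A = np.array([[-R / L, -1.0 / L],
                  [1.0 / C, 0.0]])
    B = np.array([[1.0 / L],
                  [0.0]])
    Cmat = np.array([[1.0, 0.0]])   # output y = i (the current)
    D = np.array([[0.0]])

    sys = signal.StateSpace(A, B, Cmat, D)
    t, y = signal.step(sys)   # response of the current to a unit step in v1
    print(y[-1])              # tends to 0: in steady state the capacitor blocks the current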

Table 2.2 indicates how to choose the states during the modeling phase for various physical domains.

Physical domain                            States
Translational mechanical system            velocity of each mass; position of each mass
Rotational mechanical system               angular velocity of each inertia; angular position of each inertia
Electrical system                          voltage over each capacitor; current through each inductor
Translational electromechanical system     velocity of each mass; position of each mass; voltage over each capacitor; current through each inductor
Rotational electromechanical system        angular velocity of each inertia; angular position of each inertia; voltage over each capacitor; current through each inductor
Fluid flow system                          fluid pressure at the bottom of each vessel (or fluid level of each vessel)
Heat flow system                           temperature in each room

Table 2.2: How to choose the states during modeling

2.5 Exercises

Exercise 1. Modeling of a linear mechanical system

In the system of Figure 2.17 a tractor with mass m1 is pulling, via a linear spring with spring constant k, a trailer with mass m2. The position of the tractor is x1, the position of the trailer is x2. The surface friction force on the tractor is fc,1 = c1 ẋ1 and the surface friction force on the trailer is fc,2 = c2 ẋ2. The motor of the tractor produces a force ft.

Now perform the following tasks:

1. Give the differential equations of this system.


Figure 2.17: Tractor with trailer

2. Determine the input-output differential equation with input ft(t) and output x2(t), using the result of task 1.

3. Describe the system as a state system.

Exercise 2. Modeling of a linear electrical system

Figure 2.18: LC-network

An electrical system (see Figure 2.18) consists of a linear capacitor and a linear inductor. Assume v2 = 0.

Now perform the following tasks:

1. Give the differential equations of this system.

2. Determine the input-output differential equation with input i1(t) and output v1(t), using the result of task 1.

3. Describe the system as a state system.


Chapter 3

Analysis of first-order and second-order systems

3.1 First-order systems

First-order systems can be described by a state system with only one state or by a single first-order differential equation. For example, the braking of a car, the discharge of an electronic camera flash, the flow in a fluid vessel, and the cooling of a cup of tea may all be approximated by a first-order differential equation, which may be written in a standard form as

ẏ(t) + σ y(t) = f(t)    (3.1)

where the system is defined by the single parameter σ = 1/τ, where τ is the system time constant, and f(t) = b0 u̇(t) + b1 u(t) is a forcing function, which depends on the input u and the parameters b0, b1 ∈ R.

Example 3.1 (First-order systems)

Consider the RC-circuit of Figure 3.1.a with a resistor with resistance R and a capacitor with capacitance C. The relation between the voltage v(t) = v1(t) − v2(t) and the current i(t) is given by the first-order differential equation

v̇(t) + (1/(RC)) v(t) = (1/C) i(t) .

So with i(t) the input and v(t) the output of the system, we find that the time constant for this system is τ = RC and the forcing function is f(t) = (1/C) i(t).

For the water vessel in Figure 3.1.b with area A and fluid resistance R we find that the relation between the inflow win(t) and level h(t) is given by the first-order differential equation

ḣ(t) + (g/(A R)) h(t) = (1/(A ρ)) win(t) .


(a) RC network   (b) Water vessel   (c) Rotational mechanical system

Figure 3.1: Examples of first-order systems

Let win(t) be the input and h(t) be the output of the system; then we find that the time constant for this system is τ = A R/g and the forcing function is f(t) = (1/(A ρ)) win(t).

Finally, for the rotational mechanical system in Figure 3.1.c with inertia J and damping constant b we find that the relation between the external torque T(t) and rotational velocity ω(t) is given by the first-order differential equation

ω̇(t) + (b/J) ω(t) = (1/J) T(t) .

So with T(t) the input and ω(t) the output of the system, we find that the time constant for this system is τ = J/b and the forcing function is f(t) = (1/J) T(t).

First-order state space systems can be described by the equations

ẋ(t) = a x(t) + b u(t)
y(t) = c x(t) + d u(t)    (3.2)

where the state x(t) ∈ R is a scalar signal and the system matrices a, b, c and d are scalar constants. If we rewrite the second equation as

x(t) = (1/c) y(t) − (d/c) u(t)

we can derive

ẋ(t) = (1/c) ẏ(t) − (d/c) u̇(t)

and we can substitute these expressions for x(t) and ẋ(t) into the first equation. This yields

(1/c) ẏ(t) − (d/c) u̇(t) = (a/c) y(t) − (a d/c) u(t) + b u(t) ,

and so we obtain the first-order equation:

ẏ(t) − a y(t) = d u̇(t) + (b c − a d) u(t) ,

and we see that the time constant for the first-order state system (3.2) is τ = −1/a and the forcing function is f(t) = d u̇(t) + (b c − a d) u(t).


Unit step response

In practical situations we often encounter the unit step function as a forcing function, so

f(t) = us(t) = 0 for t < 0, and 1 for t ≥ 0.

We will compute the step response y(t) = ys(t) for f(t) = us(t), when the system (3.1) is initially at rest, so y(0) = 0. We first solve the homogeneous differential equation

ẏh(t) + σ yh(t) = 0

The homogeneous solution of this first-order system is given by:

yh(t) = e^{−σt}    (3.3)

Note that if yh(t) is a solution, then β yh(t) = β e^{−σt} for any β ∈ R is also a solution. We now compute a particular solution, which means we find a solution of (3.1) without regarding the initial value y(0). Observing equation (3.1) for f(t) = 1 for t ≥ 0, we find that yp(t) = 1/σ for t ≥ 0 satisfies the equation. For the final solution we combine the homogeneous and particular solutions, y(t) = yp(t) + β yh(t) = 1/σ + β e^{−σt}, and compute β by solving y(0) = 0, so β = −1/σ. The unit step response for a first-order system is given by ys(t) = (1/σ)(1 − e^{−σt}) for t ≥ 0. This can be rewritten as:

ys(t) = (1/σ)(1 − e^{−σt}) us(t)    (3.4)

Figure 3.2: Unit step response for a first-order system

The unit step response is shown in Figure 3.2. We see that the response asymptotically approaches the steady-state value.

Unit impulse response

The derivative property (Property 1, Section 1.2) for linear time-invariant systems tells us that if we take the derivative of the input as a new input, then the derivative of the output will be the new output. We know there holds

\delta(t) = \frac{d\, u_s(t)}{d t}

and thus the impulse response yδ(t), resulting from an input f(t) = δ(t), is given by

y_\delta(t) = \frac{d\, y_s(t)}{d t} = e^{-\sigma t} , \quad t \geq 0     (3.5)

The impulse response is given in Figure 3.3. We see that the response starts in yδ(0) = 1, and then decays asymptotically towards zero.

[Figure 3.3: Impulse response for a first-order system]

Stability of a first-order system

Let us consider the homogeneous solution for a first-order system and see how the system responds to some initial energy stored within it in the absence of an input signal (so the particular solution is yp = 0). Then from (3.3) we see that the response will always have the form

y_h(t) = e^{-\sigma t}

Typical responses for yh(t) are given in Figure 3.4. If yh(t) decays to zero as t approaches infinity, the system is said to be stable (see Figure 3.4.a). If, on the other hand, yh(t) increases without limit as t becomes large, the system is unstable (see Figure 3.4.b). A first-order system is stable if σ > 0 and unstable if σ < 0. If σ is equal to zero, yh(t) remains constant as shown in Figure 3.4.c. Such a system is said to be marginally stable.

Example 3.2 Consider the RC-circuit of Figure 3.1.a with R = 2 Ω and C = 1/4 F, and let the input current follow a step function, so i(t) = us(t) A. The differential equation is given by

\dot{v}(t) + 2 v(t) = 4 i(t)


[Figure 3.4: Stability of first-order systems: (a) σ > 0, (b) σ < 0, (c) σ = 0]

and so the forcing function is f(t) = 4 i(t) = 4 us(t). From Equation (3.4) we know that for a first-order system the unit step response is given by

y_s(t) = \frac{1}{\sigma}(1 - e^{-\sigma t}) u_s(t) = 0.5\,(1 - e^{-2t}) u_s(t)

In our case we have f(t) = 4 us(t) and so the step response of this RC-circuit is given by

v(t) = 4 y_s(t) = 2\,(1 - e^{-2t}) u_s(t) \ \mathrm{V}
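As a quick check of Example 3.2, the following sketch (again Python/NumPy, an illustrative choice) evaluates v(t) = 2(1 − e^{−2t}) and verifies that it satisfies \dot{v} + 2v = 4 with v(0) = 0.

```python
import numpy as np

t = np.linspace(0.0, 3.0, 601)
v = 2.0 * (1.0 - np.exp(-2.0 * t))           # claimed step response of the RC circuit

# check the differential equation  dv/dt + 2 v = 4 i,  with i(t) = 1 for t >= 0
dvdt = np.gradient(v, t)
residual = dvdt + 2.0 * v - 4.0
print("v(0) =", v[0])                         # initial condition: 0
print("max |residual| =", np.max(np.abs(residual)))   # small (finite-difference error only)
```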

3.2 Second-order systems

Second-order systems can be described by a state system with two states or by a single second-order differential equation. Physical second-order systems contain two independent elements in which energy can be stored. For example, in mechanical systems the kinetic energy in a mass can be exchanged with the potential energy of a spring, and in electrical systems energy can be exchanged between capacitors and inductors. Second-order systems can be written in a standard form as

\ddot{y}(t) + a_1 \dot{y}(t) + a_2 y(t) = f(t)     (3.6)

where the system is defined by the parameters a1 and a2 and f(t) is the forcing function. Often the forcing function has the form f(t) = b_0 \ddot{u}(t) + b_1 \dot{u}(t) + b_2 u(t) and so it depends on the input u(t) and the parameters b0, b1, b2 ∈ R.

Example 3.3 (Second-order systems)

Consider the RLC-circuit of Figure 3.5.a with inductance L, capacitance C and resistance value R. The relation between the voltage v(t) = v1(t) − v2(t) and the current i(t) is given by the second-order differential equation

\frac{d^2 v(t)}{d t^2} + \frac{1}{RC} \frac{d v(t)}{d t} + \frac{1}{LC} v(t) = \frac{1}{C} \frac{d i(t)}{d t}


[Figure 3.5: Examples of second-order systems: (a) RLC network, (b) simple pendulum, (c) mechanical system]

so with i(t) as the input and v(t) as the output of the system, we find that the parameters for this system are a1 = 1/RC and a2 = 1/LC, and the forcing function is f(t) = (1/C) di(t)/dt.

For the simple pendulum in Figure 3.5.b with mass m and length ℓ we find that the relation between the angle θ and an external force F is given by the second-order differential equation

\ddot{\theta}(t) + \frac{g}{\ell} \theta(t) = \frac{1}{m\ell} F(t)

where we used the approximations sin(θ) ≈ θ and cos(θ) ≈ 1 for small θ. With F(t) as the input and θ(t) as the output of the system, we find that the parameters for this system are a1 = 0 and a2 = g/ℓ, and the forcing function is f(t) = (1/(mℓ)) F(t).

Finally, for the translation mechanical system in Figure 3.5.c with mass m, spring constant k and damping b we find that the relation between the position y and an external force fe is given by the second-order differential equation

m \ddot{y}(t) + b \dot{y}(t) + k y(t) = f_e(t)

With fe(t) as the input and y(t) as the output of the system, we find that the parameters for this system are a1 = b/m and a2 = k/m, and the forcing function is f(t) = fe(t)/m.

Second-order single-input single-output state space systems can be described by the equations

\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} u(t) ,
y(t) = \begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + d\, u(t) ,     (3.7)

or

\dot{x}_1(t) = a_{11} x_1(t) + a_{12} x_2(t) + b_1 u(t) ,
\dot{x}_2(t) = a_{21} x_1(t) + a_{22} x_2(t) + b_2 u(t) ,
y(t) = c_1 x_1(t) + c_2 x_2(t) + d\, u(t) ,     (3.8)

where the state x(t) ∈ R^2 is a vector with two entries and the system matrices are A ∈ R^{2×2}, B ∈ R^{2×1}, C ∈ R^{1×2} and D ∈ R. We will use the Laplace transformation to find the input-output description of this state system. Define X1(s) = L{x1(t)}, X2(s) = L{x2(t)}, Y(s) = L{y(t)} and U(s) = L{u(t)}. Now (3.8) can be rewritten as

s X_1(s) = a_{11} X_1(s) + a_{12} X_2(s) + b_1 U(s) ,
s X_2(s) = a_{21} X_1(s) + a_{22} X_2(s) + b_2 U(s) ,
Y(s) = c_1 X_1(s) + c_2 X_2(s) + d\, U(s) .     (3.9)

In matrix form this becomes

s I \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} U(s) ,
Y(s) = \begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} + d\, U(s) .

The first equation now becomes

\left( s I - \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \right) \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} U(s)

and so

\begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \left( s I - \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \right)^{-1} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} U(s)

With

\left( s I - \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \right)^{-1} = \begin{bmatrix} s - a_{11} & -a_{12} \\ -a_{21} & s - a_{22} \end{bmatrix}^{-1} = \frac{1}{(s - a_{11})(s - a_{22}) - a_{12} a_{21}} \begin{bmatrix} s - a_{22} & a_{12} \\ a_{21} & s - a_{11} \end{bmatrix}

we derive

\begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \frac{1}{(s - a_{11})(s - a_{22}) - a_{12} a_{21}} \begin{bmatrix} s b_1 - a_{22} b_1 + a_{12} b_2 \\ s b_2 + a_{21} b_1 - a_{11} b_2 \end{bmatrix} U(s)
 = \frac{1}{s^2 - s (a_{11} + a_{22}) + (a_{11} a_{22} - a_{12} a_{21})} \begin{bmatrix} s b_1 - a_{22} b_1 + a_{12} b_2 \\ s b_2 + a_{21} b_1 - a_{11} b_2 \end{bmatrix} U(s)
 = \frac{1}{s^2 + s a_1 + a_2} \begin{bmatrix} s b_1 - a_{22} b_1 + a_{12} b_2 \\ s b_2 + a_{21} b_1 - a_{11} b_2 \end{bmatrix} U(s)


with a1 = −(a11 + a22) and a2 = a11 a22 − a12 a21. Now it follows

Y(s) = \begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} + d\, U(s)
 = \frac{1}{s^2 + s a_1 + a_2} \begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} s b_1 - a_{22} b_1 + a_{12} b_2 \\ s b_2 + a_{21} b_1 - a_{11} b_2 \end{bmatrix} U(s) + d\, U(s)
 = \frac{s (b_1 c_1 + b_2 c_2) + (a_{12} b_2 c_1 + a_{21} b_1 c_2 - a_{22} b_1 c_1 - a_{11} b_2 c_2)}{s^2 + s a_1 + a_2} U(s) + d\, U(s)
 = \frac{s^2 b_0 + s\, b_1 + b_2}{s^2 + s a_1 + a_2} U(s)

with b_0 = d, b_1 = b_1 c_1 + b_2 c_2 - a_{11} d - a_{22} d, and b_2 = a_{12} b_2 c_1 + a_{21} b_1 c_2 - a_{22} b_1 c_1 - a_{11} b_2 c_2 + a_{11} a_{22} d - a_{12} a_{21} d. We obtain

(s^2 + s a_1 + a_2)\, Y(s) = (s^2 b_0 + s\, b_1 + b_2)\, U(s)

This leads to the differential equation

\ddot{y}(t) + a_1 \dot{y}(t) + a_2 y(t) = b_0 \ddot{u}(t) + b_1 \dot{u}(t) + b_2 u(t)

and so the forcing function is f(t) = b_0 \ddot{u}(t) + b_1 \dot{u}(t) + b_2 u(t).
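The coefficient formulas above can be cross-checked numerically. The sketch below (Python/NumPy/SciPy, with an arbitrarily chosen example system; not part of any specific example in the notes) builds a1, a2 and b0, b1, b2 from the state-space entries and compares them with scipy.signal.ss2tf.

```python
import numpy as np
from scipy.signal import ss2tf

# an arbitrary example second-order state system
A = np.array([[-1.0, 2.0],
              [-3.0, -4.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[2.0, -1.0]])
D = np.array([[0.5]])
(a11, a12), (a21, a22) = A
b1_, b2_ = B[:, 0]
c1, c2 = C[0]
d = D[0, 0]

# coefficients derived in the text
a1 = -(a11 + a22)
a2 = a11 * a22 - a12 * a21
b0 = d
b1 = b1_ * c1 + b2_ * c2 - a11 * d - a22 * d
b2 = (a12 * b2_ * c1 + a21 * b1_ * c2 - a22 * b1_ * c1 - a11 * b2_ * c2
      + a11 * a22 * d - a12 * a21 * d)

num, den = ss2tf(A, B, C, D)
print("derived:", [b0, b1, b2], [1.0, a1, a2])
print("ss2tf  :", num[0], den)          # should match up to rounding
```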

Unit step response

Consider the second-order system

\ddot{y}(t) + a_1 \dot{y}(t) + a_2 y(t) = f(t)     (3.10)

If a1 ≥ 0 and a2 ≥ 0 we can define

\omega_n = \sqrt{a_2} , \qquad \zeta = \frac{a_1}{2 \omega_n}

where ζ is called the damping ratio and ωn is the undamped natural frequency. We obtain a new description of the second-order system:

\ddot{y}(t) + 2 \zeta \omega_n \dot{y}(t) + \omega_n^2 y(t) = f(t)

We will now study the unit step response for system (3.10) when the forcing function is a unit step function (f(t) = us(t)) and the system is initially at rest, so y(0) = 0 and \dot{y}(0) = 0. We first solve the homogeneous equation

\ddot{y}_h(t) + 2 \zeta \omega_n \dot{y}_h(t) + \omega_n^2 y_h(t) = 0


Laplace transformation gives us

(s^2 + 2 \zeta \omega_n s + \omega_n^2)\, Y_h(s) = 0

or

(s - \lambda_1)(s - \lambda_2)\, Y_h(s) = 0     (3.11)

where \lambda_1 = \omega_n(-\zeta + \sqrt{\zeta^2 - 1}) ∈ C and \lambda_2 = \omega_n(-\zeta - \sqrt{\zeta^2 - 1}) ∈ C are the roots of this equation. Note that λ1 and λ2 are distinct if ζ ≠ 1, and λ1 = λ2 for ζ = 1. We will look at both cases.

The case ζ ≠ 1: Let us first consider the case ζ ≠ 1. For ζ ≠ 1 we find λ1 ≠ λ2. The homogeneous equation now becomes

(s - \lambda_1)(s - \lambda_2)\, Y_h(s) = 0

This means that either (s − λ1) Yh(s) = 0 or (s − λ2) Yh(s) = 0. Note that this is equivalent to computing the homogeneous solution of the first-order systems

\dot{y}_{h1}(t) - \lambda_1 y_{h1}(t) = 0 \quad \text{and} \quad \dot{y}_{h2}(t) - \lambda_2 y_{h2}(t) = 0

The solution of the first equation is y_{h1}(t) = \beta_1 e^{\lambda_1 t}, and of the second equation y_{h2}(t) = \beta_2 e^{\lambda_2 t}. This means that the overall homogeneous solution for the second-order differential equation is given by

y_h(t) = y_{h1}(t) + y_{h2}(t) = \beta_1 e^{\lambda_1 t} + \beta_2 e^{\lambda_2 t}     (3.12)

We now compute a particular solution, which means we find a solution for (3.10) without regarding the initial value y(0). Observing equation (3.10) for f(t) = 1 for t ≥ 0, we find that y_p(t) = 1/\omega_n^2 for t ≥ 0 satisfies the equation. Finally we combine the homogeneous solution and the particular solution y(t) = 1/\omega_n^2 + \beta_1 e^{\lambda_1 t} + \beta_2 e^{\lambda_2 t} and we compute β1 and β2 by solving y(0) = 0 and \dot{y}(0) = 0. With

y(0) = 1/\omega_n^2 + \beta_1 + \beta_2 = 0

and with \dot{y}(t) = \beta_1 \lambda_1 e^{\lambda_1 t} + \beta_2 \lambda_2 e^{\lambda_2 t}, resulting in

\dot{y}(0) = \beta_1 \lambda_1 + \beta_2 \lambda_2 = 0

we find

\beta_1 = \frac{\lambda_2}{\omega_n^2 (\lambda_1 - \lambda_2)} , \qquad \beta_2 = \frac{\lambda_1}{\omega_n^2 (\lambda_2 - \lambda_1)}

This gives us the unit step response for a second-order system:

y_s(t) = \frac{1}{\omega_n^2} \left( 1 + \frac{\lambda_2}{\lambda_1 - \lambda_2} e^{\lambda_1 t} + \frac{\lambda_1}{\lambda_2 - \lambda_1} e^{\lambda_2 t} \right) u_s(t) , \quad \text{for } t \geq 0     (3.13)


[Figure 3.6: Step response of a second-order system for ωn = 1 and different values of ζ (ζ = 0, 0.1, 0.3, 0.707, 1.0, 2.0, 5.0)]

Depending on ζ, the system is either overdamped, underdamped, or critically damped:

Overdamped system ζ > 1:
If ζ > 1 the system is called overdamped, and the parameters λ1 and λ2 are both real numbers (because ζ² − 1 > 0) and so the step response is given by (3.13).

Underdamped system 0 ≤ ζ < 1:
If 0 ≤ ζ < 1 the system is called underdamped, and the parameters λ1 and λ2 are both complex numbers (because ζ² − 1 < 0):

\lambda_1 = \omega_n(-\zeta + j\sqrt{1 - \zeta^2}) \quad \text{and} \quad \lambda_2 = \omega_n(-\zeta - j\sqrt{1 - \zeta^2})

If we define σ = ζωn and \omega_d = \omega_n \sqrt{1 - \zeta^2}, then we obtain

\lambda_1 = -\sigma + j\omega_d , \qquad \lambda_2 = -\sigma - j\omega_d

We find

\beta_1 = \frac{\lambda_2}{\omega_n^2 (\lambda_1 - \lambda_2)} = \frac{1}{\omega_n^2}\left( -\frac{1}{2} + \frac{j\zeta}{2\sqrt{1 - \zeta^2}} \right) , \qquad \beta_2 = \frac{\lambda_1}{\omega_n^2 (\lambda_2 - \lambda_1)} = \frac{1}{\omega_n^2}\left( -\frac{1}{2} - \frac{j\zeta}{2\sqrt{1 - \zeta^2}} \right)

With e^{\lambda_1 t} = e^{-\sigma t + j\omega_d t} = e^{-\sigma t}(\cos\omega_d t + j\sin\omega_d t) and e^{\lambda_2 t} = e^{-\sigma t - j\omega_d t} = e^{-\sigma t}(\cos\omega_d t - j\sin\omega_d t), this leads to the unit step response for the case ζ < 1:

y_s(t) = \frac{1}{\omega_n^2}\left( 1 - e^{-\sigma t}\cos\omega_d t - \frac{\zeta}{\sqrt{1 - \zeta^2}}\, e^{-\sigma t}\sin\omega_d t \right) u_s(t) , \quad \text{for } t \geq 0     (3.14)

Critically damped system ζ = 1:
If ζ = 1 the system is called critically damped. We find λ = λ1 = λ2 = −ωn. The homogeneous solutions y_{h1} = y_{h2} = e^{\lambda t} become equal and we need an additional homogeneous solution y_h = t\, e^{\lambda t} (see Chapter 4). The complete solution is now given by y(t) = 1/\omega_n^2 + \beta_1 e^{\lambda t} + \beta_2 t\, e^{\lambda t}. We compute β1 and β2 by solving y(0) = 0 and \dot{y}(0) = 0, and we obtain

\beta_1 = \frac{-1}{\omega_n^2} , \qquad \beta_2 = \frac{\lambda}{\omega_n^2} = \frac{-1}{\omega_n}

This leads to the following unit step response for the case ζ = 1:

y_s(t) = \frac{1}{\omega_n^2}\left( 1 - e^{-\omega_n t} - \omega_n t\, e^{-\omega_n t} \right) u_s(t) , \quad \text{for } t \geq 0     (3.15)

In Figure 3.6 the response for different values of ζ is given. After a dynamical phase the output will asymptotically approach the final value (except for ζ = 0). In the underdamped case the output will oscillate, whereas the output for the overdamped case will not.
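The three cases can be reproduced numerically. The sketch below (Python/NumPy; ωn = 1 as in Figure 3.6) evaluates the step-response formulas (3.13) and (3.15) for a few damping ratios; it is a minimal illustration of the formulas only.

```python
import numpy as np

def step_response(t, zeta, wn=1.0):
    """Unit step response of  y'' + 2*zeta*wn*y' + wn**2*y = us(t),  equations (3.13)/(3.15)."""
    if zeta == 1.0:                                    # critically damped (3.15)
        return (1.0 - np.exp(-wn * t) - wn * t * np.exp(-wn * t)) / wn**2
    lam1 = wn * (-zeta + np.sqrt(complex(zeta**2 - 1.0)))
    lam2 = wn * (-zeta - np.sqrt(complex(zeta**2 - 1.0)))
    ys = (1.0 + lam2 / (lam1 - lam2) * np.exp(lam1 * t)
              + lam1 / (lam2 - lam1) * np.exp(lam2 * t)) / wn**2
    return ys.real                                     # imaginary parts cancel for 0 <= zeta < 1

t = np.linspace(0.0, 60.0, 3001)
for zeta in (0.1, 0.3, 0.707, 1.0, 2.0):
    print(f"zeta = {zeta}: value at t = 60 ~ {step_response(t, zeta)[-1]:.3f}")  # approaches 1/wn**2 = 1
```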

Unit impulse response

The derivative property for linear time-invariant systems of Section 1.2 tells us that if we take the derivative of the input as a new input, then the derivative of the output will be the new output. We know there holds

\delta(t) = \frac{d\, u_s(t)}{d t}

and thus the impulse response yδ(t), resulting from an input f(t) = δ(t), is given by

y_\delta(t) = \frac{d\, y_s(t)}{d t} , \quad t \geq 0

For the overdamped case (ζ > 1) this results in

y_\delta(t) = \frac{1}{\omega_n^2}\left( \frac{\lambda_1 \lambda_2}{\lambda_1 - \lambda_2} e^{\lambda_1 t} + \frac{\lambda_1 \lambda_2}{\lambda_2 - \lambda_1} e^{\lambda_2 t} \right) = \frac{1}{\lambda_1 - \lambda_2}\left( e^{\lambda_1 t} - e^{\lambda_2 t} \right) , \quad \text{for } t \geq 0     (3.16)

For the critically damped case (ζ = 1) this results in

y_\delta(t) = t\, e^{-\omega_n t} , \quad \text{for } t \geq 0     (3.17)

For the underdamped case (0 ≤ ζ < 1) this results in

y_\delta(t) = \frac{1}{\omega_d} e^{-\sigma t} \sin\omega_d t , \quad \text{for } t \geq 0     (3.18)

The impulse response is given in Figure 3.7.


[Figure 3.7: Impulse response of a second-order system for ωn = 1 and different values of ζ (ζ = 0, 0.1, 0.3, 0.707, 1.0, 2.0, 5.0)]

Damping ratio, (un)damped natural frequency

In the previous paragraph we studied the underdamped second-order system

\ddot{y}(t) + 2 \zeta \omega_n \dot{y}(t) + \omega_n^2 y(t) = f(t)

with 0 ≤ ζ < 1. In this paragraph we will take a closer look at the influence of the damping ratio ζ, the undamped natural frequency ωn, the negative real part of the pole, denoted by σ = ωnζ, and the damped natural frequency \omega_d = \omega_n\sqrt{1 - \zeta^2} on the behavior of a second-order system. The relation between σ, ωn, ζ, and ωd is illustrated in Figure 3.8. The desired behavior of a system is often expressed in specifications on the output response on a step signal f(t) = \omega_n^2 u_s(t). The response on this input is given by:

y_s(t) = 1 - e^{-\sigma t}\left( \cos\omega_d t + \frac{\zeta}{\sqrt{1 - \zeta^2}} \sin\omega_d t \right)     (3.19)

Interesting criteria in this response are the overshoot, the peak time, the rise time, and the settling time of the response.

The settling time ts is the time it takes for the output to enter and remain within a 1% band centered around its steady-state value.

The rise time tr is the time required for the output to rise from 10% to 90% of the steady-state value.

The peak time tp is the time it takes for the response to reach its maximum value.

The overshoot Mp is the maximum value of the response minus the steady-state value of the response, divided by the steady-state value of the response.


[Figure 3.8: The relation between σ, ωn, ωd, and ζ in the complex plane (φ = arccos ζ)]

In Figure 3.9 the criteria are visualized.

1. Settling time of output signal on unit step reference signal:
The settling time is defined as the time the output signal y(t) needs to settle within 1% of its final value. When we consider the response of (3.19) we find that the output signal y(t) will be in the 1% band if

e^{-\sigma t} \leq 0.01

The settling time is therefore

t_s = -\frac{\ln(0.01)}{\sigma} \approx \frac{4.6}{\sigma}

This means that ts is inversely proportional to σ. Increasing σ will decrease the settling time ts, and vice versa.

2. Rise time:
The rise time is the time required for the output to rise from 10% to 90% of the steady-state value. If we substitute τ = ωn t into (3.19) we find:

y_s(\tau) = 1 - e^{-\zeta\tau}\left( \cos(\tau\sqrt{1 - \zeta^2}) + \frac{\zeta}{\sqrt{1 - \zeta^2}} \sin(\tau\sqrt{1 - \zeta^2}) \right)

We can now compute functions τ1(ζ) and τ2(ζ), such that for any choice of ζ we find

y_s(\tau_1(\zeta)) = 0.1 , \qquad y_s(\tau_2(\zeta)) = 0.9


[Figure 3.9: The step response criteria of a second-order system]

Define the function τr(ζ) = τ2(ζ) − τ1(ζ), then the rise time tr is given by:

t_r = \frac{\tau_r(\zeta)}{\omega_n}

In Figure 3.10 the function τr(ζ) is given for 0 ≤ ζ < 1.

[Figure 3.10: The function τr(ζ) for 0 ≤ ζ < 1]

Often the function τr(ζ) is approximated by the average value 1.8, and the rise time is then given by

t_r \approx \frac{1.8}{\omega_n}

Note that tr is inversely proportional to ωn. Increasing ωn will decrease the rise time tr, and vice versa.


3. Peak time:
To compute the peak time we set the derivative of the response from (3.19) to zero:

\dot{y}_s(t) = y_\delta(t) = \frac{\omega_n}{\sqrt{1 - \zeta^2}} e^{-\sigma t} \sin\omega_d t

We find that \dot{y}_s(t) = 0 when \sin\omega_d t = 0 and so \omega_d t = k\pi, k ∈ Z. The first peak in the response will occur for k = 1, and so

t_p = \frac{\pi}{\omega_d}

This means that tp is inversely proportional to ωd. Increasing ωd will decrease the peak time tp, and vice versa.

4. Overshoot:
The peak value can be found by substitution of t = tp = π/ωd into (3.19):

y_s(t_p) = 1 + e^{-\sigma\pi/\omega_d}

and so with \lim_{t\to\infty} y_s(t) = 1 the overshoot is given by

M_p = \frac{y_s(t_p) - y_s(\infty)}{y_s(\infty)} = e^{-\sigma\pi/\omega_d} = e^{-\pi\zeta/\sqrt{1 - \zeta^2}}

[Figure 3.11: The overshoot Mp as a function of damping ratio ζ]

We find that the overshoot only depends on the damping ratio ζ. In Figure 3.11 the relation between Mp and ζ is visualized.
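The four criteria can be gathered into a small helper. The sketch below (Python/NumPy, an illustrative choice; the example values ζ = 0.5 and ωn = 2 rad/s are arbitrary) evaluates the estimates ts ≈ 4.6/σ, tr ≈ 1.8/ωn, tp = π/ωd and Mp = e^{−πζ/√(1−ζ²)}.

```python
import numpy as np

def step_criteria(zeta, wn):
    """Estimates of settling time, rise time, peak time and overshoot (0 <= zeta < 1)."""
    sigma = zeta * wn                       # decay factor
    wd = wn * np.sqrt(1.0 - zeta**2)        # damped natural frequency
    ts = 4.6 / sigma                        # 1% settling time
    tr = 1.8 / wn                           # rise time (average-slope approximation)
    tp = np.pi / wd                         # peak time
    Mp = np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))   # relative overshoot
    return ts, tr, tp, Mp

print(step_criteria(zeta=0.5, wn=2.0))      # example: zeta = 0.5, wn = 2 rad/s
```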


[Figure 3.12: Response for various pole locations in the complex plane]

Stability of a second-order system

For second-order systems the roots λ1, λ2 of the homogeneous equation can be either real-valued or complex. These roots are often called the poles of the system (see also Section 4.1). For an overdamped system (ζ > 1) both poles λ1, λ2 are real and the homogeneous solution is the sum of two exponentials:

y_h(t) = \beta_1 e^{\lambda_1 t} + \beta_2 e^{\lambda_2 t}

Similar to the first-order case, yh(t) will approach zero as t approaches infinity if both poles are negative, and so an overdamped second-order system is stable if λ1 < 0 and λ2 < 0 and unstable if either λ1 > 0 or λ2 > 0 (or both). For an underdamped system (ζ < 1) the poles λ1, λ2 are complex conjugates and the homogeneous solution is a sinusoidal function

y_h(t) = \beta_1 e^{-\sigma t} \cos\omega_d t + \beta_2 e^{-\sigma t} \sin\omega_d t

Note that \cos\omega_d t and \sin\omega_d t are both bounded in absolute value by one. In order to have yh(t) decaying to zero as t approaches infinity, e^{-\sigma t} has to decay to zero, which is the case for σ > 0. This means that an underdamped second-order system is stable if σ > 0 and unstable if σ < 0. Note that σ = −Re(λ1) = −Re(λ2) and so the underdamped second-order system is stable if Re(λi) < 0. For a critically damped system (ζ = 1) both poles are equal (λ1 = λ2) and real-valued. The homogeneous solution is given by

y_h(t) = \beta_1 e^{\lambda t} + \beta_2 t\, e^{\lambda t}


Note that yh(t) decays to zero as t approaches infinity if e^{λt} decays to zero, which is the case for λ < 0. This means that a critically damped second-order system is stable if λ < 0 and unstable if λ > 0. Let λ1, λ2 be the roots of the homogeneous equation of a second-order system. In general we can say that the second-order system will be stable if there holds

Re(\lambda_i) < 0 , \quad i = 1, 2

or, in other words, if the poles are in the left half of the complex plane. The relation between various pole locations and the system response is sketched in Figure 3.12. For a pole with a negative real part we see a decaying response, for a pole with a positive real part we have an increasing response. If the pole has an imaginary part, there is an oscillation. The frequency of the oscillation grows with increasing imaginary part. If a single pole is in the origin, the response will contain a step function. If the pole is on the imaginary axis (but not in the origin), the response is an oscillation with constant amplitude. For multiple poles on the imaginary axis, the response will be unstable.

Example 3.4 Consider the translation mechanical system in Figure 3.5.c with mass m = 2 kg, spring constant k = 8 N/m, damping b = 4√3 Ns/m, and let the external force be given by fe(t) = 10 δ(t). The differential equation is given by

\ddot{y}(t) + 2\sqrt{3}\, \dot{y}(t) + 4 y(t) = 0.5 f_e(t)

and so the forcing function is f(t) = 0.5 fe(t) = 5 δ(t). We compute the undamped natural frequency ωn = 2, ζ = 0.5√3, σ = ζωn = √3, and \omega_d = \omega_n\sqrt{1 - \zeta^2} = 1. With ζ < 1 we have an underdamped system and so for f(t) = δ(t) we find

y_\delta(t) = \frac{1}{\omega_d} e^{-\sigma t} \sin\omega_d t = e^{-\sqrt{3}\, t} \sin t , \quad \text{for } t \geq 0

In our case we have f(t) = 5 δ(t) and so the response of this mechanical system is given by

y(t) = 5 y_\delta(t) = 5 e^{-\sqrt{3}\, t} \sin t , \quad \text{for } t \geq 0
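A numerical integration of the mass-spring-damper of Example 3.4 confirms the computed response. The sketch below (Python/SciPy, illustrative only) integrates m\ddot{y} + b\dot{y} + k y = fe(t); since fe(t) = 10δ(t) cannot be fed to a numerical solver directly, the impulse is replaced by the equivalent initial velocity \dot{y}(0) = 10/m = 5.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 2.0, 4.0 * np.sqrt(3.0), 8.0
t = np.linspace(0.0, 6.0, 601)

# impulse fe = 10*delta(t)  <=>  y(0) = 0, dy/dt(0) = 10/m = 5
def rhs(t, x):
    y, v = x
    return [v, (-b * v - k * y) / m]

sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 5.0], t_eval=t, rtol=1e-9, atol=1e-12)

y_analytic = 5.0 * np.exp(-np.sqrt(3.0) * t) * np.sin(t)
print("max deviation:", np.max(np.abs(sol.y[0] - y_analytic)))   # close to zero
```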

3.3 Exercises

Exercise 1. Driving car

A car with mass m = 2 kg is driving with a velocity v(t) over a road. The surface friction of the car is fc(t) = c v(t) = 6 v(t) and the motor of the car produces a force fm(t). Let this force be equal to a unit step, so fm(t) = us(t). Compute the velocity v as a function of time.


Exercise 2. RLC-circuit

Consider the electrical circuit of Figure 3.5.a with inductance L = 1 H. How do we choose R and C such that ωn = 2 rad/s and ζ = 0.5 for this system?

Exercise 3. Damped natural frequency

Consider the second-order system:

\ddot{y}(t) + K \dot{y}(t) + 3 y(t) = u(t)

For which values of K do we have ωd ≥ 1.5?

Exercise 4. Stability

Are the following systems stable?

a) \dot{y}(t) + 3 y(t) = u(t)

b) -4\dot{y}(t) - 2 y(t) = 4 u(t)

c) -\ddot{y}(t) + 3\dot{y}(t) - y(t) = u(t)

d) 4\ddot{y}(t) + 2\dot{y}(t) + y(t) = u(t)

Exercise 5. Damping ratio, (un)damped natural frequency, decay factor

Compute for the following systems the damping ratio ζ , undamped natural frequency ωn,damped natural frequency ωd, and the decay factor σ:

a) \ddot{y}(t) + 3\dot{y}(t) + 9 y(t) = u(t)

b) 4\ddot{y}(t) + 8\dot{y}(t) + y(t) = 4 u(t)

c) \ddot{y}(t) + 0.4\dot{y}(t) + y(t) = u(t)

d) \ddot{y}(t) + 4\dot{y}(t) + 4 y(t) = 3 u(t)

Exercise 6. Response criteria

Compute estimates of the settling time ts, rise time tr, peak time tp, and overshoot Mp foreach of the systems of Exercise 5.


Chapter 4

General system analysis

In this chapter we will analyze single-input single-output linear time-invariant systems, which are described by the linear differential equation:

\frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d y(t)}{d t} + a_n y(t) = b_0 \frac{d^m u(t)}{d t^m} + b_1 \frac{d^{m-1} u(t)}{d t^{m-1}} + \ldots + b_{m-1} \frac{d u(t)}{d t} + b_m u(t) ,     (4.1)

where we assume in this chapter that m ≤ n. In this chapter we will generalize the results of the first-order and second-order systems of Chapter 3 to higher order systems. To facilitate the discussion we will introduce the new concepts of transfer function and convolution. Further we will consider the computation of the time response and show the relation between various equivalent system descriptions.

4.1 Transfer functions

In Section 2.3 we have introduced the Laplace transformation and claimed that if the system is initially at rest at time t = 0 (which means that the output y(0) = 0 and all its derivatives \frac{d^k y(t)}{d t^k}\big|_{t=0} = 0 as well), then

\mathcal{L}\left\{ \frac{d^n y(t)}{d t^n} \right\} = s^n Y(s)     (4.2)

With U(s) = L{u(t)} and Y(s) = L{y(t)}, Equation (4.1) can be written as

\left( s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n \right) Y(s) = \left( b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1} s + b_m \right) U(s)     (4.3)

and if we define a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n and b(s) = b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1} s + b_m we obtain

a(s)\, Y(s) = b(s)\, U(s)     (4.4)


If we rewrite Equation (4.4) as

\frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} = H(s)     (4.5)

then H(s) is called the transfer function, which describes the relation between the Laplace transform of the input signal and the Laplace transform of the output signal. Note that

H(s) = \frac{b(s)}{a(s)} = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2} s^2 + b_{m-1} s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-2} s^2 + a_{n-1} s + a_n}     (4.6)

The transfer function provides a complete representation of a linear system in the Laplace domain. The order of the transfer function is defined as the order of the denominator polynomial and is therefore equal to n, the order of the differential equation of the system.

Type | Differential equation | Transfer function
Integrator | \dot{y} = u | 1/s
Differentiator | y = \dot{u} | s
First-order system | \dot{y} + a y = u | 1/(s + a)
Double integrator | \ddot{y} = u | 1/s^2
Damped oscillator | \ddot{y} + 2\zeta\omega_n \dot{y} + \omega_n^2 y = u | 1/(s^2 + 2\zeta\omega_n s + \omega_n^2)

Figure 4.1: Transfer functions for some common differential equations.

Since the a and b coefficients are real numbers and s is a complex number, the transferfunction H(s) will also be a complex number that can be represented in polar form or inmagnitude-and-phase form as

H(σ + jω) = M(σ + jω) ejφ(σ+jω) (4.7)

where

M(σ + jω) = |H(σ + jω)| (4.8)

is the magnitude of the transfer function and

φ(σ + jω) = ∠H(σ + jω) (4.9)

is the phase of the transfer function. Both M and φ are real-valued functions dependingon the variable s = σ + jω. With real-valued a and b coefficients we have the propertythat

M(σ − jω) = M(σ + jω) (4.10)

φ(σ − jω) = −φ(σ + jω) (4.11)


Poles and Zeros

Consider a linear system with the rational transfer function

H(s) = \frac{b(s)}{a(s)} ,

where a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-2} s^2 + a_{n-1} s + a_n and b(s) = b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2} s^2 + b_{m-1} s + b_m. The roots of the polynomial a(s) are called the poles of the system, and the roots of b(s) are called the zeros of the system. As was already shown in Chapter 3, the poles pi of first-order and second-order systems correspond to the roots of a(s) in the homogeneous equation, leading to the homogeneous solution y(t) = e^{p_i t}. Zeros have a different interpretation. Let z be a zero of the system, so b(z) = 0 and therefore H(z) = 0. Then for an input signal u(t) = e^{zt}, it follows that the pure exponential output is y(t) = H(z) e^{zt} = 0. Zeros of the transfer function thus block transmission of the corresponding exponential signals.
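Poles and zeros are simply roots of the denominator and numerator polynomials, so they are easy to compute numerically. The sketch below (Python/NumPy) does this for an arbitrary example transfer function H(s) = (s + 3)/(s^2 + 3s + 2), chosen only for illustration.

```python
import numpy as np

# H(s) = b(s)/a(s) with  b(s) = s + 3  and  a(s) = s^2 + 3 s + 2  (an arbitrary example)
b = [1.0, 3.0]          # numerator coefficients, highest power first
a = [1.0, 3.0, 2.0]     # denominator coefficients, highest power first

zeros = np.roots(b)     # roots of b(s)
poles = np.roots(a)     # roots of a(s)
print("zeros:", zeros)  # [-3.]
print("poles:", poles)  # [-2., -1.]
```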

4.2 Time responses

In this section we will derive the time response of linear time-invariant systems of the form (4.1), which can be written as

\frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d y(t)}{d t} + a_n y(t) = f(t)     (4.12)

where the forcing function f(t) = b_0 \frac{d^m u(t)}{d t^m} + b_1 \frac{d^{m-1} u(t)}{d t^{m-1}} + \ldots + b_{m-1} \frac{d u(t)}{d t} + b_m u(t) is assumed to be known, and where we have initial conditions

\frac{d^{n-1} y(t)}{d t^{n-1}}\bigg|_{t=0} = c_{n-1}, \quad \ldots, \quad \frac{d y(t)}{d t}\bigg|_{t=0} = c_1, \quad y(0) = c_0     (4.13)

The procedure to find the signal y(t) that is a solution of differential equation (4.12) and at the same time satisfies the initial conditions (4.13) consists of three parts:

1. Compute the homogeneous solution yh(t), t ≥ 0 that satisfies the so-called homogeneous equation

\frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d y(t)}{d t} + a_n y(t) = 0     (4.14)

so equation (4.12) for f(t) = 0.

2. Compute a particular solution yp(t), t ≥ 0 that satisfies the differential equation (4.12) for the given forcing function f(t) (but without considering the initial conditions).

3. By combining the homogeneous solution and the particular solution, find the final solution y(t) that is a solution of differential equation (4.12) and at the same time satisfies the initial conditions (4.13).


Homogeneous solution

In the previous section we already observed that solutions of the form e^{λt}, for possibly-complex values of λ, play an important role in solving first-order and second-order differential equations. The exponential function is one of the few functions that keeps its shape even after differentiation. In order for the sum of multiple derivatives of a function to sum up to zero, the derivatives must cancel each other out, and the only way for them to do so is for the derivatives to have the same form as the initial function. Thus, to solve

\frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d y(t)}{d t} + a_n y(t) = 0     (4.15)

we set y = e^{λt}, leading to

\lambda^n e^{\lambda t} + a_1 \lambda^{n-1} e^{\lambda t} + \cdots + a_n e^{\lambda t} = 0 .

Division by e^{λt} gives the nth-order polynomial equation

a(\lambda) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_n = 0 .

This equation, a(λ) = 0, is the characteristic equation and a is defined to be the characteristic polynomial.

The polynomial a(λ) is of nth order and so there are n roots (λ1, . . . , λn) of the characteristic polynomial. If all roots λi are distinct, we obtain n possible solutions e^{λi t} (i = 1, . . . , n). Note that because of linearity, any linear combination of possible solutions will satisfy the homogeneous equation (4.15). This gives us the most general form of the homogeneous solution:

y_h(t) = C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t} + \ldots + C_n e^{\lambda_n t} = \sum_{i=1}^{n} C_i e^{\lambda_i t}

with Ci ∈ C.

Example 4.1 Consider the homogeneous equation

\frac{d^3 y(t)}{d t^3} + 10 \frac{d^2 y(t)}{d t^2} + 31 \frac{d y(t)}{d t} + 30 y(t) = 0

The characteristic equation is:

\lambda^3 + 10\lambda^2 + 31\lambda + 30 = (\lambda + 2)(\lambda + 3)(\lambda + 5) = 0

We find three roots: λ1 = −2, λ2 = −3, λ3 = −5, and so the homogeneous solution becomes

y_h(t) = C_1 e^{-2t} + C_2 e^{-3t} + C_3 e^{-5t}

with C1, C2, C3 ∈ R.


Example 4.2 Consider the homogeneous equation

\frac{d^3 y(t)}{d t^3} + 4 \frac{d^2 y(t)}{d t^2} + 6 \frac{d y(t)}{d t} + 4 y(t) = 0

with characteristic equation:

\lambda^3 + 4\lambda^2 + 6\lambda + 4 = (\lambda + 2)(\lambda + 1 + j)(\lambda + 1 - j) = 0

We find three roots: λ1 = −2, λ2 = −1 − j, λ3 = −1 + j, and so the homogeneous solution becomes

y_h(t) = C_1 e^{-2t} + C_2 e^{(-1-j)t} + C_3 e^{(-1+j)t}

In this case we find C1 ∈ R and C2, C3 ∈ C. Moreover, to make sure that yh is real-valued the coefficients C2 and C3 will be complex conjugates of each other, so if C2 = γ + jζ, then C3 = γ − jζ with γ, ζ ∈ R. We derive

y_h(t) = C_1 e^{-2t} + (\gamma + j\zeta) e^{(-1-j)t} + (\gamma - j\zeta) e^{(-1+j)t}
 = C_1 e^{-2t} + (\gamma + j\zeta) e^{-t}(\cos t - j\sin t) + (\gamma - j\zeta) e^{-t}(\cos t + j\sin t)
 = C_1 e^{-2t} + 2\gamma\, e^{-t}\cos t + 2\zeta\, e^{-t}\sin t

with C1, γ, ζ ∈ R.

Multiple roots:

If one of the roots λi has a multiplicity equal to m (with m > 1), then

y = tk eλit for k = 0, 1, . . . , m− 1

is a solution of the homogeneous equation. Applying this to all roots gives a collection ofn distinct and linearly independent solutions, where n is the order of the system.

Example 4.3 Consider the homogeneous equation

\frac{d^4 y(t)}{d t^4} + 2 \frac{d^3 y(t)}{d t^3} + 2 \frac{d^2 y(t)}{d t^2} + 2 \frac{d y(t)}{d t} + y(t) = 0

with characteristic equation:

\lambda^4 + 2\lambda^3 + 2\lambda^2 + 2\lambda + 1 = (\lambda + 1)^2(\lambda + j)(\lambda - j) = 0

We find three distinct roots: λ1 = j, λ2 = −j, λ3 = −1 (with multiplicity 2), and so the homogeneous solution becomes

y_h(t) = C_1 e^{jt} + C_2 e^{-jt} + C_3 e^{-t} + C_4 t\, e^{-t}

As in Example 4.2, the coefficients C1 = γ + jζ and C2 = γ − jζ will be complex conjugates of each other with γ, ζ ∈ R. We derive

y_h(t) = C_1 e^{jt} + C_2 e^{-jt} + C_3 e^{-t} + C_4 t\, e^{-t}
 = 2\gamma\cos t - 2\zeta\sin t + C_3 e^{-t} + C_4 t\, e^{-t}

with γ, ζ, C3, C4 ∈ R.


Particular solution

To obtain the solution to the non-homogeneous equation (sometimes called inhomogeneousequation), we need to find a particular solution yp(t), t ≥ 0 that satisfies the differentialequation (4.12) (but without considering the initial condition).

Particular solution for an exponential function

We start by considering the particular solution of differential equation (4.12) for an exponential input u(t) = e^{st} where s ∈ C. First note that the derivatives of the input function are given by

\frac{d u(t)}{d t} = s\, e^{st}, \quad \frac{d^2 u(t)}{d t^2} = s^2 e^{st}, \quad \ldots, \quad \frac{d^m u(t)}{d t^m} = s^m e^{st} .

We observe that all the derivatives are scaled versions of the exponential input e^{st}. Therefore assume that y(t) has the same shape as u(t), so let

y(t) = C e^{st}

for the same fixed complex value s. We can compute the derivatives of y:

\frac{d y(t)}{d t} = s C e^{st}, \quad \frac{d^2 y(t)}{d t^2} = s^2 C e^{st}, \quad \ldots, \quad \frac{d^n y(t)}{d t^n} = s^n C e^{st}

Substitution gives us:

s^n C e^{st} + a_1 s^{n-1} C e^{st} + \ldots + a_{n-1} s C e^{st} + a_n C e^{st} = b_0 s^m e^{st} + b_1 s^{m-1} e^{st} + \ldots + b_{m-1} s\, e^{st} + b_m e^{st} .     (4.16)

or

(s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n)\, C e^{st} = (b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1} s + b_m)\, e^{st} .     (4.17)

From this equation we can derive

C = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1} s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n} = H(s)

where H is the transfer function of the system. We see that for an exponential input u(t) = e^{st} where s ∈ C we obtain a particular solution

y(t) = H(s)\, e^{st}, \quad t \geq 0     (4.18)

Using property (4.18) we can compute the particular solution for forcing functions that are singularity, harmonic and damped harmonic functions.


Particular solution for a step function

If we consider u(t) = e^{st}, t ≥ 0 for s = 0, we find u(t) = e^{st}|_{s=0} = 1, t ≥ 0 and so the particular solution for a step function is given by

y(t) = H(s)\, e^{st}\big|_{s=0} = \frac{b_m}{a_n} , \quad t \geq 0     (4.19)

Particular solution for a ramp and parabolic function

Two input signals that are important to study, namely the ramp function ur(t) = t, t ≥ 0 and the parabolic function up(t) = t²/2, t ≥ 0, cannot be written in the form e^{st}. To compute the particular solution for a ramp or parabolic input function, we use the integral property of linear time-invariant systems, i.e. let ys(t) be the output of the system to the step input us(t) = 1 for t ≥ 0, then the integral of the input u_r(t) = \int u_s(\tau)\, d\tau will result in the integral of the output y_r(t) = \int y_s(\tau)\, d\tau, and one step further, a parabolic input u_p(t) = \int u_r(\tau)\, d\tau will result in the integral of the output y_p(t) = \int y_r(\tau)\, d\tau.

Consider the system described by the differential equation

\frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d y(t)}{d t} + a_n y(t) = b_0 \frac{d^m u(t)}{d t^m} + b_1 \frac{d^{m-1} u(t)}{d t^{m-1}} + \ldots + b_{m-1} \frac{d u(t)}{d t} + b_m u(t) ,

with the corresponding transfer function

H(s) = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2} s^2 + b_{m-1} s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-2} s^2 + a_{n-1} s + a_n}

From (4.19) we know that a particular solution for a step input is given by

y_s(t) = H(0) = \frac{b_m}{a_n} \quad \text{for } t \geq 0

The particular solution yr(t) for a ramp input ur(t) = t, t ≥ 0 has the form

y_r(t) = c_0 t + c_1 \quad \text{for } t \geq 0

where c0 = bm/an and c1 is a constant to be determined. Note that \dot{y}_r(t) = c_0 and \dot{u}_r(t) = 1. Substitution into the differential equation gives us:

0 + a_{n-1} c_0 + a_n (c_0 t + c_1) = b_{m-1} + b_m t \quad \text{for } t \geq 0

We find:

c_1 = \frac{b_{m-1} - a_{n-1} c_0}{a_n} = \frac{a_n b_{m-1} - a_{n-1} b_m}{a_n^2} .


The parabolic response for this system for a parabolic input up(t) = t²/2, t ≥ 0 has the form

y_p(t) = c_0 \frac{t^2}{2} + c_1 t + c_2 , \quad \text{for } t \geq 0

where c2 is a constant to be determined. Substitution into the differential equation gives us:

0 + a_{n-2} c_0 + a_{n-1} (c_0 t + c_1) + a_n \left( c_0 \tfrac{t^2}{2} + c_1 t + c_2 \right) = b_{m-2} + b_{m-1} t + b_m \tfrac{t^2}{2} \quad \text{for } t \geq 0

where c0 = bm/an and c1 = (bm−1 − an−1 c0)/an. We find:

c_2 = \frac{b_{m-2} - a_{n-2} c_0 - a_{n-1} c_1}{a_n} = \frac{a_n^2 b_{m-2} - a_{n-1} a_n b_{m-1} - (a_n a_{n-2} - a_{n-1}^2) b_m}{a_n^3} .

Example 4.4 Consider the system

\ddot{y}(t) + 5\dot{y}(t) + 6 y(t) = \dot{u}(t) + u(t)

We compute:

c_0 = b_m/a_n = 1/6
c_1 = (b_{m-1} - a_{n-1} c_0)/a_n = (1 - 5/6)/6 = 1/36
c_2 = (b_{m-2} - a_{n-2} c_0 - a_{n-1} c_1)/a_n = (0 - 1/6 - 5/36)/6 = -11/216

The particular solution of this system for a unit step input is given by

y_s(t) = H(0) = 1/6 \quad \text{for } t \geq 0

The particular solution of this system for a unit ramp input is given by

y_r(t) = \tfrac{1}{6} t + \tfrac{1}{36} \quad \text{for } t \geq 0

The particular solution of this system for a unit parabolic input is given by

y_p(t) = \tfrac{1}{12} t^2 + \tfrac{1}{36} t - \tfrac{11}{216} \quad \text{for } t \geq 0
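The particular solutions of Example 4.4 can be checked by simulation: after the homogeneous transients (with roots −2 and −3) have died out, the simulated ramp response should coincide with t/6 + 1/36. A sketch (Python/SciPy, using scipy.signal.lsim on the transfer function (s+1)/(s²+5s+6); illustrative only):

```python
import numpy as np
from scipy.signal import lsim, TransferFunction

sys = TransferFunction([1.0, 1.0], [1.0, 5.0, 6.0])   # (s + 1)/(s^2 + 5 s + 6)
t = np.linspace(0.0, 20.0, 2001)
u = t                                                  # unit ramp input

_, y, _ = lsim(sys, U=u, T=t)
y_particular = t / 6.0 + 1.0 / 36.0                    # derived ramp solution

# transients decay like exp(-2t) and exp(-3t); compare only the tail
print("max deviation on t >= 10:", np.max(np.abs(y - y_particular)[t >= 10.0]))
```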

Higher order singularity function

For an input function

u(t) = \frac{1}{k!} t^k

with k > 2 we can also compute the particular solution

y(t) = c_0 \frac{1}{k!} t^k + c_1 \frac{1}{(k-1)!} t^{k-1} + \ldots + c_{k-1} t + c_k

where the coefficients ci, i = 0, . . . , k are computed in a similar way.


Particular solution for a (damped) harmonic function

Consider the input signal u(t) = e^{σt} cos ωt with σ, ω ∈ R. We can rewrite this input in terms of exponentials with a complex argument as follows:

u(t) = e^{\sigma t}\cos\omega t = \tfrac{1}{2} e^{(\sigma + j\omega)t} + \tfrac{1}{2} e^{(\sigma - j\omega)t}

If we let s = σ + jω then the response to u(t) = e^{(σ+jω)t} is equal to y(t) = H(σ + jω) e^{(σ+jω)t} and the response to u(t) = e^{(σ−jω)t} is y(t) = H(σ − jω) e^{(σ−jω)t}. By superposition, the response to the cosine is equal to the sum of these two responses:

y(t) = \tfrac{1}{2} H(\sigma + j\omega) e^{(\sigma + j\omega)t} + \tfrac{1}{2} H(\sigma - j\omega) e^{(\sigma - j\omega)t}

Note that following (4.7) we can write

M(\sigma + j\omega) = |H(\sigma + j\omega)| \quad \text{and} \quad \phi(\sigma + j\omega) = \angle H(\sigma + j\omega)     (4.20)

then

H(\sigma + j\omega) = M(\sigma + j\omega)\, e^{j\phi(\sigma + j\omega)}

Using properties similar to (4.10) and (4.11) this leads to

y(t) = \tfrac{1}{2} H(\sigma + j\omega) e^{(\sigma + j\omega)t} + \tfrac{1}{2} H(\sigma - j\omega) e^{(\sigma - j\omega)t}
 = \tfrac{1}{2} M(\sigma + j\omega)\, e^{j\phi(\sigma + j\omega)} e^{(\sigma + j\omega)t} + \tfrac{1}{2} M(\sigma + j\omega)\, e^{-j\phi(\sigma + j\omega)} e^{(\sigma - j\omega)t}
 = M(\sigma + j\omega)\, e^{\sigma t}\left( \tfrac{1}{2} e^{j(\omega t + \phi(\sigma + j\omega))} + \tfrac{1}{2} e^{-j(\omega t + \phi(\sigma + j\omega))} \right)
 = M(\sigma + j\omega)\, e^{\sigma t}\cos(\omega t + \phi(\sigma + j\omega))

Summary of particular solutions

We summarize the particular solutions for some singularity, exponential, and sinusoidalinput functions in Table 4.2.

Computation of the final response

Finally we solve the original problem (4.12) where we have initial conditions

\frac{d^{n-1} y(t)}{d t^{n-1}}\bigg|_{t=0} = c_{n-1}, \quad \ldots, \quad \frac{d y(t)}{d t}\bigg|_{t=0} = c_1, \quad y(0) = c_0

The general solution to the linear differential equation is the sum of the general solutionof the related homogeneous equation and the particular solution. First note that thesignal y(t) = yp(t) + yh(t) satisfies (4.12) for any C1, . . . , Cn ∈ C. To compute the correctC1, . . . , Cn we have to make sure that the initial conditions are satisfied.


Input u(t) | Output yp(t), t ≥ 0
1 | H(0)
\frac{1}{n!} t^n | c_0 \frac{1}{n!} t^n + c_1 \frac{1}{(n-1)!} t^{n-1} + \ldots + c_{n-1} t + c_n
e^{\sigma t} | H(\sigma)\, e^{\sigma t}
e^{j\omega t} | H(j\omega)\, e^{j\omega t}
\cos(\omega t) | M(j\omega) \cos(\omega t + \phi(j\omega))
\sin(\omega t) | M(j\omega) \sin(\omega t + \phi(j\omega))
e^{\sigma t}\cos(\omega t) | M(\sigma + j\omega)\, e^{\sigma t}\cos(\omega t + \phi(\sigma + j\omega))
e^{\sigma t}\sin(\omega t) | M(\sigma + j\omega)\, e^{\sigma t}\sin(\omega t + \phi(\sigma + j\omega))

Figure 4.2: Particular solutions

Example 4.5 Consider the system

\ddot{y}(t) + 4\dot{y}(t) + 13 y(t) = u(t)

with u(t) = e^{−3t} for t ≥ 0 and initial conditions y(0) = 1 and \dot{y}(0) = 3. From the characteristic equation

\lambda^2 + 4\lambda + 13 = 0

we find the roots λ1 = −2 + j3 and λ2 = −2 − j3. The homogeneous solution becomes

y_h(t) = e^{-2t}(C_1 \cos 3t + C_2 \sin 3t)

The transfer function H(s) for this system is given by

H(s) = \frac{1}{s^2 + 4s + 13}

For an input u(t) = e^{−3t} the particular solution has the form

y_p(t) = H(-3)\, e^{-3t} = \frac{1}{10} e^{-3t} = 0.1\, e^{-3t}

The final solution becomes

y(t) = y_h(t) + y_p(t) = e^{-2t}(C_1 \cos 3t + C_2 \sin 3t) + 0.1\, e^{-3t}

with derivative

\dot{y}(t) = -2 e^{-2t}(C_1 \cos 3t + C_2 \sin 3t) + 3 e^{-2t}(-C_1 \sin 3t + C_2 \cos 3t) - 0.3\, e^{-3t}

and so

y(0) = C_1 + 0.1 , \qquad \dot{y}(0) = -2 C_1 + 3 C_2 - 0.3

With y(0) = 1 and \dot{y}(0) = 3 we find C1 = 0.9 and C2 = 1.7.
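The constants found in Example 4.5 can be verified by integrating the differential equation directly. A sketch (Python/SciPy, with the initial conditions y(0) = 1 and \dot{y}(0) = 3 as above; illustrative only):

```python
import numpy as np
from scipy.integrate import solve_ivp

t = np.linspace(0.0, 4.0, 401)

def rhs(t, x):                       # y'' + 4 y' + 13 y = u,  u = exp(-3 t)
    y, v = x
    return [v, np.exp(-3.0 * t) - 4.0 * v - 13.0 * y]

sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 3.0], t_eval=t, rtol=1e-9, atol=1e-12)

C1, C2 = 0.9, 1.7
y_analytic = (np.exp(-2.0 * t) * (C1 * np.cos(3.0 * t) + C2 * np.sin(3.0 * t))
              + 0.1 * np.exp(-3.0 * t))
print("max deviation:", np.max(np.abs(sol.y[0] - y_analytic)))   # close to zero
```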


Frequency response

We will now give special attention to the particular solution for a sinusoidal input. We already discussed that the sinusoid can be expressed as a sum of two harmonic functions

u(t) = \cos\omega t = \tfrac{1}{2} e^{j\omega t} + \tfrac{1}{2} e^{-j\omega t}

If we let s = jω then the response to u(t) = cos ωt is equal to

y(t) = \tfrac{1}{2} H(j\omega) e^{j\omega t} + \tfrac{1}{2} H(-j\omega) e^{-j\omega t}

Note that similar to (4.20) we can define

H(j\omega) = M(j\omega)\, e^{j\phi(j\omega)}

with

M(j\omega) = |H(j\omega)| \quad \text{and} \quad \phi(j\omega) = \angle H(j\omega)     (4.21)

With this substitution we find

y(t) = M(j\omega)\left( \tfrac{1}{2} e^{j(\omega t + \phi(j\omega))} + \tfrac{1}{2} e^{-j(\omega t + \phi(j\omega))} \right) = M(j\omega)\cos(\omega t + \phi(j\omega))     (4.22)

This means that if a system represented by the transfer function H(s) has a sinusoidal input, the output will be sinusoidal at the same frequency with magnitude M(jω) and will be shifted in phase by an angle φ(jω).

Example 4.6 Frequency response of a second-order system. Consider a second-order system with input u(t) and output y(t), satisfying the differential equation

\ddot{y}(t) + 0.1\dot{y}(t) + y(t) = u(t)

The transfer function of this system is given by

H(s) = \frac{1}{s^2 + 0.1 s + 1}

and so

H(j\omega) = M(j\omega)\, e^{j\phi(j\omega)} = \frac{1}{-\omega^2 + j\, 0.1\omega + 1}

with

M(j\omega) = \left| \frac{1}{-\omega^2 + j\, 0.1\omega + 1} \right| = \frac{1}{\sqrt{(1 - \omega^2)^2 + (0.1\omega)^2}} = \frac{1}{\sqrt{\omega^4 - 1.99\omega^2 + 1}}

\phi(j\omega) = \angle H(j\omega) = -\arctan\left( \frac{0.1\omega}{1 - \omega^2} \right)

Figure 4.3 shows the plots of M(jω) and φ(jω) as a function of the frequency. These plots are called the Bode plots of the system (named after H.W. Bode). The frequencies on the horizontal axis are customarily given on a logarithmic scale. The magnitude M(jω) is given in decibels (dB), which means that we plot 20 log[M(jω)] on the vertical axis. For the phase φ(jω) we use a linear scale on the vertical axis.
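The magnitude and phase in Example 4.6 can be evaluated directly on a frequency grid. A sketch (Python/NumPy, illustrative only):

```python
import numpy as np

w = np.logspace(-1, 1, 201)                 # frequency grid, 0.1 ... 10 rad/s
H = 1.0 / (-(w**2) + 1j * 0.1 * w + 1.0)    # H(jw) for y'' + 0.1 y' + y = u

M = np.abs(H)                               # magnitude M(jw)
phi = np.angle(H, deg=True)                 # phase in degrees
M_dB = 20.0 * np.log10(M)                   # magnitude in dB, as in the Bode plot

i = np.argmax(M)
print(f"resonance near w = {w[i]:.3f} rad/s, peak magnitude {M_dB[i]:.1f} dB")
```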


[Figure 4.3: The magnitude M(jω) (in dB) and the phase φ(jω) (in degrees) as a function of the frequency ω]

4.3 Time response using Laplace transform

In this section we will derive the time response of the linear time-invariant system (4.12), with initial conditions

\frac{d^{n-1} y(t)}{d t^{n-1}}\bigg|_{t=0} = c_{n-1}, \quad \ldots, \quad \frac{d y(t)}{d t}\bigg|_{t=0} = c_1, \quad y(0) = c_0     (4.23)

using the Laplace transform. In Section 4.1 we have introduced the notion of transfer function and have given the property for initially-at-rest systems:

\mathcal{L}\left\{ \frac{d^k y(t)}{d t^k} \right\} = s^k Y(s)     (4.24)

where Y(s) is the Laplace transform of y(t). For the computation of the time response for a system that is not initially at rest (so c0, . . . , cn−1 in (4.23) are not all equal to zero), property (4.24) can be extended into the following property

\mathcal{L}\left\{ \frac{d^k y(t)}{d t^k} \right\} = s^k Y(s) - \sum_{i=0}^{k-1} s^{k-1-i} \frac{d^i y(t)}{d t^i}\bigg|_{t=0}     (4.25)
 = s^k Y(s) - \sum_{i=0}^{k-1} s^{k-1-i} c_i     (4.26)


so in particular for k = 1, 2 we find

\mathcal{L}\left\{ \frac{d y(t)}{d t} \right\} = s Y(s) - y(0) = s Y(s) - c_0

\mathcal{L}\left\{ \frac{d^2 y(t)}{d t^2} \right\} = s^2 Y(s) - s\, y(0) - \dot{y}(0) = s^2 Y(s) - s\, c_0 - c_1

Consider the differential equation

\frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d y(t)}{d t} + a_n y(t) = f(t) .     (4.27)

for a known forcing function f(t). Applying the Laplace transformation we obtain

\mathcal{L}\left\{ \frac{d^n y(t)}{d t^n} \right\} + \mathcal{L}\left\{ a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} \right\} + \ldots + \mathcal{L}\left\{ a_{n-1} \frac{d y(t)}{d t} \right\} + a_n \mathcal{L}\{ y(t) \} = \mathcal{L}\{ f(t) \} .     (4.28)

Substitution of (4.26) into (4.28) gives us

s^n Y(s) - \sum_{i=0}^{n-1} s^{n-1-i} c_i + a_1 s^{n-1} Y(s) - a_1 \sum_{i=0}^{n-2} s^{n-2-i} c_i + \ldots     (4.29)
 + a_{n-1} \left( s Y(s) - c_0 \right) + a_n Y(s) = F(s) .     (4.30)

where Y(s) and F(s) are the Laplace transforms of y(t) and f(t), respectively. From this equation we can find an expression for Y(s). We can now compute y(t) for t ≥ 0 from Y(s), using the inverse Laplace transformation. Let Y(s), s ∈ C be the Laplace transform of y(t), then the inverse Laplace transformation is defined as follows:

y(t) = \mathcal{L}^{-1}\{ Y(s) \} = \frac{1}{2\pi j} \int_{c - j\infty}^{c + j\infty} Y(s)\, e^{st}\, d s , \quad t \geq 0     (4.31)

where the integration is along a vertical line Re(s) = c in the complex plane. A problem is that the inverse Laplace integral is not always easy to compute. An alternative way to find L^{-1}{Y(s)} is carrying out a partial fraction expansion, in which Y(s) is broken into components

Y(s) = Y_1(s) + Y_2(s) + \ldots + Y_n(s)

for which the inverse Laplace transforms of Y1(s), . . . , Yn(s) are available from Table 2.1 or the table in Appendix B. The method using partial fraction expansion is possible because the inverse Laplace transformation is a linear operation, so:

\mathcal{L}^{-1}\{ \alpha F_1(s) + \beta F_2(s) \} = \alpha\, \mathcal{L}^{-1}\{ F_1(s) \} + \beta\, \mathcal{L}^{-1}\{ F_2(s) \}     (4.32)

The procedure to compute the output signal using Laplace transformation now consists ofthree steps:


Step 1: We compute the so called Laplace transform F (s) of the forcing signal f(t). Wecan use Equation (2.8) or we can use Table 2.1, where for a number of known signalsthe Laplace transforms are given.

Step 2: In the second step we substitute (4.26) and obtain an expression for the Laplacetransform Y (s).

Step 3: Finally, split Y (s) up in pieces by the so called partial fractional expansion. Forevery term we apply the Inverse Laplace transform using Table 2.1 or the Table inAppendix B to retrieve the output signal y(t) of the original problem.

Partial fractional expansion

In the previous paragraph we discussed a procedure to compute the output signal of a system using Laplace transformation. In step 3 the procedure tells us to split Y(s) up into components that occur in Table 2.1. This splitting up can be done using partial fraction expansion. Let the Laplace transform Y(s) of a signal y(t) be given by a ratio of polynomials:

Y(s) = \frac{b_1 s^m + b_2 s^{m-1} + \ldots + b_m s + b_{m+1}}{s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n}

Let p1, . . . , pn be the roots of the polynomial a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n. Then a(s) can be written as

a(s) = (s - p_1)(s - p_2) \cdots (s - p_{n-1})(s - p_n) = \prod_{i=1}^{n} (s - p_i)

We can then rewrite Y(s) as a sum of partial fractions:

Y(s) = \frac{C_1}{s - p_1} + \frac{C_2}{s - p_2} + \ldots + \frac{C_n}{s - p_n}

where the coefficients C1, . . . , Cn can be determined by

C_i = (s - p_i)\, Y(s)\big|_{s = p_i}, \quad i = 1, \ldots, n     (4.33)

If we have a multiple pole in s = η with multiplicity r we obtain

a(s) = (s - \eta)^r (s - p_{r+1}) \cdots (s - p_n)

We can then rewrite Y(s) as a sum of partial fractions:

Y(s) = \frac{\beta_0}{(s - \eta)^r} + \frac{\beta_1}{(s - \eta)^{r-1}} + \ldots + \frac{\beta_{r-1}}{s - \eta} + \frac{C_{r+1}}{s - p_{r+1}} + \ldots + \frac{C_n}{s - p_n}

where the Ci's are determined using Equation (4.33), and the βi's can be computed using:

\beta_k = \frac{1}{k!} \frac{d^k}{d s^k} \left( (s - \eta)^r Y(s) \right)\bigg|_{s = \eta} , \quad \text{for } k = 0, \ldots, r - 1

If we have a complex pole in p1 = −η + jθ then we will also find the complex conjugate p2 = −η − jθ as a pole. The corresponding coefficients C1 and C2 will be complex conjugates of each other, so if C1 = γ + jζ, then C2 = γ − jζ and so

\frac{C_1}{s - p_1} + \frac{C_2}{s - p_2} = 2\gamma\, \frac{s + \eta}{(s + \eta)^2 + \theta^2} - 2\zeta\, \frac{\theta}{(s + \eta)^2 + \theta^2}

This leads to the solution

y(t) = 2\gamma\, \mathcal{L}^{-1}\left\{ \frac{s + \eta}{(s + \eta)^2 + \theta^2} \right\} - 2\zeta\, \mathcal{L}^{-1}\left\{ \frac{\theta}{(s + \eta)^2 + \theta^2} \right\} = 2\gamma\, e^{-\eta t}\cos(\theta t) - 2\zeta\, e^{-\eta t}\sin(\theta t)

Example 4.7 Consider the function

Y(s) = \frac{6 (s + 2)(s + 4)}{s (s + 1)(s + 3)}

We can rewrite it as a sum of partial fractions:

Y(s) = \frac{C_1}{s} + \frac{C_2}{s + 1} + \frac{C_3}{s + 3}

and using Equation (4.33) we can compute the coefficients

C_1 = 16 , \quad C_2 = -9 , \quad C_3 = -1

and the output signal y(t) is now given by

y(t) = 16\, \mathcal{L}^{-1}\left\{ \tfrac{1}{s} \right\} - 9\, \mathcal{L}^{-1}\left\{ \tfrac{1}{s + 1} \right\} - \mathcal{L}^{-1}\left\{ \tfrac{1}{s + 3} \right\} = 16 - 9\, e^{-t} - e^{-3t}
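The expansion of Example 4.7 can be reproduced with scipy.signal.residue, which performs exactly this partial fraction decomposition. A sketch (Python, illustrative only):

```python
import numpy as np
from scipy.signal import residue

num = [6.0, 36.0, 48.0]        # 6 (s + 2)(s + 4) = 6 s^2 + 36 s + 48
den = [1.0, 4.0, 3.0, 0.0]     # s (s + 1)(s + 3) = s^3 + 4 s^2 + 3 s

r, p, k = residue(num, den)
print("residues:", r)          # 16, -9, -1 (up to ordering)
print("poles   :", p)          # 0, -1, -3 (up to ordering)
print("direct  :", k)          # empty: the fraction is strictly proper
```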

Example 4.8 Consider the function

Y(s) = \frac{s + 3}{(s + 1)(s + 2)^2}

This function has a multiple pole at s = −2 with multiplicity 2. We can rewrite Y(s) as a sum of partial fractions:

Y(s) = \frac{C_1}{s + 1} + \frac{C_2}{s + 2} + \frac{C_3}{(s + 2)^2}


and using Equation (4.33) we can compute the coefficients

Y(s) = \frac{2}{s + 1} + \frac{-2}{s + 2} + \frac{-1}{(s + 2)^2}

and so the output signal is given by

y(t) = 2\, \mathcal{L}^{-1}\left\{ \tfrac{1}{s + 1} \right\} - 2\, \mathcal{L}^{-1}\left\{ \tfrac{1}{s + 2} \right\} - \mathcal{L}^{-1}\left\{ \tfrac{1}{(s + 2)^2} \right\} = 2\, e^{-t} - 2\, e^{-2t} - t\, e^{-2t}

Example 4.9 Consider the function

Y(s) = \frac{10}{s (s^2 + 2s + 5)}

This function has a complex pole pair at s = −1 ± j2. We can rewrite it as a sum of partial fractions:

Y(s) = \frac{C_1}{s} + \frac{C_2 (s + 1)}{(s + 1)^2 + 2^2} + \frac{C_3 \cdot 2}{(s + 1)^2 + 2^2}

and using Equation (4.33) we can compute the coefficients

Y(s) = \frac{2}{s} - \frac{2s + 4}{s^2 + 2s + 5} = \frac{2}{s} + \frac{-2 (s + 1)}{(s + 1)^2 + 2^2} + \frac{-1 \cdot 2}{(s + 1)^2 + 2^2}

and so the output signal becomes

y(t) = 2\, \mathcal{L}^{-1}\left\{ \tfrac{1}{s} \right\} - 2\, \mathcal{L}^{-1}\left\{ \tfrac{s + 1}{(s + 1)^2 + 2^2} \right\} - \mathcal{L}^{-1}\left\{ \tfrac{2}{(s + 1)^2 + 2^2} \right\} = 2 - 2\, e^{-t}\cos 2t - e^{-t}\sin 2t

4.4 Impulse response model: convolution

In this section we will show that the time response of a linear time-invariant system can be completely characterized in terms of the unit impulse response of the system. Recall that the impulse response of a system can be determined by exciting the system with a unit impulse δ(t) and observing the corresponding output y(t). We will denote this impulse response as h(t). To develop the theory of convolution we will start by considering the rectangular function δT(t), t ∈ R for a small scalar value T > 0 as defined in Section 1.1. First note that

\delta_T(t) = \frac{1}{T}\left( u_s(t) - u_s(t - T) \right)


[Figure 4.4: The impulse response h(t): the input u(t) = δ(t) applied to the system yields the output y(t) = h(t)]

This means that using the results from Section 4.2 we can compute the response y(t) for the input u(t) = δT(t). We denote this response as hT(t) = y(t) (see Figure 4.5). Using linearity and time-invariance we find that multiplying the input by a constant β ∈ R and shifting the input in time with α > 0 gives an output with the same multiplication and time shift:

u(t) = \beta\, \delta_T(t - \alpha) \implies y(t) = \beta\, h_T(t - \alpha)

[Figure 4.5: Response for a rectangular function: the input δT(t) yields hT(t), and the scaled and shifted input β δT(t − α) yields β hT(t − α)]

Now assume that we aim to compute the response y(t) of a linear time-invariant system for an arbitrary u(t). We begin our derivation by considering a 'staircase' approximation of the input u(t):

u(t) \approx u_T(t) = \sum_{k} u(kT)\, T\, \delta_T(t - kT)     (4.34)


[Figure 4.6: Staircase approximation of an input signal: u(t) ≈ uT(t) = Σ_k u(kT) T δT(t − kT)]

as illustrated in Figure 4.6. Now we will consider the response yT(t) to the staircase signal uT(t). The staircase signal uT is built up from a number of rectangular functions T δT, shifted in time by kT and multiplied by u(kT) for k ∈ Z. As was derived before, if hT(t) is the output signal for an input signal δT(t), then shifting in time by kT and multiplying by u(kT) gives that for an input u(kT) T δT(t − kT) we obtain an output u(kT) T hT(t − kT) for any k ∈ Z. Now the approximation uT from equation (4.34) is a linear combination of delayed rectangular functions. The output corresponding to this input can be expressed as a linear combination of delayed responses hT as follows:

y_T(t) = \sum_{k} u(kT)\, T\, h_T(t - kT)     (4.35)

So,

u(t) \approx u_T(t) = \sum_{k} u(kT)\, T\, \delta_T(t - kT) \implies y(t) \approx y_T(t) = \sum_{k} u(kT)\, T\, h_T(t - kT)

[Figure 4.7: Computation of the system response using a staircase approximation]

As we let T approach 0, the approximation will improve, and in the limit yT will be equal to the true y(t):

y(t) = \lim_{T \to 0} y_T(t) = \lim_{T \to 0} \sum_{k} u(kT)\, T\, h_T(t - kT)

In the limit we can replace T δT (t− k T ) by δ(t− τ) d τ , and T hT (t− k T ) by h(t− τ) d τ ,where δ(t) is the unit impulse function and h(t) is the impulse response. Moreover, thesum will become an integral and we therefore obtain

y(t) = \int_{-\infty}^{\infty} u(\tau)\, h(t - \tau)\, d\tau     (4.36)

This is called the convolution integral. The convolution of two signals u and h will be represented symbolically as

y(t) = u(t) * h(t)     (4.37)

A basic property of the convolution is that it is commutative, so

y(t) = u(t) * h(t) = h(t) * u(t) = \int_{-\infty}^{\infty} h(\tau)\, u(t - \tau)\, d\tau     (4.38)

This means that the input-output behavior of a linear time-invariant system can be described by either one of the following convolution integrals:

y(t) = \int_{-\infty}^{\infty} u(\tau)\, h(t - \tau)\, d\tau \qquad \text{or} \qquad y(t) = \int_{-\infty}^{\infty} u(t - \tau)\, h(\tau)\, d\tau
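The convolution integral can be approximated on a time grid by a Riemann sum, which is essentially the staircase argument above with a finite T. A sketch (Python/NumPy, illustrative only) for the first-order system with impulse response h(t) = e^{−σt}:

```python
import numpy as np

sigma, T = 2.0, 0.001                     # example system and step size
t = np.arange(0.0, 5.0, T)

h = np.exp(-sigma * t)                    # impulse response of  y' + sigma y = u
u = np.ones_like(t)                       # unit step input

# y(t) = integral of u(tau) h(t - tau) dtau  ~  discrete convolution times T
y = np.convolve(u, h)[: len(t)] * T

y_exact = (1.0 - np.exp(-sigma * t)) / sigma     # step response (3.4)
print("max deviation:", np.max(np.abs(y - y_exact)))   # of order T
```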

4.5 Analysis of state systems

State system time response

In this section we examine the time response of a linear time-invariant state model in the standard state equation form

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)     (4.39)

Before we compute this time response we first introduce the state transformation whichgives us the A, B, C and D matrices for a different set of state variables. A special statetransformation is the modal transformation, which gives us a state system description inwhich the A-matrix is diagonal. This will facilitate the computations dramatically.


State transformation

There is no unique set of state variables that describe a given system; many different setsof variables may be selected to yield a complete system description.

Example 4.10 Consider the fluid flow system of Section 2.2, in which the dynamics between the two levels in the vessels and the inflow are described as follows:

\dot{h}_1(t) = \frac{1}{\rho A_1} w_{in}(t) - \frac{g}{A_1 R_1} h_1(t) + \frac{g}{A_1 R_1} h_2(t)
\dot{h}_2(t) = \frac{g}{A_2 R_1} h_1(t) - \frac{g (R_1 + R_2)}{A_2 R_1 R_2} h_2(t) .

Note that by choosing the state as

x(t) = \begin{bmatrix} h_1(t) \\ h_2(t) \end{bmatrix}

and as an input u(t) = win(t), we obtain the state system with state equation

\dot{x}(t) = \begin{bmatrix} -\dfrac{g}{A_1 R_1} & \dfrac{g}{A_1 R_1} \\ \dfrac{g}{A_2 R_1} & -\dfrac{g (R_1 + R_2)}{A_2 R_1 R_2} \end{bmatrix} x(t) + \begin{bmatrix} \dfrac{1}{\rho A_1} \\ 0 \end{bmatrix} u(t)

Now suppose we are interested in the mean level y(t) = (h1(t) + h2(t))/2, then the output equation is given by

y(t) = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} x(t) + 0\, u(t)

Note that we could have chosen the states in a different way, for example

x'(t) = \begin{bmatrix} (h_1(t) + h_2(t))/2 \\ (h_1(t) - h_2(t))/2 \end{bmatrix} .

In other words, the first state gives the average water level and the second state gives the difference divided by two. Now we obtain a new set of equations:

\frac{d}{dt}\,\frac{h_1(t) + h_2(t)}{2} = -\frac{g}{2 A_2 R_2}\, \frac{h_1(t) + h_2(t)}{2} + \left( \frac{g}{A_2 R_1} + \frac{g}{2 A_2 R_2} - \frac{g}{A_1 R_1} \right) \frac{h_1(t) - h_2(t)}{2} + \frac{1}{2 \rho A_1} w_{in}(t)

\frac{d}{dt}\,\frac{h_1(t) - h_2(t)}{2} = \frac{g}{2 A_2 R_2}\, \frac{h_1(t) + h_2(t)}{2} - \left( \frac{g}{A_1 R_1} + \frac{g}{A_2 R_1} + \frac{g}{2 A_2 R_2} \right) \frac{h_1(t) - h_2(t)}{2} + \frac{1}{2 \rho A_1} w_{in}(t)

so we obtain the state system with state equation

\dot{x}'(t) = \begin{bmatrix} -\dfrac{g}{2 A_2 R_2} & \dfrac{g}{A_2 R_1} + \dfrac{g}{2 A_2 R_2} - \dfrac{g}{A_1 R_1} \\ \dfrac{g}{2 A_2 R_2} & -\dfrac{g}{A_1 R_1} - \dfrac{g}{A_2 R_1} - \dfrac{g}{2 A_2 R_2} \end{bmatrix} x'(t) + \begin{bmatrix} \dfrac{1}{2 \rho A_1} \\ \dfrac{1}{2 \rho A_1} \end{bmatrix} u(t)

and the output equation:

y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x'(t) + 0\, u(t)

The first system with state x(t) and the second system with state x′(t) both describe thesame physical system with the same input u(t) = win(t) and the same output y(t) = (h1(t)+h2(t))/2, but the state description is different because of a different choice of states.

In the previous example we have introduced a state transformation

x(t) = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} x'(t)

or

x(t) = T\, x'(t)     (4.40)

We will now consider how the system matrices change if we introduce a state transformation (4.40) where T is non-singular (invertible). First note that the time-derivative of the state satisfies

\dot{x}(t) = T\, \dot{x}'(t)

Consider the original system

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)     (4.41)

Substitution of x(t) = T x'(t) and \dot{x}(t) = T \dot{x}'(t) into (4.41) leads to

T \dot{x}'(t) = A T x'(t) + B u(t)
y(t) = C T x'(t) + D u(t)

Multiplying the first equation by the inverse T^{-1} gives us the equations

\dot{x}'(t) = T^{-1} A T x'(t) + T^{-1} B u(t)
y(t) = C T x'(t) + D u(t)     (4.42)


By defining

x'(t) = T^{-1} x(t) , \quad A' = T^{-1} A T , \quad B' = T^{-1} B , \quad C' = C T , \quad D' = D

we arrive at the state system

\dot{x}'(t) = A' x'(t) + B' u(t)
y(t) = C' x'(t) + D' u(t)     (4.43)

which has the same input-output behavior as (4.41).

Modal transformation

Consider system (4.39). We will now take a closer look at the matrix A, which is called the system matrix of the system. The values λi satisfying the equation

\lambda_i m_i = A m_i \quad \text{for } m_i \neq 0     (4.44)

are known as the eigenvalues of A and the corresponding column vectors mi are defined as the eigenvectors. Equation (4.44) can be rewritten as:

(\lambda_i I - A)\, m_i = 0     (4.45)

The condition for a non-trivial solution of such a set of linear equations is that

\det(\lambda_i I - A) = 0     (4.46)

which defines the characteristic polynomial of the A matrix. Eq. (4.46) may be written as

\lambda^n + a_1 \lambda^{n-1} + \ldots + a_{n-1} \lambda + a_n = 0     (4.47)

or, in factored form in terms of its roots λ1, . . . , λn,

(\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n) = 0     (4.48)

Let us consider the case that all eigenvalues are distinct. Define

M = \begin{bmatrix} m_1 & m_2 & \cdots & m_n \end{bmatrix}     (4.49)


and

\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & \lambda_n \end{bmatrix} ,     (4.50)

then the eigenvalue decomposition of the A matrix is given by

A = M \Lambda M^{-1}

and system (4.39) can be transformed to the diagonal form

\dot{x}'(t) = A' x'(t) + B' u(t)
y(t) = C' x'(t) + D' u(t)     (4.51)

by choosing M as a state transformation matrix (see Section 2.4):

x'(t) = M^{-1} x(t) , \quad A' = M^{-1} A M = \Lambda , \quad B' = M^{-1} B , \quad C' = C M , \quad D' = D

This transformation is called the modal transformation.

Example 4.11 Consider the second-order system

\dot{x}(t) = \begin{bmatrix} -6 & 6 \\ -2 & 1 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u(t)
y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t) + 4\, u(t)

An eigenvalue decomposition of the matrix A gives us eigenvalues λ1 = −2 and λ2 = −3 with eigenvectors

m_1 = \begin{bmatrix} 3 \\ 2 \end{bmatrix} , \qquad m_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}

and so

\Lambda = M^{-1} A M = \begin{bmatrix} -1 & 2 \\ 2 & -3 \end{bmatrix} \begin{bmatrix} -6 & 6 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 0 & -3 \end{bmatrix}

The system matrices after the modal transformation become

A' = M^{-1} A M = \Lambda = \begin{bmatrix} -2 & 0 \\ 0 & -3 \end{bmatrix} , \quad B' = M^{-1} B = \begin{bmatrix} 1 \\ -1 \end{bmatrix} , \quad C' = C M = \begin{bmatrix} 3 & 2 \end{bmatrix} , \quad D' = D = 4
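The modal transformation of Example 4.11 is a direct application of an eigenvalue decomposition. A sketch (Python/NumPy, illustrative only):

```python
import numpy as np

A = np.array([[-6.0, 6.0], [-2.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = 4.0

lam, M = np.linalg.eig(A)        # eigenvalues and eigenvectors (columns of M)
Minv = np.linalg.inv(M)

print("eigenvalues:", lam)                     # -2 and -3
print("A' =", Minv @ A @ M)                    # diagonal (up to rounding)
print("B' =", Minv @ B, "C' =", C @ M)         # note: eig normalizes the eigenvectors,
                                               # so B' and C' differ from the example by a per-mode scaling
```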


In the case of multiple eigenvalues, the modal transformation is not always possible. Onepossible solution is to transform the A matrix into a modified-diagonal form or Jordanform (see the book by Kailath [4]). The A matrix will then be almost diagonal in the sensethat its only non-zero entries lie on the diagonal and the superdiagonal (A superdiagonalentry is one that is directly above and to the right of the main diagonal).

Homogeneous state response

The computation of the time response of a state system consists of two steps: first the state-variable response x(t) is determined by solving the first-order state equation (4.39.1), and then the state response is substituted into the algebraic output equation (4.39.2) in order to compute y(t). The state variable response of a system described by (4.39) with zero input and an arbitrary set of initial conditions x(0) is the solution of the system of n homogeneous first-order differential equations

\dot{x}(t) = A x(t) , \quad x(0) = x_0     (4.52)

In the scalar case for a ∈ R (see Section 3.1) we obtain

x(t) = e^{a t} x(0)

For the matrix case we can write the same solution:

x(t) = e^{A t} x(0)

where

e^{A t} = I + A t + \frac{(A t)^2}{2!} + \ldots

The solution is often written as:

x(t) = \Phi(t)\, x(0)     (4.53)

where Φ(t) = e^{At} is defined as the state transition matrix. To compute this state transition matrix we need to compute the exponential of a matrix. This can be done by bringing the model into the modal form. We compute M and Λ according to (4.49) and (4.50).

\Phi(t) = e^{A t} = I + A t + \frac{(A t)^2}{2!} + \ldots
 = M M^{-1} + M \Lambda t M^{-1} + M \frac{\Lambda^2 t^2}{2!} M^{-1} + \ldots
 = M \left( I + \Lambda t + \frac{(\Lambda t)^2}{2!} + \ldots \right) M^{-1}
 = M e^{\Lambda t} M^{-1}

and so

\Phi(t) = M e^{\Lambda t} M^{-1}     (4.54)

For e^{Λt} we derive:

e^{\Lambda t} = I + \Lambda t + \frac{(\Lambda t)^2}{2!} + \ldots = \begin{bmatrix} 1 + \lambda_1 t + \frac{(\lambda_1 t)^2}{2!} + \ldots & 0 & \cdots & 0 \\ 0 & 1 + \lambda_2 t + \frac{(\lambda_2 t)^2}{2!} + \ldots & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & 1 + \lambda_n t + \frac{(\lambda_n t)^2}{2!} + \ldots \end{bmatrix} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & e^{\lambda_n t} \end{bmatrix}

Example 4.12 Consider the first-order state system

\dot{x}(t) = -3 x(t) + 2 u(t)
y(t) = 5 x(t) + 2 u(t) .

The homogeneous state differential equation is given by

\dot{x}(t) = -3 x(t)

It follows that the 1 × 1 state transition matrix is given by

\Phi(t) = e^{-3 t} , \quad t \geq 0 .

Example 4.13 Consider the second-order system of Example 4.11. The homogeneous state differential equation is

ẋ(t) = A x(t) = [ −6  6 ; −2  1 ] x(t)

Then the 2 × 2 state transition matrix is given by

Φ(t) = e^{A t} = exp( [ −6  6 ; −2  1 ] t )


Using the modal transformation we can compute

Φ(t) = M e^{Λ t} M⁻¹
     = [ 3  2 ; 2  1 ] [ e^{−2t}  0 ; 0  e^{−3t} ] [ −1  2 ; 2  −3 ]
     = [ −3e^{−2t} + 4e^{−3t}   6e^{−2t} − 6e^{−3t} ; −2e^{−2t} + 2e^{−3t}   4e^{−2t} − 3e^{−3t} ]
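The closed-form state transition matrix above can be verified numerically; a minimal sketch (Python, using scipy.linalg.expm for the matrix exponential) is:

import numpy as np
from scipy.linalg import expm

A = np.array([[-6.0, 6.0], [-2.0, 1.0]])      # A matrix of Example 4.13

def Phi(t):
    # Closed-form Phi(t) obtained from the modal form
    e2, e3 = np.exp(-2.0 * t), np.exp(-3.0 * t)
    return np.array([[-3*e2 + 4*e3, 6*e2 - 6*e3],
                     [-2*e2 + 2*e3, 4*e2 - 3*e3]])

for t in (0.0, 0.5, 1.0):
    assert np.allclose(expm(A * t), Phi(t))
print("closed-form Phi(t) agrees with expm(A t)")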

The state transition matrix Φ(t) has the following properties:

1. Φ(0) = I.

2. x(t1) = Φ(t1)x(0)

3. Φ(−t) = Φ−1(t), and so x(0) = Φ(−t1)x(t1) = Φ−1(t1)x(t1).

4. Φ(t1)Φ(t2) = Φ(t1 + t2), and so

x(t2) = Φ(t2)x(0) = Φ(t2)Φ(−t1)x(t1) = Φ(t2 − t1)x(t1)

or

x(t2) = Φ(t2 − t1)x(t1)

5. If A is a diagonal matrix, then e^{A t} is also a diagonal matrix and each diagonal element is equal to the exponential of the corresponding diagonal element of the A matrix times t, that is, e^{a_{ii} t}:

exp( diag( a11, . . . , ann ) t ) = diag( e^{a11 t}, . . . , e^{ann t} )

The forced response of a state system

Let us now determine the solution of the inhomogeneous state equation

ẋ(t) = A x(t) + B u(t)    (4.55)

We can rewrite this as

ẋ(t) − A x(t) = B u(t)


Multiplying by e^{−A t} gives

e^{−A t} ( ẋ(t) − A x(t) ) = e^{−A t} B u(t)

For the first term we can write:

e^{−A t} ( ẋ(t) − A x(t) ) = d/dt ( e^{−A t} x(t) )

and so

d/dt ( e^{−A t} x(t) ) = e^{−A t} B u(t)

Integrating both sides gives us

∫₀ᵗ d/dτ ( e^{−A τ} x(τ) ) dτ = ∫₀ᵗ e^{−A τ} B u(τ) dτ

or

e^{−A t} x(t) − e^{−A·0} x(0) = ∫₀ᵗ e^{−A τ} B u(τ) dτ

and so

x(t) = e^{A t} x(0) + e^{A t} ∫₀ᵗ e^{−A τ} B u(τ) dτ

This results in

x(t) = e^{A t} x(0) + ∫₀ᵗ e^{A (t−τ)} B u(τ) dτ    (4.56)

The system output response of a state system

The output response of a linear time-invariant system is easily derived by substitution of (4.56) into (4.39.2):

y(t) = C x(t) + D u(t)
     = C ( e^{A t} x(0) + ∫₀ᵗ e^{A (t−τ)} B u(τ) dτ ) + D u(t)

So the forced output response is given by

y(t) = C e^{A t} x(0) + ∫₀ᵗ C e^{A (t−τ)} B u(τ) dτ + D u(t)    (4.57)

Example 4.14 Consider the first-order system of Example 4.12. For this system the forced output response is

y(t) = 5 e^{−3 t} x(0) + ∫₀ᵗ 10 e^{−3 (t−τ)} u(τ) dτ + 2 u(t)
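For a concrete check of (4.56)–(4.57), the sketch below simulates the forced response of the first-order system of Example 4.12 to a unit step with x(0) = 0 and compares it with the closed-form output y(t) = (10/3)(1 − e^{−3t}) + 2 obtained by evaluating the convolution integral.

import numpy as np
from scipy.signal import StateSpace, lsim

sys = StateSpace([[-3.0]], [[2.0]], [[5.0]], [[2.0]])   # Example 4.12

t = np.linspace(0.0, 3.0, 301)
u = np.ones_like(t)                  # unit step input, x(0) = 0
_, y, _ = lsim(sys, U=u, T=t)

y_exact = 10.0/3.0 * (1.0 - np.exp(-3.0*t)) + 2.0       # from Eq. (4.57)
print(np.max(np.abs(y - y_exact)))   # small numerical error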


4.6 Relation between various system descriptions

In this section we summarize the different types of system description for linear time-invariant systems.

The transfer function of linear time-invariant state systems

Consider the state system

ẋ(t) = A x(t) + B u(t) ,
y(t) = C x(t) + D u(t) .

Note that x(t) is a vector. The Laplace transform of a vector x(t) is given by the Laplace transforms of its entries:

L{x(t)} = L{ [ x1(t) ; x2(t) ; ⋮ ; xn(t) ] } = [ X1(s) ; X2(s) ; ⋮ ; Xn(s) ] = X(s)

where L{xi(t)} = Xi(s) for i = 1, . . . , n. For the derivative ẋ(t) it follows (assuming zero initial conditions) that

L{ẋ(t)} = [ L{ẋ1(t)} ; ⋮ ; L{ẋn(t)} ] = [ s X1(s) ; ⋮ ; s Xn(s) ] = s I X(s)

Let L{u(t)} = U(s) and L{y(t)} = Y(s); then we can write the state equation in the Laplace domain as s I X(s) = A X(s) + B U(s), or (s I − A) X(s) = B U(s). If we assume that the matrix (s I − A) is invertible, we obtain

X(s) = (s I − A)⁻¹ B U(s) .


Substitution into the second state equation gives us

Y (s) = C X(s) +DU(s) = C (s I −A)−1B U(s) +DU(s) (4.58)

and so the transfer function of state system (4.39) is given by

H(s) = C (s I − A)−1B +D (4.59)

Remark: Note that the transfer function is an input-output description of the system. A state transformation does not influence the input-output behavior of the system. We therefore may use the modal form of the state space description of the system, and the transfer function becomes:

H(s) = C′ (s I − Λ)⁻¹ B′ + D′

which can easily be computed, because of the diagonal form of Λ:

(s I − Λ)⁻¹ = diag( s − λ1, s − λ2, . . . , s − λn )⁻¹ = diag( (s − λ1)⁻¹, (s − λ2)⁻¹, . . . , (s − λn)⁻¹ )

Example 4.15 Consider the system of Example 4.11 and Example 4.13. The transfer function is given by

H(s) = C′ (s I − Λ)⁻¹ B′ + D′
     = [ 3  2 ] ( s I − [ −2  0 ; 0  −3 ] )⁻¹ [ 1 ; −1 ] + 4
     = [ 3  2 ] [ s+2  0 ; 0  s+3 ]⁻¹ [ 1 ; −1 ] + 4
     = [ 3  2 ] [ 1/(s+2)  0 ; 0  1/(s+3) ] [ 1 ; −1 ] + 4
     = 3/(s+2) − 2/(s+3) + 4
     = (4 s² + 21 s + 29) / (s² + 5 s + 6)    (4.60)
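The same transfer function can be obtained directly from the untransformed matrices with Eq. (4.59); a minimal symbolic sketch (Python with SymPy) is:

import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[-6, 6], [-2, 1]])
B = sp.Matrix([[1], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[4]])

# H(s) = C (sI - A)^{-1} B + D, Eq. (4.59)
H = sp.cancel(sp.simplify(C * (s*sp.eye(2) - A).inv() * B + D)[0])
print(H)    # (4*s**2 + 21*s + 29)/(s**2 + 5*s + 6)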


The relation between state systems and input-output systems

Compute the Laplace transforms X(s) = L{x(t)}, U(s) = L{u(t)}, and Y(s) = L{y(t)}, and let us consider the relation (4.58):

Y(s) = ( C (s I − A)⁻¹ B + D ) U(s)

The inverse of an n × n matrix can be computed as follows (see Appendix A)

M⁻¹ = (1/det M) adj M

and so

Y(s) = ( C adj(s I − A) B / det(s I − A) + D ) U(s)    (4.61)
     = ( C adj(s I − A) B + D det(s I − A) ) / det(s I − A) · U(s)    (4.62)

From here we can derive the input-output equation

det(s I − A) Y(s) = ( C adj(s I − A) B + D det(s I − A) ) U(s)    (4.63)

Note that det(s I − A) and C adj(s I − A) B + D det(s I − A) are polynomials in the variable s, and so by inverse Laplace transformation we find the input-output differential equation.

Example 4.16 Consider the state system

ẋ(t) = [ 2  −3 ; 3  −4 ] x(t) + [ 1 ; 2 ] u(t) ,
y(t) = [ 1  0 ] x(t) + 2 u(t) .    (4.64)

From

s I − A = [ s  0 ; 0  s ] − [ 2  −3 ; 3  −4 ] = [ s−2  3 ; −3  s+4 ]

we compute the determinant and the adjoint of s I − A as follows:

det( [ s−2  3 ; −3  s+4 ] ) = (s − 2)(s + 4) − 3·(−3) = s² + 2s + 1

adj( [ s−2  3 ; −3  s+4 ] ) = [ s+4  −3 ; 3  s−2 ]


Furthermore

C adj(s I − A) B + D det(s I − A)
  = [ 1  0 ] [ s+4  −3 ; 3  s−2 ] [ 1 ; 2 ] + 2 (s² + 2s + 1)
  = [ s+4  −3 ] [ 1 ; 2 ] + (2s² + 4s + 2)
  = (s − 2) + (2s² + 4s + 2)
  = 2s² + 5s

The input-output relation in the Laplace domain becomes

det(s I − A) Y(s) = ( C adj(s I − A) B + D det(s I − A) ) U(s)
(s² + 2s + 1) Y(s) = (2s² + 5s) U(s)

which gives us the final input-output differential equation

d²y(t)/dt² + 2 dy(t)/dt + y(t) = 2 d²u(t)/dt² + 5 du(t)/dt

The transformation of an input-output description into a state description can be done in many ways. Note that the state representation is not unique and therefore the transformation is not unique. We will now discuss one specific realization, which is called the controllability canonical form. Consider the differential equation

dⁿy(t)/dtⁿ + a1 dⁿ⁻¹y(t)/dtⁿ⁻¹ + . . . + a_{n−1} dy(t)/dt + a_n y(t)
  = b0 dⁿu(t)/dtⁿ + b1 dⁿ⁻¹u(t)/dtⁿ⁻¹ + . . . + b_{n−1} du(t)/dt + b_n u(t)

This equation can be written in a state representation

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

by choosing:

A = [ −a1  −a2  · · ·  −a_{n−1}  −a_n
       1    0   · · ·    0        0
       ⋮              ⋱           ⋮
       0    0   · · ·    1        0 ] ,   B = [ 1 ; 0 ; ⋮ ; 0 ]

C = [ b1 − b0 a1   b2 − b0 a2   · · ·   b_{n−1} − b0 a_{n−1}   b_n − b0 a_n ] ,   D = [ b0 ]    (4.65)


To prove that this is really a state system representing the input-output differential equation, we simply transform this state system into an input-output system using equation (4.63). We have

det(s I − A) = det [ s+a1  a2  · · ·  a_{n−1}  a_n
                     −1    s   · · ·   0       0
                      ⋮         ⋱  ⋱           ⋮
                      0    0   · · ·  −1       s ]
             = sⁿ + a1 sⁿ⁻¹ + · · · + a_{n−1} s + a_n

Furthermore

adj(s I − A) = [ sⁿ⁻¹  ∗  · · ·  ∗
                 sⁿ⁻²  ∗  · · ·  ∗
                  ⋮    ⋮         ⋮
                  s    ∗  · · ·  ∗
                  1    ∗  · · ·  ∗ ]

where the stars indicate that the values can be computed but are not relevant.

C adj(s I − A) B = [ b1 − b0a1  · · ·  b_{n−1} − b0a_{n−1}  b_n − b0a_n ] [ sⁿ⁻¹ ; sⁿ⁻² ; ⋮ ; s ; 1 ]
                 = (b1 − b0a1) sⁿ⁻¹ + . . . + (b_{n−1} − b0a_{n−1}) s + b_n − b0a_n

and so

C adj(s I − A) B + D det(s I − A)
  = (b1 − b0a1) sⁿ⁻¹ + (b2 − b0a2) sⁿ⁻² + . . . + (b_{n−1} − b0a_{n−1}) s + (b_n − b0a_n)
    + b0 ( sⁿ + a1 sⁿ⁻¹ + · · · + a_{n−1} s + a_n )
  = b0 sⁿ + b1 sⁿ⁻¹ + b2 sⁿ⁻² + . . . + b_{n−1} s + b_n

We obtain

sⁿ Y(s) + a1 sⁿ⁻¹ Y(s) + . . . + a_{n−1} s Y(s) + a_n Y(s)
  = b0 sⁿ U(s) + b1 sⁿ⁻¹ U(s) + . . . + b_{n−1} s U(s) + b_n U(s)

This means that after an inverse Laplace transformation we end up with the original differential equation.


Example 4.17 Consider the final input-output description of Example 4.16:

d²y(t)/dt² + 2 dy(t)/dt + y(t) = 2 d²u(t)/dt² + 5 du(t)/dt

Using (4.65) we derive with n = 2:

A′ = [ −a1  −a2 ; 1  0 ] = [ −2  −1 ; 1  0 ] ,   B′ = [ 1 ; 0 ]
C′ = [ b1 − b0a1   b2 − b0a2 ] = [ 1  −2 ] ,   D′ = [ b0 ] = [ 2 ]

Note that the system matrices of this realization are different from the system matrices in (4.64). The two realizations are related by the state transformation matrix

T = [ 1  −2 ; 2  −1 ]

with inverse

T⁻¹ = (1/3) [ −1  2 ; −2  1 ]

We find that

ẋ′(t) = A′ x′(t) + B′ u(t)    (4.66)
y(t) = C′ x′(t) + D′ u(t)    (4.67)

with

x′(t) = T⁻¹ x(t)
A′ = T⁻¹ A T
B′ = T⁻¹ B
C′ = C T
D′ = D
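The construction (4.65) is easy to automate. The sketch below (Python with NumPy/SciPy) builds the controllability canonical form for the coefficients of Example 4.17 and converts it back to a transfer function, recovering (2s² + 5s)/(s² + 2s + 1).

import numpy as np
from scipy.signal import ss2tf

# y'' + 2 y' + y = 2 u'' + 5 u'  ->  a = [a1, a2], b = [b0, b1, b2]
a = np.array([2.0, 1.0])
b = np.array([2.0, 5.0, 0.0])
n = len(a)

A = np.vstack([-a, np.eye(n - 1, n)])        # Eq. (4.65)
B = np.zeros((n, 1)); B[0, 0] = 1.0
C = (b[1:] - b[0] * a).reshape(1, -1)
D = np.array([[b[0]]])

num, den = ss2tf(A, B, C, D)
print(num, den)     # approximately [[2, 5, 0]] and [1, 2, 1]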

The impulse response of a linear time-invariant state system

Let

u(t) = δ(t)


where δ(t) is the unit impulse function, defined in (1.2). Following (4.57) we find

y(t) = C e^{A t} x(0) + ∫₀ᵗ C e^{A(t−τ)} B δ(τ) dτ + D δ(t)    (4.68)

Using property (1.4) we obtain:

∫₀ᵗ C e^{A(t−τ)} B δ(τ) dτ = C e^{A t} B us(t)

Finally note that in the impulse response we usually assume the system initially at rest,so x(0) = 0. Equation (4.68) now becomes

h(t) = CeA tB us(t) +Dδ(t) (4.69)

which is the impulse response of the linear time-invariant state system (4.39).

Remark: Note that a state transformation does not influence the input-output behavior of the system. We therefore may use the diagonal form of the state space description of the system. The impulse response becomes:

h(t) = C′ e^{Λ t} B′ us(t) + D′ δ(t)
     = C M e^{Λ t} M⁻¹ B us(t) + D δ(t)

which can easily be computed.

Example 4.18 Consider the second-order system of Example 4.11 and Example 4.13. Using the modal transformation we can derive

h(t) = C′ e^{Λ t} B′ us(t) + D′ δ(t)
     = [ 3  2 ] [ e^{−2t}  0 ; 0  e^{−3t} ] [ 1 ; −1 ] us(t) + 4 δ(t)
     = (3 e^{−2t} − 2 e^{−3t}) us(t) + 4 δ(t)    (4.70)
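The impulse response can also be checked numerically; the sketch below compares scipy.signal.impulse with the closed form 3e^{−2t} − 2e^{−3t}. The direct-feedthrough term 4δ(t) is handled separately, since a sampled simulation cannot represent an impulse.

import numpy as np
from scipy.signal import impulse

A = np.array([[-6.0, 6.0], [-2.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])            # the 4*delta(t) term is treated separately

t = np.linspace(0.0, 4.0, 401)
_, y = impulse((A, B, C, D), T=t)
h_exact = 3.0*np.exp(-2.0*t) - 2.0*np.exp(-3.0*t)
print(np.max(np.abs(y - h_exact)))     # close to zero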

The relation between the transfer function and the impulse response

One of the properties of the Laplace transformation is that the Laplace transform of a convolution of two functions f1 and f2 results in the product of the Laplace transforms of f1 and f2:

L{f1(t) ∗ f2(t)} = L{f1(t)} L{f2(t)}    (4.71)


Now recall the convolution integral to compute the output signal in the time domain

y(t) = h(t) ∗ u(t) = ∫_{−∞}^{∞} u(τ) h(t − τ) dτ

Using property (4.71) we find:

Y(s) = L{y(t)} = L{h(t) ∗ u(t)} = L{h(t)} L{u(t)} = L{h(t)} U(s)

Comparing this result to (4.5) we find that

H(s) = L{h(t)}   and   h(t) = L⁻¹{H(s)}    (4.72)

where H(s) is the transfer function of the system. This means that the Laplace transform of the impulse response is equal to the transfer function.

Overview of all the relations

[Figure 4.8 (diagram): the descriptions {A, B, C, D}, the differential equation, H(s), H(jω) and h(t), connected by Eqs. (4.63), (4.65), (4.59), (4.6), (4.72), (4.69) and the substitution s = jω.]

Figure 4.8: Relations between various LTI system descriptions

The first description we discussed in Section 2.3 is the input-output differential equation:

dⁿy(t)/dtⁿ + a1 dⁿ⁻¹y(t)/dtⁿ⁻¹ + . . . + a_{n−1} dy(t)/dt + a_n y(t)
  = b0 dᵐu(t)/dtᵐ + b1 dᵐ⁻¹u(t)/dtᵐ⁻¹ + . . . + b_{m−1} du(t)/dt + b_m u(t)


The second description we discussed in Section 2.4 is the state description:

ẋ(t) = A x(t) + B u(t) ,
y(t) = C x(t) + D u(t) .

The third description we discussed in Section 4.1 is the transfer function description:

H(s) = ( b0 sᵐ + b1 sᵐ⁻¹ + . . . + b_{m−1} s + b_m ) / ( sⁿ + a1 sⁿ⁻¹ + . . . + a_{n−1} s + a_n )

The last description we discussed in Section 4.4 is the convolution description:

y(t) = u(t) ∗ h(t) = h(t) ∗ u(t) = ∫_{−∞}^{∞} h(τ) u(t − τ) dτ

In Figure 4.8 the equations that give the relations between the various system descriptions are shown.

4.7 Stability

Stability of linear time-invariant input-output systems

In this section we discuss the stability of a linear time-invariant system as defined in (4.1). Let pi, i = 1, . . . , n be the system poles, i.e. the solutions of the characteristic equation

λⁿ + a1 λⁿ⁻¹ + · · · + a_n = 0

The system is asymptotically stable if and only if all components in the homogeneous response from a finite set of initial conditions decay to zero as time increases, or

lim_{t→∞} Σ_{i=1}^{n} Ci e^{pi t} = 0    (4.73)

where pi are the system poles¹. In order for a linear time-invariant system to be stable, all its poles must have a real part smaller than zero, i.e. they must all lie in the left half plane. An unstable pole, lying in the right half plane, generates a component in the system homogeneous response that increases without bound from any finite initial conditions. A system having one or more poles lying on the imaginary axis with multiplicity equal to one has nondecaying (usually oscillatory) components in the homogeneous response and is defined to be marginally stable. For a pole on the imaginary axis with multiplicity higher than one, the homogeneous response will grow unboundedly.

¹In (4.73) we assume that all poles have multiplicity equal to one. If a pole pi has multiplicity mi > 1, the terms for this pole have the form Ci tʲ e^{pi t}, j = 0, . . . , mi − 1.


Stability of linear time-invariant state systems

Consider the linear time-invariant state system of (4.39). For asymptotic stability, the homogeneous response of the state vector x(t) should return to the origin for any arbitrary initial condition x(0) as t → ∞, or

lim_{t→∞} x(t) = lim_{t→∞} Φ(t) x(0) = lim_{t→∞} M e^{Λ t} M⁻¹ x(0) = 0

for any x(0), where we assume that all eigenvalues have multiplicity equal to one. All the elements of x(t) are a linear combination of the modal components e^{λi t}, and therefore the stability of the system response depends on all components decaying to zero with time. If Re(λi) > 0 for some λi, that component grows exponentially with time and the sum is by definition unbounded. The requirement for system stability may therefore be summarized:

A linear time-invariant state system described by the state equation ẋ(t) = A x(t) + B u(t) is asymptotically stable if and only if all eigenvalues of A have real part smaller than zero.

Three other separate conditions should be considered:

1. If one or more eigenvalues, or pairs of conjugate eigenvalues, have a real part larger than zero, there is at least one corresponding modal component that increases exponentially without bound from any initial condition, violating the definition of stability.

2. Any pair of conjugate eigenvalues on the imaginary axis (real part equal to zero), λ_{i,i+1} = ±jω, with multiplicity equal to one, generates an undamped oscillatory component in the state response. The magnitude of the homogeneous state response neither decays nor grows but continues to oscillate for all time at a frequency ω. Such a system is defined to be marginally stable. For eigenvalues on the imaginary axis with multiplicity higher than one, the homogeneous state response will grow unboundedly.

3. An eigenvalue λ = 0 with multiplicity one generates a modal component e^{λ t} = e^{0 t} = 1, that is, a constant. The system response neither decays nor grows, and again the system is defined to be marginally stable. An eigenvalue λ = 0 with multiplicity mi > 1 gives additional components tʲ, j = 1, . . . , mi − 1, and will lead to an unbounded homogeneous state response.

Stability of convolution systems

In this section we consider BIBO stability of a convolution system, which is described by the convolution integral (4.38). We say a linear time-invariant system is Bounded-Input-Bounded-Output (BIBO) stable if a bounded input, sup_t |u(t)| = M1 < ∞, produces a bounded output, sup_t |y(t)| = M2 < ∞. A necessary and sufficient condition for such BIBO stability is that the impulse response h(t) satisfies

∫_{−∞}^{∞} |h(t)| dt = M3 < ∞    (4.74)


First, we show that the system is stable if (4.74) holds:

sup_t |y(t)| = sup_t | ∫_{−∞}^{∞} h(τ) u(t − τ) dτ |
            ≤ sup_t ∫_{−∞}^{∞} |h(τ) u(t − τ)| dτ
            ≤ sup_t ∫_{−∞}^{∞} |h(τ)| |u(t − τ)| dτ
            ≤ ∫_{−∞}^{∞} |h(τ)| M1 dτ
            ≤ M3 M1

This means that M2 is finite if (4.74) holds. That (4.74) is necessary can be seen as follows. Assume that we want to compute y(0) for the input u given by

u(t) := sgn[h(−t)] = { 1 if h(−t) > 0 ;  0 if h(−t) = 0 ;  −1 if h(−t) < 0 } ,   ∀t.

Then M1 = sup_t |u(t)| = 1 and

y(0) = ∫_{−∞}^{∞} h(τ) u(−τ) dτ = ∫_{−∞}^{∞} |h(τ)| dτ

This shows that if M3 is not bounded, then y(0) is not bounded and so the system is not BIBO stable. Hence (4.74) is necessary for BIBO stability.

Example 4.19 Consider the system of Example 4.11 and Example 4.13. The eigenvalues of the matrix

A = [ −6  6 ; −2  1 ]

are λ1 = −2 and λ2 = −3. Both eigenvalues are negative real, which means that this system is stable.
From the transfer function (4.60), the input-output differential equation of this system is given by

d²y(t)/dt² + 5 dy(t)/dt + 6 y(t) = 4 d²u(t)/dt² + 21 du(t)/dt + 29 u(t)

The characteristic equation is equal to

λ² + 5 λ + 6 = 0


and we find (not surprisingly) the poles λ1 = −2 and λ2 = −3, which are equal to the eigenvalues of the A-matrix of the corresponding state system. Both poles are negative real, which means that this system is stable.
Finally we can compute M3 from the impulse response h(t) given in Equation (4.70). Note that 3e^{−2t} − 2e^{−3t} ≥ 0 for t ≥ 0, so the absolute value can be dropped:

M3 = ∫_{−∞}^{∞} |h(t)| dt
   = ∫_{−∞}^{∞} |(3e^{−2t} − 2e^{−3t}) us(t) + 4δ(t)| dt
   = ∫_{−∞}^{∞} (3e^{−2t} − 2e^{−3t}) us(t) + 4δ(t) dt
   = 3 ∫₀^{∞} e^{−2t} dt − 2 ∫₀^{∞} e^{−3t} dt + 4 ∫_{−∞}^{∞} δ(t) dt
   = −(3/2) e^{−2t}|₀^{∞} + (2/3) e^{−3t}|₀^{∞} + 4
   = 3/2 − 2/3 + 4 < ∞

which means that this system is also bounded-input-bounded-output stable.

Example 4.20 Consider the third-order state system

ẋ(t) = [ 2  9  12 ; −9  26  36 ; 6  −18  −25 ] x(t) + [ 1 ; 2 ; 4 ] u(t)

The eigenvalues of A are λ1 = −1 and λ2,3 = 2 ± j3. Note that two eigenvalues have a positive real part, which means that this system is unstable.
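Eigenvalue-based stability checks like the ones in Examples 4.19 and 4.20 are easily reproduced numerically; a minimal sketch for Example 4.20 is:

import numpy as np

A = np.array([[ 2.0,   9.0,  12.0],
              [-9.0,  26.0,  36.0],
              [ 6.0, -18.0, -25.0]])

eig = np.linalg.eigvals(A)
print(np.round(eig, 6))                                       # -1, 2+3j, 2-3j
print("asymptotically stable:", bool(np.all(eig.real < 0)))   # False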

4.8 Exercises

Exercise 1. Transfer functions

Compute the transfer function of the systems described by the following differential equations:

a) d²y(t)/dt² + 6 dy(t)/dt + 5 y(t) = 4 u(t)

b) d³y(t)/dt³ − 13 dy(t)/dt + 12 y(t) = 2 d³u(t)/dt³ + 12 d²u(t)/dt² + 24 du(t)/dt + 16 u(t)

c) d⁴y(t)/dt⁴ + 6 d³y(t)/dt³ + 22 d²y(t)/dt² + 30 dy(t)/dt + 13 y(t)
   = 3 d³u(t)/dt³ + 6 d²u(t)/dt² − 21 du(t)/dt + 12 u(t)


Exercise 2. Poles, zeroes, stability

Consider the systems a–c of Exercise 1, compute the poles and zeros, and determine whether the systems are stable or unstable.

Exercise 3: Frequency response

Consider the following system:

d²y(t)/dt² + 6 dy(t)/dt + 5 y(t) = u(t)

Compute the frequency response of this system, determine the magnitude M(jω) and phase φ(jω), and compute the output y(t) for the input u(t) = 4 cos 3t.

Exercise 4: Time response

Consider the system

ÿ(t) + 6 ẏ(t) + 13 y(t) = 13 u(t)

with step input u(t) = 1 for t ≥ 0. Compute the output y(t) for the initial conditions y(0) = 2 and ẏ(0) = −5.

Exercise 5: Time response

Consider the system

ÿ(t) + 10 ẏ(t) + 25 y(t) = 40 u(t)

with input u(t) = e^{−3t} for t ≥ 0. Compute the output y(t) for the initial conditions y(0) = −65 and ẏ(0) = 55.

Exercise 6: Impulse response

Consider a system with impulse response

h(t) = us(t)

Compute the output y(t) for t ≥ 0 when the input is given by

u(t) = { 0 for t < 0 ;  1 for 0 ≤ t < 1 ;  0 for t ≥ 1 }


Exercise 7: State systems

Consider the state system

ẋ(t) = [ 3  −2 ; 21  −10 ] x(t) + [ 1 ; 2 ] u(t)
y(t) = [ 4  −1 ] x(t)

1. Is this state system stable?

2. Compute the system matrices after a modal transformation.

3. Compute the homogeneous response of this system for x(0) = [ 1 1 ]T .

4. Compute the forced response for t ≥ 0 with x(0) = 0 and u(t) = us(t).

5. Derive the impulse response of the system.

6. Derive the transfer function of the system.


Chapter 5

Nonlinear dynamical systems

In this chapter we will consider some examples of nonlinear (differential) systems. We willalso discuss the concept of linearization, which gives us the possibility to approximate thebehavior of a nonlinear system locally by a linear system description.

5.1 Modeling of nonlinear dynamical systems

In Chapter 2 we have discussed the modeling of dynamical systems with only linear basic elements. In practice, however, we will often encounter phenomena that are nonlinear. We will present two examples with nonlinear elements and show that the differential equations can be derived in a similar way as in the linear case.
Let u(t) be the input signal and let y(t) be the output signal of a nonlinear dynamical system. The relation between inputs and outputs of dynamical systems can be described by a differential equation:

dⁿy(t)/dtⁿ = F( dⁿ⁻¹y(t)/dtⁿ⁻¹, . . . , dy(t)/dt, y(t), dᵐu(t)/dtᵐ, dᵐ⁻¹u(t)/dtᵐ⁻¹, . . . , du(t)/dt, u(t) )

Example 5.1 (Mechanical system)

A mass m is connected to the ceiling by a nonlinear spring and a linear damper in the configuration of Figure 5.1. The spring force is given by fs = −k y² and the damping force is equal to fd = −c ẏ, where k and c are constants. The gravity force is equal to fg = m g, and finally there is an external force fext acting on the mass. Our task is to derive the differential equations for this system.

We use Newton's law for this system and we obtain

m ÿ = Σ_i fi




Figure 5.1: Example of a nonlinear mechanical system

where fi are all forces acting on mass m. We have

m ÿ = fs + fd + fg + fext = −k y² − c ẏ + m g + fext

Example 5.2 (Water flow system)
Given a system with two water vessels in the configuration of Figure 5.2. Water runs into the upper vessel from a source with flow win, from the upper vessel through a restriction with fluid resistance R1 into the lower vessel with flow wmed, and out of the lower vessel through a restriction with fluid resistance R2 with flow wout. The pressures at the bottoms of the water vessels are denoted by p1 and p2. The outside pressure is p0. The area of both vessels is A, and the water levels are denoted by h1 and h2. There are two differences between this system and that of Figure 2.15. The first difference is that in the system of Figure 5.2 the water of the first vessel flows freely into the second vessel (and so the flow wmed does not depend on p2), and the second difference is that we assume a nonlinear behavior of the restrictions:

wmed = (1/R1) √(p1 − p0)
wout = (1/R2) √(p2 − p0)

By introducing the square root in the relation between the flow and the pressure, the equations can describe the physical behavior more realistically. Our task is to derive the differential equations for this system.

First we consider the upper vessel:
The net flow into the upper vessel is w1 = win − wmed. The fluid capacitance of the upper vessel is given by

C1 = A1/g



Figure 5.2: Example of a nonlinear fluid flow system

The change in pressure p1 is now given by:

ṗ1 = (1/C1) w1 = (g/A1) (win − wmed)

Substitution of wmed = (1/R1) √(p1 − p0) gives us

ṗ1 = (g/A1) win − (g/(A1 R1)) √(p1 − p0)

The derivation for the lower vessel is similar:
The net flow into the lower vessel is w2 = wmed − wout. The fluid capacitance of the lower vessel is given by

C2 = A2/g

The change in pressure p2 is now given by:

ṗ2 = (1/C2) w2 = (g/A2) (wmed − wout)

For the flow wout we find:

wout = (1/R2) √(p2 − p0)

Substitution of wmed and wout into the previous equation yields:

ṗ2 = (g/(A2 R1)) √(p1 − p0) − (g/(A2 R2)) √(p2 − p0)


So summarizing, the two differential equations describing the dynamics of the two-vessel system are as follows:

ṗ1 = (g/A1) win − (g/(A1 R1)) √(p1 − p0)
ṗ2 = (g/(A2 R1)) √(p1 − p0) − (g/(A2 R2)) √(p2 − p0)

Now we can use the relation between the fluid levels hi and pi, i = 1, 2:

pi − p0 = ρ g hi ,   and   ṗi = ρ g ḣi

and we can rewrite the equations as

ρ g ḣ1 = (g/A1) win − (g/(A1 R1)) √(ρ g h1)
ρ g ḣ2 = (g/(A2 R1)) √(ρ g h1) − (g/(A2 R2)) √(ρ g h2)

or, by introducing the parameters R̄1 = R1 √(ρ g) and R̄2 = R2 √(ρ g),

ḣ1 = (1/(A1 ρ)) win − (g/(A1 R̄1)) √h1
ḣ2 = (g/(A2 R̄1)) √h1 − (g/(A2 R̄2)) √h2

Nonlinear state systems

We now introduce the notion of nonlinear state systems. A nonlinear system can then be represented by the state equations

ẋ(t) = f(x, u),
y(t) = h(x, u),    (5.1)

where f and h are nonlinear mappings. We call a model of this form a nonlinear state space model. The dimension of the state vector is called the order of the system. The system (5.1) is called time-invariant because the functions f and h do not depend explicitly on time t; there are more general time-varying systems where the functions do depend on time. The model consists of two functions: the function f gives the rate of change of the state vector as a function of the state x and the input u, and the function h gives the output signal as a function of the state x and the input u. A system is called a linear state space system if the functions f and h are linear in x and u.

Example 5.3 (Mechanical system)
Consider the system of Example 5.1. The nonlinear differential equation is given by

m ÿ(t) = −k y²(t) − c ẏ(t) + m g + fext(t)


If we choose the state x(t) = [ ẏ(t)  y(t) ]ᵀ and input u(t) = fext(t) we obtain the nonlinear state equations

ẋ1(t) = −(k/m) x2²(t) − (c/m) x1(t) + g + (1/m) u(t)
ẋ2(t) = x1(t)
y(t) = x2(t)

Example 5.4 (Water flow system)
Consider the system of Example 5.2. The nonlinear differential equations are given by

ḣ1 = (1/(A1 ρ)) win − (g/(A1 R̄1)) √h1
ḣ2 = (g/(A2 R̄1)) √h1 − (g/(A2 R̄2)) √h2

If we choose the state x(t) = [ h1(t)  h2(t) ]ᵀ, the input u(t) = win(t) and the output y(t) = h2(t), we obtain the nonlinear state equations

ẋ1(t) = (1/(A1 ρ)) u(t) − (g/(A1 R̄1)) √x1(t)
ẋ2(t) = (g/(A2 R̄1)) √x1(t) − (g/(A2 R̄2)) √x2(t)
y(t) = x2(t)
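To get a feel for these nonlinear equations, the sketch below integrates them with SciPy for a constant inflow. The parameter values are purely illustrative (they are not given in the notes); the simulated levels approach the equilibrium computed later in Example 5.6.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not from the notes)
A1 = A2 = 1.0
rho, g = 1000.0, 9.81
R1b, R2b = 2.0e3, 3.0e3        # the scaled resistances R̄1, R̄2

def f(t, x, u):
    x1, x2 = np.maximum(x, 0.0)            # water levels stay nonnegative
    dx1 = u/(A1*rho) - g/(A1*R1b)*np.sqrt(x1)
    dx2 = g/(A2*R1b)*np.sqrt(x1) - g/(A2*R2b)*np.sqrt(x2)
    return [dx1, dx2]

u0 = 5.0                                   # constant inflow w_in
sol = solve_ivp(f, (0.0, 5000.0), [0.0, 0.0], args=(u0,), max_step=10.0)
print(sol.y[:, -1])   # approaches (R̄1^2 u0^2/(rho^2 g^2), R̄2^2 u0^2/(rho^2 g^2))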

5.2 Steady state behavior and linearization

An equilibrium point (or steady state) is a point where the system comes to rest. For a system at rest all signals will be constant, and so in an equilibrium point the derivative of the state will be zero. We define an equilibrium point, or steady state, of a nonlinear system as follows:

Definition 5.1 Consider a nonlinear state system, described by (5.1). For a steady state or equilibrium point (x0, u0, y0) there holds

f(x0, u0) = 0

with a corresponding output y0:

y0 = h(x0, u0)

Example 5.5 (Mechanical system)
Consider the nonlinear state system of Example 5.3. We aim at computing the equilibrium point (x0, u0, y0) for the system

ẋ1(t) = −(k/m) x2²(t) − (c/m) x1(t) + g + (1/m) u(t)
ẋ2(t) = x1(t)
y(t) = x2(t)


If we set f(x0, u0) = 0 we obtain

0 = −(k/m) x0,2² − (c/m) x0,1 + g + (1/m) u0
0 = x0,1

This means that x0,1 = 0 and (k/m) x0,2² = g + (1/m) u0, so x0,2 = √((m g + u0)/k). This means that if the external force fext(t) = u0 is constant and the system is at rest, then ẏ0 = x0,1 = 0 and y0 = x0,2 = √((m g + u0)/k).

Example 5.6 (Water flow system)
The equilibrium point (x0, u0, y0) of the nonlinear water flow system of Example 5.4 can be computed by setting ẋ = 0:

0 = (1/(A1 ρ)) u0 − (g/(A1 R̄1)) √x0,1
0 = (g/(A2 R̄1)) √x0,1 − (g/(A2 R̄2)) √x0,2
y0 = x0,2

We find:

x0,1 = (R̄1²/(g² ρ²)) u0²
x0,2 = (R̄2²/R̄1²) x0,1 = (R̄2²/(g² ρ²)) u0²
y0 = x0,2

In many physical systems the relations used to define the model elements are inherently nonlinear. The analysis of systems containing such elements is a much more difficult task than that for a system containing only linear elements, and for many such systems of interconnected nonlinear elements there may be no exact analysis technique. In engineering practice it is often convenient to approximate the behavior of a nonlinear system by a linear one over a limited range of operation, usually in the neighborhood of an equilibrium point. To achieve this linear behavior we have to perform a linearization step. We study small variations about the equilibrium (x0, u0, y0), where x0, u0 and y0 satisfy f(x0, u0) = 0 and h(x0, u0) = y0. To derive the linear behavior we look at small variations x̃, ũ, and ỹ about the equilibrium (x0, u0, y0):

x(t) = x0 + x̃(t)
u(t) = u0 + ũ(t)
y(t) = y0 + ỹ(t)


First of all note that dx̃(t)/dt = ẋ(t) − ẋ0 = ẋ(t), and so

dx̃(t)/dt = f(x0 + x̃(t), u0 + ũ(t))
y0 + ỹ(t) = h(x0 + x̃(t), u0 + ũ(t))

Using a first-order Taylor expansion (neglecting higher-order terms) we can describe these nonlinear equations in terms of the small variations x̃ and ũ, which yields

dx̃(t)/dt = f(x0, u0) + A x̃(t) + B ũ(t)
ỹ(t) = h(x0, u0) + C x̃(t) + D ũ(t) − y0

where A, B, C, and D are computed as

A = ∂f/∂x |_{x=x0, u=u0} ,   B = ∂f/∂u |_{x=x0, u=u0} ,   C = ∂h/∂x |_{x=x0, u=u0} ,   D = ∂h/∂u |_{x=x0, u=u0}    (5.2)

With f(x0, u0) = 0 and y0 = h(x0, u0), this reduces to:

dx̃(t)/dt = A x̃(t) + B ũ(t)
ỹ(t) = C x̃(t) + D ũ(t)

Example 5.7 (Mechanical system)
Consider the mechanical system of Example 5.5. We derived the system equations

ẋ1(t) = −(k/m) x2²(t) − (c/m) x1(t) + g + (1/m) u(t)
ẋ2(t) = x1(t)
y(t) = x2(t)

and found the equilibrium point x0,1 = 0, x0,2 = √((m g + u0)/k), and y0 = x0,2 = √((m g + u0)/k). We compute

A = ∂f/∂x |_{x=x0, u=u0} = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ]|_{x=x0, u=u0}
  = [ −c/m   −2(k/m) x2 ; 1   0 ]|_{x=x0, u=u0}
  = [ −c/m   −(2/m) √(m k g + k u0) ; 1   0 ]

B = ∂f/∂u |_{x=x0, u=u0} = [ ∂f1/∂u ; ∂f2/∂u ] = [ 1/m ; 0 ]

C = ∂h/∂x |_{x=x0, u=u0} = [ ∂h/∂x1  ∂h/∂x2 ] = [ 0  1 ]


D = ∂h/∂u |_{x=x0, u=u0} = [ ∂h/∂u ] = 0

So for small variations x̃, ũ and ỹ about the working point (equilibrium) (x0, u0, y0) the linear behavior is described by the linear time-invariant state system

dx̃(t)/dt = A x̃(t) + B ũ(t)
ỹ(t) = C x̃(t) + D ũ(t)

These equations can describe the dynamic behavior of the mechanical system quite accurately as long as the signals ũ(t) and x̃(t) remain small.
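A quick way to validate a linearization by hand is to compare the analytic Jacobian with a finite-difference estimate. The sketch below does this for the A matrix of Example 5.7, with illustrative parameter values that are not part of the notes.

import numpy as np

m, k, c, g = 1.0, 10.0, 0.5, 9.81     # illustrative values (not from the notes)
u0 = 2.0

def f(x, u):
    # Nonlinear state equations of Example 5.7, x = [dy/dt, y]
    x1, x2 = x
    return np.array([-k/m*x2**2 - c/m*x1 + g + u/m, x1])

x0 = np.array([0.0, np.sqrt((m*g + u0)/k)])   # equilibrium from Example 5.5

eps = 1e-6
A_num = np.column_stack([(f(x0 + eps*e, u0) - f(x0 - eps*e, u0))/(2*eps)
                         for e in np.eye(2)])
A_exact = np.array([[-c/m, -2.0*np.sqrt(m*k*g + k*u0)/m], [1.0, 0.0]])
print(np.max(np.abs(A_num - A_exact)))        # should be tiny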

Example 5.8 (Water flow system)
Consider the nonlinear flow system of Example 5.6 with state equations

ẋ1(t) = (1/(A1 ρ)) u(t) − (g/(A1 R̄1)) √x1(t)
ẋ2(t) = (g/(A2 R̄1)) √x1(t) − (g/(A2 R̄2)) √x2(t)
y(t) = x2(t)

and the equilibrium point x0,1 = (R̄1²/(g² ρ²)) u0², x0,2 = (R̄2²/(g² ρ²)) u0² and y0 = x0,2. We compute

A = ∂f/∂x |_{x=x0, u=u0} = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ]|_{x=x0, u=u0}
  = [ −g/(2 A1 R̄1 √x1)   0 ; g/(2 A2 R̄1 √x1)   −g/(2 A2 R̄2 √x2) ]|_{x=x0, u=u0}
  = [ −ρ g²/(2 A1 R̄1² u0)   0 ; ρ g²/(2 A2 R̄1² u0)   −ρ g²/(2 A2 R̄2² u0) ]

B = ∂f/∂u |_{x=x0, u=u0} = [ ∂f1/∂u ; ∂f2/∂u ] = [ 1/(A1 ρ) ; 0 ]

C = ∂h/∂x |_{x=x0, u=u0} = [ ∂h/∂x1  ∂h/∂x2 ] = [ 0  1 ]

D = ∂h/∂u |_{x=x0, u=u0} = [ ∂h/∂u ] = 0


So for small variations x̃, ũ, and ỹ about the working point (equilibrium) (x0, u0, y0) the linear behavior is described by the linear time-invariant state system

dx̃(t)/dt = A x̃(t) + B ũ(t)
ỹ(t) = C x̃(t) + D ũ(t)

These equations can describe the dynamic behavior of the water flow system quite accurately as long as the signals ũ(t) and x̃(t) remain small.

5.3 Exercises

Exercise 1. Pendulum system

A point mass m is attached to the end of a massless rod with length ℓ that rotates about a fixed pivot, as shown in Figure 5.3. The angle between the rod and the vertical axis is θ(t), and the external force acting on the mass is fe(t).


Figure 5.3: A simple point-mass pendulum

Now perform the following tasks:

1. Compute the equilibrium point for fe(t) = 0.5mg.

2. Linearize the system around this equilibrium point.

Exercise 2. Electrical circuit

An electrical circuit consists of a linear capacitor and a nonlinear resistor, as shown in Figure 5.4. For the resistor there holds:

(v1 − v2)³ = R i3



Figure 5.4: A nonlinear electrical circuit

Further we have v2 = 0, R = 2 Ω, and C = 1/4 F. Now perform the following tasks:

1. Compute the equilibrium point for v1,0 = 2.

2. Linearize the system around this equilibrium point with v1,0 = 2.


Chapter 6

An introduction to feedback control

Engineers often use control engineering methods to enhance the performance of systems in many fields of application, such as mechanical, electrical, electromechanical, and fluid/heat flow systems (see Chapter 2). This chapter gives an introduction to the field of control engineering. Some basic definitions and terminology will be introduced and the concept of feedback will be presented.

Definition 6.1 Given a system with some inputs for which we can set the values, control is a set of actions undertaken in order to obtain a desired behavior of the system, and it can be applied in an open-loop or a closed-loop configuration by supplying the proper control signals.

Controllers can be found in all kinds of technical systems, from cruise control to a central heating system, from hard disks to washing machines, from GPS to oil refineries, from watches to communication satellites. In many cases the impact of control is not recognized from the outside. Control is therefore often called the hidden technology.

6.1 Block diagrams

In Section 4.1 we have seen that a linear time-invariant system can be represented by atransfer function. Often real-life physical systems consist of many subsystems where eachsubsystem can be described by a differential equation and therefore can be represented bya transfer function. If we want to look at the overall system on a higher, less detailed level,we can draw a block diagram of the system.

Definition 6.2 A block diagram is a diagram of a system, in which the principal subsys-tems are represented by blocks interconnected by arrows that show the interaction betweenthe blocks.

Example 6.1 Consider the three-vessel water flow system of Figure 6.1.



Figure 6.1: Example of a three-vessel water flow system

Using the modeling techniques of Chapter 2 we can describe the system with three differential equations

ḣ1 = (1/(ρA)) win − (g/(A R1)) h1 ,
ḣ2 = (g/(A R1)) h1 − (g/(A R2)) h2 ,
ḣ3 = (g/(A R2)) h2 − (g/(A R3)) h3 .

We consider each equation as a subsystem. In the first subsystem, the input is the inflow win(t), and the output of the system is the water level h1(t). In the second subsystem we have h1(t) as an input and h2(t) as an output. In the third subsystem we have h2(t) as an input and h3(t) as an output. In Figure 6.2 the three-vessel water flow system is represented by a block diagram, in which each block represents one of the three vessels.

Figure 6.2: Block diagram of the three-vessel water flow system

For the three-vessel water flow system of Example 6.1 we see that the block diagramconsists of three subsystems that are in a series connection. Each of the subsystems can


be described by a transfer function:

Subsystem 1:  H1(s) = (1/(ρA)) / (s + g/(A R1))

Subsystem 2:  H2(s) = (g/(A R1)) / (s + g/(A R2))

Subsystem 3:  H3(s) = (g/(A R2)) / (s + g/(A R3))

Interconnection of systems

Series interconnection:

In a series interconnection of two systems the output of the first system becomes theinput of the second system (See Figure 6.3). Let U1(s) and Y1(s) denote the Laplace


Figure 6.3: Series connection: Y (s) = H1(s)H2(s)U(s)

transforms of the input u1(t) and the output y1(t) of the first system, respectively, anddenote by U2(s) and Y2(s) the Laplace transforms of the input u2(t) and the output y2(t)of the second system. From the previous section we know that Y1(s) = H1(s)U1(s) andY2(s) = H2(s)U2(s). With u2(t) = y1(t) and consequently U2(s) = Y1(s) we find

Y2(s) = H2(s)H1(s)U1(s)

and the transfer function of the series connection is the product of the two transfer func-tions:

Hseries(s) = H2(s)H1(s)

Example 6.2 Consider the water flow system of Example 6.1. Let U(s) and Y(s) be the Laplace transforms of u(t) = win(t) and y(t) = h3(t), respectively. The three blocks H1, H2


and H3 are in a series connection so,

Htot(s) = H3(s) H2(s) H1(s)
        = [ (1/(ρA)) / (s + g/(A R1)) ] · [ (g/(A R1)) / (s + g/(A R2)) ] · [ (g/(A R2)) / (s + g/(A R3)) ]
        = ( g²/(ρ A³ R1 R2) ) / ( (s + g/(A R1)) (s + g/(A R2)) (s + g/(A R3)) )

and thus

Y(s) = Htot(s) U(s)

Parallel interconnection:

In a parallel interconnection two systems have the same input and the outputs of thesystems are added (See Figure 6.4). Let the Laplace transform of input and output of the


Figure 6.4: Parallel connection: Y (s) = (H1(s) +H2(s))U(s)

first system be given by U1(s) and Y1(s), and of the second system by U2(s) and Y2(s).The overall output is given by y(t) = y1(t) + y2(t). With u(t) = u1(t) = u2(t) we find forthe Laplace transforms of the output:

Y (s) = Y1(s) + Y2(s) = H1(s)U(s) +H2(s)U(s) = (H1(s) +H2(s))U(s)

and the transfer function of the parallel connection is the sum of the two transfer functions:

Hparallel(s) = H1(s) + H2(s)

Feedback interconnection:

Another important type of interconnection is the feedback interconnection, in which thesystems are placed in a loop, e.g. the output of the first system is the input of the secondsystem, and the output of the second system is the input of the first system, possibly addedto an external signal (See Figure 6.5). Let U1(s) and Y1(s) denote the Laplace transforms of



Figure 6.5: Feedback interconnection: Y(s) = H1(s)/(1 + H2(s)H1(s)) · R(s)

the input u1(t) and the output y1(t) of the first system, respectively, and denote by U2(s)and Y2(s) the Laplace transforms of the input u2(t) and the output y2(t) of the secondsystem. Furthermore, let R(s) denote the Laplace transform of the reference signal r(t).The loop is created by setting u2(t) = y1(t) and u1(t) = r(t) − y2(t). Consequently weobtain U2(s) = Y1(s), U1(s) = R(s)− Y2(s). By substitution we find:

U1(s) = R(s)−Y2(s) = R(s)−H2(s)U2(s) = R(s)−H2(s)Y1(s) = R(s)−H2(s)H1(s)U1(s)

Hence, we obtain

( 1 + H2(s)H1(s) ) U1(s) = R(s)

and so

U1(s) = R(s) / (1 + H2(s)H1(s))
Y1(s) = H1(s) R(s) / (1 + H2(s)H1(s))

and the transfer function from r to y1 is given by

Hfeedback(s) = H1(s) / (1 + H2(s)H1(s))

When the feedback signal y2(t) is subtracted (see Figure 6.5) we call it a negative feedback. Negative feedback often appears in controller design and is required for system stability. For a negative feedback configuration as in Figure 6.5 we can express the solution by a simple rule:

The transfer function of a single-loop negative feedback system is given by the forward transfer function divided by one plus the loop gain function.

where the loop gain function is the product of the transfer functions making the loop, that


is, the products of the gains in the loop.

For the configuration of Figure 6.5 the forward transfer function is equal to H1(s), the loopgain function is equal to H2(s)H1(s), and so Hfeedback(s) = H1(s)/(1 +H2(s)H1(s)).


Figure 6.6: Positive feedback interconnection: Y(s) = H1(s)/(1 − H2(s)H1(s)) · R(s)

When the feedback argument is added (instead of subtracted) we call it a positive feedback(See Figure 6.6). For a positive feedback configuration the solution is given by the rule:

The transfer function of a single loop positive feedback system is given by the forwardtransfer function divided by one minus the loop gain function.

Block diagram manipulations

Figure 6.7 shows some basic block diagram manipulations for nodes where signals split into two branches, or where signals are added. The basic manipulations can be used to convert block diagrams without affecting the mathematical relationships.

Example 6.3 (Simplifying block scheme)
In this example we will consider the block diagram of Figure 6.8 and find the transfer function from input r(t) to output y(t).
Using the manipulations defined in Figure 6.7 we first replace the closed loop of H1(s) and H3(s) by H1(s)/(1 − H1(s)H3(s)) (note that we have a positive feedback here). The next step is to shift the input of system H6(s) over the system H2(s) using the basic manipulation step given in Figure 6.7(b). Now we have a system consisting of two subsystems. The first subsystem is the feedback interconnection of H1(s)/(1 − H1(s)H3(s)), H2(s) and H4(s). For this subsystem we use the rule for a positive feedback configuration: the transfer function of a single-loop positive feedback system is given by the forward transfer function (H1(s)H2(s)/(1 − H1(s)H3(s))) divided by one minus the loop gain function (H1(s)H2(s)H4(s)/(1 − H1(s)H3(s))). This results in the following transfer function for the first subsystem:

Hsub,1(s) = [ H1(s)H2(s)/(1 − H1(s)H3(s)) ] / [ 1 − H1(s)H2(s)H4(s)/(1 − H1(s)H3(s)) ]
          = H1(s)H2(s) / ( 1 − H1(s)H3(s) − H1(s)H2(s)H4(s) )



Figure 6.7: Basic block diagram manipulations

The second subsystem consists of the parallel interconnection of H6(s)/H2(s) and H5(s), leading to

Hsub,2(s) = H6(s)/H2(s) + H5(s) = ( H6(s) + H5(s)H2(s) ) / H2(s)



Figure 6.8: Simplification of block diagram


Figure 6.9: Final blockscheme after some block manipulations

The final transfer function is the series connection of Hsub,1(s) and Hsub,2(s):

H(s) = ( H1(s)H2(s) / (1 − H1(s)H3(s) − H1(s)H2(s)H4(s)) ) · ( (H6(s) + H5(s)H2(s)) / H2(s) )
     = ( H1(s)H2(s)H5(s) + H1(s)H6(s) ) / ( 1 − H1(s)H3(s) − H1(s)H2(s)H4(s) )


Algebraic approach in block diagram computations

A more algebraic approach starts with writing down the input-output equations of all subsystems, and proceeds by eliminating the internal signals, which are not relevant. We will show this approach by computing the overall transfer function for the system of Example 6.3.

Example 6.4 (Simplifying block scheme II)In this example we will consider the block diagram of Figure 6.8 and find the transferfunction from input r(t) to output y(t). We write down the equations:

X1(s) = R(s) + H4(s) X4(s)    (6.1)
X2(s) = X1(s) + H3(s) X3(s)    (6.2)
X3(s) = H1(s) X2(s)    (6.3)
X4(s) = H2(s) X3(s)    (6.4)
Y(s) = H6(s) X3(s) + H5(s) X4(s)    (6.5)

Elimination of X1 and X2 from (6.1)–(6.3) gives us

X3(s) = H1(s) R(s) + H1(s)H4(s) X4(s) + H1(s)H3(s) X3(s)    (6.6)

Substitution of (6.4) into (6.6) and (6.5) gives us:

X3(s) = H1(s) R(s) + H1(s)H2(s)H4(s) X3(s) + H1(s)H3(s) X3(s)    (6.7)
Y(s) = H6(s) X3(s) + H2(s)H5(s) X3(s)    (6.8)

Equation (6.7) can be rewritten as

( 1 − H1(s)H2(s)H4(s) − H1(s)H3(s) ) X3(s) = H1(s) R(s)    (6.9)

so

X3(s) = ( 1 − H1(s)H2(s)H4(s) − H1(s)H3(s) )⁻¹ H1(s) R(s)    (6.10)

Finally, substitution of (6.10) into (6.8) leads to the final result

Y(s) = ( H6(s) + H2(s)H5(s) ) ( 1 − H1(s)H2(s)H4(s) − H1(s)H3(s) )⁻¹ H1(s) R(s)    (6.11)

and so the overall transfer function is

H(s) = ( H1(s)H6(s) + H1(s)H2(s)H5(s) ) / ( 1 − H1(s)H2(s)H4(s) − H1(s)H3(s) )
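The same elimination can be delegated to a computer algebra system; a minimal sketch (Python with SymPy) treating H1, ..., H6 as symbols is:

import sympy as sp

H1, H2, H3, H4, H5, H6, R = sp.symbols('H1 H2 H3 H4 H5 H6 R')
X1, X2, X3, X4, Y = sp.symbols('X1 X2 X3 X4 Y')

# Subsystem equations (6.1)-(6.5)
eqs = [sp.Eq(X1, R + H4*X4),
       sp.Eq(X2, X1 + H3*X3),
       sp.Eq(X3, H1*X2),
       sp.Eq(X4, H2*X3),
       sp.Eq(Y, H6*X3 + H5*X4)]

sol = sp.solve(eqs, [X1, X2, X3, X4, Y], dict=True)[0]
H = sp.simplify(sol[Y] / R)
print(H)   # equal to (H1*H6 + H1*H2*H5)/(1 - H1*H2*H4 - H1*H3), up to rearrangement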



Figure 6.10: Open loop configuration

6.2 Control configurations

Open-loop control system

In an open-loop control configuration the system is controlled in a certain pre-described manner, regardless of the actual state of the system. The open-loop configuration is given in Figure 6.10.
In the field of control systems, the reference signal can often be seen as a trajectory which the output signal should track. To achieve that, the controller initiates a precomputed action which aims at making the process output equal to the desired reference signal.

Closed-loop control system

In a closed-loop control configuration the controller produces a control signal based onthe difference between the desired and the measured system output. The closed-loopconfiguration is given in Figure 6.11.


Figure 6.11: Closed-loop configuration

In a closed-loop configuration, the measured output is compared with the desired reference signal to provide an error signal that then initiates corrective action until the feedback signal duplicates the reference signal. In this chapter we assume the sensor is ideal and errors in the measurement can be neglected. In that case the measured output is equal to the real output.

Open-loop systems are simpler than closed-loop systems and perform satisfactorily in applications involving highly repeatable processes, having well established characteristics, and that are not exposed to disturbances. In the case of model uncertainty or disturbances acting on the system, closed-loop methods are preferred.


The controller in open-loop configuration is often referred to as a feedforward controller .The controller in closed-loop configuration is often referred to as a feedback controller .

Analysis of open-loop and closed-loop control systems

In the linear time-invariant case the controller and the process will be linear time-invariantsystems and can be represented by transfer functions, for the plant H(s) and for thecontroller D(s). For the open loop configuration we obtain Figure 6.12. In this setup wewant the output y(t) to track the reference signal r(t) with Laplace transform R(s). In anyphysical system, there is always some amount of external disturbance that influences theprocess behavior. This disturbance signal is denoted by w(t) with its Laplace transformW (s).


Figure 6.12: Open-loop control configuration with feedforward controller Dol

Let the output of the process be given by yol(t) with Laplace transform Yol(s), then wefind

Yol(s) = H(s)Dol(s)R(s) +H(s)W (s) = Tol(s)R(s) +H(s)W (s)

where Tol(s) = H(s)Dol(s). The reference tracking error e(t) with Laplace transform E(s)is defined as the difference between the reference signal and the process output and can becomputed as :

Eol(s) = R(s)− Yol(s)

= [1−H(s)Dol(s)]R(s)−H(s)W (s)

= [1− Tol(s)]R(s)−H(s)W (s)

For optimal reference tracking we would like to make the error as small as possible, so Tol(s) = H(s)Dol(s) ≈ 1. This can be achieved by setting Dol(s) = H⁻¹(s). Unfortunately, this choice is not always feasible because H⁻¹(s) may be unstable or not physically realizable. In this case an approximation has to be used, resulting in a non-zero reference tracking error. Another problem in the open-loop configuration is that disturbances cannot be rejected and will be visible in the output signal.

For the closed-loop configuration we obtain Figure 6.13. Again we assume the Laplacetransforms of the reference signal and disturbance signal to be given by R(s) and W (s),



Figure 6.13: Closed-loop control configuration with feedback controller D

respectively. In this configuration we also add a measurement error v(t), with Laplace transform V(s), to the measured output of the system.
Let the output of the process be given by Y(s), then we find

Y(s) = H(s)D(s)/(1 + H(s)D(s)) R(s) + H(s)/(1 + H(s)D(s)) W(s) − H(s)D(s)/(1 + H(s)D(s)) V(s)

The controller output signal U(s) is then given by

U(s) = D(s)/(1 + H(s)D(s)) R(s) − H(s)D(s)/(1 + H(s)D(s)) W(s) − D(s)/(1 + H(s)D(s)) V(s)

The reference tracking error E(s) is given by

E(s) = R(s) − Y(s)
     = R(s) − [ H(s)D(s)/(1 + H(s)D(s)) R(s) + H(s)/(1 + H(s)D(s)) W(s) − H(s)D(s)/(1 + H(s)D(s)) V(s) ]
     = 1/(1 + H(s)D(s)) R(s) − H(s)/(1 + H(s)D(s)) W(s) + H(s)D(s)/(1 + H(s)D(s)) V(s)

Define the loop gain function

L(s) = H(s)D(s),

the sensitivity function:

S(s) = 1/(1 + L(s)) = 1/(1 + H(s)D(s))

and the complementary sensitivity function:

T(s) = 1 − S(s) = L(s)/(1 + L(s)) = H(s)D(s)/(1 + H(s)D(s))

then the reference tracking error becomes:

E(s) = S(s) R(s) − S(s)H(s) W(s) + T(s) V(s)    (6.12)


The loop gain is an engineering term used to quantify the gain of a system controlled by feedback loops. The loop gain function plays an important role in control engineering. A high loop gain may improve the performance of the closed-loop system, but it may also destabilize it.
The sensitivity function has an important role to play in judging the performance of the controller, because it describes how much of the reference signal cannot be tracked and will still be present in the tracking error. The smaller S, the smaller the reference tracking error.
The complementary sensitivity function is the counterpart of the sensitivity function (note that S(s) + T(s) = 1). The closer T is to 1, the better the reference tracking.

6.3 Steady state tracking and system type

For most closed-loop control systems the primary goal is to produce an output signal that follows the reference signal as closely as possible. It is therefore important to know how the output signal behaves for t → ∞. We define the steady state value of a signal x(t) as

xss = lim_{t→∞} x(t).

Final value theorem

A very important property in the analysis of linear time-invariant systems is the final value theorem.

The final value theorem gives us the relation between the value of a signal x(t) when t → ∞, and the Laplace transform X(s) of this signal. If lim_{t→∞} x(t) exists, then

lim_{t→∞} x(t) = lim_{s→0} s X(s)    (6.13)

Example 6.5 (final value theorem)
Given the Laplace transform X(s) = L{x(t)}:

X(s) = 3(s + 2) / ( s (s² + 2s + 10) )

Now we can derive x(∞) as follows

x(∞) = lim_{s→0} s X(s) = lim_{s→0} s · 3(s + 2)/( s (s² + 2s + 10) ) = 3(s + 2)/(s² + 2s + 10) |_{s=0} = 6/10 = 0.6

Consider a system with input u(t), output y(t) and transfer function H(s). Often one is interested in the value y(∞) for a step input u(t) = us(t). From Table 2.1 we find that U(s) = L{us(t)} = 1/s. The output Y(s) is now given by Y(s) = H(s)U(s) = H(s)·(1/s). With the final value theorem we derive:

y(∞) = lim_{s→0} s Y(s) = lim_{s→0} s H(s) (1/s) = H(0)


This means that H(0) is the value that remains if we put a constant (unit) signal on the input. The value H(0) is therefore often referred to as the DC gain of the system.

Example 6.6 (DC gain of the system)
Consider the system

H(s) = 3(s + 2) / (s² + 2s + 10)

The DC gain of this system is given by

DC gain = H(0) = 3(s + 2)/(s² + 2s + 10) |_{s=0} = 6/10 = 0.6
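Both the final value theorem and the DC gain are easy to check symbolically; a minimal sketch (Python with SymPy) for the system of Example 6.6 is:

import sympy as sp

s = sp.symbols('s')
H = 3*(s + 2)/(s**2 + 2*s + 10)

print(H.subs(s, 0))                      # DC gain: 3/5 = 0.6
print(sp.limit(s * H * (1/s), s, 0))     # final value for a unit step input: 3/5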

Steady state performance

The steady state performance of a control system is judged by the steady state difference between the reference and output signals. We consider the stable closed-loop configuration of Figure 6.14 with loop gain function L(s) = H(s)D(s).


Figure 6.14: Closed-loop control configuration

Steady state error for a reference signal r(t)

Let us assume that r(t) is given for t ≥ 0. Equation (6.12) tells us that for w(t) = 0 and v(t) = 0 we obtain:

E(s) = S(s) R(s)

where E(s) is the Laplace transform of e(t), R(s) is the Laplace transform of r(t), and S(s) = 1/(1 + H(s)D(s)) is the sensitivity function. The final value theorem tells us that we can use (6.13) to compute the value of e(t) for t → ∞:

ess = lim_{s→0} s S(s) R(s)

We will study the steady state error ess for different reference signals r(t).


Steady state error for a step reference signal

For r(t) = us(t) we find the Laplace transform R(s) = 1/s from Table 2.1 and so

ess = lim_{t→∞} e(t) = lim_{s→0} s S(s) R(s) = lim_{s→0} s S(s) (1/s) = S(0)

This means that the steady state error for a step reference signal is equal to the sensitivity function evaluated at s = 0.

Steady state error for a ramp and a parabolic reference signal

For r(t) = ur(t) = t us(t) we find the Laplace transform R(s) = 1/s² from Table 2.1. The steady state error now becomes

ess = lim_{s→0} s S(s) R(s) = lim_{s→0} s S(s) (1/s²) = lim_{s→0} S(s)/s

Similarly, for a parabolic signal r(t) = up(t) = (t²/2) us(t) we find the Laplace transform R(s) = 1/s³ from Table 2.1. The steady state error for a parabolic reference signal is

ess = lim_{s→0} s S(s) R(s) = lim_{s→0} s S(s) (1/s³) = lim_{s→0} S(s)/s²

Steady state error for a higher-order polynomial signal

Let the reference be given by the higher-order polynomial signal

r(t) = t^k/k!   for t ≥ 0, k ∈ Z₊

The Laplace transform is given by

R(s) = 1/s^{k+1}

The steady state error can be computed as:

ess = lim_{s→0} s S(s) (1/s^{k+1})

Note that the behavior of the function S(s)/s^k for s → 0 is important. To describe this behavior we introduce the notion of system type.

Definition 6.3 Consider a system in the closed-loop configuration as in Figure 6.14 with

S(s) = 1/(1 + L(s)) = 1/(1 + H(s)D(s)).

Assume that for some nonnegative integer value n the sensitivity function S can be written as

S(s) = s^n S0(s)

such that S0(0) is neither zero nor infinite, i.e. 0 < |S0(0)| < ∞. Then the system type is equal to n.


For a closed-loop configuration of type n and a reference signal r(t) = (t^k/k!) us(t), the steady state error is computed as

ess = S(s)/s^k |_{s=0} = (s^n/s^k) S0(s) |_{s=0}

and so

ess = 0 if n > k ,   ess = S0(0) if n = k ,   ess = ∞ if n < k.

If the system type is 0, a step input signal results in a constant tracking error. If the system type is 1, then for a ramp input signal the steady state tracking error is constant. If the system type is 2, a parabolic input signal results in a constant tracking error. Summarizing, if the system type is n then an input signal r(t) = (t^n/n!) us(t) results in a constant steady state tracking error.

The relation between system type and loop gain function L(s)

Consider a system in the closed-loop configuration as in Figure 6.14 with

S(s) = 1/(1 + L(s)) = 1/(1 + H(s)D(s)).

Assume that for some nonnegative integer value n the loop gain function can be written as

L(s) = L0(s)/s^n

with L0(0) = Kn finite but not zero. Then

S(s) = 1/(1 + L(s)) = 1/(1 + L0(s)/s^n) = s^n/(s^n + L0(s))

Then for an input signal

r(t) = t^k/k!   for t ≥ 0

we find a steady state tracking error

ess = lim_{s→0} ( s^n/(s^n + L0(s)) ) (1/s^k) = lim_{s→0} ( s^n/(s^n + Kn) ) (1/s^k)

Define the following variables:

Kp = lim_{s→0} L(s)
Kv = lim_{s→0} s L(s)
Ka = lim_{s→0} s² L(s)


If a system is of type 0, then for a step input we find a steady state tracking error ess = 1/(1 + Kp), and for a ramp and a parabola the error will diverge to infinity. For a type 1 system the steady state tracking error for a step is zero, for a ramp we find ess = 1/Kv, and for a parabola the error is infinite. Finally, for a type 2 system, the steady state tracking error for a step and for a ramp is zero, and for a parabola we find ess = 1/Ka. This is summarized in Table 6.1.

            step          ramp        parabola
type 0      1/(1 + Kp)    ∞           ∞
type 1      0             1/Kv        ∞
type 2      0             0           1/Ka

Table 6.1: System type and steady state errors for various reference signals

Example 6.7 Consider the feedback configuration of Figure 6.15 for which we study thesteady state error for different types of loop gain function L(s).


Figure 6.15: Closed-loop control configuration

Define the three systems:

System 1:  L(s) = 10/((s + 1)(s + 2))
System 2:  L(s) = 4/(s (s + 2))
System 3:  L(s) = (4s + 1)/(s² (s + 4))

System 1 is of type 0 because we compute L0(s) = L(s) = 10/((s+1)(s+2)) and Kp = L0(0) = 5.
System 2 is of type 1 because we compute L0(s) = s L(s) = 4/(s+2) and Kv = L0(0) = 2.
System 3 is of type 2 because we compute L0(s) = s² L(s) = (4s+1)/(s+4) and Ka = L0(0) = 0.25.


Figure 6.16: Step responses for different system types

First we plot the output y(t) for a step reference signal. As can be seen in Figure 6.16, systems 2 and 3 can follow the signal with zero steady state error. System 1, of type 0, gives a finite steady state tracking error ess = 1/(1 + Kp) = 1/6.

Figure 6.17 gives the output y(t) for a ramp reference signal. We see that the response of system 1 diverges from the reference, the response of system 2 follows the reference with a finite error, and the response of system 3 converges to the reference.

Finally, Figure 6.18 shows the output y(t) for a parabolic reference signal. None of the responses converge to the parabolic signal, but system 3, with system type 2, can follow the parabola with a finite error. The responses of systems 1 and 2 diverge from the parabola.

System type w.r.t. disturbance inputs

In addition to the steady state reference tracking error, another criterion of steady stateperformance is the sensitivity to disturbances acting on the closed-loop system as in Figure6.19.

With W (s) the Laplace transform of w(s) and r(t) = 0, v(t) = 0, then we find

E(s) = −S(s)H(s)W (s) =−H(s)

1 +H(s)D(s)W (s) = Tw(s)W (s)

where Tw(s) = −H(s)/(1+H(s)D(s)) is the transfer function of the system with input wand output e. For a step function w(s) = us(t) we find

ess = limt→∞

e(t) = lims→0

sE(s) = lims→0

s Tw(s)W (s) = lims→0

s Tw(s) 1/s = Tw(0)

Page 131: System Control Notes

6.3. STEADY STATE TRACKING AND SYSTEM TYPE 131

System 1

System 2System 3

t →Figure 6.17: Ramp responses for different system types

We can extend the analysis of the steady state error to the class of disturbance signals

w(t) =tk

k!us(t)

with Laplace transform

W (s) =1

sk+1

The steady state error can be computed as:

ess = lims→0

s Tw(s)W (s) = lims→0

s Tw(s)1

sk+1= lim

s→0Tw(s)

1

sk

Assume that for some positive integer value n the disturbance function Tw can be writtenas

Tw(s) = sn Tw,0(s)

such that Tw,0(0) is neither zero nor infinite, i.e. 0 < |Tw,0(0)| < ∞, then the disturbancesystem type is equal to n. Now the tracking error can be written as

ess =1

skTw(s)

∣∣∣∣s=0

=sn

skTw,0(s)

∣∣∣∣s=0

and so

ess =

0 if n > kTw,0(0) if n = k∞ if n < k

If fact we have the same property as for the reference tracking error. For the disturbancesystem type n we find that a disturbance signal w(t) = tn/n!us(t) results in a constantsteady state tracking error.

Page 132: System Control Notes

132 CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

System 2System 3System 1

t →Figure 6.18: Parabola responses for different system types

D He+ −

e- - - -? -6

r(t) = 0 e(t)

w(t)

u(t) y(t)

Figure 6.19: Closed-loop configuration with reference r(t) and disturbance w(t)

Example 6.8 Consider the system

H(s) =1

s(s+ 1)

in the closed-loop configuration of Figure 6.19 with controller

D(s) = 2 .

We compute

Tw(s) =−H(s)

1 +H(s)D(s)=

1

s2 + s+ 2

For n = 0 we find Tw,0(s) = Tw(s) with 0 < Tw(0) = |1/2| < ∞, and so the disturbancesystem type is equal to 0. If we choose a different controller

D(s) =0.1

s+ 3

Page 133: System Control Notes

6.4. PID CONTROL 133

we have

Tw(s) =−H(s)

1 +H(s)D(s)=

10 s

10 s3 + 10 s2 + 30 s+ 1

For n = 1 we find Tw(s) = s1 Tw,0(s) with

Tw,0(s) =10

10 s3 + 10 s2 + 30 s+ 1

and 0 < Tw,0(0) = |10| < ∞, and so the disturbance system type is equal to 1.

6.4 PID control

In most industrial applications, PID controllers are used to enhance the system performanceand to meet the desired specifications. The terms P, I, and D stand for P - Proportional, I- Integral, and D - Derivative. These terms describe three basic mathematical operationsapplied to the error signal e(t) = r(t) − y(t). The proportional value determines thereaction to the current error, the integral value determines the reaction based on theintegral of recent errors, and the derivative value determines the reaction based on therate at which the error has been changing. We will discuss the PID controllers as theyoperate in the closed-loop configuration of Figure 6.19. We start with the P controllerwith only a proportional action. Then we discuss the PI controller (proportional + integralaction) and PD controller (proportional + derivative action), and finally the PID controller(proportional + integral + derivative action).

P control

The controller in the configuration of Figure 6.20 is called the P controller (proportionalcontroller). Typically the proportional action is the main drive in a control loop, as itreduces a large part of the overall error. For a P controller we have D(s) = kp and so thecontrol signal u(t) is proportional to the error signal e(t):

u(t) = kp e(t)

Increasing the value kp may improve the steady state tracking error and the response speed.Unfortunately, it may also lead to excessive values of the control signal u(t), which cannotbe realized in practice. Furthermore high values of kp may lead to instability.

PI control

Another kind of controller is the PI controller (proportional-integral control), in whicha part of the control signal u(t) is proportional to the error signal e(t) and another part isproportional to the integral of the error signal e(t):

u(t) = kp

(

e(t) +1

Ti

∫ t

0

e(τ) dτ)

Page 134: System Control Notes

134 CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

-e(t) u(t)kp

Figure 6.20: Proportional control configuration

where Ti is called the integral or reset time and is the time needed to reach kp with a unitinput. The integral action in the PI controller reduces the steady state error in a system.Integration of even a small error over time produces a drive signal large enough to move thesystem toward a zero error and this therefore will improve the steady state performance.Unfortunately integral action may also give undesired oscillatory behavior.

-

-

-?

6ee(t) u(t)

kp

kis

Figure 6.21: Proportional Integral control configuration

If we define ki = kp/Ti we obtain

u(t) = kp

(

e(t) +1

Ti

∫ t

0

e(τ) dτ)

= kp e(t) + ki

∫ t

t0

e(τ) dτ

The transfer function of the PI controller is

D(s) = kp

(

1 +1

Tis

)

= kp +kis

PD control

Instead of an integral action we can also introduce a derivative action, in which the pro-portional part of the control action is added to a multiple of the time derivative of theerror signal e(t):

u(t) = kp

(

e(t) + Tdd e(t)

d t

)

(6.14)

Page 135: System Control Notes

6.4. PID CONTROL 135

where Td is called the derivative time. The derivative action is used to increase dampingand improve the system’s stability. It counteracts the kp (and ki in case of an integralaction) when the output changes quickly. This helps reduce overshoot and avoid unwantedoscillation of a signal. It has no effect on final error. Note that the derivative action alonenever occurs, because if e(t) is constant and different from zero, the controller does notreact.

-

-

-?

6ee(t) u(t)

kp

kds

Figure 6.22: Proportional Derivative control configuration

Consider (6.14). If we define kd = kp Td we obtain

u(t) = kp

(

e(t) + Tdd e(t)

d t

)

= kp e(t) + kdd e(t)

d t

The transfer function of the PD controller is given by

D(s) = kp (1 + Td s)

= kp + kd s

Unfortunately it is impossible to realize a derivative action in practice. The implementationis usually done as

τd u

d t+ u(t) = kp

(

e(t) + Tdd e(t)

d t

)

where τ is very small.

PID control

The most general case is the PID controller in which we combine the proportional actionwith an integral and derivative action:

u(t) = kp

(

e(t) +1

Ti

∫ t

0

e(τ) dτ + Tdd e(t)

d t

)

(6.15)

Page 136: System Control Notes

136 CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

-

-

-

- -?

6ee(t) u(t)

kp

kis

kds

Figure 6.23: Proportional Integral Derivative control configuration

Finding good values of kp, Ti, and Td is called tuning. In certain cases by proper choiceswe can modify the dynamics as required (see Example 6.10. For further reading, see [3]).With ki = kp/Ti and kd = kp Td we obtain

u(t) = kp e(t) + ki

∫ t

t0

e(τ) dτ + kdd e(t)

d t

The transfer function of the PID controller is

D(s) = kp

(

1 +1

s Ti

+ s Td

)

= kp +kis+ s kd

Example 6.9 Consider the closed-loop system of Figure 6.24 with plant

H(s) =1

(s+ 1)(10 s+ 1)

and a proportional controller

D(s) = kp

The transfer function from input r(t) to y(t) is given by

T (s) =kp

(s+ 1)(10 s+ 1) + kp

and the transfer function of disturbance w(t) to y(t) is given by

Tw(s) =1

(s+ 1)(10 s+ 1) + kp

Page 137: System Control Notes

6.4. PID CONTROL 137

D(s) H(s)e+ −

e- - - -? -6

r e

w

u y

Figure 6.24: Control configuration

We can analyze the response of y(t) for different values of kp. The results are given inFigure 6.25.a for kp = 5, 10, 25, 50. The left plot is for a step reference (r(t) = us(t),w(t) = 0), and the right plot is for a step disturbance (w(t) = us(t), r(t) = 0). In the plotswe can see that increasing kp leads to a reduction of the steady state error with respect toa step reference signal and a smaller disturbance error. However, larger values of kp alsointroduce oscillatory behavior, which is undesired.Next we choose a Proportional-Integral controller

D(s) = kp(1 +1

Tis)

The transfer function from input r(t) to y(t) is now given by

T (s) =kp(s+ 1/Ti)

s(s+ 1)(10 s+ 1) + kp(s+ 1/Ti)

and the transfer function of disturbance w(t) to y(t) is given by

Tw(s) =s

s(s+ 1)(10 s+ 1) + kp(s+ 1/Ti)

We can analyze the response of y(t) for different values of Ti. The results are given in Figure6.25.b for kp = 25 and Ti = 5, 10, 50. The left plot is for a step reference (r(t) = us(t),w(t) = 0), and the right plot is for a step disturbance (w(t) = us(t), r(t) = 0). In the plotswe can see that decreasing Ti leads to a faster decay of the steady state error with respectto a step reference signal and a step disturbance signal. However, smaller values of Ti alsogive an increase of the overshoot.Next we choose a Proportional-Integral-Derivative controller

D(s) = kp(1 +1

Tis+ Tds)

The transfer function from input r(t) to y(t) is now given by

T (s) =kp(Tds

2 + s+ 1/Ti)

s(s+ 1)(10 s+ 1) + kp(Tds2 + s + 1/Ti)

Page 138: System Control Notes

138 CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

kp = 50kp = 25kp = 10

kp = 5

t →5 10 15 20

y→

1.5

1.0

0.5

Reference tracking

(a)

t →5 10 15 20

2.0

1.5

1.0

0.5

Disturbance rejection

kp = 5

kp = 10

kp = 25

kp = 50

y→

Ti = 5Ti = 10

Ti = 50

t →5 10 15 20

1.5

1.0

0.5

Reference tracking

(b)

y→

Ti = 5

Ti = 10

Ti = 50

t →5 10 15 20

0.05

0.04

0.03

0.02

0.01

Disturbance rejection

y→

Td = 0.2Td = 1

Td = 5

t →5 10 15 20

1.5

1.0

0.5

Reference tracking

(c)

y→

Td = 5

Td = 1

Td = 0.2

t →5 10 15 20

0.04

0.03

0.02

0.01

Disturbance rejection

y→

Figure 6.25: Responses for (a) P control, (b) PI control, and (c) PID control

Page 139: System Control Notes

6.4. PID CONTROL 139

and the transfer function of disturbance w(t) to y(t) is given by

Tw(s) =s

s(s+ 1)(10 s+ 1) + kp(Tds2 + s + 1/Ti)

We can analyze the response of y(t) for different values of Td. The results are given inFigure 6.25.c for kp = 25, Ti = 10 and Td = 0.2, 1, 5. The left plot is for a step reference(r(t) = us(t), w(t) = 0), and the right plot is for a step disturbance (w(t) = us(t), r(t) = 0).In the plots we can see that increasing Td introduces damping of the oscillatory behavior.However, too much damping will result in a slower convergence towards the steady state.

Example 6.10 Consider the system

H(s) =1

(s+ p1)(s+ p2)

and the PID controller of (6.15). The closed-loop transfer function becomes

1

1 +H(s)D(s)=

Tis3 + Ti(p1 + p2)s

2 + Tip1p2s

Tis3 + (Tip1 + Tip2 + kpTiTd)s2 + (Tip1p2 + kpTi)s+ kp

Note that the closed-loop system has 3 poles determined by 3 parameters of the controller.We can place poles anywhere we like.

Note that we can use the PID-controller to place the poles on desired locations. Using theproperties of second-order systems we can tune the P, I, or D action in such a way that thesystem properties such as settling time, rise-time, overshoot, and peak-time satisfy certaindesign criteria. We will illustrate this with some examples.

Example 6.11 Given a system

H(s) =1

(5s+ 1)(2s+ 1)

and a PD controller

D(s) = kp(1 + Td s)

in a closed-loop configuration of Figure 6.14. The proportional gain kp has to be tunedsuch that the closed-loop system has an undamped natural frequency ωn = 1 rad/s. Thederivative time constant Td has to be such that the closed-loop system has relative dampingof ζ = 1

2. So the tasks are:

1. Compute kp.

2. Compute Td.

Page 140: System Control Notes

140 CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

To answer the question we first compute the closed-loop system.

L(s) = D(s)H(s) =kp(1 + Tds)

(5s+ 1)(2s+ 1)

The characteristic equation is now given by

10s2 + 7 s+ 1 + kp(Td s+ 1) = 0

or

s2 + (0.7 + 0.1kpTd) s+ (0.1 + 0.1kp) = s2 + 2ζωn s + ω2n = 0

1. We have

ω2n =

1 + kp10

= 1 =⇒ kp = 9

2. For the derivative time constant we have

(0.7 + 0.9Td) = 2ζ ωn = 2 · 0.5 · 1 = 1 =⇒ Td =1

3

Finally we can study the steady state error for a system in closed-loop with a PID controller.

Example 6.12 Given a system

H(s) =5

(10s+ 1)(s+ 1)

and a PD controller

D(s) = kp(1 +1

10 s)

in the closed-loop configuration of Figure 6.14. The reference input is a unit ramp signalr(t) = ur(t). For which values of controller gain kp is the steady state error ess < 10% ?To answer the question we use the fact that for a type 1 control system the steady stateerror for a ramp reference signal is given by

ess =1

Kvwith Kv = lim

s→0sD(s)H(s).

There holds:

Kv = lims→0

sD(s)H(s) = lims→0

s

(

kp(10 s+ 1)

10 s

5

(10 s+ 1)(s+ 1)

)

=kp2

.

So Kv > 10 and it follows kp > 20.

Page 141: System Control Notes

6.5. EXERCISES 141

H1(s) H2(s)

H3(s)

H4(s)

e+ −

e+ −

- - - - -

6

6

R(s) Y (s)

Figure 6.26: Block scheme

6.5 Exercises

Exercise 1. Block diagram

Given the block diagram of Figure 6.26:

1. Determine the transfer function H(s) of the system with input R(s) and output Y (s).

Exercise 2. Rise time and settling time

Given a second-ordersystem

H(s) =1

(s+ 1)(s+ 5)

and a proportional controller D(s) = kp in the closed-loop configuration of Figure 6.14.The reference input is a step function (r(t) = us(t)).

1. For which kp do we obtain a stable loop?

2. For which kp do we obtain a rise time smaller than 0.2 seconds?

3. For which kp do we obtain a settling time smaller than 2.3 seconds?

Exercise 3. Steady state error

Given a system

H(s) =3

(s+ 1)(s+ 5)

and a proportional controller D(s) = kp in a closed-loop configuration of Figure 6.14. Thereference input is a unit ramp signal r(t) = ur(t). For which values of controller gain kp isthe steady state error ess < 10% ?

Page 142: System Control Notes

142 CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

Page 143: System Control Notes

Appendix A: The inverse of a matrix

The inverse of an n× n matrix can be computed as follows

M−1 =1

detMadjM

where detM is the determinant of M and adjM is the adjoint matrix of M , which isdefined as

adjM =

det(M11) − det(M21) · · · (−1)n+1 det(Mn1)det(−M12) det(M22) · · · (−1)n+2 det(Mn2)...

.... . .

...det((−1)1+nM1n) (−1)2+n det(M2n) · · · det(Mnn)

where Mij is equal to the matrix M after removing the ith row and jth column.

Example:

M =

1 2 34 4 41 2 1

The entries of the adjoint matrix can be computed as:

M11 = det([

4 42 1

])

= −4, M21 = det([

2 32 1

])

= −4, M31 = det([

2 34 4

])

= 4,

M12 = det([

4 41 1

])

= 0, M22 = det([

1 31 1

])

= −2, M32 = det([

1 34 4

])

= −8,

M13 = det([

4 41 2

]

))

= 4, M23 = det([

1 21 2

])

= 0, M33 = det([

1 24 4

])

= −4,

and so the adjoint matrix is given by

adjM =

−4 4 −40 −2 84 0 −4

143

Page 144: System Control Notes

144 Appendices

With det(M) = 8 the inverse of M is now computed as

M−1 =1

detMadjM =

−0.5 0.5 −0.50 −0.25 1

0.5 0 −0.5

For a 2× 2 matrix this works out as

[m1 m2

m3 m4

]−1

=1

m1m4 −m2m3

[m4 −m2

−m3 m1

]

Page 145: System Control Notes

Appendix B: Laplace transforms

δ(t) 11

a(at− 1 + e−at)

a

s2(s+ a)

11

se−at − e−bt b− a

(s+ a)(s+ b)

t1

s2(1− at)e−at s

(s+ a)2

t22!

s31− e−at(1 + a t)

a2

s(s+ a)2

t33!

s4be−bt − ae−at (b− a)s

(s+ a)(s+ b)

tmm!

sm+1sin at

a

s2 + a2

e−at 1

s+ acos at

s

s2 + a2

t e−at 1

(s+ a)2e−at cos bt

s+ a

(s+ a)2 + b2

1

2!t2 e−at 1

(s+ a)3e−at sin bt

b

(s+ a)2 + b2

1

(m− 1)!tm−1 e−at 1

(s+ a)m1− e−at(cos bt + a

bsin bt)

a2 + b2

s[(s+ a)2 + b2]

1− e−at a

s(s+ a)

(See Franklin et al. [3]).

145

Page 146: System Control Notes

146 Appendices

Page 147: System Control Notes

Appendix C: Answer to exercises

Exercises chapter 1

Answer exercise 1. Signals

a). Define the function:

x(t) = 1/T us(t)− 1/T us(t− T )

The function 1/T us(t) is 0 for t < 0 and 1/T for t ≥ 0. The function 1/T us(t− T )is 0 for t < T and 1/T for t ≥ T . Taking the difference gives us:

x(t) = 0 for 0 < tx(t) = 1/T for 0 ≤ t ≤ Tx(t) = 0 for t > T

This is similar to the definition of the unit rectangular function δT (t).

Answer exercise 2. Plots of signals

a).

1

−1

1 2 3 4

b).

1

−1

1 2 3 4

147

Page 148: System Control Notes

148 Appendices

c).

1

−1

1 2 3 4

Answer exercise 3. Derivative of signals

a). d/dt(

δ1(t)− 2 δ2(t− 1))

= δ(t)− 2 δ(t− 1) + δ(t− 3), for t ∈ R.

b). d/dt(

ur(t)− ur(t− 1)− us(t− 4))

= us(t)− us(t− 1)− δ(t− 4), for t ∈ R.

c). d/dt(

up(t) · us(1− t) + us(t− 1))

= ur(t) · us(1− t) + 0.5δ(t− 1), for t ∈ R.

Answer exercise 4. System properties

memoryless linear time-invariant causal

System a) Yes No No Yes

System b) No Yes Yes Yes

System c) No No Yes Yes

System d) Yes No Yes Yes

Exercises chapter 2

Answer exercise 1. Modeling and transfer function of a linear mechanicalsystem

a). For the tractor:

m1x1(t) = −c1 x1(t) + k(x2(t)− x1(t)) + ft(t)

For the trailer:

m2x2(t) = −c2 x2(t) + k(x1(t)− x2(t))

Page 149: System Control Notes

149

b).(

S2m1 + c1 S + k)

x1(t) = k x2(t) + ft(t)(

S2m2 + c2 S + k)

x2(t) = k x1(t)

From the second equation we derive for x1(t):

x1(t) =s2m2 + c2 s+ k

kx2(t)

Substitution into the first equation gives us:

(s2m2 + c2 s+ k)(s2m2 + c2 s+ k)

kx2(t) = k x2(t) + ft(t)

Multiplication with k results in:

(s2m2 + c2 s+ k)(s2m1 + c1 s+ k)x2(t) = k2x2(t) + k ft(t)

and so(

(s2m2 + c2 s+ k)(s2m1 + c1 s+ k)− k2)

x2(t) = k ft(t)

or(

s4m1m2+ s3(c1m2+ c2 m1)+ s2(km2+km1+ c1c2)+ s k(c2+ c1))

x2(t) = k ft(t)

We can rewrite this as the input-output differential equation:

m1m2d4 x(t)

d t4+ (c1m2 + c2 m1)

d3 x(t)

d t3+ (km2 + km1 + c1c2)

d2 x(t)

d t2

+ k(c2 + c1)d x(t)

d t= k ft(t)

c). Choose state vector x(t):

x(t) =

x1(t)x2(t)x3(t)x4(t)

=

x1(t)x1(t)x2(t)x2(t)

then

˙x1(t) = − c1m1

x1(t) +km1

(x4(t)− x2(t)) +1m1

u(t)˙x2(t) = x1(t)˙x3(t) = − c2

m2x3(t) +

km2

(x2(t)− x4(t))˙x4(t) = x3(t)

Page 150: System Control Notes

150 Appendices

and so

˙x(t) =

− c1m1

− km1

0 km1

1 0 0 00 + k

m2− c2

m2− k

m2

0 0 1 0

x(t) +

1m

000

u(t)

y(t) =[0 0 0 1

]x(t) + 0 u(t)

Answer exercise 2. Modeling and transfer function of a linear electrical system

a). There is one capacitor and one inductor in the network. The system can therefore bedescribed by two differential equations. The equations are as follows:

Ld i3(t)

d t= v1(t)

Cd v1(t)

d t= i1(t)− i3(t)

b). Using the differential operator we obtain

S Li3(t) = v1(t)

S Cv1(t) = i1(t)− i3(t)

Elimination of i3(t):

i3(t) = i1(t)− S C v)1(t)

Substitution into the first equation gives the following differential equation:

SLi3(t) = v1(t)

SLi1(t)− S2LCv1(t) = v1(t)

and so

SLi1(t) = S2LCv1(t) + v1(t)

or

SLi1(t) = (S2LC + 1)v1(t)

We can rewrite this as the input-output differential equation:

Ld i1(t)

d t= LC

d2 v1(t)

d t2+ v1(t)

Page 151: System Control Notes

151

c). Choose

x(t) =

[i3(t)v1(t)

]

, u(t) = i1 , y(t) = v1

then

d i3(t)

d t=

1

Lv1(t)

d v1(t)

d t=

1

C(i1(t)− i3(t))

or

x =

[0 1

L

− 1C

0

][i3(t)v1(t)

]

+

[01C

]

u(t)

y =[0 1

][

i3(t)v1(t)

]

+ 0 u(t)

Exercises chapter 3

Answer exercise 1. Driving car

m v = −fc(t) + fm(t) = −c v(t) + fm(t)

so

v + 3 v(t) = 0.5 us(t)

so f(t) = 0.5 us(t). From Equation 3.4 in the lecture notes we know that for a forcingfunction f(t) = us(t) we find:

ys(t) = 1/σ(1− e−σt) = 1/3(1− e−3t)

The actual forcing is 0.5us(t), and so

v(t) = 1/6(1− e−3t) for t ≥ 0

Answer exercise 2. RLC-circuit

The differential equation of the circuit is given by

d2 v(t)

d t2+

1

RC

d v(t)

d t+

1

LCv(t) =

1

C

d i(t)

d t

In other words

v +1

RCv +

1

LCv(t) = v + 2ζωnv + ω2

nv(t) = v + 2v + 4v(t)

with L = 1 we find 1/C = ω2n = 4, so C = 0.25, and 1/RC = 2 so R = 2.

Page 152: System Control Notes

152 Appendices

Answer exercise 3. Damped natural frequency

We compute

y(t) +K y(t) + 3 y(t) = y(t) + 2ζωny(t) + ω2n y(t)

This gives ωn =√3, and

1− ζ2 = ωd/ωn ≥ 0.5√3

This means that ζ ≤ 0.5, so K = 2ζ√3 ≤ 2 0.5

√3, dus:

K ≤√3

Answer exercise 4. Stability

System a) System b) System c) System d)

Stable Yes Yes No Yesλ = −3 λ = −0.5 λ1 = 1.5 +

5/4 λ1 = (−2 + j√3)/4

λ2 = 1.5−√

5/4 λ2 = (−2− j√3)/4

Answer exercise 5. Damping ratio, (un)damped natural frequency, decay factor

System a): ζ = 0.5 ωn = 3 ωd = 1.5√3 σ = 1.5

System b): ζ = 2 ωn = 0.5 (ωd = j 0.5√3) σ = 1

System c): ζ = 0.2 ωn = 1 ωd = 0.4√6 =

√0.96 σ = 0.2

System d): ζ = 1 ωn = 2 ωd = 0 σ = 2

Note that for question b) the system is overdamped (ζ > 1) and there is no overshoot.This means that ωd becomes complex and do not have a physical meaning.

Answer exercise 6. Response criteria

We use

ts = 4.6/σ , tp = π/ωd , tr = 1.8/ωn , Mp = exp(−σπ

ωd) = exp(− ζπ

1− ζ2)

Page 153: System Control Notes

153

System a): ts = 4.6/1.5 = 46/15 ≈ 3.0667 tr = 1.8/3 = 0.6

tp = π/ωd = 2π√3/9 ≈ 1.209 Mp = exp(− 0.5π√

0.75) ≈ 0.163

System b): ts = 4.6/1 = 4.6 tr = 1.8/0.5 = 3.6

(tp = 2jπ/√3 ≈ j 3.6276) (Mp = exp(− 2π

j√3) ≈ −0.8842− j 0.4671)

System c): ts = 4.6/0.2 = 23 tr = 1.8/1 = 1.8

tp =π

0.4√6= 5π

√6

12≈ 3.2064 Mp = exp(− 0.2π√

0.96) ≈ 0.5266

System d): ts = 4.6/2 = 2.3 tr = 1.8/2 = 0.9

tp =π0= ∞ Mp = exp(−π

0) = exp(−∞) = 0

Note that for question b) the system is overdamped (ζ > 1) and there is no overshoot.This means that tp and Mp become complex and do not have a physical meaning.

Exercises chapter 4

Answer exercise 1. Transfer functions

a)4

s2 + 6 s+ 5

b)2 s3 + 12 s2 + 24 s+ 16

s3 − 13 s+ 12

c)3 s3 + 6 s2 − 21 s+ 12

s4 + 6 s3 + 22 s2 + 30 s+ 13

Answer exercise 2. Poles, zeroes, stability

a):

poles: p1 = −1 en p2 = −5zeros: nonestable: YES

b):

poles: p1 = 1 en p2 = 3 en p3 = −4zeros: z1,2,3 = −2stable: NO (two poles have positive real part!!)

Page 154: System Control Notes

154 Appendices

c):

poles: p1,2 = −2± j 3 en p3,4 = −1zeros: z1 = −4, z2,3 = 1stable: YES

Answer exercise 3: Frequency response

H(s) =1

s2 + 6 s+ 5

Magnitude

M(jω) = |H(jω) |

=1

| − ω2 + j 6ω + 5|

=1

(5− ω2)2 + (6ω)2

=1√

25− 10ω2 + ω4 + 36ω2

=1√

ω4 + 26ω2 + 25

Phase

φ(jω) = ∠ (H(jω)

= ∠

(1

−ω2 + j 6ω + 5

)

= −∠

(1

−ω2 + j 6ω + 5

)

= − arctan

(6ω

5− ω2

)

Response:

M(j 3) =1√

34 + 26 32 + 25=

1√340

=1

2√85

≈ 0.0542

φ(j 3) = − arctan

(6 3

5− 32

)

= − arctan(−4.5) ≈ 1.3521rad

Output:

y(t) = M(j 3) 4 cos(3 t+ φ(j 3)) = 0.2168 cos(3 t+ 1.3521)

Page 155: System Control Notes

155

Answer exercise 4: Time response

H(s) =13

s2 + 6s+ 13

Homogeneous solution:Characteristic equation:

s2 + 6s+ 13 = (s+ 3)2 + 22 = 0

so λ1 = −3 + 2 j, λ2 = −3 − 2 j. Homogeneous solution:

yhom = C1 e−3t cos 2 t+ C2 e

−3t sin 2 t

Particular solution:

ypart = H(0) = 1

Final solution with derivatives:

y(t) = yhom + ypart = 1 + C1 e−3t cos 2t+ C2 e

−3t sin 2t

y(t) = (−3C1 + 2C2) e−3t cos 2t + (−2C1 − 3C2) e

−3t sin 2t

Initial conditions:

y(0) = 1 + C1 = −5 → C1 = −6

y(0) = −3C1 + 2C2 = 2 → C2 = −8

and so

y(t) = 1− 6 e−3t cos 2t− 8 e−3t sin 2t

Answer exercise 5: Time response

H(s) =40

s2 + 10 s+ 25

Homogeneous solution:Characteristic equation:

s2 + 10s+ 25 = (s+ 5)2 = 0

so λ1,2 = −5. Homogeneous solution:

yhom = C1 e−5t + C2 t e

−5t

Page 156: System Control Notes

156 Appendices

Particular solution:

ypart = H(−3)e−3t =40

32 − 10 3 + 25e−3t =

40

4e−3t = 10e−3t

Final solution with derivatives:

y(t) = yhom + ypart = 10e−3t + C1 e−5t + C2 t e

−5t

y(t) = −30e−3t − 5C1e−5t + C2 e

−5t − 5C2 t e−5t

Initial conditions:

y(0) = 10 + C1 = 55 → C1 = 45

y(0) = −30− 5C1 + C2 = −65 → C2 = 190

and so

y(t) = 10e−3t + 45 e−5t + 190 t e−5t

Answer exercise 6: Impulse response

y(t) =

∫ ∞

−∞h(t− τ)u(τ)dτ

With u(τ) = 0 outside the interval 0 ≤ τ < 1, we find:

y(t) =

∫ 1

0

h(t− τ) 1 dτ

For t < 0 we find that t− τ ≤ 0 for the interval 0 ≤ τ < 1, and so

for t < 0 : y(t) =

∫ 1

0

h(t− τ)1dτ =

∫ 1

0

0 dτ = 0

For 0 ≤ t ≤ 1 we find h(t− τ) = 1 for τ < t and h(t− τ) = 0 for τ ≥ t, so:

for 0 ≤ t ≤ 1 : y(t) =

∫ 1

0

h(t− τ)1dτ ==

∫ t

0

1 1 dτ +

∫ 1

t

0 1dτ = t

For t ≥ 1 we find h(t− τ) = 1 for all 0 ≤ τ < 1, and h(t− τ) = 1 for all 0 ≤ τ < 1 and so

for t ≥ 1 : y(t) =

∫ 1

0

h(t− τ)1dτ =

∫ 1

0

1 1 dτ = 1

This means that

y(t) =

0 for t < 0t for 0 ≤ t < 11 for t ≥ 1

or

y(t) = ur(t)− ur(t− 1)

Page 157: System Control Notes

157

Answer exercise 7: State systems

a. We find the eigenvalues λ1 = −3 and λ2 = −4 with the corresponding eigenvectors

v1 =

[13

]

en v2 =

[27

]

.

Both eigenvalues have a negative real part, and so the system is stable.

b. We find

V =

[1 23 7

]

, V −1 =

[7 −2−3 1

]

A′ = V −1AV =

[−3 00 −4

]

B′ = V −1B =

[3−1

]

C ′ = C V =[1 1

]

D′ = D = 0

c. The homogeneous response

x(t) = V eΛ t V −1x(0)

=

[1 23 7

][e−3t 00 e−4t

][5−2

]

=

[1 23 7

][5e−3t

−2e−4t

]

=

[5e−3t − 4e−4t

15e−3t − 14e−4t

]

Page 158: System Control Notes

158 Appendices

d. Forced response:

y(t) = C eA t x(0) +

∫ t

0

eA (t−τ)B u(τ) dτ

= C eA t 0 +

∫ t

0

C V eΛ (t−τ) V −1, B dτ

=

∫ t

0

C ′ eΛ (t−τ) B′ dτ

=

∫ t

0

[1 1

][e−3(t−τ) 0

0 e−4(t−τ)

][3−1

]

d τ

=

∫ t

0

(

3 e−3(t−τ) − e−4(t−τ))

= 3 e−3t

∫ t

0

e3τdτ − e−4t

∫ t

0

e4τdτ

= 3 e−3t 1

3

(

e3t − e3 0)

− e−4t 1

4

(

e4t − e4 0)

=(

1− e−3t)

− 0.25(

1− e−4t)

= 0.75− e−3t + 0.25e−4t

e. Impulse response:

y(t) = C ′ eA′ tB′ us(t) +D′ δ(t)

=[1 1

][e−3t 00 e−4t

][3−1

]

us(t) + 0δ(t)

=(

3 e−3t − e−4t)

us(t)

f. Transfer function:

H(s) = C ′ (sI − A′)−1B′ +D′

=[1 1

][s+ 3 00 s+ 4

]−1[3−1

]

=[1 1

][

1s+3

0

0 1s+4

][3−1

]

=3

s+ 3− 1

s+ 4

=3(s+ 4)

(s+ 3)(s+ 4)− (s+ 3)

(s+ 3)s+ 4

=2s+ 9

s2 + 7 s+ 12

Page 159: System Control Notes

159

Exercises chapter 5

Answer exercise 1. Pendulum system

a)

mℓθ = −fez + fe

fez = fz sin θ = mg sin θ

We find:

mℓθ(t) = −mg sin θ(t) + fe(t)

Choose x(t) =

[

θ(t)θ(t)

]

, y(t) = θ, u(t) = fe(t), then

x1(t) = −g

ℓsin x2(t) +

1

mℓu(t)

x2(t) = x1(t)

y(t) = x2(t)

b)With u0 = fe,0 = 0.5mg we find

0 = −g

ℓsin x2,0 +

g

ℓ0.5

0 = x1,0

and so x1,0 = 0 and sin x2,0 = 0.5 which gives x2,0 = y0 = π/6.c)

A =

[∂f1/∂x1 ∂f1/∂x2

∂f2/∂x1 ∂f2/∂x2

]

(x0,u0)

=

[0 −g

ℓcos x(t)

1 0

]

(x0,u0)

=

[0 − g

2ℓ

√3

1 0

]

B =

[∂f1/∂u∂f2/∂u

]

(x0,u0)

=

[1mℓ

0

]

C =[∂g/∂x1 ∂g/∂x2

]

(x0,u0)=

[0 1

]

D = ∂g/∂u|(x0,u0)= 0

Page 160: System Control Notes

160 Appendices

Answer exercise 2. Electrical circuit

a)Three relevant equations:

v31(t) = R i3(t)

i1(t) = i2(t) + i3(t)

Cv1(t) = i2(t)

We derive:

Cv1(t) = − 1

Rv31(t) + i1(t)

Choose x(t) = v1(t), y(t) = i2, u(t) = i1(t), then

x(t) = − 1

RCx3(t) +

1

Cu(t)

y(t) = C v1(t) = C x(t) = − 1

Rx3(t) + u(t)

b)With u0 = i1,0 = 4 we find

0 = − 1

RCx30 +

1

Cu0

and so x0 = (Ruo)1/3 = (8)1/3 = 2 en y0 = − 1

Rx3(t) + u(t) = 0

c)

A =∂f

∂x

∣∣∣∣(x0,u0)

= − 3

RCx2(t)

∣∣∣∣(x0,u0)

= − 3

RCx20 = −24

B =∂f

∂u

∣∣∣∣(x0,u0)

=1

C= 4

C =∂h

∂x

∣∣∣∣(x0,u0)

= − 3

Rx2(t)

∣∣∣∣(x0,u0)

= − 3

Rx20 − 24

D =∂h

∂u

∣∣∣∣(x0,u0)

= 1

Exercises chapter 6

Answer exercise 1. Block scheme

The transfer function from x2 to y is given by:Y (s)

X2(s)=

H2

1 +H2H3.

Page 161: System Control Notes

161

H1(s) H2(s)

H3(s)

H4(s)

e+ −

e+ −

- - - - -

6

6

R X1 X2 Y

In series with H1 this gives:Y (s)

X1(s)= H1

H2

1 +H2H3

=H1H2

1 +H2H3

.

Finally the transfer function becomes:

Y (s)

R(s)=

H1H2

1 +H2H3

1 + H4H1H2

1 +H2H3

=H1H2

1 +H2H3 +H1H2H4

Answer exercise 2. Rise time and settling time

The closed-loop transfer function is given by

kps2 + 6s+ 5 + kp

=kp

(s2 + 2ζωns + ω2n)

=kp

(s+ σ)2 + ω2d

=kp

(s+ 3)2 + kp − 4

a). We find

λ1,2 = −3 ±√

4− kp

To obtain stability we need 4− kp < 32, so kp > −5.

b). To find

tr =1.8

ωn

< 0.2

we have to make ωn > 9. We find ωn =√

5 + kp > 9. and so kp > 76.

c). To find

ts =4.6

σ< 2.3

we have to make σ > 2. However, we already found that σ = 3, which means thatts < 2.3 for all kp > 0.

Page 162: System Control Notes

162 Appendices

Answer exercise 3. Steady-state error

E(s) =(s+ 1)(s+ 5)

(s+ 1)(s+ 5) + 3kp

1

s2

ess = lims→0

sE(s)

= lims→0

((s+ 1)(s+ 5)

(s+ 1)(s+ 5) + 3kp

)(1

s

)

= lims→0

(5

5 + 3 kp

)(1

s

)

= ±∞⇓

There is no kp that makes the steady-state error ess < 10%.

Page 163: System Control Notes

Index

angle, 18angular acceleration, 18angular velocity, 18, 20autonomous system, 12

basic elements,, 17basic signals,, 17block diagrams, 113Bounded-Input-Bounded-Output (BIBO) sta-

bility, 97

capacitor, 19causality, 14closed-loop control, 122convolution, 76critically damped system, 52current, 19, 20

damped harmonic function, 11damped natural frequency, 54damper, 18damping ratio, 50, 54dynamical system, 12dynamical systems, 17

electrical system, 19electromechanical system, 20elementary signals, 8equilibrium point, 107

feedback control, 113feedback controller, 123feedforward controller, 123final value theorem, 125first-order system, 43fluid capacitor, 21fluid flow system, 21

fluid mass flow rate, 21fluid pressure, 21fluid resistor, 21force, 18, 20forcing function,, 43frequency response, 71

harmonic function, 10heat energy flow, 21heat flow system, 21homogeneous solution, 45, 51, 64, 84

first-order system, 45LTI system, 64second-order system, 51state system, 84

impulse response, 45, 53first-order system, 45second-order system, 53

impulse response model, 76impulse response of a state system, 93inductor, 19inertia, 18inhomogeneous solution of a state system, 86initial conditions, 63input, 12input-output differential equation, 32input-output system, 12

Laplace transform, 32linear time-invariant systems, 61linearity, 14linearization, 108loop gain function, 117, 124

mass, 18mechanical system, 17

163

Page 164: System Control Notes

164 INDEX

memoryless, 13modal transformation, 83model, 17

Newton’s law, 23Newton’s law for rotation, 25nonlinear dynamical systems, 103nonlinear state system, 106

open-loop control, 122output, 12overdamped system, 52overshoot, 54

P control, 133partial fraction expansion, 74particular solution, 45, 51, 63, 66

first-order system, 45LTI system, 66second-order system, 51

PD control, 134peak time, 54PI control, 133PID control, 135poles of a system, 63position, 18

rectangular function, 8relation between system descriptions, 88resistor, 19rise time, 54rotational damper, 18rotational electromechanical systems, 20rotational mechanical system, 18rotational spring, 18

second-order system, 47settling time, 54signal, 7singularity functions, 12spring, 18stability, 14, 46, 58

first-order system, 46second-order system, 58

stability: Bounded-Input-Bounded-Output (BIBO),97

stability: convolution system, 97stability: LTI input-output system, 96stability: LTI state system, 97state of a system, 36state system, 36, 79state transformation, 81steady state, 107steady state tracking, 125step response, 45, 50

first-order system, 45second-order system, 50

system, 12system type, 127

temperature, 21the homogeneous solution, 63thermal capacitor, 21thermal resistor, 21time response of an LTI system, 63time-invariance, 14torque, 18, 20transducer, 20transfer function of a state system, 88translational electromechanical system, 20translational mechanical system, 18

undamped natural frequency, 50, 54underdamped system, 52unit impulse function, 8unit parabolic function, 10unit ramp function, 10unit step function, 9

velocity, 20voltage, 19, 20

zeros of a system, 63

Page 165: System Control Notes

Bibliography

[1] K.J. Astrom and R.M. Murray. Feedback Systems; An Introduction for Scientists andEngineers. Princeton Univeristy Press, Princeton, New Jersey, USA, 2009.

[2] C.M. Close, D.K. Frederick, and J.C. Newell. Modeling and Analysis of DynamicSystems. John Wiley & Sons, New York, USA, 2002.

[3] G.F. Franklin, J.D. Powel, and A. Emami-Naeini. Feedback Control of Dynamic Sys-tems. Pearson/Prentice Hall, New Jersey, USA, 2006.

[4] T. Kailath. Linear Systems. Prentice Hall, New Jersey, USA, 1980.

[5] H. Kwakernaak and R. Sivan. Modern Signals and Systems. Prentice Hall, New Jersey,USA, 1991.

[6] A.V. Oppenheim and Alan S. Willsky. Signals and Systems. Prentice Hall, New Jersey,USA, 1983.

[7] D. Rowell and D.N. Wormley. System Dynamics, an Introduction. Prentice Hall, NewJersey, USA, 1997.

[8] V. Verdult and T.J.J. van den Boom. Analysis of Continuous-time and Discrete-timeDynamical System. Lecture Notes for the course ET2-039, faculty EWI, TU Delft,2002.

165