Control System Fundamentals
MATLAB and Simulink are trademarks of The MathWorks, Inc. and are used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This books use or discussion of MATLAB and Simulink software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular peda-gogical approach or particular use of the MATLAB and Simulink software.CRC PressTaylor & Francis Group6000 Broken Sound Parkway NW, Suite 300Boca Raton, FL 33487-2742 2011 by Taylor and Francis Group, LLCCRC Press is an imprint of Taylor & Francis Group, an Informa businessNo claim to original U.S. Government worksPrinted in the United States of America on acid-free paper10 9 8 7 6 5 4 3 2 1International Standard Book Number-13: 978-1-4200-7363-8 (Ebook-PDF)This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the valid-ity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or uti-lized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopy-ing, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.Visit the Taylor & Francis Web site athttp://www.taylorandfrancis.comand the CRC Press Web site athttp://www.crcpress.com ContentsPreface to the Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ixAcknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiEditorial Board . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiiiEditor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvContributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviiSECTION I Mathematical Foundations1 Ordinary Linear Differential and Difference Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-1B.P. Lathi2 The Fourier, Laplace, and z-Transforms . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1Edward W. Kamen3 Matrices and Linear Algebra. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-1Bradley W. Dickinson4 Complex Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1C.W. GraySECTION II Models for Dynamical Systems5 Standard Mathematical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1William S. Levine, James T. Gillis, Graham C. Goodwin, Juan C. AgeroJuan I. Yuz, Harry L. Trentelman, and Richard Hill6 Graphical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-1Dean K. Frederick, Charles M. Close, and Norman S. NiseSECTIONIII Analysis and DesignMethods for Continuous-TimeSystems7 Analysis Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-1Raymond T. Stefani and William A. Wolovich8 Stability Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1Robert H. Bishop, Richard C. Dorf, Charles E. Rohrs, Mohamed Mansour,and Raymond T. Stefaniviiviii Contents9 Design Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1Jiann-Shiou Yang, William S. Levine, Richard C. Dorf, Robert H. Bishop, John J. DAzzo,Constantine H. Houpis, Karl J. str om, Tore H agglund, Katsuhiko Ogata, Masako Kishida,Richard D. Braatz, Z. J. Palmor, Mario E. Salgado, and Graham C. GoodwinSECTION IV Digital Control10 Discrete-Time Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-1Michael Santina and Allen R. Stubberud11 Sampled-Data Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1A. Feuer and Graham C. Goodwin12 Discrete-Time Equivalents of Continuous-Time Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-1Michael Santina and Allen R. Stubberud13 Design Methods for Discrete-Time, Linear Time-Invariant Systems . . . . . . . . . . . . . . . 13-1Michael Santina and Allen R. Stubberud14 Quantization Effects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14-1Michael Santina, Allen R. Stubberud, and Peter Stubberud15 Sample-Rate Selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15-1Michael Santina and Allen R. Stubberud16 Real-Time Software for Implementation of Feedback Control . . . . . . . . . . . . . . . . . . . . . . 16-1David M. Auslander, John R. Ridgely, and Jason C. Jones17 Programmable Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-1Gustaf OlssonSECTIONV Analysis and DesignMethods for Nonlinear Systems18 Analysis Methods. . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18-1Derek P. Atherton19 Design Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19-1R.H. Middleton, Stefan F. Graebe, Anders Ahln, and Jeff S. ShammaIndex. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Index-1Preface to theSecond EditionAs you may know, the first edition of The Control Handbook was very well received. Many copies weresold and a gratifying number of people took the time to tell me that they found it useful. To the publisher,these are all reasons to do a second edition. To the editor of the first edition, these same facts are a modestdisincentive. The risk that a second edition will not be as good as the first one is real and worrisome. Ihave tried very hard to insure that the second edition is at least as good as the first one was. I hope youagree that I have succeeded.I have made two major changes in the second edition. The first is that all the Applications chaptersare new. It is simply a fact of life in engineering that once a problem is solved, people are no longer asinterested in it as they were when it was unsolved. I have tried to find especially inspiring and excitingapplications for this second edition.Secondly, it has become clear to me that organizing the Applications book by academic discipline isno longer sensible. Most control applications are interdisciplinary. For example, an automotive controlsystem that involves sensors to convert mechanical signals into electrical ones, actuators that convertelectrical signals into mechanical ones, several computers and a communication network to link sensorsand actuators to the computers does not belong solely to any specific academic area. You will notice thatthe applications are now organized broadly by application areas, such as automotive and aerospace.One aspect of this new organization has created a minor and, I think, amusing problem. Severalwonderful applications did not fit into my new taxonomy. I originally grouped them under the titleMiscellaneous. Several authors objected to the slightly pejorative nature of the term miscellaneous.I agreed with them and, after some thinking, consulting with literate friends and with some of thelibrary resources, I have renamed that section Special Applications. Regardless of the name, they areall interesting and important and I hope you will read those articles as well as the ones that did fit myorganizational scheme.There has also been considerable progress in the areas covered in the Advanced Methods book. This isreflected inthe roughly two dozenarticles inthis second editionthat are completely new. Some of these arein two new sections, Analysis and Design of Hybrid Systems and Networks and Networked Controls.There have even been a few changes in the Fundamentals. Primarily, there is greater emphasis onsampling and discretization. This is because most control systems are now implemented digitally.I have enjoyed editing this second edition and learned a great deal while I was doing it. I hope that youwill enjoy reading it and learn a great deal from doing so.William S. Levineixx Preface to the Second EditionMATLAB
® and Simulink®
are registered trademarks of The MathWorks, Inc. For productinformation, please contact:The MathWorks, Inc.3 Apple Hill DriveNatick, MA, 01760-2098 USATel: 508-647-7000Fax: 508-647-7001E-mail: [email protected]: www.mathworks.comAcknowledgmentsThe people who were most crucial to the second edition were the authors of the articles. It took a greatdeal of work to write each of these articles and I doubt that I will ever be able to repay the authors fortheir efforts. I do thank them very much.The members of the advisory/editorial board for the second edition were a very great help in choosingtopics and finding authors. I thank them all. Two of them were especially helpful. Davor Hrovat tookresponsibility for the automotive applications and Richard Braatz was crucial in selecting the applicationsto industrial process control.It is a great pleasure to be able to provide some recognition and to thank the people who helpedbring this second edition of The Control Handbook into being. Nora Konopka, publisher of engineeringand environmental sciences for Taylor & Francis/CRC Press, began encouraging me to create a secondedition quite some time ago. Although it was not easy, she finally convinced me. Jessica Vakili and KariBudyk, the project coordinators, were an enormous help in keeping track of potential authors as wellas those who had committed to write an article. Syed Mohamad Shajahan, senior project executive atTechset, very capably handled all phases of production, while Richard Tressider, project editor for Taylor& Francis/CRC Press, provided direction, oversight, and quality control. Without all of them and theirassistants, the second edition would probably never have appeared and, if it had, it would have been farinferior to what it is.Most importantly, I thank my wife Shirley Johannesen Levine for everything she has done for me overthe many years we have been married. It would not be possible to enumerate all the ways in which shehas contributed to each and everything I have done, not just editing this second edition.William S. LevinexiEditorial BoardFrank AllgwerInstitute for Systems Theory andAutomatic ControlUniversity of StuttgartStuttgart, GermanyTamer BasarDepartment of Electrical andComputer EngineeringUniversity of Illinois at UrbanaChampaignUrbana, IllinoisRichard BraatzDepartment of Chemical EngineeringMassachusetts Institute of TechnologyCambridge, MassachusettsChristos CassandrasDepartment of Manufacturing EngineeringBoston UniversityBoston, MassachusettsDavor HrovatResearch and Advanced EngineeringFord Motor CompanyDearborn, MichiganNaomi LeonardDepartment of Mechanical andAerospace EngineeringPrinceton UniversityPrinceton, New JerseyMasayoshi TomizukaDepartment of MechanicalEngineeringUniversity of California, BerkeleyBerkeley, CaliforniaMathukumalli VidyasagarDepartment of BioengineeringThe University of Texas at DallasRichardson, TexasxiiiEditorWilliamS. Levine received B.S., M.S., and Ph.D. degrees fromthe Massachusetts Institute of Technology.He then joined the faculty of the University of Maryland, College Park where he is currently a researchprofessor in the Department of Electrical and Computer Engineering. Throughout his career he hasspecialized in the design and analysis of control systems and related problems in estimation, filtering, andsystem modeling. 
Motivated by the desire to understand a collection of interesting controller designs,he has done a great deal of research on mammalian control of movement in collaboration with severalneurophysiologists.He is co-author of Using MATLABto Analyze andDesignControl Systems, March1992. SecondEdition,March 1995. He is the coeditor of The Handbook of Networked and Embedded Control Systems, publishedby Birkhauser in 2005. He is the editor of a series on control engineering for Birkhauser. He has beenpresident of the IEEE Control Systems Society and the American Control Council. He is presently thechairman of the SIAM special interest group in control theory and its applications.He is a fellow of the IEEE, a distinguished member of the IEEE Control Systems Society, and arecipient of the IEEE Third Millennium Medal. He and his collaborators received the Schroers Awardfor outstanding rotorcraft research in 1998. He and another group of collaborators received the awardfor outstanding paper in the IEEE Transactions on Automatic Control, entitled Discrete-Time PointProcesses in Urban Traffic Queue Estimation.xvContributorsJuan C. AgeroCentre for Complex Dynamic Systemsand ControlThe University of NewcastleCallaghan, New South Wales, AustraliaAnders AhlnDepartment of TechnologyUppsala UniversityUppsala, SwedenKarl J. strmDepartment of Automatic ControlLund Institute of TechnologyLund, SwedenDerek P. AthertonSchool of EngineeringThe University of SussexBrighton, United KingdomDavid M. AuslanderDepartment of Mechanical EngineeringUniversity of California, BerkeleyBerkeley, CaliforniaRobert H. BishopCollege of EngineeringThe University of Texas at AustinAustin, TexasRichard D. BraatzDepartment of Chemical EngineeringUniversity of Illinois at UrbanaChampaignUrbana, IllinoisCharles M. CloseDepartment of Electrical, Computer, andSystems EngineeringRensselaer Polytechnic InstituteTroy, New YorkJohn J. DAzzoDepartment of Electrical andComputer EngineeringAir Force Institute of TechnologyWright-Patterson Air Force Base, OhioBradley W. DickinsonDepartment of Electrical EngineeringPrinceton UniversityPrinceton, New JerseyRichard C. DorfCollege of EngineeringUniversity of California, DavisDavis, CaliforniaA. FeuerElectrical Engineering DepartmentTechnionIsrael Institute of TechnologyHaifa, IsraelDean K. FrederickDepartment of Electrical, Computer,and Systems EngineeringRensselaer Polytechnic InstituteTroy, New YorkJames T. GillisThe Aerospace CorporationLos Angeles, CaliforniaGraham C. GoodwinCentre for Complex Dynamic Systemsand ControlThe University of NewcastleCallaghan, New South Wales, AustraliaStefan F. GraebePROFACTOR GmbHSteyr, Austriaxviixviii ContributorsC. W. GrayThe Aerospace CorporationEl Segundo, CaliforniaTore HgglundDepartment of Automatic ControlLund Institute of TechnologyLund, SwedenRichard HillMechanical Engineering DepartmentUniversity of Detroit MercyDetroit, MichiganConstantine H. HoupisDepartment of Electrical andComputer EngineeringAir Force Institute of TechnologyWright-Patterson Air Force Base, OhioJason C. JonesSunPower CorporationRichmond, CaliforniaEdward W. KamenSchool of Electrical and ComputerEngineeringGeorgia Institute of TechnologyAtlanta, GeorgiaMasako KishidaDepartment of Chemical EngineeringUniversity of Illinois at UrbanaChampaignUrbana, IllinoisB. P. LathiDepartment of Electrical andElectronic EngineeringCalifornia State UniversitySacramento, CaliforniaWilliam S. 
LevineDepartment of Electrical EngineeringUniversity of MarylandCollege Park, MarylandMohamed MansourAutomatic Control LaboratorySwiss Federal Institute of TechnologyZurich, SwitzerlandR. H. MiddletonThe Hamilton InstituteNational University of Ireland, MaynoothMaynooth, IrelandNorman S. NiseElectrical and Computer Engineering DepartmentCalifornia State Polytechnic UniversityPomona, CaliforniaKatsuhiko OgataDepartment of Mechanical EngineeringUniversity of MinnesotaMinneapolis, MinnesotaGustaf OlssonDepartment of Industrial ElectricalEngineering and AutomationLund UniversityLund, SwedenZ. J. PalmorFaculty of Mechanical EngineeringTechnionIsrael Institute of TechnologyHaifa, IsraelJohn R. RidgelyDepartment of Mechanical EngineeringCalifornia Polytechnic State UniversitySan Luis Obispo, CaliforniaCharles E. RohrsRohrs ConsultingNewton, MassachusettsMario E. SalgadoDepartment of Electronic EngineeringFederico Santa Mara Technical UniversityValparaso, ChileMichael SantinaThe Boeing CompanySeal Beach, CaliforniaJeff S. ShammaDepartment of Aerospace Engineering andEngineering MechanicsThe University of Texas at AustinAustin, TexasContributors xixRaymond T. StefaniElectrical Engineering DepartmentCalifornia State UniversityLong Beach, CaliforniaAllen R. StubberudDepartment of Electrical Engineering andComputer ScienceUniversity of California, IrvineIrvine, CaliforniaPeter StubberudDepartment of Electrical andComputer EngineeringThe University of Nevada, Las VegasLas Vegas, NevadaHarry L. TrentelmanResearch Institute of Mathematics andComputer ScienceUniversity of GroningenGroningen, The NetherlandsWilliam A. WolovichSchool of EngineeringBrown UniversityProvidence, Rhode IslandJiann-Shiou YangDepartment of Electrical and ComputerEngineeringUniversity of MinnesotaDuluth, MinnesotaJuan I. YuzDepartment of Electronic EngineeringFederico Santa Mara Technical UniversityValparaso, ChileIMathematicalFoundationsI-1 iiiii i1Ordinary LinearDifferential andDifference Equations1.1 Differential Equations ........................................ 1-1Role of Auxiliary Conditions in Solution ofDifferential Equations Classical Solution Method of Convolution1.2 Difference Equations ........................................ 1-13Causality Condition Initial Conditions andIterative Solution Classical Solution A Comment on Auxiliary Conditions Method of ConvolutionReferences .................................................................... 1-22B.P. LathiCalifornia State University1.1 Differential EquationsA function containing variables and their derivatives is called a differential expression, and an equationinvolving differential expressions is called a differential equation. A differential equation is an ordinarydifferential equation if it contains only one independent variable; it is a partial differential equationif it contains more than one independent variable. We shall deal here only with ordinary differentialequations.In the mathematical texts, the independent variable is generally x, which can be anything such as time,distance, velocity, pressure, and so on. In most of the applications in control systems, the independentvariable is time. For this reason we shall use here independent variable t for time, although it can standfor any other variable as well.The following equation
$$\left(\frac{d^{2}y}{dt^{2}}\right)^{4} + 3\frac{dy}{dt} + 5y^{2}(t) = \sin t$$

is an ordinary differential equation of second order because the highest derivative is of second order. An nth-order differential equation is linear if it is of the form

$$a_{n}(t)\frac{d^{n}y}{dt^{n}} + a_{n-1}(t)\frac{d^{n-1}y}{dt^{n-1}} + \cdots + a_{1}(t)\frac{dy}{dt} + a_{0}(t)y(t) = r(t) \qquad (1.1)$$

where the coefficients $a_{i}(t)$ are not functions of y(t). If these coefficients ($a_{i}$) are constants, the equation is linear with constant coefficients. Many engineering (as well as nonengineering) systems can be modeled by these equations. Systems modeled by these equations are known as linear time-invariant (LTI) systems. In this chapter we shall deal exclusively with linear differential equations with constant coefficients. Certain other forms of differential equations are dealt with elsewhere in this volume.

1.1.1 Role of Auxiliary Conditions in Solution of Differential Equations

We now show that a differential equation does not, in general, have a unique solution unless some additional constraints (or conditions) on the solution are known. This fact should not come as a surprise. A function y(t) has a unique derivative dy/dt, but for a given derivative dy/dt there are infinitely many possible functions y(t). If we are given dy/dt, it is impossible to determine y(t) uniquely unless an additional piece of information about y(t) is given. For example, the solution of the differential equation

$$\frac{dy}{dt} = 2 \qquad (1.2)$$

obtained by integrating both sides of the equation is

$$y(t) = 2t + c \qquad (1.3)$$

for any value of c. Equation 1.2 specifies a function whose slope is 2 for all t. Any straight line with a slope of 2 satisfies this equation. Clearly the solution is not unique, but if we place an additional constraint on the solution y(t), then we specify a unique solution. For example, suppose we require that y(0) = 5; then out of all the possible solutions available, only one function has a slope of 2 and an intercept with the vertical axis at 5. By setting t = 0 in Equation 1.3 and substituting y(0) = 5 in the same equation, we obtain y(0) = 5 = c and

$$y(t) = 2t + 5$$

which is the unique solution satisfying both Equation 1.2 and the constraint y(0) = 5.

In conclusion, differentiation is an irreversible operation during which certain information is lost. To reverse this operation, one piece of information about y(t) must be provided to restore the original y(t). Using a similar argument, we can show that, given d²y/dt², we can determine y(t) uniquely only if two additional pieces of information (constraints) about y(t) are given. In general, to determine y(t) uniquely from its nth derivative, we need n additional pieces of information (constraints) about y(t). These constraints are also called auxiliary conditions. When these conditions are given at t = 0, they are called initial conditions.

We discuss here two systematic procedures for solving linear differential equations of the form in Equation 1.1. The first method is the classical method, which is relatively simple but restricted to a certain class of inputs. The second method (the convolution method) is general and is applicable to all types of inputs. A third method (Laplace transform) is discussed elsewhere in this volume. Both of the methods discussed here are classified as time-domain methods because with these methods we are able to solve the above equation directly, using t as the independent variable.
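The role played by the auxiliary condition in Section 1.1.1 is easy to check numerically. The sketch below is an illustration added here (not part of the original text) and assumes a standard NumPy/SciPy environment; it integrates Equation 1.2 from three different values of y(0), each of which selects a different member of the family y(t) = 2t + c:

```python
# Integrate dy/dt = 2 from several auxiliary conditions y(0).
# Each condition picks out exactly one line y(t) = 2t + c.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Right-hand side of dy/dt = 2 (independent of y and t)
    return [2.0]

t_eval = np.linspace(0.0, 3.0, 7)
for y0 in (0.0, 5.0, -1.0):            # three different auxiliary conditions
    sol = solve_ivp(rhs, (0.0, 3.0), [y0], t_eval=t_eval)
    # Every solution has slope 2; only y(0) = 5 gives y(t) = 2t + 5
    print(f"y(0) = {y0:5.1f} ->", np.round(sol.y[0], 3))
```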
The method of Laplace transform (alsoknown as the frequency-domain method), on the other hand, requires transformation of variable t into afrequency variable s.In engineering applications, the form of linear differential equation that occurs most commonly isgiven bydnydtn an1dn1ydtn1 a1dydt a0y(t) =bmdmfdtm bm1dm1fdtm1 b1dfdt b0f (t) (1.4a)where all the coefficients ai and bi are constants. Using operational notation D to represent d/dt, thisequation can be expressed as(Dnan1Dn1 a1Da0)y(t) =(bmDmbm1Dm1 b1Db0)f (t) (1.4b) iiiii iOrdinary Linear Differential and Difference Equations 1-3orQ(D)y(t) =P(D)f (t) (1.4c)where the polynomials Q(D) and P(D), respectively, areQ(D) =Dnan1Dn1 a1Da0P(D) =bmDmbm1Dm1 b1Db0Observe that this equation is of the form of Equation 1.1, where r(t) is in the form of a linear combinationof f (t) and its derivatives. In this equation, y(t) represents an output variable, and f (t) represents an inputvariable of an LTI system. Theoretically, the powers mand n in the above equations can take on any value.Practical noise considerations, however, require [1] mn.1.1.2 Classical SolutionWhen f (t) 0, Equation 1.4a is known as the homogeneous (or complementary) equation. We shall firstsolve the homogeneous equation. Let the solution of the homogeneous equation be yc(t), that is,Q(D)yc(t) =0or(Dnan1Dn1 a1Da0)yc(t) =0We first show that if yp(t) is the solution of Equation 1.4a, then yc(t) yp(t) is also its solution. Thisfollows from the fact thatQ(D)yc(t) =0If yp(t) is the solution of Equation 1.4a, thenQ(D)yp(t) =P(D)f (t)Addition of these two equations yieldsQ(D)[yc(t) yp(t)] =P(D)f (t)Thus, yc(t) yp(t) satisfies Equation 1.4a and therefore is the general solution of Equation 1.4a. We callyc(t) the complementary solution and yp(t) the particular solution. In system analysis parlance, thesecomponents are called the natural response and the forced response, respectively.1.1.2.1 Complementary Solution (The Natural Response)The complementary solution yc(t) is the solution ofQ(D)yc(t) =0 (1.5a)or(Dnan1Dn1 a1Da0)yc(t) =0 (1.5b)Asolution to this equation can be found in a systematic and formal way. However, we will take a short cutby using heuristic reasoning. Equation 1.5b shows that a linear combination of yc(t) and its n successivederivatives is zero, not at some values of t, but for all t. This is possible if and only if yc(t) and all its n iiiii i1-4 Control System Fundamentalssuccessive derivatives are of the same form. Otherwise their sum can never add to zero for all values of t.We know that only an exponential function ethas this property. So let us assume thatyc(t) =cetis a solution of Equation 1.5b. NowDyc(t) = dycdt =cetD2yc(t) = d2ycdt2 =c2et Dnyc(t) = dnycdtn =cnetSubstituting these results in Equation 1.5b, we obtainc(nan1n1 a1a0)et=0For a nontrivial solution of this equation,nan1n1 a1a0 =0 (1.6a)This result means that cetis indeed a solution of Equation 1.5, provided that satisfies Equation 1.6a.Note that the polynomial in Equation 1.6a is identical to the polynomial Q(D) in Equation 1.5b, with replacing D. Therefore, Equation 1.6a can be expressed asQ() =0 (1.6b)When Q() is expressed in factorized form, Equation 1.6b can be represented asQ() =(1)(2) (n) =0 (1.6c)Clearly has n solutions: 1, 2, . . . , n. Consequently, Equation 1.5 has n possible solutions:c1e1t, c2e2t, . . . , cnent, with c1, c2, . . . , cn as arbitrary constants. We can readily show that a generalsolution is given by the sum of these n solutions, so thatyc(t) =c1e1tc2e2t cnent(1.7)where c1, c2, . 
. . , cn are arbitrary constants determined by n constraints (the auxiliary conditions) on thesolution. To prove this fact, assume that y1(t), y2(t), . . ., yn(t) are all solutions of Equation 1.5. ThenQ(D)y1(t) =0Q(D)y2(t) =0 Q(D)yn(t) =0Multiplying these equations by c1, c2, . . . , cn, respectively, and adding them together yieldsQ(D)[c1y1(t) c2y2(t) cnyn(t)] =0This result shows that c1y1(t) c2y2(t) cnyn(t) is also a solution of the homogeneous Equation 1.5. iiiii iOrdinary Linear Differential and Difference Equations 1-5The polynomial Q() is known as the characteristic polynomial. The equationQ() =0 (1.8)is called the characteristic or auxiliary equation. From Equation 1.6c, it is clear that 1, 2, . . ., nare the roots of the characteristic equation; consequently, they are called the characteristic roots. Theterms characteristic values, eigenvalues, and natural frequencies are also used for characteristic roots.The exponentials eit(i =1, 2, . . . , n) in the complementary solution are the characteristic modes (alsoknown as modes or natural modes). There is a characteristic mode for each characteristic root, and thecomplementary solution is a linear combination of the characteristic modes.Repeated RootsThe solution of Equation 1.5 as given in Equation 1.7 assumes that the n characteristic roots 1, 2, . . . , nare distinct. If there are repeated roots (the same root occurring more than once), the form of the solutionis modified slightly. By direct substitution we can show that the solution of the equation(D)2yc(t) =0is given byyc(t) =(c1c2t)etIn this case, the root repeats twice. Observe that the characteristic modes in this case are etand tet.Continuing this pattern, we can show that for the differential equation(D)ryc(t) =0 (1.9)the characteristic modes are et, tet, t2et, . . . , tr1et, and the solution isyc(t) =(c1c2t crtr1)et(1.10)Consequently, for a characteristic polynomialQ() =(1)r(r1) (n)the characteristic modes are e1t, te1t, . . . , tr1et, er1t, . . . , ent. and the complementary solution isyc(t) =(c1c2t crtr1)e1tcr1er1t cnent1.1.2.2 Particular Solution (The Forced Response): Method of Undetermined CoefficientsThe particular solution yp(t) is the solution ofQ(D)yp(t) =P(D)f (t) (1.11)It is a relatively simple task to determine yp(t) when the input f (t) is such that it yields only a finitenumber of independent derivatives. Inputs having the form etor trfall into this category. For example,ethas only one independent derivative; the repeated differentiation of etyields the same form, that is, et.Similarly, the repeated differentiation of tryields only r independent derivatives. The particular solutionto such an input can be expressed as a linear combination of the input and its independent derivatives.Consider, for example, the input f (t) =at2bt c. The successive derivatives of this input are 2at band 2a. In this case, the input has only two independent derivatives. Therefore the particular solution can The term eigenvalue is German for characteristic value. iiiii i1-6 Control System FundamentalsTABLE 1.1Input f (t) Forced Response1. et ,=i (i =1, 2, , n) et2. et =i tet3. k (a constant) (a constant)4. cos(t ) cos(t )5. (trr1tr1 1t 0)et(rtrr1tr1 1t 0)etbe assumed to be a linear combination of f (t) and its two derivatives. 
The suitable form for yp(t) in thiscase is thereforeyp(t) =2t21t 0The undetermined coefficients 0, 1, and 2 are determined by substituting this expression for yp(t) inEquation 1.11 and then equating coefficients of similar terms on both sides of the resulting expression.Although this method can be used only for inputs with a finite number of derivatives, this class ofinputs includes a wide variety of the most commonly encountered signals in practice. Table 1.1 showsa variety of such inputs and the form of the particular solution corresponding to each input. We shalldemonstrate this procedure with an example.Note: By definition, yp(t) cannot have any characteristic mode terms. If any term p(t) shown in theright-hand column for the particular solution is also a characteristic mode, the correct form of the forcedresponse must be modified to tip(t), where i is the smallest possible integer that can be used and stillcan prevent tip(t) from having a characteristic mode term. For example, when the input is et, the forcedresponse (right-hand column) has the formet. But if ethappens to be a characteristic mode, the correctform of the particular solution is tet(see Pair 2). If tetalso happens to be a characteristic mode, thecorrect form of the particular solution is t2et, and so on.Example 1.1:Solve the differential equation(D23D2)y(t) =Df (t) (1.12)if the inputf (t) =t25t 3and the initial conditions are y(0) =2 and y(0) =3.The characteristic polynomial is232 =(1)(2)Therefore the characteristic modes are etand e2t. The complementary solution is a linear combi-nation of these modes, so thatyc(t) =c1etc2e2tt 0Here the arbitrary constants c1 and c2 must be determined from the given initial conditions. iiiii iOrdinary Linear Differential and Difference Equations 1-7The particular solution to the input t25t 3 is found from Table 1.1 (Pair 5 with =0) to beyp(t) =2t21t 0Moreover, yp(t) satisfies Equation 1.11, that is,(D23D2)yp(t) =Df (t) (1.13)NowDyp(t) = ddt(2t21t 0) =22t 1D2yp(t) = d2dt2(2t21t 0) =22andDf (t) = ddt[t25t 3] =2t 5Substituting these results in Equation 1.13 yields223(22t 1) 2(2t21t 0) =2t 5or22t2(2162)t (203122) =2t 5Equating coefficients of similar powers on both sides of this expression yields22 =02162 =2203122 =5Solving these three equations for their unknowns, we obtain 0 =1, 1 =1, and 2 =0. Therefore,yp(t) =t 1 t > 0The total solution y(t) is the sum of the complementary and particular solutions. Therefore,y(t) =yc(t) yp(t)=c1etc2e2tt 1 t > 0so that y(t) =c1et2c2e2t1Setting t =0 and substituting the given initial conditions y(0) =2 and y(0) =3 in these equations,we have2 =c1c213 =c12c21The solution to these two simultaneous equations is c1 =4 and c2 =3. Therefore,y(t) =4et3e2tt 1 t 0 iiiii i1-8 Control System Fundamentals1.1.2.3 The Exponential Input etThe exponential signal is the most important signal inthe study of LTI systems. Interestingly, the particularsolution for an exponential input signal turns out to be very simple. From Table 1.1 we see that theparticular solution for the input ethas the form et. We now show that =Q()/P(). To determinethe constant , we substitute yp(t) =etin Equation 1.11, which gives usQ(D)[et] =P(D)et(1.14a)Now observe thatDet= ddt
$\left(e^{\lambda t}\right) = \lambda e^{\lambda t}$ and, in general,

$$D^{2}e^{\lambda t} = \frac{d^{2}}{dt^{2}}\left(e^{\lambda t}\right) = \lambda^{2}e^{\lambda t}, \qquad \ldots, \qquad D^{r}e^{\lambda t} = \lambda^{r}e^{\lambda t}$$

Consequently,

$$Q(D)e^{\lambda t} = Q(\lambda)e^{\lambda t} \qquad \text{and} \qquad P(D)e^{\lambda t} = P(\lambda)e^{\lambda t}$$

Therefore, Equation 1.14a becomes

$$\beta Q(\lambda)e^{\lambda t} = P(\lambda)e^{\lambda t} \qquad (1.14b)$$

and

$$\beta = \frac{P(\lambda)}{Q(\lambda)}$$

Thus, for the input f(t) = e^{λt}, the particular solution is given by

$$y_{p}(t) = H(\lambda)e^{\lambda t} \qquad t > 0 \qquad (1.15a)$$

where

$$H(\lambda) = \frac{P(\lambda)}{Q(\lambda)} \qquad (1.15b)$$

This is an interesting and significant result. It states that for an exponential input e^{λt}, the particular solution y_p(t) is the same exponential multiplied by H(λ) = P(λ)/Q(λ). (This is true only if λ is not a characteristic root.) The total solution y(t) to an exponential input e^{λt} is then given by

$$y(t) = \sum_{j=1}^{n} c_{j}e^{\lambda_{j}t} + H(\lambda)e^{\lambda t}$$

where the arbitrary constants c₁, c₂, ..., cₙ are determined from auxiliary conditions.

Recall that the exponential signal includes a large variety of signals, such as a constant (λ = 0), a sinusoid (λ = ±jω), and an exponentially growing or decaying sinusoid (λ = σ ± jω). Let us consider the forced response for some of these cases.

1.1.2.4 The Constant Input f(t) = C

Because C = Ce^{0t}, the constant input is a special case of the exponential input Ce^{λt} with λ = 0. The particular solution to this input is then given by

$$y_{p}(t) = CH(\lambda)e^{\lambda t} \;\text{ with }\; \lambda = 0 \;=\; CH(0) \qquad (1.16)$$

1.1.2.5 The Complex Exponential Input e^{jωt}

Here λ = jω, and

$$y_{p}(t) = H(j\omega)e^{j\omega t} \qquad (1.17)$$

1.1.2.6 The Sinusoidal Input f(t) = cos ω₀t

We know that the particular solution for the input e^{±jωt} is H(±jω)e^{±jωt}. Since cos ωt = (e^{jωt} + e^{−jωt})/2, the particular solution to cos ωt is

$$y_{p}(t) = \frac{1}{2}\left[H(j\omega)e^{j\omega t} + H(-j\omega)e^{-j\omega t}\right]$$

Because the two terms on the right-hand side are conjugates,

$$y_{p}(t) = \mathrm{Re}\left[H(j\omega)e^{j\omega t}\right]$$

But

$$H(j\omega) = |H(j\omega)|\,e^{j\angle H(j\omega)}$$

so that

$$y_{p}(t) = \mathrm{Re}\left\{|H(j\omega)|\,e^{j[\omega t + \angle H(j\omega)]}\right\} = |H(j\omega)|\cos\left[\omega t + \angle H(j\omega)\right] \qquad (1.18)$$

This result can be generalized for the input f(t) = cos(ωt + θ). The particular solution in this case is

$$y_{p}(t) = |H(j\omega)|\cos\left[\omega t + \theta + \angle H(j\omega)\right] \qquad (1.19)$$

Example 1.2:

Solve Equation 1.12 for the following inputs: (a) 10e^{−3t} (b) 5 (c) e^{−2t} (d) 10 cos(3t + 30°). The initial conditions are y(0) = 2, ẏ(0) = 3.

The complementary solution for this case is already found in Example 1.1 as

$$y_{c}(t) = c_{1}e^{-t} + c_{2}e^{-2t} \qquad t \geq 0$$

For the exponential input f(t) = e^{λt}, the particular solution, as found in Equation 1.15, is H(λ)e^{λt}, where

$$H(\lambda) = \frac{P(\lambda)}{Q(\lambda)} = \frac{\lambda}{\lambda^{2} + 3\lambda + 2}$$

(a) For input f(t) = 10e^{−3t}, λ = −3, and

$$y_{p}(t) = 10H(-3)e^{-3t} = 10\left[\frac{-3}{(-3)^{2} + 3(-3) + 2}\right]e^{-3t} = -15e^{-3t} \qquad t > 0$$

The total solution (the sum of the complementary and particular solutions) is

$$y(t) = c_{1}e^{-t} + c_{2}e^{-2t} - 15e^{-3t} \qquad t \geq 0$$

and

$$\dot{y}(t) = -c_{1}e^{-t} - 2c_{2}e^{-2t} + 45e^{-3t} \qquad t \geq 0$$

The initial conditions are y(0) = 2 and ẏ(0) = 3. Setting t = 0 in the above equations and substituting the initial conditions yields

$$c_{1} + c_{2} - 15 = 2 \qquad \text{and} \qquad -c_{1} - 2c_{2} + 45 = 3$$

Solution of these equations yields c₁ = −8 and c₂ = 25. Therefore,

$$y(t) = -8e^{-t} + 25e^{-2t} - 15e^{-3t} \qquad t \geq 0$$

(b) For input f(t) = 5 = 5e^{0t}, λ = 0, and

$$y_{p}(t) = 5H(0) = 0 \qquad t > 0$$

The complete solution is y(t) = y_c(t) + y_p(t) = c₁e^{−t} + c₂e^{−2t}. We then substitute the initial conditions to determine c₁ and c₂ as explained in Part a.

(c) Here λ = −2, which is also a characteristic root. Hence (see Pair 2, Table 1.1, or the comment at the bottom of the table),

$$y_{p}(t) = \beta t e^{-2t}$$

To find β, we substitute y_p(t) in Equation 1.11, giving us

$$(D^{2} + 3D + 2)y_{p}(t) = Df(t)$$

or

$$(D^{2} + 3D + 2)\left[\beta t e^{-2t}\right] = De^{-2t}$$

But

$$D\left[te^{-2t}\right] = (1 - 2t)e^{-2t}, \qquad D^{2}\left[te^{-2t}\right] = 4(t - 1)e^{-2t}, \qquad De^{-2t} = -2e^{-2t}$$

Consequently,

$$\beta(4t - 4 + 3 - 6t + 2t)e^{-2t} = -2e^{-2t}$$

or

$$-\beta e^{-2t} = -2e^{-2t}$$

This means that β = 2, so that

$$y_{p}(t) = 2te^{-2t}$$

The complete solution is y(t) = y_c(t) + y_p(t) = c₁e^{−t} + c₂e^{−2t} + 2te^{−2t}.
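Each of these particular-solution computations comes down to evaluating H(λ) = P(λ)/Q(λ) at the exponent of the input. A quick numerical check is sketched below; this is an added illustration (assuming NumPy), not part of the original text, and it also evaluates the H(j3) used in part (d) below:

```python
# Numerical check of Example 1.2: evaluate H(lambda) = P(lambda)/Q(lambda)
# for Q(D) = D^2 + 3D + 2 and P(D) = D at the exponents used in the example.
import numpy as np

Q = [1.0, 3.0, 2.0]   # coefficients of lambda^2 + 3*lambda + 2
P = [1.0, 0.0]        # coefficients of lambda

def H(lam):
    return np.polyval(P, lam) / np.polyval(Q, lam)

print(10 * H(-3.0))                           # part (a): -15, so yp = -15*exp(-3t)
print(5 * H(0.0))                             # part (b): 0
Hj3 = H(3j)                                   # part (d): H(j3)
print(abs(Hj3), np.degrees(np.angle(Hj3)))    # ~0.263 and ~-37.9 degrees
# Part (c): lambda = -2 is a characteristic root, so Q(-2) = 0 and H(-2) is
# undefined; that is why the particular solution takes the form beta*t*exp(-2t).
```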
We then substitute theinitial conditions to determine c1 and c2 as explained in Part a.(d) For the input f (t) =10 cos (3t 30), the particular solution [see Equation 1.19] isyp(t) =10[H( j3)[ cos[3t 30H( j3)]whereH( j3) = P( j3)Q( j3) = j3( j3)23( j3) 2= j37 j9 = 27 j21130 =0.263ej37.9Therefore,[H( j3)[ =0 263, H( j3) =37.9andyp(t) =10(0.263) cos (3t 3037.9)=2.63 cos (3t 7.9)The complete solution is y(t) =yc(t) yp(t) =c1etc2e2t2.63 cos(3t 7.9). We then substi-tute the initial conditions to determine c1 and c2 as explained in Part a. iiiii iOrdinary Linear Differential and Difference Equations 1-111.1.3 Method of ConvolutionIn this method, the input f (t) is expressed as a sum of impulses. The solution is then obtained as a sumof the solutions to all the impulse components. The method exploits the superposition property of thelinear differential equations. From the sampling (or sifting) property of the impulse function, we havef (t) =
$$f(t) = \int_{0}^{t} f(x)\,\delta(t - x)\,dx \qquad t \geq 0 \qquad (1.20)$$

The right-hand side expresses f(t) as a sum (integral) of impulse components. Let the solution of Equation 1.4 be y(t) = h(t) when f(t) = δ(t) and all the initial conditions are zero. Then use of the linearity property yields the solution of Equation 1.4 to input f(t) as

$$y(t) = \int_{0}^{t} f(x)\,h(t - x)\,dx \qquad (1.21)$$

For this solution to be general, we must add a complementary solution. Thus, the general solution is given by

$$y(t) = \sum_{j=1}^{n} c_{j}e^{\lambda_{j}t} + \int_{0}^{t} f(x)\,h(t - x)\,dx \qquad (1.22)$$

where the lower limit 0 is understood to be 0⁻ in order to ensure that impulses, if any, in the input f(t) at the origin are accounted for. The right-hand side of Equation 1.22 is well known in the literature as the convolution integral. The function h(t) appearing in the integral is the solution of Equation 1.4 for the impulsive input [f(t) = δ(t)]. It can be shown that [1]

$$h(t) = P(D)\left[y_{o}(t)u(t)\right] \qquad (1.23)$$

where y_o(t) is a linear combination of the characteristic modes subject to initial conditions

$$y_{o}^{(n-1)}(0) = 1, \qquad y_{o}(0) = y_{o}^{(1)}(0) = \cdots = y_{o}^{(n-2)}(0) = 0 \qquad (1.24)$$

The function u(t) appearing on the right-hand side of Equation 1.23 represents the unit step function, which is unity for t ≥ 0 and is 0 for t < 0.

The right-hand side of Equation 1.23 is a linear combination of the derivatives of y_o(t)u(t). Evaluating these derivatives is clumsy and inconvenient because of the presence of u(t). The derivatives will generate an impulse and its derivatives at the origin [recall that (d/dt)u(t) = δ(t)]. Fortunately, when m ≤ n in Equation 1.4, the solution simplifies to

$$h(t) = b_{n}\delta(t) + \left[P(D)y_{o}(t)\right]u(t) \qquad (1.25)$$

Example 1.3:

Solve Example 1.1.2.6, Part a using the method of convolution.

We first determine h(t). The characteristic modes for this case, as found in Example 1.1.2.2, are e^{−t} and e^{−2t}. Since y_o(t) is a linear combination of the characteristic modes,

$$y_{o}(t) = K_{1}e^{-t} + K_{2}e^{-2t} \qquad t \geq 0$$

Therefore,

$$\dot{y}_{o}(t) = -K_{1}e^{-t} - 2K_{2}e^{-2t} \qquad t \geq 0$$

The initial conditions according to Equation 1.24 are ẏ_o(0) = 1 and y_o(0) = 0. Setting t = 0 in the above equations and using the initial conditions, we obtain

$$K_{1} + K_{2} = 0 \qquad \text{and} \qquad -K_{1} - 2K_{2} = 1$$

Solution of these equations yields K₁ = 1 and K₂ = −1. Therefore,

$$y_{o}(t) = e^{-t} - e^{-2t}$$

Also, in this case the polynomial P(D) = D is of first order, and b₂ = 0. Therefore, from Equation 1.25,

$$h(t) = \left[P(D)y_{o}(t)\right]u(t) = \left[Dy_{o}(t)\right]u(t) = \frac{d}{dt}\left(e^{-t} - e^{-2t}\right)u(t) = \left(-e^{-t} + 2e^{-2t}\right)u(t)$$

and

$$\int_{0}^{t} f(x)h(t - x)\,dx = \int_{0}^{t} 10e^{-3x}\left[-e^{-(t-x)} + 2e^{-2(t-x)}\right]dx = -5e^{-t} + 20e^{-2t} - 15e^{-3t}$$
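This convolution integral is easy to verify numerically. The sketch below (an added illustration assuming NumPy) approximates the integral by a Riemann sum on a uniform grid and compares the result with the closed form just obtained:

```python
# Numerical check of Example 1.3: approximate the convolution of
# f(t) = 10 exp(-3t) with h(t) = (-exp(-t) + 2 exp(-2t)) u(t) on a grid
# and compare with the closed form -5 exp(-t) + 20 exp(-2t) - 15 exp(-3t).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
f = 10.0 * np.exp(-3.0 * t)
h = -np.exp(-t) + 2.0 * np.exp(-2.0 * t)

y_num = np.convolve(f, h)[: t.size] * dt      # Riemann-sum convolution
y_exact = -5 * np.exp(-t) + 20 * np.exp(-2 * t) - 15 * np.exp(-3 * t)
print(np.max(np.abs(y_num - y_exact)))        # small (first order in dt)
```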
t010e3xe(tx)2e2(tx) dx=5et20e2t15e3tThe total solution is obtained by adding the complementary solution yc(t) =c1etc2e2ttothis component. Therefore,y(t) =c1etc2e2t5et20e2t15e3tSetting the conditions y(0) =2 and y(0) =3 in this equation (and its derivative), we obtainc1 =3, c2 =5 so thaty(t) =8et25e2t15e3tt 0which is identical to the solution found by the classical method.1.1.3.1 Assessment of the Convolution MethodThe convolutionmethodis more laborious comparedtothe classical method. However, insystemanalysis,its advantages outweigh the extra work. The classical method has a serious drawback because it yields thetotal response, which cannot be separated into components arising from the internal conditions and theexternal input. In the study of systems it is important to be able to express the system response to an inputf (t) as an explicit function of f (t). This is not possible in the classical method. Moreover, the classicalmethod is restricted to a certain class of inputs; it cannot be applied to any input.If we must solve a particular linear differential equation or find a response of a particular LTI sys-tem, the classical method may be the best. In the theoretical study of linear systems, however, it ispractically useless. General discussion of differential equations can be found in numerous texts on thesubject [2]. Another minor problem is that because the classical method yields total response, the auxiliary conditions must be onthe total response, which exists only for t 0. In practice we are most likely to know the conditions at t =0 (beforethe input is applied). Therefore, we need to derive a new set of auxiliary conditions at t =0 from the known conditionsat t =0. The convolution method can handle both kinds of initial conditions. If the conditions are given at t =0, weapply these conditions only to yc(t) because by its definition the convolution integral is 0 at t =0. iiiii iOrdinary Linear Differential and Difference Equations 1-131.2 Difference EquationsThe development of difference equations is parallel to that of differential equations. We consider here onlylinear difference equations with constant coefficients. An nth-order difference equation can be expressedin two different forms; the first form uses delay terms such as y[k 1], y[k 2], f [k 1], f [k 2], . . .,and so on, and the alternative form uses advance terms such as y[k 1], y[k 2], . . . , and so on. Bothforms are useful. We start here with a general nth-order difference equation, using advance operator formy[k n] an1y[k n 1] a1y[k 1] a0y[k]=bmf [k m] bm1f [k m1] b1f [k 1] b0f [k] (1.26)1.2.1 Causality ConditionThe left-hand side of Equation 1.26 consists of values of y[k] at instants k n, k n 1, k n 2, andso on. The right-hand side of Equation 1.26 consists of the input at instants k m, k m1, k m2,and so on. For a causal equation, the solution cannot depend on future input values. This shows thatwhen the equation is in the advance operator form of the Equation 1.26, causality requires mn. For ageneral causal case, m=n, and Equation 1.26 becomesy[k n] an1y[k n 1] a1y[k 1] a0y[k]=bnf [k n] bn1f [k n 1] b1f [k 1] b0f [k] (1.27a)where some of the coefficients onbothsides canbe zero. However, the coefficient of y[k n] is normalizedto unity. Equation 1.27a is valid for all values of k. Therefore, the equation is still valid if we replace k byk nthroughout the equation. 
This yields the alternative form(the delay operator form) of Equation1.27ay[k] an1y[k 1] a1y[k n 1] a0y[k n]=bn f [k] bn1f [k 1] b1 f [k n 1] b0 f [k n] (1.27b)We designate the form of Equation 1.27a the advance operator form, and the form of Equation 1.27b thedelay operator form.1.2.2 Initial Conditions and Iterative SolutionEquation 1.27b can be expressed asy[k] =an1y[k 1] an2y[k 2] a0y[k n]bn f [k] bn1 f [k 1] b0 f [k n] (1.27c)This equation shows that y[k], the solution at the kth instant, is computed from 2n 1 piecesof information. These are the past n values of y[k] : y[k 1], y[k 2], . . . , y[k n] and the presentand past n values of the input: f [k], f [k 1], f [k 2], . . . , f [k n]. If the input f [k] is known fork =0, 1, 2, . . ., then the values of y[k] for k =0, 1, 2, . . . can be computed from the 2n initial conditionsy[1], y[2], . . . , y[n] and f [1], f [2], . . . , f [n]. If the input is causal, that is, if f [k] =0 for k < 0,then f [1] =f [2] =. . . =f [n] =0, and we need only n initial conditions y[1], y[2], . . . , y[n].This allows us to compute iteratively or recursively the values y[0], y[1], y[2], y[3], . . . , and so on. For For this reason, Equation 1.27 is called a recursive difference equation. However, in Equation 1.27, if a0 =a1 =a2 = =an1 =0, then it follows from Equation 1.27c that determination of the present value of y[k] does not require the pastvalues y[k 1], y[k 2], . . ., and so on. For this reason, when ai =0, (i =0, 1, . . . , n 1), the difference Equation 1.27is nonrecursive. This classification is important in designing and realizing digital filters. In this discussion, however,this classification is not important. The analysis techniques developed here apply to general recursive and nonrecursiveequations. Observe that a nonrecursive equation is a special case of recursive equation with a0 =a1 =. . . =an1 =0. iiiii i1-14 Control System Fundamentalsinstance, to find y[0] we set k =0 in Equation 1.27c. The left-hand side is y[0], and the right-hand sidecontains terms y[1], y[2], . . . , y[n], and the inputs f [0], f [1], f [2], . . . , f [n]. Therefore, to beginwith, we must know the n initial conditions y[1], y[2], . . . , y[n]. Knowing these conditions and theinput f [k], we can iteratively find the response y[0], y[1], y[2], . . . , and so on. The following exampledemonstrates this procedure. This method basically reflects the manner in which a computer would solvea difference equation, given the input and initial conditions.Example 1.4:Solve iterativelyy[k] 0.5y[k 1] =f [k] (1.28a)with initial condition y[1] =16 and the input f [k] =k2(starting at k =0). This equation can beexpressed asy[k] =0.5y[k 1] f [k] (1 28b)If we set k =0 in this equation, we obtainy[0] =0.5y[1] f [0]=0.5(16) 0 =8Now, setting k =1 in Equation 1.28b and using the value y[0] =8 (computed in the first step) andf [1] =(1)2=1, we obtainy[1] =0.5(8) (1)2=5Next, setting k =2 in Equation 1.28b and using the value y[1] =5 (computed in the previous step)and f [2] =(2)2, we obtainy[2] =0.5(5) (2)2=6.5Continuing in this way iteratively, we obtainy[3] =0.5(6.5) (3)2=12.25y[4] =0.5(12.25) (4)2=22.125 This iterative solution procedure is available only for difference equations; it cannot be applied todifferential equations. Despite the many uses of this method, a closed-form solution of a differenceequation is far more useful in the study of system behavior and its dependence on the input andthe various system parameters. 
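The iteration of Example 1.4 is exactly the computation a machine would perform. A minimal sketch in plain Python (added here for illustration) reproduces the values computed above:

```python
# Iterative (recursive) solution of Example 1.4:
# y[k] - 0.5 y[k-1] = f[k], with y[-1] = 16 and f[k] = k^2 starting at k = 0.
def iterate(n_steps, y_prev=16.0):
    y = []
    for k in range(n_steps):
        y_k = 0.5 * y_prev + k**2   # y[k] = 0.5 y[k-1] + f[k]
        y.append(y_k)
        y_prev = y_k
    return y

print(iterate(5))   # [8.0, 5.0, 6.5, 12.25, 22.125], matching the text
```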
For this reason, we shall develop a systematic procedure to obtain aclosed-form solution of Equation 1.27.1.2.2.1 Operational NotationIn difference equations it is convenient to use operational notation similar to that used in differentialequations for the sake of compactness and convenience. For differential equations, we use the operator Dto denote the operation of differentiation. For difference equations, we use the operator E to denote the iiiii iOrdinary Linear Differential and Difference Equations 1-15operation for advancing the sequence by one time interval. Thus,Ef [k] f [k 1]E2f [k] f [k 2] Enf [k] f [k n](1.29)A general nth-order difference Equation 1.27a can be expressed as(Enan1En1 a1E a0)y[k] =(bnEnbn1En1 b1E b0)f [k] (1.30a)orQ[E]y[k] =P[E] f [k] (1.30b)where Q[E] and P[E] are nth-order polynomial operators, respectively,Q[E] =Enan1En1 a1E a0 (1.31a)P[E] =bnEnbn1En1 b1E b0 (1.31b)1.2.3 Classical SolutionFollowing the discussion of differential equations, we can show that if yp[k] is a solution of Equation 1.27or Equation 1.30, that is,Q[E]yp[k] =P[E]f [k] (1.32)then yp[k] yc[k] is also a solution of Equation 1.30, where yc[k] is a solution of the homogeneousequationQ[E]yc[k] =0 (1.33)As before, we call yp[k] the particular solution and yc[k] the complementary solution.1.2.3.1 Complementary Solution (The Natural Response)By definitionQ[E]yc[k] =0 (1.33a)or(Enan1En1 a1E a0)yc[k] =0 (1.33b)oryc[k n] an1yc[k n 1] a1yc[k 1] a0yc[k] =0 (1.33c)We can solve this equation systematically, but even a cursory examination of this equation points to itssolution. This equation states that a linear combination of yc[k] and delayed yc[k] is zero not for somevalues of k, but for all k. This is possible if and only if yc[k] and delayed yc[k] have the same form. Only an iiiii i1-16 Control System Fundamentalsexponential function khas this property as seen from the equationkm=mkThis shows that the delayed kis a constant times k. Therefore, the solution of Equation 1.33 must beof the formyc[k] =ck(1.34)To determine c and , we substitute this solution in Equation 1.33. From Equation 1.34, we haveEyc[k] =yc[k 1] =ck1=(c)kE2yc[k] =yc[k 2] =ck2=(c2)k Enyc[k] =yc[k n] =ckn=(cn)k(1.35)Substitution of this in Equation 1.33 yieldsc(nan1n1 a1 a0)k=0 (1.36)For a nontrivial solution of this equation(nan1n1 a1 a0) =0 (1.37a)orQ[] =0 (1.37b)Our solution ck[Equation 1.34] is correct, provided that satisfies Equation 1.37a. Now, Q[] is annth-order polynomial and can be expressed in the factorized form (assuming all distinct roots):( 1)( 2) ( n) =0 (1.37c)Clearly has n solutions 1, 2, , n and, therefore, Equation 1.33 also has n solutionsc1k1, c2k2, . . . , cnkn. In such a case we have shown that the general solution is a linear combinationof the n solutions. Thus,yc[k] =c1k1c2k2 cnkn (1.38)where 1, 2, . . . , n are the roots of Equation 1.37a and c1, c2, . . . , cn are arbitrary constants determinedfrom n auxiliary conditions. The polynomial Q[] is called the characteristic polynomial, andQ[] =0 (1.39)is the characteristic equation. Moreover, 1, 2, , n, the roots of the characteristic equation, are calledcharacteristic roots or characteristic values (also eigenvalues). The exponentials ki (i =1, 2, . . . , n) are thecharacteristic modes or natural modes. A characteristic mode corresponds to each characteristic root,and the complementary solution is a linear combination of the characteristic modes of the system.Repeated RootsFor repeated roots, the form of characteristic modes is modified. 
It can be shown by direct substitution that if a root λ repeats r times (root of multiplicity r), the characteristic modes corresponding to this root are λᵏ, kλᵏ, k²λᵏ, ..., k^{r−1}λᵏ. Thus, if the characteristic equation is

$$Q[\lambda] = (\lambda - \lambda_{1})^{r}(\lambda - \lambda_{r+1})(\lambda - \lambda_{r+2})\cdots(\lambda - \lambda_{n}) \qquad (1.40)$$

the complementary solution is

$$y_{c}[k] = \left(c_{1} + c_{2}k + c_{3}k^{2} + \cdots + c_{r}k^{r-1}\right)\lambda_{1}^{k} + c_{r+1}\lambda_{r+1}^{k} + c_{r+2}\lambda_{r+2}^{k} + \cdots + c_{n}\lambda_{n}^{k} \qquad (1.41)$$

TABLE 1.2
Input f[k]                                              Forced Response y_p[k]
1. $r^{k}$, $r \neq \lambda_{i}$ (i = 1, 2, ..., n)     $\beta r^{k}$
2. $r^{k}$, $r = \lambda_{i}$                           $\beta k r^{k}$
3. $\cos(\Omega k + \theta)$                            $\beta \cos(\Omega k + \phi)$
4. $\left(\sum_{i=0}^{m}\alpha_{i}k^{i}\right)r^{k}$    $\left(\sum_{i=0}^{m}\beta_{i}k^{i}\right)$
rk1.2.3.2 Particular SolutionThe particular solution yp[k] is the solution ofQ[E]yp[k] =P[E] f [k] (1.42)We shall find the particular solution using the method of undetermined coefficients, the same methodused for differential equations. Table 1.2 lists the inputs and the corresponding forms of solution withundetermined coefficients. These coefficients can be determined by substituting yp[k] in Equation 1.42and equating the coefficients of similar terms.Note: By definition, yp[k] cannot have any characteristic mode terms. If any term p[k] shown in theright-hand column for the particular solution should also be a characteristic mode, the correct form ofthe particular solution must be modified to kip[k], where i is the smallest integer that will prevent kip[k]from having a characteristic mode term. For example, when the input is rk, the particular solution in theright-hand column is of the form crk. But if rkhappens to be a natural mode, the correct form of theparticular solution is krk(see Pair 2).Example 1.5:Solve(E25E 6)y[k] =(E 5)f [k] (1.43)if the input f [k] =(3k 5)u[k] and the auxiliary conditions are y[0] =4, y[1] =13.The characteristic equation is25 6 =( 2)( 3) =0Therefore, the complementary solution isyc[k] =c1(2)kc2(3)kTo find the form of yp[k] we use Table 1.2, Pair 4 with r =1, m=1. This yieldsyp[k] =1k 0Therefore,yp[k 1] =1(k 1) 0 =1k 10yp[k 2] =1(k 2) 0 =1k 210Also,f [k] =3k 5andf [k 1] =3(k 1) 5 =3k 8 iiiii i1-18 Control System FundamentalsSubstitution of the above results in Equation 1.43 yields1k 2105(1k 10) 6(1k 0) =3k 8 5(3k 5)or21k 3120 =12k 17Comparison of similar terms on two sides yields21 = 123120 = 17 =1 = 62 = 352This meansyp[k] =6k 352The total response isy[k] =yc[k] yp[k]=c1(2)kc2(3)k6k 352 k 0(1.44)To determine arbitrary constants c1 and c2 we set k =0 and 1 and substitute the auxiliary conditionsy[0] =4, y[1] =13 to obtain4 =c1c2 35213 =2c13c2 472=c1 =28c2 = 132Therefore,yc[k] =28(2)k 132 (3)k(1.45)andy[k] =28(2)k 132 (3)k. .. .yc[k]6k 352. .. .yp[k](1.46)1.2.4 A Comment on Auxiliary ConditionsThis method requires auxiliary conditions y[0], y[1], , y[n 1] because the total solution is valid onlyfor k 0. But if we are given the initial conditions y[1], y[2], , y[n], we can derive the conditionsy[0], y[1], , y[n 1] using the iterative procedure discussed earlier.1.2.4.1 Exponential InputAs in the case of differential equations, we can show that for the equationQ[E]y[k] =P[E] f [k] (1.47)the particular solution for the exponential input f [k] =rkis given byyp[k] =H[r]rkr ,=i (1.48)whereH[r] = P[r]Q[r] (1.49) iiiii iOrdinary Linear Differential and Difference Equations 1-19The proof follows from the fact that if the input f [k] =rk, then from Table 1.2 (Pair 4), yp[k] =rk.Therefore,Eif [k] =f [k i] =rki=rirkand P[E] f [k] =P[r]rkEjyp[k] =rkj=rjrkand Q[E]y[k] =Q[r]rkso that Equation 1.47 reduces toQ[r]rk=P[r]rkwhich yields =P[r]/Q[r] =H[r].This result is valid only if r is not a characteristic root. If r is a characteristic root, the particular solutionis krkwhere is determined by substituting yp[k] in Equation 1.47 and equating coefficients of similarterms on the two sides. Observe that the exponential rkincludes a wide variety of signals such as a constantC, a sinusoid cos(k ), and an exponentially growing or decaying sinusoid [[kcos(k ).1.2.4.2 A Constant Input f (k) =CThis is a special case of exponential Crkwith r =1. Therefore, from Equation 1.48 we haveyp[k] =C P[1]Q[1](1)k=CH[1] (1.50)1.2.4.3 A Sinusoidal InputThe input ejkis an exponential rkwith r =ej. 
Hence,yp[k] =H[ej]ejk= P[ej]Q[ej]ejkSimilarly for the input ejkyp[k] =H[ej]ejkConsequently, if the inputf [k] =cos k = 12(ejkejk)yp[k] = 12H[ej]ejkH[ej]ejkSince the two terms on the right-hand side are conjugatesyp[k] =ReH[ej]ejkIfH[ej] =[H[ej][ejH[ej]thenyp[k] =Re[H[ej]
$$y_{p}[k] = \mathrm{Re}\left\{\left|H[e^{j\Omega}]\right|e^{j(\Omega k + \angle H[e^{j\Omega}])}\right\} = \left|H[e^{j\Omega}]\right|\cos\left(\Omega k + \angle H[e^{j\Omega}]\right) \qquad (1.51)$$
ej(kH[ej])=[H[ej][ cos(k H[ej])(1.51)Using a similar argument, we can show that for the inputf [k] =cos(k )yp[k] =[H[ej][ cos(k H[ej])(1.52) iiiii i1-20 Control System FundamentalsExample 1.6:Solve(E23E 2)y[k] =(E 2)f [k]for f [k] =(3)ku[k] and the auxiliary conditions y[0] =2, y[1] =1.In this caseH[r] = P[r]Q[r] = r 2r23r 2and the particular solution to input (3)ku[k] is H[3](3)k; that is,yp[k] = 3 2(3)23(3) 2 (3)k= 52 (3)kThe characteristic polynomial is (23 2) =( 1)( 2). The characteristic roots are 1 and 2.Hence, the complementary solution is yc[k] =c1c2(2)kand the total solution isy[k] =c1(1)kc2(2)k 52 (3)kSetting k =0 and 1 in this equation and substituting auxiliary conditions yields2 =c1c2 52 and 1 =c12c2 152Solution of these two simultaneous equations yields c1 =5.5, c2 =5. Therefore,y[k] =5.5 6(2)k 52 (3)kk 01.2.5 Method of ConvolutionIn this method, the input f [k] is expressed as a sum of impulses. The solution is then obtained as a sumof the solutions to all the impulse components. The method exploits the superposition property of thelinear difference equations. A discrete-time unit impulse function [k] is defined as[k] =1 k =00 k ,=0 (1.53)Hence, an arbitrary signal f [k] can be expressed in terms of impulse and delayed impulse functions asf [k] =f [0][k] f [1][k 1] f [2][k 2] f [k][0] k 0 (1.54)The right-hand side expresses f [k] as a sum of impulse components. If h[k] is the solution ofEquation 1.30 to the impulse input f [k] =[k], then the solution to input [k m] is h[k m]. Thisfollows from the fact that because of constant coefficients, Equation 1.30 has time-invariance property.Also, because Equation 1.30 is linear, its solution is the sum of the solutions to each of the impulse iiiii iOrdinary Linear Differential and Difference Equations 1-21components of f [k] on the right-hand side of Equation 1.54. Therefore,y[k] =f [0]h[k] f [1]h[k 1] f [2]h[k 2] f [k]h[0] f [k 1]h[1] All practical systems with time as the independent variable are causal, that is, h[k] =0 for k < 0. Hence,all the terms on the right-hand side beyond f [k]h[0] are zero. Thus,y[k] =f [0]h[k] f [1]h[k 1] f [2]h[k 2] f [k]h[0]=km=0f [m]h[k m](1.55)The general solution is obtained by adding a complementary solution to the above solution. Therefore,the general solution is given byy[k] =nj=1cjkj km=0f [m]h[k m] (1.56)The last sum on the right-hand side is known as the convolution sum of f [k] and h[k].The function h[k] appearing in Equation 1.30 is the solution of Equation 1.30 for the impulsive input(f [k] =[k]) when all initial conditions are zero, that is, h[1] =h[2] = =h[n] =0. It can beshown that [2] h[k] contains an impulse and a linear combination of characteristic modes ash[k] = b0a0[k] A1k1A2k2 Ankn (1.57)where the unknown constants Ai are determined from n values of h[k] obtained by solving the equationQ[E]h[k] =P[E][k] iteratively.Example 1.7:Solve Example 1.5 using convolution method. In other words solve(E23E 2)y[k] =(E 2)f [k]for f [k] =(3)ku[k] and the auxiliary conditions y[0] =2, y[1] =1.The unit impulse solution h[k] is given by Equation 1.57. In this case, a0 =2 and b0 =2. Therefore,h[k] =[k] A1(1)kA2(2)k(1 58)To determine the two unknown constants A1 and A2 in Equation 1 58, we need two values of h[k],for instance h[0] and h[1]. These can be determined iteratively by observing that h[k] is the solutionof (E23E 2)h[k] =(E 2)[k], that is,h[k 2] 3h[k 1] 2h[k] =[k 1] 2[k] (1 59)subject to initial conditions h[1] =h[2] =0. We now determine h[0] and h[1] iteratively fromEquation 1.59. 
Example 1.7:
Solve Example 1.6 using the convolution method; in other words, solve
$$(E^2 - 3E + 2)y[k] = (E + 2)f[k]$$
for $f[k] = (3)^k u[k]$ and the auxiliary conditions $y[0] = 2$, $y[1] = 1$.
The unit impulse solution $h[k]$ is given by Equation 1.57. In this case, $a_0 = 2$ and $b_0 = 2$. Therefore,
$$h[k] = \delta[k] + A_1(1)^k + A_2(2)^k \qquad (1.58)$$
To determine the two unknown constants $A_1$ and $A_2$ in Equation 1.58, we need two values of $h[k]$, for instance $h[0]$ and $h[1]$. These can be determined iteratively by observing that $h[k]$ is the solution of $(E^2 - 3E + 2)h[k] = (E + 2)\delta[k]$, that is,
$$h[k+2] - 3h[k+1] + 2h[k] = \delta[k+1] + 2\delta[k] \qquad (1.59)$$
subject to the initial conditions $h[-1] = h[-2] = 0$. We now determine $h[0]$ and $h[1]$ iteratively from Equation 1.59. Setting $k = -2$ in this equation yields
$$h[0] - 3(0) + 2(0) = 0 + 0 \implies h[0] = 0$$
Next, setting $k = -1$ in Equation 1.59 and using $h[0] = 0$, we obtain
$$h[1] - 3(0) + 2(0) = 1 + 0 \implies h[1] = 1$$
Setting $k = 0$ and 1 in Equation 1.58 and substituting $h[0] = 0$, $h[1] = 1$ yields
$$0 = 1 + A_1 + A_2 \quad\text{and}\quad 1 = A_1 + 2A_2$$
Solution of these two equations yields $A_1 = -3$ and $A_2 = 2$. Therefore,
$$h[k] = \delta[k] - 3 + 2(2)^k$$
and from Equation 1.56,
$$y[k] = c_1 + c_2(2)^k + \sum_{m=0}^{k}(3)^m\left\{\delta[k-m] - 3 + 2(2)^{k-m}\right\} = c_1 + c_2(2)^k + 1.5 - 4(2)^k + 2.5(3)^k$$
The sums in the above expression are found by using the geometric progression sum formula
$$\sum_{m=0}^{k} r^m = \frac{r^{k+1} - 1}{r - 1} \qquad r \ne 1$$
Setting $k = 0$ and 1 and substituting the given auxiliary conditions $y[0] = 2$, $y[1] = 1$, we obtain
$$2 = c_1 + c_2 + 1.5 - 4 + 2.5 \quad\text{and}\quad 1 = c_1 + 2c_2 + 1.5 - 8 + 7.5$$
Solution of these equations yields $c_1 = 4$ and $c_2 = -2$. Therefore,
$$y[k] = 5.5 - 6(2)^k + 2.5(3)^k$$
which confirms the result obtained by the classical method.

1.2.5.1 Assessment of the Classical Method
The earlier remarks concerning the classical method for solving differential equations also apply to difference equations. A general discussion of difference equations can be found in texts on the subject [3].

References
1. Birkhoff, G. and Rota, G.C., Ordinary Differential Equations, 3rd ed., John Wiley & Sons, New York, 1978.
2. Lathi, B.P., Linear Systems and Signals, Berkeley-Cambridge Press, Carmichael, CA, 1992.
3. Goldberg, S., Introduction to Difference Equations, John Wiley & Sons, New York, 1958.

2
The Fourier, Laplace, and z-Transforms

2.1 Introduction 2-1
2.2 Fundamentals of the Fourier, Laplace, and z-Transforms 2-2
Laplace Transform • Rational Laplace Transforms • Irrational Transforms • Discrete-Time FT • z-Transform • Rational z-Transforms
2.3 Applications and Examples 2-15
Spectrum of a Signal Having a Rational Laplace Transform • Numerical Computation of the FT • Solution of Differential Equations • Solution of Difference Equations • Defining Terms
References 2-26
Further Reading 2-26

Edward W. Kamen
Georgia Institute of Technology

2.1 Introduction
The study of signals and systems can be carried out in terms of either a time-domain or a transform-domain formulation. Both approaches are often used together in order to maximize our ability to deal with a particular problem arising in applications. This is very much the case in controls engineering, where both time-domain and transform-domain techniques are extensively used in analysis and design. The transform-domain approach to signals and systems is based on the transformation of functions using the Fourier, Laplace, and z-transforms. The fundamental aspects of these transforms are presented in this section, along with some discussion of the application of these constructs.
The development in this chapter begins with the Fourier transform (FT), which can be viewed as a generalization of the Fourier series representation of a periodic function. The FT and Fourier series are named after Jean Baptiste Joseph Fourier (1768-1830), who first proposed in an 1807 paper that a series of sinusoidal harmonics could be used to represent the temperature distribution in a body. In 1822 Fourier wrote a book on his work, which was translated into English many years later (see [1]). It was also during the first part of the 1800s that Fourier was successful in constructing a frequency-domain representation for aperiodic (nonperiodic) functions.
This resulted in the FT, which provides a representation of a function $f(t)$ of a real variable $t$ in terms of the frequency components comprising the function. Much later (in the 1900s), an FT theory was developed for functions $f(k)$ of an integer variable $k$. This resulted in the discrete-time Fourier transform (DTFT) and the $N$-point discrete Fourier transform ($N$-point DFT), both of which are briefly considered in this section.
Also during the early part of the 1800s, Pierre Simon Laplace (1749-1827) carried out his work on the generalization of the FT, which resulted in the transform that now bears his name. The Laplace transform can be viewed as the FT with the addition of a real exponential factor to the integrand of the integral operation. This results in a transform that is a function of a complex variable $s = \sigma + j\omega$. Although the modification to the FT may not seem to be very major, in fact the Laplace transform is an extremely powerful tool in many application areas (such as controls) where the utility of the FT is somewhat limited. In this section, a brief presentation is given of the one-sided Laplace transform, with much of the focus on rational transforms.
The discrete-time counterpart to the Laplace transform is the z-transform, which was developed primarily during the 1950s (e.g., see [2-4]). The one-sided z-transform is considered, along with the connection to the DTFT.
Applications and examples involving the Fourier, Laplace, and z-transforms are given in the second part of this section. There the presentation centers on the relationship between the pole locations of a rational transform and the frequency spectrum of the transformed function; the numerical computation of the FT; and the application of the Laplace and z-transforms to solving differential and difference equations. The application of the transforms to systems and controls is pursued in other chapters of this handbook.

2.2 Fundamentals of the Fourier, Laplace, and z-Transforms
Let $f(t)$ be a real-valued function of the real-valued variable $t$; that is, for any real number $t$, $f(t)$ is a real number. The function $f(t)$ can be viewed as a signal that is a function of the continuous-time variable $t$ (in units of seconds), where $t$ takes values from $-\infty$ to $\infty$. The FT $F(\omega)$ of $f(t)$ is defined by
$$F(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt$$

TABLE 2.3 Properties of the Laplace Transform
Multiplication by a power of $t$: $\mathcal{L}[t^n f(t)] = (-1)^n \dfrac{d^n}{ds^n}F(s)$, $n = 1, 2, \ldots$
Multiplication by $e^{\alpha t}$: $\mathcal{L}[f(t)e^{\alpha t}] = F(s - \alpha)$ for any real or complex number $\alpha$
Multiplication by $\sin(\omega_0 t)$: $\mathcal{L}[f(t)\sin(\omega_0 t)] = (j/2)\left[F(s + j\omega_0) - F(s - j\omega_0)\right]$
Multiplication by $\cos(\omega_0 t)$: $\mathcal{L}[f(t)\cos(\omega_0 t)] = (1/2)\left[F(s + j\omega_0) + F(s - j\omega_0)\right]$
Differentiation in the time domain: $\mathcal{L}\left[\dfrac{d}{dt}f(t)\right] = sF(s) - f(0)$
Second derivative: $\mathcal{L}\left[\dfrac{d^2}{dt^2}f(t)\right] = s^2F(s) - sf(0) - \dfrac{d}{dt}f(0)$
$n$th derivative: $\mathcal{L}\left[\dfrac{d^n}{dt^n}f(t)\right] = s^nF(s) - s^{n-1}f(0) - s^{n-2}\dfrac{d}{dt}f(0) - \cdots - \dfrac{d^{n-1}}{dt^{n-1}}f(0)$
Integration: $\mathcal{L}\left[\displaystyle\int_0^t f(\lambda)\,d\lambda\right] = \dfrac{1}{s}F(s)$
Convolution in the time domain: $\mathcal{L}[f(t) * g(t)] = F(s)G(s)$
Initial-value theorem: $f(0) = \lim_{s\to\infty} sF(s)$
Final-value theorem: If $f(t)$ has a finite limit $f(\infty)$ as $t \to \infty$, then $f(\infty) = \lim_{s\to 0} sF(s)$

In Equations 2.20 and 2.21, $m$ and $n$ are positive integers and the coefficients $b_m, b_{m-1}, \ldots, b_1, b_0$ and $a_{n-1}, \ldots, a_1, a_0$ are real numbers. In Equation 2.19, it is assumed that $N(s)$ and $D(s)$ do not have any common factors. If there are common factors, they should always be cancelled. Also note that the polynomial $D(s)$ is monic; that is, the coefficient of $s^n$ is equal to 1. A rational function $F(s)$ can always be written with a monic denominator polynomial $D(s)$. The integer $n$, which is the degree of $D(s)$, is called the order of the rational function $F(s)$. It is assumed that $n \ge m$, in which case $F(s)$ is said to be a proper rational function. If $n > m$, $F(s)$ is said to be strictly proper.
Given a rational transform $F(s) = N(s)/D(s)$ with $N(s)$ and $D(s)$ defined by Equations 2.20 and 2.21, let $z_1, z_2, \ldots, z_m$ denote the roots of the polynomial $N(s)$, and let $p_1, p_2, \ldots, p_n$ denote the roots of $D(s)$; that is, $N(z_i) = 0$ for $i = 1, 2, \ldots, m$ and $D(p_i) = 0$ for $i = 1, 2, \ldots, n$. In general, $z_i$ and $p_i$ may be real or complex numbers, but if any are complex, they must appear in complex conjugate pairs. The numbers $z_1, z_2, \ldots, z_m$ are called the zeros of the rational function $F(s)$, since $F(s) = 0$ when $s = z_i$ for $i = 1, 2, \ldots, m$; and the numbers $p_1, p_2, \ldots, p_n$ are called the poles of $F(s)$, since the magnitude $|F(s)|$ becomes infinite as $s$ approaches $p_i$ for $i = 1, 2, \ldots, n$.
If $F(s)$ is strictly proper ($n > m$) and the poles $p_1, p_2, \ldots, p_n$ of $F(s)$ are distinct (nonrepeated), then $F(s)$ has the partial fraction expansion
$$F(s) = \frac{c_1}{s - p_1} + \frac{c_2}{s - p_2} + \cdots + \frac{c_n}{s - p_n} \qquad (2.22)$$
where the $c_i$ are the residues given by
$$c_i = \left[(s - p_i)F(s)\right]_{s=p_i}, \qquad i = 1, 2, \ldots, n \qquad (2.23)$$
For a given value of $i$, the residue $c_i$ is real if and only if the corresponding pole $p_i$ is real, and $c_i$ is complex if and only if $p_i$ is complex.

TABLE 2.4 Common Laplace Transforms
$u(t)$ (unit-step function): $1/s$
$u(t) - u(t-T)$ for any $T > 0$: $(1 - e^{-Ts})/s$
$\delta(t)$ (unit impulse): $1$
$\delta(t - t_0)$ for any $t_0 > 0$: $e^{-t_0 s}$
$t^n$, $t \ge 0$: $n!/s^{n+1}$, $n = 1, 2, \ldots$
$e^{-at}$: $1/(s + a)$
$t^n e^{-at}$: $n!/(s + a)^{n+1}$, $n = 1, 2, \ldots$
$\cos(\omega t)$: $s/(s^2 + \omega^2)$
$\sin(\omega t)$: $\omega/(s^2 + \omega^2)$
$\cos^2(\omega t)$: $(s^2 + 2\omega^2)/\left[s(s^2 + 4\omega^2)\right]$
$\sin^2(\omega t)$: $2\omega^2/\left[s(s^2 + 4\omega^2)\right]$
$\sinh(at)$: $a/(s^2 - a^2)$
$\cosh(at)$: $s/(s^2 - a^2)$
$e^{-at}\cos(\omega t)$: $(s + a)/\left[(s + a)^2 + \omega^2\right]$
$e^{-at}\sin(\omega t)$: $\omega/\left[(s + a)^2 + \omega^2\right]$
$t\cos(\omega t)$: $(s^2 - \omega^2)/(s^2 + \omega^2)^2$
$t\sin(\omega t)$: $2\omega s/(s^2 + \omega^2)^2$
$te^{-at}\cos(\omega t)$: $\left[(s + a)^2 - \omega^2\right]/\left[(s + a)^2 + \omega^2\right]^2$
$te^{-at}\sin(\omega t)$: $2\omega(s + a)/\left[(s + a)^2 + \omega^2\right]^2$

From Equation 2.22 we see that the inverse Laplace transform $f(t)$ is given by the following sum of exponential functions:
$$f(t) = c_1e^{p_1t} + c_2e^{p_2t} + \cdots + c_ne^{p_nt} \qquad (2.24)$$
If all the poles $p_1, p_2, \ldots, p_n$ of $F(s)$ are real numbers, then $f(t)$ is a sum of real exponentials given by Equation 2.24. If $F(s)$ has a pair of complex poles $p = \sigma \pm j\omega$, then $f(t)$ contains the term
$$ce^{(\sigma+j\omega)t} + \bar{c}e^{(\sigma-j\omega)t} \qquad (2.25)$$
where $\bar{c}$ is the complex conjugate of $c$. Then, writing $c$ in the polar form $c = |c|e^{j\theta}$, we have
$$ce^{(\sigma+j\omega)t} + \bar{c}e^{(\sigma-j\omega)t} = |c|e^{j\theta}e^{(\sigma+j\omega)t} + |c|e^{-j\theta}e^{(\sigma-j\omega)t} = |c|e^{\sigma t}\left[e^{j(\omega t+\theta)} + e^{-j(\omega t+\theta)}\right] \qquad (2.26)$$
Finally, using Euler's formula, Equation 2.26 can be written in the form
$$ce^{(\sigma+j\omega)t} + \bar{c}e^{(\sigma-j\omega)t} = 2|c|e^{\sigma t}\cos(\omega t + \theta) \qquad (2.27)$$
From Equation 2.27 it is seen that if $F(s)$ has a pair of complex poles, then $f(t)$ contains a sinusoidal term with an exponential amplitude factor $e^{\sigma t}$. Note that if $\sigma = 0$ (so that the poles are purely imaginary), the term in Equation 2.27 is purely sinusoidal.
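The partial fraction computation of Equations 2.22 and 2.23 is mechanical enough to automate. As an illustration (the particular transform below is an invented example, not one from the text), MATLAB's residue function returns the residues, poles, and direct term of a rational transform:

% Illustrative sketch: partial fraction expansion of
% F(s) = (s + 2)/((s + 1)(s + 3)) via Equations 2.22 and 2.23.
num = [1 2];                     % N(s) = s + 2
den = [1 4 3];                   % D(s) = s^2 + 4s + 3
[c, p, k] = residue(num, den);   % residues c, poles p; k is empty since n > m
% Inverse transform, Equation 2.24: f(t) = c(1)e^{p(1)t} + c(2)e^{p(2)t}
t = linspace(0, 5, 200);
f = real(c(1)*exp(p(1)*t) + c(2)*exp(p(2)*t));

For this $F(s)$ both residues equal $1/2$, so $f(t) = \tfrac{1}{2}(e^{-t} + e^{-3t})$; a complex pole pair would appear in p as conjugates whose residues combine into the cosine term of Equation 2.27.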
If one of the poles (say $p_1$) is repeated $r$ times and the other $n - r$ poles are distinct, $F(s)$ has the partial fraction expansion
$$F(s) = \frac{c_1}{s - p_1} + \frac{c_2}{(s - p_1)^2} + \cdots + \frac{c_r}{(s - p_1)^r} + \frac{c_{r+1}}{s - p_{r+1}} + \cdots + \frac{c_n}{s - p_n} \qquad (2.28)$$
In Equation 2.28, the residues $c_{r+1}, c_{r+2}, \ldots, c_n$ are calculated as in the distinct-pole case; that is,
$$c_i = \left[(s - p_i)F(s)\right]_{s=p_i}, \qquad i = r+1, r+2, \ldots, n \qquad (2.29)$$
and the residues $c_1, c_2, \ldots, c_r$ are given by
$$c_i = \frac{1}{(r-i)!}\left[\frac{d^{r-i}}{ds^{r-i}}\,(s - p_1)^rF(s)\right]_{s=p_1} \qquad (2.30)$$
Then, taking the inverse transform of Equation 2.28 yields
$$f(t) = c_1e^{p_1t} + c_2te^{p_1t} + \cdots + \frac{c_r}{(r-1)!}\,t^{r-1}e^{p_1t} + c_{r+1}e^{p_{r+1}t} + \cdots + c_ne^{p_nt} \qquad (2.31)$$
The above results reveal that the analytical form of the function $f(t)$ depends directly on the poles of $F(s)$. In particular, if $F(s)$ has a nonrepeated real pole $p$, then $f(t)$ contains a real exponential term of the form $ce^{pt}$ for some real constant $c$. If a real pole $p$ is repeated $r$ times, then $f(t)$ contains terms of the form $c_1e^{pt}, c_2te^{pt}, \ldots, c_rt^{r-1}e^{pt}$ for some real constants $c_1, c_2, \ldots, c_r$. If $F(s)$ has a nonrepeated complex pair $\sigma \pm j\omega$ of poles, then $f(t)$ contains a term of the form $ce^{\sigma t}\cos(\omega t + \theta)$ for some real constants $c$ and $\theta$. If the complex pair $\sigma \pm j\omega$ is repeated $r$ times, $f(t)$ contains terms of the form $c_1e^{\sigma t}\cos(\omega t + \theta_1), c_2te^{\sigma t}\cos(\omega t + \theta_2), \ldots, c_rt^{r-1}e^{\sigma t}\cos(\omega t + \theta_r)$ for some real constants $c_1, c_2, \ldots, c_r$ and $\theta_1, \theta_2, \ldots, \theta_r$. These results are summarized in Table 2.5.

TABLE 2.5 Relationship between the Poles of F(s) and the Form of f(t)
Nonrepeated real pole at $s = p$: $ce^{pt}$
Real pole at $s = p$ repeated $r$ times: $\sum_{i=1}^{r}c_it^{i-1}e^{pt}$
Nonrepeated complex pair at $s = \sigma \pm j\omega$: $ce^{\sigma t}\cos(\omega t + \theta)$
Complex pair at $s = \sigma \pm j\omega$ repeated $r$ times: $\sum_{i=1}^{r}c_it^{i-1}e^{\sigma t}\cos(\omega t + \theta_i)$

If $F(s)$ is proper, but not strictly proper (so that $n = m$ in Equations 2.20 and 2.21), then using long division $F(s)$ can be written in the form
$$F(s) = b_n + \frac{R(s)}{D(s)} \qquad (2.32)$$
where the degree of $R(s)$ is strictly less than $n$. Then $R(s)/D(s)$ can be expanded via partial fractions as was done in the case when $F(s)$ is strictly proper. Note that for $F(s)$ given by Equation 2.32, the inverse Laplace transform $f(t)$ contains the impulse $b_n\delta(t)$. Hence, having $n = m$ in $F(s)$ results in an impulsive term in the inverse transform.
From the relationship between the poles of $F(s)$ and the analytical form of $f(t)$, it follows that $f(t)$ converges to zero as $t \to \infty$ if and only if all the poles $p_1, p_2, \ldots, p_n$ of $F(s)$ have real parts that are strictly less than zero; that is, $\mathrm{Re}(p_i) < 0$ for $i = 1, 2, \ldots, n$. This condition is equivalent to requiring that all the poles be located in the open left half-plane (OLHP), which is the region of the complex plane to the left of the imaginary axis.
It also follows from the relationship between the poles of $F(s)$ and the form of $f(t)$ that $f(t)$ has a finite limit $f(\infty)$ as $t \to \infty$ if and only if all the poles of $F(s)$ have real parts that are less than zero, except that $F(s)$ may have a nonrepeated pole at $s = 0$. In mathematical terms, the conditions for the existence of a finite limit $f(\infty)$ are
$$\mathrm{Re}(p_i) < 0 \ \text{for all poles}\ p_i \ne 0 \qquad (2.33)$$
$$\text{If}\ p_i = 0\ \text{is a pole of}\ F(s),\ \text{then}\ p_i\ \text{is nonrepeated} \qquad (2.34)$$
If the conditions in Equations 2.33 and 2.34 are satisfied, the limiting value $f(\infty)$ is given by
$$f(\infty) = \left[sF(s)\right]_{s=0} \qquad (2.35)$$
The relationship in Equation 2.35 is a restatement of the final-value theorem (given in Table 2.3) for the case when $F(s)$ is rational and the poles of $F(s)$ satisfy the conditions in Equations 2.33 and 2.34.
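The conditions of Equations 2.33 through 2.35 are easy to test numerically. The following MATLAB sketch uses an invented transform (not one from the text) whose poles satisfy the conditions:

% Illustrative sketch: final-value theorem, Equations 2.33 through 2.35,
% for F(s) = 10(s + 3)/(s(s + 1)(s + 2)).
num = 10*[1 3];
den = conv([1 0], conv([1 1], [1 2]));   % D(s) = s(s + 1)(s + 2)
roots(den)              % poles 0, -1, -2: one nonrepeated pole at s = 0,
                        % all other poles in the OLHP, so f(infinity) exists
% Equation 2.35 with the factor s cancelled: sF(s) = N(s)/((s + 1)(s + 2))
f_inf = polyval(num, 0) / polyval(conv([1 1], [1 2]), 0)   % = 15

Were the pole at $s = 0$ repeated, or were any pole in the right half-plane, the computed value $[sF(s)]_{s=0}$ would be meaningless, which is why the pole check precedes the evaluation.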
2.2.3 Irrational Transforms
The Laplace transform $F(s)$ of a function $f(t)$ is said to be an irrational function of $s$ if it is not rational; that is, $F(s)$ cannot be expressed as a ratio of polynomials in $s$. For example, $F(s) = e^{-t_0s}/s$ is irrational, since the exponential function $e^{-t_0s}$ cannot be expressed as a ratio of polynomials in $s$. In this case, the inverse transform $f(t)$ is equal to $u(t - t_0)$, where $u(t)$ is the unit-step function.
Given any function $f(t)$ with transform $F(s)$ and given any real number $t_0 > 0$, the transform of the time-shifted (or time-delayed) function $f(t - t_0)u(t - t_0)$ is equal to $F(s)e^{-t_0s}$. Time-delayed signals arise in systems with time delays, and thus irrational transforms appear in the study of systems with time delays. Also, any function $f(t)$ that is of finite duration in time has a transform $F(s)$ that is irrational. For instance, suppose that
$$f(t) = \gamma(t)\left[u(t - t_0) - u(t - t_1)\right], \qquad 0 \le t_0 < t_1 \qquad (2.36)$$
so that $f(t) = \gamma(t)$ for $t_0 \le t < t_1$, and $f(t) = 0$ for all other $t$. Then $f(t)$ can be written in the form
$$f(t) = \gamma_0(t - t_0)u(t - t_0) - \gamma_1(t - t_1)u(t - t_1) \qquad (2.37)$$
where $\gamma_0(t) = \gamma(t + t_0)$ and $\gamma_1(t) = \gamma(t + t_1)$. Taking the Laplace transform of Equation 2.37 yields
$$F(s) = \Gamma_0(s)e^{-t_0s} - \Gamma_1(s)e^{-t_1s} \qquad (2.38)$$
where $\Gamma_0(s)$ and $\Gamma_1(s)$ are the transforms of $\gamma_0(t)$ and $\gamma_1(t)$, respectively. Note that by Equation 2.38, the transform $F(s)$ is an irrational function of $s$.
To illustrate the above constructions, suppose that
$$f(t) = e^{-at}\left[u(t - 1) - u(t - 2)\right] \qquad (2.39)$$
Writing $f(t)$ in the form of Equation 2.37 gives
$$f(t) = e^{-a}e^{-a(t-1)}u(t - 1) - e^{-2a}e^{-a(t-2)}u(t - 2) \qquad (2.40)$$
Then, transforming Equation 2.40 yields
$$F(s) = \left[e^{-(s+a)} - e^{-2(s+a)}\right]\frac{1}{s + a} \qquad (2.41)$$
Clearly, $F(s)$ is an irrational function of $s$.

2.2.4 Discrete-Time FT
Let $f(k)$ be a real-valued function of the integer-valued variable $k$. The function $f(k)$ can be viewed as a discrete-time signal; in particular, $f(k)$ may be a sampled version of a continuous-time signal $f(t)$. More precisely, $f(k)$ may be equal to the sample values $f(kT)$ of a signal $f(t)$ with $t$ evaluated at the sample times $t = kT$, where $T$ is the sampling interval. In mathematical terms, the sampled signal is given by
$$f(k) = f(t)|_{t=kT} = f(kT), \qquad k = 0, \pm1, \pm2, \ldots \qquad (2.42)$$
Note that we are denoting $f(kT)$ by $f(k)$. The FT of a function $f(k)$ of an integer variable $k$ is defined by
$$F(\Omega) = \sum_{k=-\infty}^{\infty} f(k)\,e^{-j\Omega k}$$

As $\zeta \to \infty$, the pole $p_1$ moves along the negative real axis to the origin of the complex plane and the pole $p_2$ goes to $-\infty$ along the negative real axis of the complex plane. For $\zeta > 1$, $F(s)$ can be expanded by partial fractions as follows:
$$F(s) = \frac{c}{(s - p_1)(s - p_2)} = \frac{c}{p_1 - p_2}\left[\frac{1}{s - p_1} - \frac{1}{s - p_2}\right] \qquad (2.73)$$
Taking the inverse Laplace transform gives
$$f(t) = \frac{c}{p_1 - p_2}\left[e^{p_1t} - e^{p_2t}\right] \qquad (2.74)$$
and thus $f(t)$ is a sum of two decaying real exponentials. Since both poles lie in the OLHP, the FT $F(\omega)$ is given by
$$F(\omega) = \frac{c}{\omega_n^2 - \omega^2 + j2\zeta\omega_n\omega} \qquad (2.75)$$
For the case when $c = \omega_n^2 = 100$ and $\zeta = 2$, the plot of the magnitude spectrum $|F(\omega)|$ is given in Figure 2.3. In this case, the spectral content of the signal $f(t)$ rolls off to zero at the rate of 40 dB/decade, starting from the peak magnitude of 1 at $\omega = 0$.

[FIGURE 2.2 Magnitude spectrum of the exponential function $e^{-t}$.]
[FIGURE 2.3 Magnitude spectrum of the signal with transform $F(s) = 100/(s^2 + 40s + 100)$.]
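The magnitude spectrum of Figure 2.3 can be reproduced by evaluating $F(s)$ along the $j\omega$-axis. A minimal MATLAB sketch, using the parameter values of the example above:

% Illustrative sketch: |F(omega)| for F(s) = c/(s^2 + 2*zeta*wn*s + wn^2)
% with c = wn^2 = 100 and zeta = 2, as in Figure 2.3 and Equation 2.75.
wn = 10; zeta = 2; c = 100;
den = [1, 2*zeta*wn, wn^2];
w = linspace(0, 20, 500);
F = c ./ polyval(den, 1j*w);      % substitute s = j*omega
magF = abs(F);                    % peak value of 1 at omega = 0
plot(w, magF), xlabel('\omega (rad/s)'), ylabel('|F(\omega)|')

Varying zeta in this sketch shows the transition discussed next: for smaller damping ratios a resonant peak appears near $\omega_n$ instead of the monotone rolloff of Figure 2.3.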
When $\zeta = 1$, the poles $p_1$ and $p_2$ of $F(s)$ are both equal to $-\omega_n$, and $F(s)$ becomes
$$F(s) = \frac{c}{(s + \omega_n)^2} \qquad (2.76)$$
Taking the inverse transform gives
$$f(t) = cte^{-\omega_nt} \qquad (2.77)$$
Since $\omega_n$ is assumed to be strictly positive, when $\zeta = 1$ both poles are in the OLHP; in this case, the FT is
$$F(\omega) = \frac{c}{(j\omega + \omega_n)^2} \qquad (2.78)$$
As $\zeta$ varies from 1 to $-1$, the poles of $F(s)$ trace out a circle in the complex plane with radius $\omega_n$. The loci of pole locations are shown in Figure 2.4. Note that the poles begin at $-\omega_n$ when $\zeta = 1$, then split apart and approach the $j\omega$-axis at $\pm j\omega_n$ as $\zeta \to 0$, and then move to $+\omega_n$ as $\zeta \to -1$. For $-1 < \zeta < 1$, the poles are the complex conjugate pair $p_1, p_2 = -\zeta\omega_n \pm j\omega_d$, where
$$\omega_d = \omega_n\sqrt{1 - \zeta^2} > 0 \qquad (2.80)$$
Note that $\omega_d$ is equal to the imaginary part of the pole $p_1$ given by Equation 2.72. Using Table 2.4, we have that the inverse transform of $F(s)$ is
$$f(t) = \frac{c}{\omega_d}\,e^{-\zeta\omega_nt}\sin\omega_dt \qquad (2.81)$$
From Equation 2.81, it is seen that $f(t)$ now contains a sinusoidal factor.

Defining Terms
3-dB bandwidth: The value $B_{3\mathrm{dB}}$ of $\omega$ for which the magnitude spectrum $|F(\omega)|$ is within 3 dB of its peak value when $0 \le \omega \le B_{3\mathrm{dB}}$ and falls below that level for $\omega > B_{3\mathrm{dB}}$.
Bandlimited signal: A signal $f(t)$ whose FT $F(\omega)$ is zero (or approximately zero) for all $\omega > B$, where $B$ is a finite positive number.
Irrational function: A function $F(s)$ of a complex variable $s$ that cannot be expressed as a ratio of polynomials in $s$.
Magnitude spectrum: The magnitude $|F(\omega)|$ of the FT of a function $f(t)$.
One-sided (or unilateral) transform: A transform that operates on a function $f(t)$ defined for $t \ge 0$.
Open left half-plane (OLHP): The set of all complex numbers having negative real part.
Open unit disk: The set of all complex numbers whose magnitude is less than 1.
Phase spectrum: The angle $\angle F(\omega)$ of the FT of a function $f(t)$.
Poles of a rational function $N(s)/D(s)$: The values of $s$ for which $D(s) = 0$, assuming that $N(s)$ and $D(s)$ have no common factors.
Proper rational function: A rational function $N(s)/D(s)$ where the degree of $N(s)$ is less than or equal to the degree of $D(s)$.
Rational function: A ratio of two polynomials $N(s)/D(s)$, where $s$ is a complex variable.
Region of convergence: The set of all complex numbers for which a transform exists (i.e., is well defined) in the ordinary sense.
Residues: The values of the numerator constants in a partial fraction expansion of a rational function.
Strictly proper rational function: A rational function $N(s)/D(s)$ where the degree of $N(s)$ is strictly less than the degree of $D(s)$.
Two-sided (or bilateral) transform: A transform that operates on a function $f(t)$ defined for $-\infty < t < \infty$.

For $n > 1$,
$$\det A = \sum_{k=1}^{n}(-1)^{i+k}a_{ik}\,\Delta_{ik}(A) \quad\text{or}\quad \det A = \sum_{k=1}^{n}(-1)^{i+k}a_{ki}\,\Delta_{ki}(A) \qquad (3.11)$$
These are the Laplace expansions for the determinant corresponding to the $i$th row and $i$th column of $A$, respectively. In these formulas, the quantity $\Delta_{ik}(A)$ is the determinant of the $(n-1)\times(n-1)$ square matrix obtained by deleting the $i$th row and $k$th column of $A$, and similarly for $\Delta_{ki}(A)$.
The quantities $\Delta_{ik}(A)$ and $\Delta_{ki}(A)$ are examples of $(n-1)\times(n-1)$ minors of $A$; for any $k$, $1 \le k \le n-1$, an $(n-k)\times(n-k)$ minor of $A$ is the determinant of an $(n-k)\times(n-k)$ square matrix obtained by deleting some set of $k$ rows and $k$ columns of $A$.
For any $n$, $\det I_n = 1$. For $A \in \mathcal{R}^{2\times2}$, the Laplace expansions lead to the well-known formula
$$\det A = a_{11}a_{22} - a_{12}a_{21}$$
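The Laplace expansion of Equation 3.11 translates directly into a recursive program. The following MATLAB function (an illustrative sketch; the name laplace_det is ours) expands along the first row. It runs in $O(n!)$ time, so it is a teaching device rather than a substitute for the factorization-based det discussed below:

function d = laplace_det(A)
% LAPLACE_DET  Determinant by first-row Laplace expansion, Equation 3.11.
n = size(A, 1);
if n == 1
    d = A(1,1);
    return
end
d = 0;
for k = 1:n
    M = A(2:n, [1:k-1, k+1:n]);                % delete row 1 and column k
    d = d + (-1)^(1+k) * A(1,k) * laplace_det(M);
end
end

For example, with A = magic(3), the value abs(laplace_det(A) - det(A)) is zero to within roundoff.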
3.3.6.1 Properties of the Determinant
Many properties of determinants can be verified directly from the Laplace expansion formulas. For example, consider the elementary row and column operations: replacing any row of a matrix by its sum with another row does not change the value of the determinant and, likewise, replacing any column of a matrix by its sum with another column does not change the value of the determinant; replacing a row (or a column) of a matrix with a nonzero multiple of itself changes the determinant by the same factor; interchanging two rows (or columns) of a matrix changes only the sign of the determinant (i.e., the determinant is multiplied by $-1$).
If $A \in \mathcal{R}^{n\times n}$ and $z \in \mathcal{R}$, then $\det(zA) = z^n\det A$. If $A$ and $B$ are matrices for which both products $AB$ and $BA$ are defined, then $\det(AB) = \det(BA)$. If, in addition, both matrices are square, then
$$\det(AB) = \det(BA) = \det A\,\det B = \det B\,\det A \qquad (3.12)$$
This is the product rule for determinants.

3.3.7 Determinants and Matrix Inverses
3.3.7.1 Characterization of Invertibility
The determinant of an invertible matrix and the determinant of its inverse are reciprocals. If $A$ is invertible, then
$$\det(A^{-1}) = 1/\det A \qquad (3.13)$$
This result indicates that invertibility of matrices is related to the existence of multiplicative inverses in the underlying ring $\mathcal{R}$. In ring-theoretic terminology, the units of $\mathcal{R}$ are those ring elements having multiplicative inverses. When $\mathcal{R}$ is a field, all nonzero elements are units, but for $\mathcal{R} = \mathbb{R}[s]$ (or $\mathbb{C}[s]$), the ring of polynomials with real (or complex) coefficients, only the nonzero constants (i.e., the nonzero polynomials of degree 0) are units.
Determinants provide a characterization of invertibility as follows: the matrix $A \in \mathcal{R}^{n\times n}$ is invertible if and only if $\det A$ is a unit in $\mathcal{R}$. When $\mathcal{R}$ is a field, all nonzero ring elements are units and the criterion for invertibility takes a simpler form: when $\mathcal{R}$ is a field, the matrix $A \in \mathcal{R}^{n\times n}$ is invertible if and only if $\det A \ne 0$.

3.3.8 Cramer's Rule and PLU Factorization
Cramer's rule provides a general formula for the elements of $A^{-1}$ in terms of a ratio of determinants:
$$(A^{-1})_{ij} = (-1)^{i+j}\,\Delta_{ji}(A)/\det A \qquad (3.14)$$
where $\Delta_{ji}(A)$ is the $(n-1)\times(n-1)$ minor of $A$ in which the $j$th row and $i$th column of $A$ are deleted.
If $A$ is a $1\times1$ matrix over $\mathcal{R}$, then it is invertible if and only if it is a unit; when $A$ is invertible, $A^{-1} = 1/A$. (For instance, the $1\times1$ matrix $s$ over the ring of polynomials, $\mathbb{R}[s]$, is not invertible; however, as a matrix over $\mathbb{R}(s)$, the field of rational functions, it is invertible with inverse $1/s$.)
If $A \in \mathcal{R}^{2\times2}$, then $A$ is invertible if and only if $\det A = \Delta = a_{11}a_{22} - a_{21}a_{12}$ is a unit. When $A$ is invertible,
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad\text{and}\quad A^{-1} = \begin{bmatrix} a_{22}/\Delta & -a_{12}/\Delta \\ -a_{21}/\Delta & a_{11}/\Delta \end{bmatrix} \qquad (3.15)$$
A $2\times2$ polynomial matrix has a polynomial matrix inverse just in case $\Delta$ equals a nonzero constant.
Cramer's rule is almost never used for computations because of its computational complexity and numerical sensitivity. When a matrix of real or complex numbers needs to be inverted, certain matrix factorization methods are employed; such factorizations also provide the best methods for numerical computation of determinants.
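Even so, Equation 3.14 can be coded directly as a teaching device. The sketch below (illustrative only; the name cramer_inv is ours) builds $A^{-1}$ entry by entry from signed minors and agrees with inv(A) for well-conditioned numeric matrices, while being far too expensive for serious use:

function B = cramer_inv(A)
% CRAMER_INV  Matrix inverse from Cramer's rule, Equation 3.14.
n = size(A, 1);
d = det(A);                                      % assumed nonzero (A a unit)
B = zeros(n);
for i = 1:n
    for j = 1:n
        M = A([1:j-1, j+1:n], [1:i-1, i+1:n]);   % delete row j, column i
        B(i,j) = (-1)^(i+j) * det(M) / d;        % entry (A^{-1})_{ij}
    end
end
end

For instance, norm(cramer_inv([2 1; 1 3]) - inv([2 1; 1 3])) is zero to within roundoff, and for $n = 2$ the loop reproduces Equation 3.15 exactly.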
Inversion of upper and lower triangular matrices is done by a simple process of back-substitution; the inverses have the same triangular form. This may be exploited in combination with the product rule for inverses (and for determinants), since any invertible matrix $A \in \mathbb{R}^{n\times n}$ ($\mathbb{R}$ can be replaced by another field $F$) can be factored into the form
$$A = PLU \qquad (3.16)$$
where the factors on the right side are, respectively, a permutation matrix, a lower triangular matrix, and an upper triangular matrix. The computation of this PLU factorization is equivalent to the process of Gaussian elimination with pivoting [6]. The resulting expression for the matrix inverse (usually kept in its factored form) is
$$A^{-1} = U^{-1}L^{-1}P^{-1} \qquad (3.17)$$
whereas $\det A = \det P\,\det L\,\det U$ ($\det P = \pm1$, since $P$ is a permutation matrix).

3.3.9 Matrix Transposition
Another operation on matrices that is useful in a number of applications is matrix transposition. If $A$ is an $m\times n$ matrix with $(A)_{ij} = a_{ij}$, the transpose of $A$, denoted $A^T$, is the $n\times m$ matrix given by
$$(A^T)_{ij} = a_{ji} \qquad (3.18)$$
Thus, the transpose of a matrix is formed by interchanging its rows and columns.
If a square matrix $A$ satisfies $A^T = A$, it is called a symmetric matrix. If a square matrix $A$ satisfies $A^T = -A$, it is called a skew-symmetric matrix.
For matrices whose elements may possibly be complex numbers, a generalization of transposition is often more appropriate. The Hermitian transpose of the matrix $A$, denoted $A^H$, is formed by interchanging rows and columns and replacing each element by its complex conjugate:
$$(A^H)_{ij} = \bar{a}_{ji} \qquad (3.19)$$
The matrix $A$ is Hermitian symmetric if $A^H = A$.

3.3.9.1 Properties of Transposition
Several relationships between transposition and other matrix operations are noteworthy. For any matrix, $(A^T)^T = A$; for $A \in \mathbb{R}^{m\times n}$ and $z \in \mathbb{R}$, $(zA)^T = zA^T$. With respect to algebraic operations, $(A + B)^T = A^T + B^T$ and $(AB)^T = B^TA^T$. (The products $AA^T$ and $A^TA$ are always defined.) With respect to determinants and matrix inversion, if $A$ is a square matrix, $\det(A^T) = \det A$, and if $A$ is an invertible matrix, $A^T$ is also invertible, with $(A^T)^{-1} = (A^{-1})^T$. A similar list of properties holds for Hermitian transposition.

3.3.9.2 Orthogonal and Unitary Matrices
Even for $2\times2$ matrices, transposition appears to be a much simpler operation than inversion. Indeed, the class of matrices for which $A^T = A^{-1}$ is quite remarkable. A real matrix whose transpose is also its inverse is known as an orthogonal matrix. (This terminology is in common use, although it would be preferable to use "real unitary matrix," as will become apparent later.) The set of $n\times n$ orthogonal matrices, along with the operation of matrix multiplication, is a group; it is a subgroup of the group of invertible matrices, $GL(\mathbb{R}, n)$. For complex matrices, when $A$ satisfies $A^H = A^{-1}$, it is called a unitary matrix; the unitary matrices form a subgroup of $GL(\mathbb{C}, n)$.
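In MATLAB, the factorization of Equation 3.16 is available through the built-in lu function. Note that lu returns a permutation $P$ satisfying $PA = LU$, so Equation 3.16 is recovered as $A = P^TLU$, with $P^{-1} = P^T$ because a permutation matrix is orthogonal (a point connecting back to Section 3.3.9.2). An illustrative sketch with an invented matrix:

% Illustrative sketch: PLU factorization, Equations 3.16 and 3.17.
A = [2 1 1; 4 3 3; 8 7 9];
[L, U, P] = lu(A);                % P*A = L*U, hence A = P'*L*U
norm(A - P'*L*U)                  % zero to within roundoff
Ainv = U \ (L \ P);               % A^{-1} = U^{-1} L^{-1} P, since P^{-1} = P'
detA = det(P') * prod(diag(U));   % product rule; det L = 1 (unit diagonal)
[detA, det(A)]                    % the two determinant values agree

Keeping the inverse in its factored form, as Equation 3.17 suggests, is exactly what the backslash solves above do: no explicit inverse is ever formed.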
3.3.10 Block Matrices
It is sometimes convenient to partition the rows and columns of a matrix so that the matrix elements are grouped into submatrices. For example, a matrix $A \in \mathbb{R}^{m\times n}$ may be partitioned into $n$ columns (submatrices in $\mathbb{R}^{m\times1}$) or into $m$ rows (submatrices in $\mathbb{R}^{1\times n}$). More generally,
$$A = \begin{bmatrix} A_{11} & \cdots & A_{1q} \\ \vdots & & \vdots \\ A_{p1} & \cdots & A_{pq} \end{bmatrix} \qquad (3.20)$$
where all submatrices in each block row have the same number of rows and all submatrices in each block column have the same number of columns; that is, submatrix $A_{ij}$ is $m_i\times n_j$, with $m_1 + \cdots + m_p = m$ and $n_1 + \cdots + n_q = n$. Such a matrix $A$ is said to be a $p\times q$ block matrix, and it is denoted by $A = (A_{ij})$ for simplicity.
Matrix addition can be carried out blockwise for $p\times q$ block matrices with conformable partitions, where the corresponding submatrices have the same number of rows and columns. Matrix multiplication can also be carried out blockwise, provided the left factor's column partition is compatible with the right factor's row partition: it is required that if $A = (A_{ij})$ is a $p_A\times q_A$ block matrix with block column $i$ having $n_i$ columns, and $B = (B_{ij})$ is a $p_B\times q_B$ block matrix with block row $j$ having $m_j$ rows, then when $q_A = p_B$ and, in addition, $n_i = m_i$ for each $i$, the product matrix $C = AB$ is a $p_A\times q_B$ block matrix $C = (C_{ij})$, where block $C_{ij}$ is given by
$$C_{ij} = \sum_{k=1}^{r}A_{ik}B_{kj} \qquad (3.21)$$
where $r = q_A = p_B$.
For square matrices written as $p\times p$ block matrices having square diagonal blocks $A_{ii}$, the determinant has a blockwise representation. For a square $2\times2$ block matrix,
$$\det A = \det\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \det A_{11}\,\det\left(A_{22} - A_{21}A_{11}^{-1}A_{12}\right) \qquad (3.22)$$
provided $\det A_{11} \ne 0$. If this block matrix is invertible, its inverse may be expressed as a conformable block matrix:
$$A^{-1} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} \qquad (3.23)$$
and, assuming $A_{11}$ is invertible, the blocks of the inverse matrix are $S_{11} = A_{11}^{-1} + A_{11}^{-1}A_{12}\Delta^{-1}A_{21}A_{11}^{-1}$; $S_{21} = -\Delta^{-1}A_{21}A_{11}^{-1}$; $S_{12} = -A_{11}^{-1}A_{12}\Delta^{-1}$; and $S_{22} = \Delta^{-1} = \left(A_{22} - A_{21}A_{11}^{-1}A_{12}\right)^{-1}$.

3.3.11 Matrix Polynomials and the Cayley-Hamilton Theorem
If $A \in \mathcal{R}^{n\times n}$, define $A^0 = I_n$ and $A^r$ equal to the product of $r$ factors of $A$, for integer $r \ge 1$. When $A$ is invertible, $A^{-1}$ has already been introduced as the notation for the inverse matrix. Nonnegative powers of $A^{-1}$ provide the means for defining $A^{-r} = (A^{-1})^r$.
For any polynomial $p(s) = p_0s^k + p_1s^{k-1} + \cdots + p_{k-1}s + p_k$, with coefficients $p_i \in \mathcal{R}$, the matrix polynomial $p(A)$ is defined as $p(A) = p_0A^k + p_1A^{k-1} + \cdots + p_{k-1}A + p_kI$. When the ring of scalars $\mathcal{R}$ is a field (and in some more general cases), $n\times n$ matrices obey certain polynomial equations of the form $p(A) = 0$; such a polynomial $p(s)$ is an annihilating polynomial of $A$. The monic annihilating polynomial of least degree is called the minimal polynomial of $A$; the minimal polynomial is the (monic) greatest common divisor of all annihilating polynomials. The degree of the minimal polynomial of an $n\times n$ matrix is never larger than $n$ because of the following remarkable result.

3.3.11.1 Cayley-Hamilton Theorem
Let $A \in \mathcal{R}^{n\times n}$, where $\mathcal{R}$ is a field. Let $\chi(s)$ be the $n$th degree monic polynomial defined by
$$\chi(s) = \det(sI - A) \qquad (3.24)$$
Then $\chi(A) = 0$.
The polynomial $\chi(s) = \det(sI - A)$ is called the characteristic polynomial of $A$.

3.3.12 Equivalence for Polynomial Matrices
Multiplication by $A^{-1}$ transforms an invertible matrix $A$ to a simple form: $AA^{-1} = I$ and $A^{-1}A = I$. For $A \in \mathcal{R}^{n\times n}$ with $\det A \ne 0$ but $\det A$ not equal to a unit in $\mathcal{R}$, transformations of the form $A \mapsto PAQ$, where $P, Q \in \mathcal{R}^{n\times n}$ are invertible matrices, produce $\det A \mapsto \det P\,\det A\,\det Q$; that is, the determinant is multiplied by the invertible element $\det P\,\det Q \in \mathcal{R}$. Thus, invertible matrices $P$ and $Q$ can be sought to bring the product $PAQ$ to some simplified form even when $A$ is not invertible; $PAQ$ and $A$ are said to be related by $\mathcal{R}$-equivalence.
For equivalence of polynomial matrices (see [5] for details), where $\mathcal{R} = \mathbb{R}[s]$ (or $\mathbb{C}[s]$), let $P(s)$ and $Q(s)$ be invertible $n$