
  • THE CONTROL HANDBOOK (VOLUME I)

    EDITOR WILLIAM S. LEVINE

    JAICO PUBLISHING HOUSE
    MUMBAI DELHI CALCUTTA BANGALORE HYDERABAD CHENNAI

    © CRC PRESS / IEEE PRESS. A CRC Press Handbook published in cooperation with IEEE Press.

  • © CRC Press, Inc.

    All rights reserved. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

    Published in arrangement with: CRC Press, Inc.,
    2000 Corporate Blvd. N.W., Boca Raton,
    Florida 33431.

    THE CONTROL HANDBOOK ISBN 81-7224-785-0
    (2 Volume Set)

    Jaico First Impression: 1999

    Published by: Ashwin J. Shah,
    Jaico Publishing House, 121, M.G. Road,
    Mumbai - 400 023

    Printed by: Efficient Offset Printers
    215, Shahzada Bagh Industrial Complex Phase II, Delhi - 110 035

  • Preface

    The purpose of The Control Handbook is to put the tools of control theory and practice into the hands of the reader. This means that the tools are not just described. Their use is explained and illustrated. Of course, one cannot expect to become an expert on a subject as vast and complicated as control from just one book, no matter how large. References are given to more detailed and specialized works on each of the tools.

    One of the major challenges in compiling this book is the breadth and diversity of the subject. Control technology is remarkably varied. Control system implementations range from float valves to microprocessors. Control system applications include regulating the amount of water in a toilet tank, controlling the flow and generation of electrical power over huge geographic regions, regulating the behavior of gasoline engines, controlling the thickness of rolled products as varied as paper and sheet steel, and hundreds of controllers hidden in consumer products of all kinds. The different applications often require unique sensors and actuators. It quickly became obvious that it would be impossible to include a thorough and useful description of actuation and sensing in this handbook. Sensors and actuators are covered in another handbook in this series, the Measurement and Instrumentation Handbook. The Control Handbook thoroughly covers control theory and implementations from the output of the sensor to the input to the actuator, that is, those aspects of control that are universal.

    The book is organized in three major sections, Fundamentals, Advanced Methods, and Applications. The Fundamentals are just what the name implies, the basics of control engineering. Note that this section includes major subsections on digital control and modeling of dynamical systems. There are also chapters on specification of control systems, techniques for dealing with the most common and important control system nonlinearities, and digital implementation of control systems.

    The section on Advanced Methods consists of chapters dealing with more difficult and more specialized control problems. Thus, this section contains subsections devoted to the analysis and design of multiple-input multiple-output systems, adaptive control, nonlinear control, stochastic control, and the control of distributed parameter systems.

    The Applications section is included for several reasons. First, these chapters illustrate the diversity of control systems. Second, they provide examples of how the theory can be applied to specific practical problems. Third, they contain important information about aspects of control that are not fully captured by the theory, such as techniques for protecting against controller failure and the role of cost and complexity in specifying controller designs.

    The Control Handbook is designed to be used as a traditional handbook. That is, if you have a question about some topic in control you should be able to find an article dealing with that topic in the book. However, I believe the handbook can also be used in several other ways. It is a picture of the present state-of-the-art. Browsing through it is a way to discover a great deal about control. Reading it carefully is a way to learn the subject of control.

  • Acknowledgments

    I want to thank, first of all, Professor Richard C. Dorf, Editor-in-Chief of the Electrical Engineering Handbook Series, for inviting me to edit The Control Handbook.

    Several people helped make the job of editor much easier and more pleasant than I expected it to be. I cannot imagine how the book could have been completed without Joel Claypool, Engineering Publisher for CRC Press. He had a good solution to every problem and a calm confidence in the ultimate completion of the book that was very comforting and ultimately justified. His assistants, Michelle Veno and Marlaine Beratta, could not have been more efficient or more helpful. Susan Fox did an excellent job as production editor. My editorial assistant, and daughter, Eleanor J. Levine proved to be both gifted at her job and fun to work with. Mrs. Patricia Keehn did the typing quickly, accurately and elegantly, as she always does.

    Control is an extremely broad and diverse subject. No one person, and certainly not this one, could possibly have the breadth and depth of knowledge necessary to organize this handbook. The Advisory Board provided sound advice on every aspect of the book. Professor Mark Spong volunteered to organize the section on robotics and did a sterling job.

    My wife, Shirley Johannesen Levine, deserves a substantial share of the credit for everything I have done.

    Last, but most important, I would like to thank the authors of the chapters in this book. Only well-respected experts were asked to write articles. Such people are always overworked. I am very grateful to all of them for finding the time and energy to contribute to the handbook.

  • Advisory Board

    Professor Karl J. Åström Lund Institute of Technology

    Professor Michael Athans Massachusetts Institute of Technology

    Professor John Baillieul Boston University

    Professor Robert R. Bitmead Australian National University

    Professor Petar Kokotović University of California-Santa Barbara

    Dr. Michael J. Piovoso E.I. du Pont de Nemours & Co.

    Professor Wilson J. Rugh The Johns Hopkins University

  • Contributors

    Eyad H. Abed Department of Electrical Engineering and the Institute for Systems Research, University of Maryland, College Park, MD

    Anders Ahlen Systems and Control Group, Department of Technology, Uppsala University, Uppsala, Sweden

    Albert N. Andry, Jr. Teledyne Electronic Devices, Marina del Rey, CA

    Panos J. Antsaklis Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN

    Brian Armstrong Department of Electrical Engineering and Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI

    Karl J. Åström Department of Automatic Control, Lund Institute of Technology, Lund, Sweden

    Michael Athans Massachusetts Institute of Technology, Cambridge, MA

    Derek P. Atherton School of Engineering, The University of Sussex

    David M. Auslander Mechanical Engineering Department, University of California at Berkeley

    J. Baillieul Boston University

    V. Balakrishnan Purdue University

    Gary J. Balas Aerospace Engineering and Mechanics, University of Minnesota, Minnesota, MN

    Maria Domenica Di Benedetto Dipartimento di Ingegneria Elettrica, Università di L'Aquila, Monteluco di Roio (L'Aquila)

    W.L. Bialkowski EnTech Control Engineering Inc.

    Robert H. Bishop The University of Texas at Austin

    F. Blanchini Dipartimento di Matematica e Informatica, Università di Udine, Udine, Italy

    Okko H. Bosgra Mechanical Engineering Systems and Control Group, Delft University of Technology, Delft, The Netherlands

    S. Boyd Department of Electrical Engineering, Stanford University, Stanford, CA

    Richard D. Braatz University of Illinois, Department of Chemical Engineering, Urbana, IL

    Herman Bruyninckx Katholieke Universiteit Leuven, Department of Mechanical Engineering, Leuven, Belgium

    Christopher I. Byrnes Department of Systems Sciences and Mathematics, Washington University, St. Louis, MO

    François E. Cellier Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ

    Alan Chao Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA

    Y. Cho Department of Electrical Engineering, Stanford University, Stanford, CA

    David W. Clarke Department of Engineering Science, Parks Road, Oxford, UK

    Charles M. Close Electrical, Computer, and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, NY

    J. A. Cook Ford Motor Company, Scientific Research Laboratory, Control Systems Department, Dearborn, MI

    Vincent T. Coppola Department of Aerospace Engineering, The University of Michigan, Ann Arbor, MI

    Bruce G. Coury The Johns Hopkins University, Applied Physics Laboratory, Laurel, MD

    John J. D'Azzo Air Force Institute of Technology

    Munther A. Dahleh Lab. for Information and Decision Systems, M.I.T., Cambridge, MA

    C. Davis Semiconductor Process and Design Center, Texas Instruments, Dallas, TX

    Edward J. Davison Department of Electrical & Computer Engineering, University of Toronto, Toronto, Ontario, Canada

    R. A. DeCarlo School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN

    David F. Delchamps Cornell University, Ithaca, NY

    Bradley W. Dickinson Princeton University

  • Rik W. De Doncker Silicon Power Corporation, Malvern, PA

    G. Franklin Department of Electrical Engineering, Stanford University, Stanford, CA

    Simon Grocott Space Engineering Research Center, Massachusetts Institute of Technology, Cambridge, MA

    Dean K. Frederick Electrical, Computer, and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, NY

    John A. Gubner University of Wisconsin-Madison

    Richard C. Dorf University of California, Davis

    Joel Douglas Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA

    Randy A. Freeman University of California, Santa Barbara

    P. Gyugyi Department of Electrical Engineering, Stanford University, Stanford, CA

    S. V. Drakunov Department of Electrical Engineering, Tulane University, New Orleans, LA

    James S. Freudenberg Dept. Electrical Engineering & Computer Science, University of Michigan, Ann Arbor, MI

    David Haessig GEC-Marconi Systems Corporation, Wayne, NJ

    T.E. Duncan Department of Mathematics, University of Kansas, Lawrence, KS

    Bernard Friedland Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ

    Tore Hägglund Department of Automatic Control, Lund Institute of Technology, Lund, Sweden

    John M. Edmunds UMIST, Manchester, England

    T.T. Georgiou Department of Electrical Engineering, University of Minnesota

    Fumio Hamano California State University, Long Beach

    Hilding Elmqvist Dynasim AB, Research Park Ideon, Lund, Sweden

    Jay A. Farrell College of Engineering, University of California, Riverside

    James T. Gillis The Aerospace Corp., Los Angeles, CA

    R. A. Hess University of California, Davis

    G.C. Goodwin Department of Electrical and Computer Engineering, University of Newcastle, Newcastle, Australia

    Gene H. Hostetter

    Clifford C. Federspiel Johnson Controls, Inc., Milwaukee, WI

    Stefan F. Graebe PROFACTOR GmbH, Steyr, Austria

    Constantine H. Houpis Air Force Institute of Technology, Wright-Patterson AFB, OH

    Xiangbo Feng Department of Systems Engineering, Case Western Reserve University, Cleveland, OH

    C. W. Gray The Aerospace Corporation, El Segundo, CA

    Petros Ioannou University of Southern California, EE-Systems, MC-2562, Los Angeles, CA

    A. Feuer Electrical Engineering Department, Technion-Israel Institute of Technology, Haifa, Israel

    M.J. Grimble Industrial Control Centre, University of Strathclyde, Glasgow, Scotland, U.K.

    Alberto Isidori Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Rome, and Department of Systems Sciences and Mathematics, Washington University, St. Louis, MO

    Bruce A. Francis Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada

    J. W. Grizzle Department of EECS, Control Systems Laboratory, University of Michigan, Ann Arbor, MI

    Thomas M. Jahns GE Corporate R&D, Schenectady, NY

  • Hodge Jenkins The George W. Woodruff School of Mechanical Engineering, The Georgia Institute of Technology, Atlanta, GA

    Miroslav Krstić Department of Mechanical Engineering, University of Maryland, College Park, MD

    F. L. Lewis Automation and Robotics Research Institute, The University of Texas at Arlington, Ft. Worth, TX

    Christopher P. Jobling Department of Electrical and Electronic Engineering, University of Wales, Swansea, Singleton Park, Wales, UK

    Vladimír Kučera Institute of Information Theory and Automation, Prague, Academy of Sciences of the Czech Republic

    M. K. Liubakka Advanced Vehicle Technology, Ford Motor Company, Dearborn, MI

    M.A. Johnson Industrial Control Centre, University of Strathclyde, Glasgow, Scotland, U.K.

    P. R. Kumar Department of Electrical and Computer Engineering and Coordinated Science Laboratory, University of Illinois, Urbana, IL

    Lennart Ljung Department of Electrical Engineering, Linköping University, Sweden

    Thomas R. Kurfess The George W. Woodruff School of Mechanical Engineering, The Georgia Institute of Technology, Atlanta, GA

    Jason C. Jones Mechanical Engineering Department, University of California at Berkeley

    Douglas P. Looze Dept. Electrical and Computer Engineering, University of Massachusetts, Amherst, MA

    S. M. Joshi NASA Langley Research Center

    Harry G. Kwatny Drexel University

    Kenneth A. Loparo Department of Systems Engineering, Case Western Reserve University, Cleveland, OH

    Leonard Lublin Space Engineering Research Center, Massachusetts Institute of Technology, Cambridge, MA

    V. Jurdjevic Department of Mathematics, University of Toronto, Ontario, Canada

    J. E. Lagnese Department of Mathematics, Georgetown University, Washington, DC

    T. Kailath Department of Electrical Engineering, Stanford University, Stanford, CA

    Françoise Lamnabhi-Lagarrigue Laboratoire des Signaux et Systèmes, CNRS, Supélec, Gif-sur-Yvette, France

    Claudio Maffezzoni Politecnico Di Milano

    Einar V. Larsen GE Power Systems Engineering, Schenectady, NY

    Mohamed Mansour Swiss Federal Institute of Technology (ETH)

    Edward W. Kamen School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA

    B.P. Lathi California State University, Sacramento

    N. Harris McClamroch Department of Aerospace Engineering, The University of Michigan, Ann Arbor, MI

    M. R. Katebi Industrial Control Centre, Strathclyde University, Glasgow, Scotland

    R. H. Middleton Department of Electrical and Computer Engineering, University of Newcastle, NSW, Australia

    A. G. Kelkar NASA Langley Research Center

    A. J. Laub Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA

    M. Moslehi Semiconductor Process and Design Center, Texas Instruments, Dallas, TX

    Hassan K. Khalil Michigan State University

    B. Lehman Northeastern University

    Neil Munro UMIST, Manchester, England

    Petar V. Kokotović University of California, Santa Barbara

    G. Leugering Fakultät für Mathematik und Physik, University of Bayreuth, Bayreuth, Germany

    Karlene A. Kosanovich Department of Chemical Engineering, University of South Carolina, Columbia, SC

    William S. Levine Department of Electrical Engineering, University of Maryland, College Park, MD

    Norman S. Nise California State Polytechnic University, Pomona

  • S. Norman Department of Electrical Engineering, Stanford University, Stanford, CA

    Katsuhiko Ogata University of Minnesota

    Gustaf Olsson Dept. of Industrial Electrical Engineering and Automation, Lund Institute of Technology, Lund, Sweden

    A.W. Ordys Industrial Control Centre, University of Strathclyde, Glasgow, Scotland, U.K.

    Martin Otter Institute for Robotics and System Dynamics, German Aerospace Research Establishment Oberpfaffenhofen (DLR), Wessling, Germany

    M. Pachter Department of Electrical and Computer Engineering, Air Force Institute of Technology, Wright-Patterson AFB, OH

    Andy Packard Mechanical Engineering, University of California, Berkeley, CA

    Z.J. Palmor Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa, Israel

    P. Park Department of Electrical Engineering, Stanford University, Stanford, CA

    John J. Paserba GE Power Systems Engineering, Schenectady, NY

    B. Pasik-Duncan Department of Mathematics, University of Kansas, Lawrence, KS

    Kevin M. Passino Department of Electrical Engineering, Ohio State University, Columbus, OH

    Stephen D. Patek Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA

    R.V. Patel Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada

    A.W. Pike Industrial Control Centre, University of Strathclyde, Glasgow, Scotland, U.K.

    Michael J. Piovoso DuPont Central Science & Engineering, Wilmington, DE

    L. Praly Centre Automatique et Systèmes, École des Mines de Paris

    Jörg Raisch Institut für Systemdynamik und Regelungstechnik, Universität Stuttgart, Stuttgart, Germany

    D.S. Rhode Advanced Vehicle Technology, Ford Motor Company, Dearborn, MI

    John R. Ridgely Mechanical Engineering Department, University of California at Berkeley

    C. Magnus Rimvall ABB Corporate Research and Development, Heidelberg, Germany

    Charles E. Rohrs Tellabs, Mishawaka, IN

    David L. Russell Department of Mathematics, Virginia Tech, Blacksburg, VA

    Juan J. Sanchez-Gasca GE Power Systems Engineering, Schenectady, New York

    Mohammed S. Santina The Aerospace Corporation, Los Angeles, CA

    K. Saraswat Department of Electrical Engineering, Stanford University, Stanford, CA

    C. Schaper Department of Electrical Engineering, Stanford University, Stanford, CA

    Gerrit Schootstra Philips Research Laboratories, Eindhoven, The Netherlands

    Joris De Schutter Katholieke Universiteit Leuven, Department of Mechanical Engineering, Leuven, Belgium

    John E. Seem Johnson Controls, Inc., Milwaukee, WI

    Thomas I. Seidman Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD

    M. E. Sezer Bilkent University, Ankara, Turkey

    S. Shakoor Industrial Control Centre, University of Strathclyde, Glasgow, Scotland, U.K.

    Jeff S. Shamma Center for Control and Systems Research, Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, TX

    Eliezer Y. Shapiro HR Textron, Valencia, CA

    F. Greg Shinskey Process Control Consultant, North Sandwich, NH

    Adam Shwartz Electrical Engineering, Technion-Israel Institute of Technology, Haifa, Israel

    D. D. Siljak Santa Clara University, Santa Clara, CA

    Kenneth M. Sobel Department of Electrical Engineering, The City College of New York, New York, NY

  • Torsten Söderström Systems and Control Group, Uppsala University, Uppsala, Sweden

    E. Sontag Department of Mathematics, Rutgers University

    Mark W. Spong The Coordinated Science Laboratory, University of Illinois at Urbana-Champaign

    Raymond T. Stefani Electrical Engineering Department, California State University, Long Beach

    Maarten Steinbuch Philips Research Laboratories, Eindhoven, The Netherlands

    Allen R. Stubberud University of California, Irvine, Irvine, CA

    J. Sun Ford Motor Company, Scientific Research Laboratory, Control Systems Department, Dearborn, MI

    Jacob Tal Galil Motion Control, Inc.

    David G. Taylor Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, GA

    A.R. Teel Department of Electrical Engineering, University of Minnesota

    R. Tempo CENS-CNR, Politecnico di Torino, Torino, Italy

    Alberto Tesi Dipartimento di Sistemi e Informatica, Universiti di Firenze, Firenze, Italy

    A. L. Tits University of Maryland

    P.M. Van Dooren Department of Mathematical Engineering, Université Catholique de Louvain, Belgium

    George C. Verghese Massachusetts Institute of Technology

    Hua O. Wang United Technologies Research Center, East Hartford, CT

    John Ting-Yung Wen Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute

    Trevor Williams Department of Aerospace Engineering and Engineering Mechanics, University of Cincinnati, Cincinnati, OH

    J. R. Winkelman Advanced Vehicle Technology, Ford Motor Company, Dearborn, MI

    Carlos Canudas de Wit Laboratoire d'Automatique de Grenoble, ENSIEG, Grenoble, France

    William A. Wolovich Brown University

    Jiann-Shiou Yang Department of Electrical and Computer Engineering, University of Minnesota, Duluth, MN

    Stephen Yurkovich Department of Electrical Engineering, The Ohio State University, Columbus, OH

    S. H. Żak School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN

  • Contents

    PART A FUNDAMENTALS OF CONTROL

    SECTION I Mathematical Foundations

    1 Ordinary Linear Differential and Difference Equations  B.P. Lathi  3

    2 The Fourier, Laplace, and z-Transforms  Edward W. Kamen  17

    3 Matrices and Linear Algebra  Bradley W. Dickinson  33

    4 Complex Variables  C. W. Gray  51

    SECTION II Models for Dynamical Systems

    5 Standard Mathematical Models  65
    5.1 Input-Output Models  William S. Levine  65
    5.2 State Space  James T. Gillis  72

    6 Graphical Models  85
    6.1 Block Diagrams  Dean K. Frederick and Charles M. Close  85
    6.2 Signal-Flow Graphs  Norman S. Nise  93

    7 Determining Models  99
    7.1 Modeling from Physical Principles  François E. Cellier, Hilding Elmqvist, and Martin Otter  99
    7.2 System Identification When Noise Is Negligible  William S. Levine  108

    SECTION III Analysis and Design Methods for Continuous-Time Systems

    8 Analysis Methods  115
    8.1 Time Response of Linear Time-Invariant Systems  Raymond T. Stefani  115
    8.2 Controllability and Observability  William A. Wolovich  121

    9 Stability Tests  131
    9.1 The Routh-Hurwitz Stability Criterion  Robert H. Bishop and Richard C. Dorf  131
    9.2 The Nyquist Stability Test  Charles E. Rohrs  135
    9.3 Discrete-Time and Sampled-Data Stability Tests  Mohamed Mansour  146
    9.4 Gain Margin and Phase Margin  Raymond T. Stefani  152

    10 Design Methods  157
    10.1 Specification of Control Systems  Jiann-Shiou Yang and William S. Levine  158
    10.2 Design Using Performance Indices  Richard C. Dorf and Robert H. Bishop  169
    10.3 Nyquist, Bode, and Nichols Plots  John J. D'Azzo and Constantine H. Houpis  173
    10.4 The Root Locus Plot  William S. Levine  192
    10.5 PID Control  Karl J. Åström and Tore Hägglund  198
    10.6 State Space - Pole Placement  Katsuhiko Ogata  209
    10.7 Internal Model Control  Richard D. Braatz  215
    10.8 Time-Delay Compensation - Smith Predictor and its Modifications  Z. J. Palmor  224

    SECTION IV Digital Control

    11 Discrete-Time Systems  Mohammed S. Santina, Allen R. Stubberud, and Gene H. Hostetter  239

    12 Sampled Data Systems  A. Feuer and G.C. Goodwin  253

    13 Discrete-Time Equivalents to Continuous-Time Systems  Mohammed S. Santina, Allen R. Stubberud, and Gene H. Hostetter  265

    14 Design Methods for Discrete-Time, Linear Time-Invariant Systems  Mohammed S. Santina, Allen R. Stubberud, and Gene H. Hostetter  281

    15 Quantization Effects  Mohammed S. Santina, Allen R. Stubberud, and Gene H. Hostetter  301

    16 Sample-Rate Selection  Mohammed S. Santina, Allen R. Stubberud, and Gene H. Hostetter  313

    17 Real-Time Software for Implementation of Feedback Control  David M. Auslander, John R. Ridgely, and Jason C. Jones  323

    18 Programmable Controllers  Gustaf Olsson  345

    SECTION V Analysis and Design Methods for Nonlinear Systems

    19 Analysis Methods  Derek P. Atherton  363

    20 Design Methods  377
    20.1 Dealing with Actuator Saturation  R. H. Middleton  377
    20.2 Bumpless Transfer  Stefan F. Graebe and Anders Ahlén  381
    20.3 Linearization and Gain-Scheduling  Jeff S. Shamma  388

    SECTION VI Software for Control System Analysis and Design

    21 Numerical and Computational Issues in Linear Control and System Theory  A. J. Laub, R. V. Patel, and P.M. Van Dooren  399

    22 Software for Modeling and Simulating Control Systems  Martin Otter and François E. Cellier  415

    23 Computer-Aided Control Systems Design  C. Magnus Rimvall and Christopher P. Jobling  429

    PART B ADVANCED METHODS OF CONTROL

    SECTION VII Analysis Methods for MIMO Linear Systems

    24 Multivariable Poles, Zeros, and Pole-Zero Cancellations  Joel Douglas and Michael Athans  445

    25 Fundamentals of Linear Time-Varying Systems  Edward W. Kamen  451

    26 Geometric Theory of Linear Systems  Fumio Hamano  469

    27 Polynomial and Matrix Fraction Descriptions  David F. Delchamps  481

    28 Robustness Analysis with Real Parametric Uncertainty  R. Tempo and F. Blanchini  495

    29 MIMO Frequency Response Analysis and the Singular Value Decomposition  Stephen D. Patek and Michael Athans  507

    30 Stability Robustness to Unstructured Uncertainty for Linear Time Invariant Systems  Alan Chao and Michael Athans  519

    31 Tradeoffs and Limitations in Feedback Systems  Douglas P. Looze and James S. Freudenberg  537

    32 Modeling Deterministic Uncertainty  Jörg Raisch and Bruce A. Francis  551

    33 The Use of Multivariate Statistics in Process Control  Michael J. Piovoso and Karlene A. Kosanovich  561

    SECTION VIII Kalman Filter and Observers

    34 Linear Systems and White Noise  William S. Levine  575

    35 Kalman Filtering  Michael Athans  589

    36 Riccati Equations and their Solution  Vladimír Kučera  595

    37 Observers  Bernard Friedland  607

    SECTION IX Design Methods for MIMO LTI Systems

    38 Eigenstructure Assignment  Kenneth M. Sobel, Eliezer Y. Shapiro, and Albert N. Andry, Jr.  621

    39 Linear Quadratic Regulator Control  Leonard Lublin and Michael Athans  635

    40 H2 (LQG) and H∞ Control  Leonard Lublin, Simon Grocott, and Michael Athans  651

    41 ℓ1 Robust Control: Theory, Computation and Design  Munther A. Dahleh  663

    42 The Structured Singular Value (μ) Framework  Gary J. Balas and Andy Packard  671

    43 Algebraic Design Methods  Vladimír Kučera  689

    44 Quantitative Feedback Theory (QFT) Technique  Constantine H. Houpis  701

    45 The Inverse Nyquist Array and Characteristic Locus Design Methods  Neil Munro and John M. Edmunds  719

    46 Robust Servomechanism Problem  Edward J. Davison  731

    47 Numerical Optimization-Based Design  V. Balakrishnan and A. L. Tits  749

    48 Optimal Control  F. L. Lewis  759

    49 Decentralized Control  M. E. Sezer and D. D. Šiljak  779

    50 Decoupling  Trevor Williams and Panos J. Antsaklis  795

    51 Predictive Control  A.W. Pike, M.J. Grimble, M.A. Johnson, A.W. Ordys, and S. Shakoor  805


    SECTION X Adaptive Control

52 Automatic Tuning of PID Controllers Tore Hägglund and Karl J. Åström . . . 817
53 Self-Tuning Control David W. Clarke . . . 827
54 Model Reference Adaptive Control Petros Ioannou . . . 847

    SECTION XI Analysis and Design of Nonlinear Systems

55 Analysis and Design of Nonlinear Systems . . . 861
55.1 The Lie Bracket and Control V. Jurdjevic . . . 861
55.2 Two-Time-Scale and Averaging Methods Hassan K. Khalil . . . 873
55.3 Volterra and Fliess Series Expansions for Nonlinear Systems Françoise Lamnabhi-Lagarrigue . . . 879
56 Stability . . . 889
56.1 Lyapunov Stability Hassan K. Khalil . . . 889
56.2 Input-Output Stability A. R. Teel, T. T. Georgiou, L. Praly, and E. Sontag . . . 895
57 Design Methods . . . 909
57.1 Feedback Linearization of Nonlinear Systems Alberto Isidori and Maria Domenica Di Benedetto . . . 909
57.2 Nonlinear Zero Dynamics Alberto Isidori and Christopher I. Byrnes . . . 917
57.3 Nonlinear Output Regulation Alberto Isidori . . . 923
57.4 Lyapunov Design Randy A. Freeman and Petar V. Kokotović . . . 932
57.5 Variable Structure, Sliding-Mode Controller Design R. A. DeCarlo, S. H. Żak, and S. V. Drakunov . . . 941
57.6 Control of Bifurcations and Chaos Eyad H. Abed, Hua O. Wang, and Alberto Tesi . . . 951
57.7 Open-Loop Control Using Oscillatory Inputs J. Baillieul and B. Lehman . . . 967
57.8 Adaptive Nonlinear Control Miroslav Krstić and Petar V. Kokotović . . . 980
57.9 Intelligent Control Kevin M. Passino . . . 994
57.10 Fuzzy Control Kevin M. Passino and Stephen Yurkovich . . . 1001
57.11 Neural Control Jay A. Farrell . . . 1017

SECTION XII System Identification

    58 System Identification Lennart Ljung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1033

SECTION XIII Stochastic Control

    59 Discrete Time Markov Processes Adam Shwartz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057

60 Stochastic Differential Equations John A. Gubner . . . 1067

61 Linear Stochastic Input-Output Models Torsten Söderström . . . 1079
62 Minimum Variance Control M. R. Katebi and A. W. Ordys . . . 1089
63 Dynamic Programming P. R. Kumar . . . 1097
64 Stability of Stochastic Systems Kenneth A. Loparo and Xiangbo Feng . . . 1105
65 Stochastic Adaptive Control T. E. Duncan and B. Pasik-Duncan . . . 1127

    SECTION XIV Control of Distributed Parameter Systems

66 Controllability of Thin Elastic Beams and Plates J. E. Lagnese and G. Leugering . . . 1139
67 Control of the Heat Equation Thomas I. Seidman . . . 1157
68 Observability of Linear Distributed-Parameter Systems David L. Russell . . . 1169

    PART C APPLICATIONS OF CONTROL

    SECTION XV Process Control

69 Water Level Control for the Toilet Tank: A Historical Perspective Bruce G. Coury . . . 1179

70 Temperature Control in Large Buildings Clifford C. Federspiel and John E. Seem . . . 1191
71 Control of pH F. Greg Shinskey . . . 1205
72 Control of the Pulp and Paper Making Process W. L. Bialkowski . . . 1219
73 Control for Advanced Semiconductor Device Manufacturing: A Case History T. Kailath, C. Schaper, Y. Cho, P. Gyugyi, S. Norman, P. Park, S. Boyd, G. Franklin, K. Saraswat, M. Moslehi, and C. Davis . . . 1243

    SECTION XVI Mechanical Control Systems

74 Automotive Control Systems . . . 1261
74.1 Engine Control J. A. Cook, J. W. Grizzle, and J. Sun . . . 1261
74.2 Adaptive Automotive Speed Control M. K. Liubakka, D. S. Rhode, J. R. Winkelman, and P. V. Kokotović . . . 1274
75 Aerospace Controls . . . 1287
75.1 Flight Control of Piloted Aircraft M. Pachter and C. H. Houpis . . . 1287
75.2 Spacecraft Attitude Control Vincent T. Coppola and N. Harris McClamroch . . . 1303
75.3 Control of Flexible Space Structures S. M. Joshi and A. G. Kelkar . . . 1316
75.4 Line-of-Sight Pointing and Stabilization Control System David Haessig . . . 1326
76 Control of Robots and Manipulators . . . 1339
76.1 Motion Control of Robot Manipulators Mark W. Spong . . . 1339
76.2 Force Control of Robot Manipulators Joris De Schutter and Herman Bruyninckx . . . 1351
76.3 Control of Nonholonomic Systems John Ting-Yung Wen . . . 1359
77 Miscellaneous Mechanical Control Systems . . . 1369
77.1 Friction Modeling and Compensation Brian Armstrong and Carlos Canudas de Wit . . . 1369
77.2 Motion Control Systems Jacob Tal . . . 1382
77.3 Ultra-High Precision Control Thomas R. Kurfess and Hodge Jenkins . . . 1386
77.4 Robust Control of a Compact Disc Mechanism Maarten Steinbuch, Gerrit Schootstra, and Okko H. Bosgra . . . 1405

    SECTION XVII Electrical and Electronic Control Systems

78 Power Electronic Controls . . . 1413
78.1 Dynamic Modeling and Control in Power Electronics George C. Verghese . . . 1413
78.2 Motion Control with Electric Motors by Input-Output Linearization David G. Taylor . . . 1424
78.3 Control of Electrical Generators Thomas M. Jahns and Rik W. De Doncker . . . 1437
79 Control of Electrical Power . . . 1453
79.1 Control of Electric Power Generating Plants Harry G. Kwatny and Claudia Maffezzoni . . . 1453
79.2 Control of Power Transmission John J. Paserba, Juan J. Sanchez-Gasca, and Einar V. Larsen . . . 1483

    SECTION XVIII Control Systems Including Humans

80 Human-in-the-Loop Control R. A. Hess . . . 1497

Index . . . 1507

PART A FUNDAMENTALS OF CONTROL

SECTION I Mathematical Foundations

1 Ordinary Linear Differential and Difference Equations

B.P. Lathi, California State University, Sacramento

1.1 Differential Equations . . . 3
    Classical Solution • Method of Convolution
1.2 Difference Equations . . . 9
    Initial Conditions and Iterative Solution • Classical Solution • Method of Convolution
References . . . 15

A function containing variables and their derivatives is called a differential expression, and an equation involving differential expressions is called a differential equation. A differential equation is an ordinary differential equation if it contains only one independent variable; it is a partial differential equation if it contains more than one independent variable. We shall deal here only with ordinary differential equations.

In mathematical texts, the independent variable is generally x, which can be anything, such as time, distance, velocity, pressure, and so on. In most applications in control systems, the independent variable is time. For this reason we shall use here the independent variable t for time, although it can stand for any other variable as well.

The following equation

(d²y/dt²)⁴ + 3 (dy/dt) + 5y²(t) = sin t

is an ordinary differential equation of second order because the highest derivative is of the second order. An nth-order differential equation is linear if it is of the form

a_n(t) dⁿy/dtⁿ + a_{n−1}(t) d^{n−1}y/dt^{n−1} + · · · + a_1(t) dy/dt + a_0(t) y(t) = r(t)    (1.1)

where the coefficients a_i(t) are not functions of y(t). If these coefficients (a_i) are constants, the equation is linear with constant coefficients. Many engineering (as well as nonengineering) systems can be modeled by these equations. Systems modeled by these equations are known as linear time-invariant (LTI) systems. In this chapter we shall deal exclusively with linear differential equations with constant coefficients. Certain other forms of differential equations are dealt with elsewhere in this volume.

0-8493-8570-9/96/$0.00+$.50 © 1996 by CRC Press, Inc.

Role of Auxiliary Conditions in Solution of Differential Equations

We now show that a differential equation does not, in general, have a unique solution unless some additional constraints (or conditions) on the solution are known. This fact should not come as a surprise. A function y(t) has a unique derivative dy/dt, but for a given derivative dy/dt there are infinitely many possible functions y(t). If we are given dy/dt, it is impossible to determine y(t) uniquely unless an additional piece of information about y(t) is given. For example, the solution of the differential equation

dy/dt = 2    (1.2)

obtained by integrating both sides of the equation is

y(t) = 2t + c    (1.3)

for any value of c. Equation 1.2 specifies a function whose slope is 2 for all t. Any straight line with a slope of 2 satisfies this equation. Clearly the solution is not unique, but if we place an additional constraint on the solution y(t), then we specify a unique solution. For example, suppose we require that y(0) = 5; then, out of all the possible solutions available, only one function has a slope of 2 and an intercept with the vertical axis at 5. Setting t = 0 in Equation 1.3 and substituting y(0) = 5, we obtain y(0) = 5 = c and hence

y(t) = 2t + 5

which is the unique solution satisfying both Equation 1.2 and the constraint y(0) = 5.
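The role of the auxiliary condition can be checked numerically. The following sketch is an illustration added here (not part of the original text): it integrates dy/dt = 2 by the forward-Euler method from the assumed condition y(0) = 5 and recovers y(t) = 2t + 5.

```python
# Forward-Euler integration of dy/dt = 2 with auxiliary condition y(0) = 5.
# The slope alone admits infinitely many lines y = 2t + c; the initial
# value is what pins down the unique solution y(t) = 2t + 5.
def euler(f, y0, t_end, steps):
    """Integrate dy/dt = f(t, y) from t = 0 to t_end with a fixed step."""
    h = t_end / steps
    t, y = 0.0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

y_at_3 = euler(lambda t, y: 2.0, 5.0, 3.0, 1000)
print(y_at_3)   # ~11, i.e., 2*3 + 5, since the slope is constant
```

Changing the initial value y0 shifts the whole line, which is precisely the one-parameter family y(t) = 2t + c of Equation 1.3.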

In conclusion, differentiation is an irreversible operation during which certain information is lost. To reverse this operation, one piece of information about y(t) must be provided to restore the original y(t). Using a similar argument, we can show that,


given d²y/dt², we can determine y(t) uniquely only if two additional pieces of information (constraints) about y(t) are given. In general, to determine y(t) uniquely from its nth derivative, we need n additional pieces of information (constraints) about y(t). These constraints are also called auxiliary conditions. When these conditions are given at t = 0, they are called initial conditions.

We discuss here two systematic procedures for solving linear differential equations of the form in Equation 1.1. The first method is the classical method, which is relatively simple but restricted to a certain class of inputs. The second method (the convolution method) is general and is applicable to all types of inputs. A third method (Laplace transform) is discussed elsewhere in this volume. Both of the methods discussed here are classified as time-domain methods because with these methods we are able to solve the above equation directly, using t as the independent variable. The method of the Laplace transform (also known as the frequency-domain method), on the other hand, requires transformation of the variable t into a frequency variable s.

In engineering applications, the form of linear differential equation that occurs most commonly is given by

dⁿy/dtⁿ + a_{n−1} d^{n−1}y/dt^{n−1} + · · · + a_1 dy/dt + a_0 y(t)
    = b_m d^m f/dt^m + b_{m−1} d^{m−1}f/dt^{m−1} + · · · + b_1 df/dt + b_0 f(t)    (1.4a)

where all the coefficients a_i and b_i are constants. Using operational notation D to represent d/dt, this equation can be expressed as

(Dⁿ + a_{n−1} D^{n−1} + · · · + a_1 D + a_0) y(t) = (b_m D^m + b_{m−1} D^{m−1} + · · · + b_1 D + b_0) f(t)    (1.4b)

or

Q(D) y(t) = P(D) f(t)    (1.4c)

where the polynomials Q(D) and P(D), respectively, are

Q(D) = Dⁿ + a_{n−1} D^{n−1} + · · · + a_1 D + a_0
P(D) = b_m D^m + b_{m−1} D^{m−1} + · · · + b_1 D + b_0

Observe that this equation is of the form of Equation 1.1, where r(t) is in the form of a linear combination of f(t) and its derivatives. In this equation, y(t) represents an output variable, and f(t) represents an input variable of an LTI system. Theoretically, the powers m and n in the above equations can take on any value. Practical noise considerations, however, require m ≤ n.

1.1.1 Classical Solution

When f(t) = 0, Equation 1.4 is known as the homogeneous (or complementary) equation. We shall first solve the homogeneous equation. Let the solution of the homogeneous equation be yc(t), that is,

    Q(D)yc(t) = 0

We first show that if yp(t) is the solution of Equation 1.4, then yc(t) + yp(t) is also its solution. This follows from the fact that

Q(D) yc(t) = 0

If yp(t) is the solution of Equation 1.4, then

Q(D) yp(t) = P(D) f(t)

Addition of these two equations yields

Q(D) [yc(t) + yp(t)] = P(D) f(t)

Thus, yc(t) + yp(t) satisfies Equation 1.4 and therefore is the general solution of Equation 1.4. We call yc(t) the complementary solution and yp(t) the particular solution. In system analysis parlance, these components are called the natural response and the forced response, respectively.

Complementary Solution (The Natural Response)

The complementary solution yc(t) is the solution of

Q(D) yc(t) = 0    (1.5a)

or

(Dⁿ + a_{n−1} D^{n−1} + · · · + a_1 D + a_0) yc(t) = 0    (1.5b)

A solution to this equation can be found in a systematic and formal way. However, we will take a shortcut by using heuristic reasoning. Equation 1.5b shows that a linear combination of yc(t) and its n successive derivatives is zero, not at some values of t, but for all t. This is possible if and only if yc(t) and all its n successive derivatives are of the same form. Otherwise their sum can never add to zero for all values of t. We know that only an exponential function e^{λt} has this property. So let us assume that

yc(t) = c e^{λt}

is a solution to Equation 1.5b. Now

D yc(t) = c λ e^{λt},  D² yc(t) = c λ² e^{λt},  . . .,  Dⁿ yc(t) = c λⁿ e^{λt}

Substituting these results in Equation 1.5b, we obtain

c (λⁿ + a_{n−1} λ^{n−1} + · · · + a_1 λ + a_0) e^{λt} = 0

For a nontrivial solution of this equation,

λⁿ + a_{n−1} λ^{n−1} + · · · + a_1 λ + a_0 = 0    (1.6a)

This result means that c e^{λt} is indeed a solution of Equation 1.5, provided that λ satisfies Equation 1.6a. Note that the polynomial


in Equation 1.6a is identical to the polynomial Q(D) in Equation 1.5b, with λ replacing D. Therefore, Equation 1.6a can be expressed as

Q(λ) = 0    (1.6b)

When Q(λ) is expressed in factorized form, Equation 1.6b can be represented as

Q(λ) = (λ − λ1)(λ − λ2) · · · (λ − λn) = 0    (1.6c)

Clearly λ has n solutions: λ1, λ2, . . ., λn. Consequently, Equation 1.5 has n possible solutions: c1 e^{λ1 t}, c2 e^{λ2 t}, . . ., cn e^{λn t}, with c1, c2, . . ., cn as arbitrary constants. We can readily show that a general solution is given by the sum of these n solutions¹, so that

yc(t) = c1 e^{λ1 t} + c2 e^{λ2 t} + · · · + cn e^{λn t}    (1.7)

where c1, c2, . . ., cn are arbitrary constants determined by n constraints (the auxiliary conditions) on the solution.

The polynomial Q(λ) is known as the characteristic polynomial. The equation

Q(λ) = 0    (1.8)

is called the characteristic or auxiliary equation. From Equation 1.6c, it is clear that λ1, λ2, . . ., λn are the roots of the characteristic equation; consequently, they are called the characteristic roots. The terms characteristic values, eigenvalues, and natural frequencies are also used for characteristic roots². The exponentials e^{λi t} (i = 1, 2, . . ., n) in the complementary solution are the characteristic modes (also known as modes or natural modes). There is a characteristic mode for each characteristic root, and the complementary solution is a linear combination of the characteristic modes.
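As a quick numerical illustration (added here, not part of the original text), the characteristic roots can be obtained with a polynomial root finder. The sketch below applies NumPy to Q(λ) = λ² + 3λ + 2, the characteristic polynomial that reappears in the worked examples later in this section; its roots −1 and −2 give the modes e^{−t} and e^{−2t}.

```python
import numpy as np

# Roots of the characteristic polynomial Q(lambda) = lambda^2 + 3*lambda + 2.
# np.roots takes coefficients in descending powers: [1, a1, a0].
coeffs = [1, 3, 2]
roots = sorted(np.roots(coeffs).real)
print([round(r, 6) for r in roots])   # [-2.0, -1.0]
```

Each root λi contributes a mode e^{λi t}, so the complementary solution here is c1 e^{−t} + c2 e^{−2t}.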

Repeated Roots

The solution of Equation 1.5 as given in Equation 1.7 assumes that the n characteristic roots λ1, λ2, . . ., λn are distinct. If there are repeated roots (the same root occurring more than once), the form of the solution is modified slightly. By direct substitution we can show that the solution of the equation

(D − λ)² yc(t) = 0

is given by

yc(t) = (c1 + c2 t) e^{λt}

In this case the root λ repeats twice. Observe that the characteristic modes in this case are e^{λt} and t e^{λt}. Continuing this pattern, we can show that for the differential equation

(D − λ)^r yc(t) = 0

the characteristic modes are e^{λt}, t e^{λt}, t² e^{λt}, . . ., t^{r−1} e^{λt}, and the solution is

yc(t) = (c1 + c2 t + · · · + cr t^{r−1}) e^{λt}

Consequently, for a characteristic polynomial

Q(λ) = (λ − λ1)^r (λ − λ_{r+1}) · · · (λ − λn)

the characteristic modes are e^{λ1 t}, t e^{λ1 t}, . . ., t^{r−1} e^{λ1 t}, e^{λ_{r+1} t}, . . ., e^{λn t}, and the complementary solution is

yc(t) = (c1 + c2 t + · · · + cr t^{r−1}) e^{λ1 t} + c_{r+1} e^{λ_{r+1} t} + · · · + cn e^{λn t}
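The repeated-root modes can be spot-checked by direct substitution, just as the text suggests. The sketch below is an added illustration (λ = −1 chosen for concreteness): it verifies numerically that y(t) = (1 + t)e^{−t}, a combination of the modes e^{−t} and t e^{−t}, satisfies (D + 1)² y = y″ + 2y′ + y = 0.

```python
import math

# For Q(lambda) = (lambda + 1)^2 the modes are e^{-t} and t e^{-t}, so
# y(t) = (c1 + c2 t) e^{-t} solves y'' + 2y' + y = 0.  Take c1 = c2 = 1
# and evaluate the residual of the equation at several points.
def residual(t):
    y   = (1 + t) * math.exp(-t)
    yd  = -t * math.exp(-t)          # first derivative of (1 + t) e^{-t}
    ydd = (t - 1) * math.exp(-t)     # second derivative
    return ydd + 2 * yd + y

print(max(abs(residual(t)) for t in (0.0, 0.5, 1.0, 3.0)))  # ~0 (rounding only)
```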

Particular Solution (The Forced Response): Method of Undetermined Coefficients

The particular solution yp(t) is the solution of

Q(D) yp(t) = P(D) f(t)    (1.11)

It is a relatively simple task to determine yp(t) when the input f(t) is such that it yields only a finite number of independent derivatives. Inputs having the form e^{ζt} or t^r fall into this category. For example, e^{ζt} has only one independent derivative; the repeated differentiation of e^{ζt} yields the same form, that is, e^{ζt}. Similarly, the repeated differentiation of t^r yields only r independent derivatives. The particular solution to such an input can be expressed as a linear combination of the input and its independent derivatives. Consider, for example, the input f(t) = at² + bt + c. The successive derivatives of this input are 2at + b and 2a. In this case, the input has only two independent derivatives. Therefore the particular solution can be assumed to be a linear combination of f(t) and its two derivatives. The suitable form for yp(t) in this case is therefore

yp(t) = β2 t² + β1 t + β0

¹To prove this fact, assume that y1(t), y2(t), . . ., yn(t) are all solutions of Equation 1.5. Then

Q(D) y1(t) = 0
Q(D) y2(t) = 0
. . .
Q(D) yn(t) = 0

Multiplying these equations by c1, c2, . . ., cn, respectively, and adding them together yields

Q(D) [c1 y1(t) + c2 y2(t) + · · · + cn yn(t)] = 0

This result shows that c1 y1(t) + c2 y2(t) + · · · + cn yn(t) is also a solution of the homogeneous Equation 1.5.

²The term eigenvalue is German for characteristic value.

The undetermined coefficients β0, β1, and β2 are determined by substituting this expression for yp(t) in Equation 1.11 and then equating coefficients of similar terms on both sides of the resulting expression.

Although this method can be used only for inputs with a finite number of derivatives, this class of inputs includes a wide variety of the most commonly encountered signals in practice. Table 1.1 shows a variety of such inputs and the form of the particular solution corresponding to each input. We shall demonstrate this procedure with an example.

Note: By definition, yp(t) cannot have any characteristic mode terms. If any term p(t) shown in the right-hand column for the


TABLE 1.1

    Input f(t)                                            Forced Response
    1. e^{ζt},  ζ ≠ λi (i = 1, 2, . . ., n)               β e^{ζt}
    2. e^{ζt},  ζ = λi                                    β t e^{ζt}
    3. k (a constant)                                     β (a constant)
    4. cos(ωt + θ)                                        β cos(ωt + φ)
    5. (t^r + α_{r−1} t^{r−1} + · · · + α_1 t + α_0) e^{ζt}    (β_r t^r + β_{r−1} t^{r−1} + · · · + β_1 t + β_0) e^{ζt}

particular solution is also a characteristic mode, the correct form of the forced response must be modified to t^i p(t), where i is the smallest possible integer that can be used and still can prevent t^i p(t) from having a characteristic mode term. For example, when the input is e^{ζt}, the forced response (right-hand column) has the form β e^{ζt}. But if e^{ζt} happens to be a characteristic mode, the correct form is β t e^{ζt}.

EXAMPLE 1.1:

Solve the differential equation

(D² + 3D + 2) y(t) = D f(t)

if the input f(t) = t² + 5t + 3 and the initial conditions are y(0) = 2 and ẏ(0) = 3.

The characteristic polynomial is λ² + 3λ + 2 = (λ + 1)(λ + 2), so the characteristic roots are λ1 = −1 and λ2 = −2, the characteristic modes are e^{−t} and e^{−2t}, and the complementary solution is yc(t) = c1 e^{−t} + c2 e^{−2t}. Here the arbitrary constants c1 and c2 must be determined from the given initial conditions.

The particular solution to the input t² + 5t + 3 is found from Table 1.1 (Pair 5 with ζ = 0) to be

yp(t) = β2 t² + β1 t + β0

Moreover, yp(t) satisfies Equation 1.11, that is,

(D² + 3D + 2) yp(t) = D f(t)    (1.13)

Now

D yp(t) = d/dt (β2 t² + β1 t + β0) = 2β2 t + β1
D² yp(t) = 2β2
D f(t) = d/dt (t² + 5t + 3) = 2t + 5

Substituting these results in Equation 1.13 yields

2β2 + 3(2β2 t + β1) + 2(β2 t² + β1 t + β0) = 2t + 5

or

2β2 t² + (2β1 + 6β2) t + (2β0 + 3β1 + 2β2) = 2t + 5

Equating coefficients of similar powers on both sides of this expression yields

2β2 = 0
2β1 + 6β2 = 2
2β0 + 3β1 + 2β2 = 5

Solving these three equations for their unknowns, we obtain β0 = 1, β1 = 1, and β2 = 0. Therefore,

yp(t) = t + 1    t > 0

The total solution y(t) is the sum of the complementary and particular solutions. Therefore,

y(t) = yc(t) + yp(t) = c1 e^{−t} + c2 e^{−2t} + t + 1    t > 0

so that

ẏ(t) = −c1 e^{−t} − 2c2 e^{−2t} + 1

Setting t = 0 and substituting the given initial conditions y(0) = 2 and ẏ(0) = 3 in these equations, we have

c1 + c2 + 1 = 2    and    −c1 − 2c2 + 1 = 3

The solution to these two simultaneous equations is c1 = 4 and c2 = −3. Therefore,

y(t) = 4e^{−t} − 3e^{−2t} + t + 1    t > 0
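The result can be verified numerically. The sketch below is an added check (not in the original): it confirms that y(t) = 4e^{−t} − 3e^{−2t} + t + 1 satisfies both the differential equation (D² + 3D + 2)y = Df, i.e., y″ + 3y′ + 2y = 2t + 5, and the initial conditions y(0) = 2, ẏ(0) = 3.

```python
import math

# Total solution of Example 1.1 and its first two derivatives.
def y(t):   return 4*math.exp(-t) - 3*math.exp(-2*t) + t + 1
def yd(t):  return -4*math.exp(-t) + 6*math.exp(-2*t) + 1
def ydd(t): return 4*math.exp(-t) - 12*math.exp(-2*t)

# The equation residual should vanish: y'' + 3y' + 2y - (2t + 5) = 0.
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(ydd(t) + 3*yd(t) + 2*y(t) - (2*t + 5)) < 1e-12

print(y(0.0), yd(0.0))   # 2.0 3.0  (the given initial conditions)
```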

The Exponential Input e^{ζt}

The exponential signal is the most important signal in the study of LTI systems. Interestingly, the particular solution for an exponential input signal turns out to be very simple. From Table 1.1 we see that the particular solution for the input e^{ζt} has the form β e^{ζt}. We now show that β = P(ζ)/Q(ζ)³. To determine

³This is true only if ζ is not a characteristic root.


the constant β, we substitute yp(t) = β e^{ζt} in Equation 1.11, which gives us

Q(D) [β e^{ζt}] = P(D) e^{ζt}

Now observe that

D^k e^{ζt} = ζ^k e^{ζt}

so that

P(D) e^{ζt} = P(ζ) e^{ζt}

Consequently,

Q(D) e^{ζt} = Q(ζ) e^{ζt}

Therefore, Equation 1.11 becomes

β Q(ζ) e^{ζt} = P(ζ) e^{ζt}

so that β = P(ζ)/Q(ζ), and the particular solution to the input e^{ζt} is

yp(t) = [P(ζ)/Q(ζ)] e^{ζt}

Similarly, it can be shown that for a sinusoidal input f(t) = cos(ωt + θ), the particular solution is

yp(t) = |H(jω)| cos[ωt + θ + ∠H(jω)]    (1.19)

where H(jω) = P(jω)/Q(jω).

EXAMPLE 1.2:

Solve the differential equation

(D² + 3D + 2) y(t) = D f(t)

if the initial conditions are y(0+) = 2 and ẏ(0+) = 3 and the input is (a) 10e^{−3t}, (b) 5, (c) e^{−2t}, (d) 10 cos(3t + 30°).

The complementary solution, as found in Example 1.1, is yc(t) = c1 e^{−t} + c2 e^{−2t}.

(a) For input f(t) = 10e^{−3t}, ζ = −3, and

yp(t) = 10 [P(−3)/Q(−3)] e^{−3t} = 10 [−3/(9 − 9 + 2)] e^{−3t} = −15e^{−3t}

The complete solution is

y(t) = c1 e^{−t} + c2 e^{−2t} − 15e^{−3t}    t ≥ 0

so that

ẏ(t) = −c1 e^{−t} − 2c2 e^{−2t} + 45e^{−3t}    t ≥ 0

The initial conditions are y(0+) = 2 and ẏ(0+) = 3. Setting t = 0 in the above equations and substituting the initial conditions yields

c1 + c2 − 15 = 2    and    −c1 − 2c2 + 45 = 3

Solution of these equations yields c1 = −8 and c2 = 25. Therefore,

y(t) = −8e^{−t} + 25e^{−2t} − 15e^{−3t}    t ≥ 0

(b) For input f(t) = 5 = 5e^{0t}, ζ = 0, and

yp(t) = 5 [P(0)/Q(0)] e^{0t} = 5 (0/2) = 0

The complete solution is y(t) = yc(t) + yp(t) = c1 e^{−t} + c2 e^{−2t}. We then substitute the initial conditions to determine c1 and c2 as explained in Part (a).

(c) Here ζ = −2, which is also a characteristic root. Hence (see Pair 2, Table 1.1, or the comment at the bottom of the table),

yp(t) = β t e^{−2t}

To find β, we substitute yp(t) in Equation 1.11, giving us

(D² + 3D + 2) [β t e^{−2t}] = D e^{−2t}

But

D [t e^{−2t}] = (1 − 2t) e^{−2t},  D² [t e^{−2t}] = (4t − 4) e^{−2t},  D e^{−2t} = −2e^{−2t}

Consequently,

β (4t − 4 + 3 − 6t + 2t) e^{−2t} = −2e^{−2t}

or

−β e^{−2t} = −2e^{−2t}

This means that β = 2, so that

yp(t) = 2t e^{−2t}

The complete solution is y(t) = yc(t) + yp(t) = c1 e^{−t} + c2 e^{−2t} + 2t e^{−2t}. We then substitute the initial conditions to determine c1 and c2 as explained in Part (a).

(d) For the input f(t) = 10 cos(3t + 30°), the particular solution [see Equation 1.19] is

yp(t) = 10 |H(j3)| cos[3t + 30° + ∠H(j3)]

Therefore,

H(j3) = P(j3)/Q(j3) = j3/[(j3)² + 3(j3) + 2] = j3/(−7 + j9) = 0.263e^{−j37.9°}

and

yp(t) = 10(0.263) cos(3t + 30° − 37.9°) = 2.63 cos(3t − 7.9°)

The complete solution is y(t) = yc(t) + yp(t) = c1 e^{−t} + c2 e^{−2t} + 2.63 cos(3t − 7.9°). We then substitute the initial conditions to determine c1 and c2 as explained in Part (a).
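The magnitude and phase used in Part (d) can be reproduced by evaluating the frequency response H(jω) = P(jω)/Q(jω) at ω = 3. The sketch below is an added numerical check, not part of the original text.

```python
import cmath, math

# Frequency response H(s) = P(s)/Q(s) = s/(s^2 + 3s + 2), evaluated at s = j3.
def H(s):
    return s / (s**2 + 3*s + 2)

h = H(3j)
print(round(abs(h), 3))                        # 0.263
print(round(math.degrees(cmath.phase(h)), 1))  # -37.9
```

Scaling by the input amplitude 10 gives the forced-response amplitude 2.63 quoted in the text.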

1.1.2 Method of Convolution

In this method, the input f(t) is expressed as a sum of impulses. The solution is then obtained as the sum of the solutions to all the impulse components. The method exploits the superposition property of linear differential equations. From the sampling (or sifting) property of the impulse function, we have

f(t) = ∫₀ᵗ f(τ) δ(t − τ) dτ    t ≥ 0    (1.21)

The right-hand side expresses f(t) as a sum (integral) of impulse components. Let the solution of Equation 1.4 be y(t) = h(t) when f(t) = δ(t) and all the initial conditions are zero. Then use of the linearity property yields the solution of Equation 1.4 to input f(t) as

y(t) = ∫₀ᵗ f(τ) h(t − τ) dτ

For this solution to be general, we must add a complementary solution. Thus, the general solution is given by

y(t) = yc(t) + ∫₀ᵗ f(τ) h(t − τ) dτ    (1.22)

where the lower limit 0 is understood to be 0− in order to ensure that impulses, if any, in the input f(t) at the origin are accounted for. The integral on the right-hand side of Equation 1.22 is well known in the literature as the convolution integral. The function h(t) appearing in the integral is the solution of Equation 1.4 for the impulsive input [f(t) = δ(t)]. It can be shown that [1]

h(t) = P(D) [yo(t) u(t)]    (1.23)

where yo(t) is a linear combination of the characteristic modes subject to the initial conditions

yo^{(n−1)}(0) = 1;  yo(0) = ẏo(0) = · · · = yo^{(n−2)}(0) = 0    (1.24)


The function u(t) appearing on the right-hand side of Equation 1.23 represents the unit step function, which is unity for t ≥ 0 and is 0 for t < 0.

The right-hand side of Equation 1.23 is a linear combination of the derivatives of yo(t)u(t). Evaluating these derivatives is clumsy and inconvenient because of the presence of u(t). The derivatives will generate an impulse and its derivatives at the origin [recall that (d/dt)u(t) = δ(t)]. Fortunately, when m ≤ n − 1 in Equation 1.4, the solution simplifies to

h(t) = [P(D) yo(t)] u(t)    (1.25)

EXAMPLE 1.3:

Solve Example 1.2, Part (a) using the method of convolution.

We first determine h(t). The characteristic modes for this case, as found in Example 1.1, are e^{−t} and e^{−2t}. Since yo(t) is a linear combination of the characteristic modes,

yo(t) = K1 e^{−t} + K2 e^{−2t}    t ≥ 0

Therefore,

ẏo(t) = −K1 e^{−t} − 2K2 e^{−2t}    t ≥ 0

The initial conditions according to Equation 1.24 are ẏo(0) = 1 and yo(0) = 0. Setting t = 0 in the above equations and using the initial conditions, we obtain

K1 + K2 = 0    and    −K1 − 2K2 = 1

Solution of these equations yields K1 = 1 and K2 = −1. Therefore,

yo(t) = e^{−t} − e^{−2t}

Also in this case the polynomial P(D) = D is of the first order, and b2 = 0. Therefore, from Equation 1.25,

h(t) = [P(D) yo(t)] u(t) = [D (e^{−t} − e^{−2t})] u(t) = (−e^{−t} + 2e^{−2t}) u(t)

and

∫₀ᵗ f(τ) h(t − τ) dτ = ∫₀ᵗ 10e^{−3τ} [−e^{−(t−τ)} + 2e^{−2(t−τ)}] dτ
                     = −5e^{−t} + 20e^{−2t} − 15e^{−3t}

The total solution is obtained by adding the complementary solution yc(t) = c1 e^{−t} + c2 e^{−2t} to this component. Therefore,

y(t) = c1 e^{−t} + c2 e^{−2t} − 5e^{−t} + 20e^{−2t} − 15e^{−3t}

Setting the conditions y(0+) = 2 and ẏ(0+) = 3 in this equation (and its derivative), we obtain c1 = −3, c2 = 5, so that

y(t) = −8e^{−t} + 25e^{−2t} − 15e^{−3t}    t ≥ 0

which is identical to the solution found by the classical method.
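The convolution component can also be checked by direct numerical integration. The sketch below (an added illustration) approximates the zero-state term ∫₀ᵗ f(τ)h(t − τ)dτ for f(t) = 10e^{−3t} and h(t) = (−e^{−t} + 2e^{−2t})u(t), and compares it with the closed form −5e^{−t} + 20e^{−2t} − 15e^{−3t}.

```python
import math

# Impulse response and input from Example 1.3 (zero for negative argument).
def h(t): return -math.exp(-t) + 2*math.exp(-2*t) if t >= 0 else 0.0
def f(t): return 10*math.exp(-3*t) if t >= 0 else 0.0

def convolve(f, h, t, n=20000):
    """Riemann-sum approximation of (f * h)(t) = integral_0^t f(tau) h(t - tau) dtau."""
    d = t / n
    return sum(f(k*d) * h(t - k*d) for k in range(n)) * d

t = 1.0
expected = -5*math.exp(-t) + 20*math.exp(-2*t) - 15*math.exp(-3*t)
print(abs(convolve(f, h, t) - expected) < 1e-3)   # True
```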

Assessment of the Convolution Method

The convolution method is more laborious compared to the classical method. However, in system analysis, its advantages outweigh the extra work. The classical method has a serious drawback because it yields the total response, which cannot be separated into components arising from the internal conditions and the external input. In the study of systems it is important to be able to express the system response to an input f(t) as an explicit function of f(t). This is not possible in the classical method. Moreover, the classical method is restricted to a certain class of inputs; it cannot be applied to arbitrary inputs⁴.

If we must solve a particular linear differential equation or find a response of a particular LTI system, the classical method may be the best. In the theoretical study of linear systems, however, it is practically useless. General discussion of differential equations can be found in numerous texts on the subject [3].

1.2 Difference Equations

The development of difference equations is parallel to that of differential equations. We consider here only linear difference equations with constant coefficients. An nth-order difference equation can be expressed in two different forms; the first form uses delay terms such as y[k − 1], y[k − 2], f[k − 1], f[k − 2], . . ., etc., and the alternative form uses advance terms such as y[k + 1], y[k + 2], . . ., etc. Both forms are useful. We start here with a general nth-order difference equation, using the advance operator form

y[k + n] + a_{n−1} y[k + n − 1] + · · · + a_1 y[k + 1] + a_0 y[k]
    = b_m f[k + m] + b_{m−1} f[k + m − 1] + · · · + b_1 f[k + 1] + b_0 f[k]    (1.26)

Causality Condition

The left-hand side of Equation 1.26 consists of values of y[k] at instants k + n, k + n − 1, k + n − 2, and so on. The right-hand side of Equation 1.26 consists of the input at instants k + m, k + m − 1, k + m − 2, and so on. For a causal equation, the solution cannot depend on future input values. This shows that when the equation is in the advance operator form of Equation 1.26, causality requires m ≤ n. For a general causal case, m = n, and Equation 1.26 becomes

y[k + n] + a_{n−1} y[k + n − 1] + · · · + a_1 y[k + 1] + a_0 y[k]
    = b_n f[k + n] + b_{n−1} f[k + n − 1] + · · · + b_1 f[k + 1] + b_0 f[k]    (1.27a)

⁴Another minor problem is that because the classical method yields the total response, the auxiliary conditions must be on the total response, which exists only for t ≥ 0+. In practice we are most likely to know the conditions at t = 0− (before the input is applied). Therefore, we need to derive a new set of auxiliary conditions at t = 0+ from the known conditions at t = 0−. The convolution method can handle both kinds of initial conditions. If the conditions are given at t = 0−, we apply these conditions only to yc(t) because by its definition the convolution integral is 0 at t = 0−.

  • THE CONTROL HANDBOOK

where some of the coefficients on both sides can be zero. However, the coefficient of y[k + n] is normalized to unity. Equation 1.27a is valid for all values of k. Therefore, the equation is still valid if we replace k by k − n throughout the equation. This yields the alternative form (the delay operator form) of Equation 1.27a:

y[k] + a_{n−1} y[k − 1] + · · · + a_1 y[k − n + 1] + a_0 y[k − n]
    = b_n f[k] + b_{n−1} f[k − 1] + · · · + b_1 f[k − n + 1] + b_0 f[k − n]    (1.27b)

We designate the form of Equation 1.27a the advance operator form, and the form of Equation 1.27b the delay operator form.

1.2.1 Initial Conditions and Iterative Solution

Equation 1.27b can be expressed as

y[k] = −a_{n−1} y[k − 1] − · · · − a_0 y[k − n]
    + b_n f[k] + b_{n−1} f[k − 1] + · · · + b_0 f[k − n]    (1.27c)

This equation shows that y[k], the solution at the kth instant, is computed from 2n + 1 pieces of information. These are the past n values of y[k]: y[k − 1], y[k − 2], . . ., y[k − n]; and the present and past n values of the input: f[k], f[k − 1], f[k − 2], . . ., f[k − n]. If the input f[k] is known for k = 0, 1, 2, . . ., then the values of y[k] for k = 0, 1, 2, . . . can be computed from the 2n initial conditions y[−1], y[−2], . . ., y[−n] and f[−1], f[−2], . . ., f[−n]. If the input is causal, that is, if f[k] = 0 for k < 0, then f[−1] = f[−2] = · · · = f[−n] = 0, and we need only n initial conditions y[−1], y[−2], . . ., y[−n]. This allows us to compute iteratively or recursively the values y[0], y[1], y[2], y[3], . . ., and so on.⁵ For instance, to find y[0] we set k = 0 in Equation 1.27c. The left-hand side is y[0], and the right-hand side contains the terms y[−1], y[−2], . . ., y[−n], and the inputs f[0], f[−1], f[−2], . . ., f[−n]. Therefore, to begin with, we must know the n initial conditions y[−1], y[−2], . . ., y[−n]. Knowing these conditions and the input f[k], we can iteratively find the response y[0], y[1], y[2], . . ., and so on. The following example demonstrates this procedure. This method basically reflects the manner in which a computer would solve a difference equation, given the input and initial conditions.

⁵For this reason Equation 1.27 is called a recursive difference equation. However, if in Equation 1.27 a_0 = a_1 = a_2 = ... = a_{n-1} = 0, then it follows from Equation 1.27c that determination of the present value of y[k] does not require the past values y[k-1], y[k-2], etc. For this reason, when a_i = 0 (i = 0, 1, ..., n-1), the difference Equation 1.27 is nonrecursive. This classification is important in designing and realizing digital filters. In this discussion, however, this classification is not important. The analysis techniques developed here apply to general recursive and nonrecursive equations. Observe that a nonrecursive equation is a special case of a recursive equation with a_0 = a_1 = ... = a_{n-1} = 0.

    EXAMPLE 1.4:

    Solve iteratively

y[k] - 0.5 y[k-1] = f[k]   (1.28a)

with initial condition y[-1] = 16 and the input f[k] = k² (starting at k = 0). This equation can be expressed as

y[k] = 0.5 y[k-1] + f[k]   (1.28b)

If we set k = 0 in this equation, we obtain

y[0] = 0.5 y[-1] + f[0] = 0.5(16) + 0 = 8

Now, setting k = 1 in Equation 1.28b and using the value y[0] = 8 (computed in the first step) and f[1] = (1)² = 1, we obtain

y[1] = 0.5(8) + 1 = 5

Next, setting k = 2 in Equation 1.28b and using the value y[1] = 5 (computed in the previous step) and f[2] = (2)², we obtain

y[2] = 0.5(5) + 4 = 6.5

Continuing in this way iteratively, we obtain

y[3] = 0.5(6.5) + 9 = 12.25
y[4] = 0.5(12.25) + 16 = 22.125
...

This iterative solution procedure is available only for difference equations; it cannot be applied to differential equations. Despite the many uses of this method, a closed-form solution of a difference equation is far more useful in the study of system behavior and its dependence on the input and the various system parameters. For this reason we shall develop a systematic procedure to obtain a closed-form solution of Equation 1.27.

Operational Notation

In difference equations it is convenient to use operational notation similar to that used in differential equations for the sake of compactness and convenience. For differential equations, we use the operator D to denote the operation of differentiation. For difference equations, we use the operator E to denote the operation of advancing a sequence by one time interval. Thus,

E f[k] = f[k+1]
E² f[k] = f[k+2]
...
E^n f[k] = f[k+n]   (1.29)

A general nth-order difference Equation 1.27a can be expressed as

(E^n + a_{n-1} E^{n-1} + ... + a_1 E + a_0) y[k] = (b_n E^n + b_{n-1} E^{n-1} + ... + b_1 E + b_0) f[k]   (1.30a)

  • 1.2. DIFFERENCE EQUATIONS

    or

Q[E] y[k] = P[E] f[k]   (1.30b)

where Q[E] and P[E] are the nth-order polynomial operators

Q[E] = E^n + a_{n-1} E^{n-1} + ... + a_1 E + a_0   (1.31a)
P[E] = b_n E^n + b_{n-1} E^{n-1} + ... + b_1 E + b_0   (1.31b)

1.2.2 Classical Solution

Following the discussion of differential equations, we can show that if y_p[k] is a solution of Equation 1.27 or Equation 1.30, that is,

Q[E] y_p[k] = P[E] f[k]   (1.32)

then y_p[k] + y_c[k] is also a solution of Equation 1.30, where y_c[k] is a solution of the homogeneous equation

Q[E] y_c[k] = 0   (1.33)

As before, we call y_p[k] the particular solution and y_c[k] the complementary solution.

Complementary Solution (The Natural Response)

By definition

y_c[k+n] + a_{n-1} y_c[k+n-1] + ... + a_1 y_c[k+1] + a_0 y_c[k] = 0   (1.33a)

We can solve this equation systematically, but even a cursory examination of this equation points to its solution. This equation states that a linear combination of y_c[k] and delayed y_c[k] is zero not for some values of k, but for all k. This is possible if and only if y_c[k] and delayed y_c[k] have the same form. Only an exponential function γ^k has this property, as seen from the equation

γ^{k-m} = γ^{-m} γ^k

This shows that the delayed γ^k is a constant times γ^k. Therefore, the solution of Equation 1.33 must be of the form

y_c[k] = c γ^k   (1.34)

To determine c and γ, we substitute this solution in Equation 1.33. From Equation 1.34, we have

E y_c[k] = y_c[k+1] = c γ^{k+1}
E² y_c[k] = y_c[k+2] = c γ^{k+2}
...
E^n y_c[k] = y_c[k+n] = c γ^{k+n}   (1.35)

Substitution of this in Equation 1.33 yields

c (γ^n + a_{n-1} γ^{n-1} + ... + a_1 γ + a_0) γ^k = 0   (1.36)

For a nontrivial solution of this equation,

γ^n + a_{n-1} γ^{n-1} + ... + a_1 γ + a_0 = Q[γ] = 0   (1.37)

Our solution c γ^k [Equation 1.34] is correct, provided that γ satisfies Equation 1.37. Now, Q[γ] is an nth-order polynomial and can be expressed in the factorized form (assuming all distinct roots):

Q[γ] = (γ - γ_1)(γ - γ_2) ... (γ - γ_n) = 0   (1.38)

Clearly γ has n solutions γ_1, γ_2, ..., γ_n and, therefore, Equation 1.33 also has n solutions c_1 γ_1^k, c_2 γ_2^k, ..., c_n γ_n^k. In such a case we have shown that the general solution is a linear combination of the n solutions. Thus,

y_c[k] = c_1 γ_1^k + c_2 γ_2^k + ... + c_n γ_n^k   (1.39)

where γ_1, γ_2, ..., γ_n are the roots of Equation 1.37 and c_1, c_2, ..., c_n are arbitrary constants determined from n auxiliary conditions. The polynomial Q[γ] is called the characteristic polynomial, and

Q[γ] = 0

is the characteristic equation. Moreover, γ_1, γ_2, ..., γ_n, the roots of the characteristic equation, are called characteristic roots or characteristic values (also eigenvalues). The exponentials γ_i^k (i = 1, 2, ..., n) are the characteristic modes or natural modes. A characteristic mode corresponds to each characteristic root, and the complementary solution is a linear combination of the characteristic modes of the system.
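As a small numerical sketch (our own, not from the handbook) of characteristic roots and modes, consider the hypothetical second-order homogeneous equation y[k+2] − 5y[k+1] + 6y[k] = 0. Its characteristic equation γ² − 5γ + 6 = 0 can be solved with the quadratic formula, and each root γ_i can be checked to generate a mode γ_i^k satisfying the recursion:

```python
import math

# Characteristic equation gamma^2 - 5*gamma + 6 = 0 of the (hypothetical)
# homogeneous equation y[k+2] - 5*y[k+1] + 6*y[k] = 0.
a1, a0 = -5.0, 6.0
disc = math.sqrt(a1 * a1 - 4 * a0)
roots = sorted([(-a1 - disc) / 2, (-a1 + disc) / 2])   # characteristic roots

def is_mode(g, k_max=10):
    """Check that g^k satisfies the homogeneous recursion for several k."""
    return all(abs(g ** (k + 2) + a1 * g ** (k + 1) + a0 * g ** k) < 1e-9
               for k in range(k_max))
```

Here `roots` comes out as [2.0, 3.0], and any linear combination c₁(2)^k + c₂(3)^k is a complementary solution.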

Repeated Roots

For repeated roots, the form of the characteristic modes is modified. It can be shown by direct substitution that if a root γ repeats r times (a root of multiplicity r), the characteristic modes corresponding to this root are γ^k, k γ^k, k² γ^k, ..., k^{r-1} γ^k. Thus, if the characteristic equation is

Q[γ] = (γ - γ_1)^r (γ - γ_{r+1})(γ - γ_{r+2}) ... (γ - γ_n) = 0


the complementary solution is

y_c[k] = (c_1 + c_2 k + ... + c_r k^{r-1}) γ_1^k + c_{r+1} γ_{r+1}^k + ... + c_n γ_n^k

Particular Solution

The particular solution y_p[k] is the solution of

Q[E] y_p[k] = P[E] f[k]   (1.42)

We shall find the particular solution using the method of undetermined coefficients, the same method used for differential equations. Table 1.2 lists the inputs and the corresponding forms of solution with undetermined coefficients. These coefficients can be determined by substituting y_p[k] in Equation 1.42 and equating the coefficients of similar terms.

TABLE 1.2

     Input f[k]                              Forced Response y_p[k]
1.   r^k, r ≠ γ_i (i = 1, 2, ..., n)         β r^k
2.   r^k, r = γ_i                            β k r^k
3.   cos(Ωk + θ)                             β cos(Ωk + φ)
4.   (Σ_{i=0}^{m} α_i k^i) r^k               (Σ_{i=0}^{m} β_i k^i) r^k

Note: By definition, y_p[k] cannot have any characteristic mode terms. If any term p[k] shown in the right-hand column for the particular solution should also be a characteristic mode, the correct form of the particular solution must be modified to k^i p[k], where i is the smallest integer that will prevent k^i p[k] from having a characteristic mode term. For example, when the input is r^k, the particular solution in the right-hand column is of the form β r^k. But if r^k happens to be a natural mode, the correct form of the particular solution is β k r^k (see Pair 2).

    EXAMPLE 1.5:

Solve

y[k+2] - 5 y[k+1] + 6 y[k] = f[k+1] - 5 f[k]   (1.43)

if the input f[k] = (3k + 5) u[k] and the auxiliary conditions are y[0] = 4, y[1] = 13.

The characteristic equation is

γ² - 5γ + 6 = (γ - 2)(γ - 3) = 0

Therefore, the complementary solution is

y_c[k] = c_1 (2)^k + c_2 (3)^k

To find the form of y_p[k] we use Table 1.2, Pair 4 with r = 1, m = 1. This yields

y_p[k] = β_1 k + β_0

Therefore,

y_p[k+1] = β_1 (k+1) + β_0 = β_1 k + β_1 + β_0
y_p[k+2] = β_1 (k+2) + β_0 = β_1 k + 2β_1 + β_0

Also,

f[k] = 3k + 5

and

f[k+1] = 3(k+1) + 5 = 3k + 8

Substitution of the above results in Equation 1.43 yields

(β_1 k + 2β_1 + β_0) - 5(β_1 k + β_1 + β_0) + 6(β_1 k + β_0) = (3k + 8) - 5(3k + 5)

or

2β_1 k - 3β_1 + 2β_0 = -12k - 17

Comparison of similar terms on the two sides yields

2β_1 = -12          ⟹   β_1 = -6
-3β_1 + 2β_0 = -17  ⟹   β_0 = -35/2

This means

y_p[k] = -6k - 35/2

The total response is

y[k] = y_c[k] + y_p[k]
     = c_1 (2)^k + c_2 (3)^k - 6k - 35/2,   k ≥ 0   (1.44)

To determine the arbitrary constants c_1 and c_2, we set k = 0 and 1 and substitute the auxiliary conditions y[0] = 4, y[1] = 13 to obtain

4 = c_1 + c_2 - 35/2
13 = 2c_1 + 3c_2 - 6 - 35/2   (1.45)

Therefore,

c_1 = 28,   c_2 = -13/2

and

y[k] = 28(2)^k - (13/2)(3)^k - 6k - 35/2   (1.46)

where the first two terms are y_c[k] and the last two terms are y_p[k].

A Comment on Auxiliary Conditions

This method requires auxiliary conditions y[0], y[1], ..., y[n-1] because the total solution is valid only for k ≥ 0. But if we are given the initial conditions y[-1], y[-2], ..., y[-n], we can derive the conditions y[0], y[1], ..., y[n-1] using the iterative procedure discussed earlier.
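A quick numerical sanity check of Example 1.5 can be made by substituting the closed-form result back into the recursion. Note the hedge: the difference equation of the example is reconstructed here as y[k+2] − 5y[k+1] + 6y[k] = f[k+1] − 5f[k], the form consistent with all of the intermediate results above.

```python
# Closed-form result of Example 1.5 and the input f[k] = 3k + 5.
def y(k):
    return 28 * 2 ** k - 6.5 * 3 ** k - 6 * k - 17.5  # 28(2)^k - (13/2)(3)^k - 6k - 35/2

def f(k):
    return 3 * k + 5

# Residual of y[k+2] - 5y[k+1] + 6y[k] - (f[k+1] - 5f[k]); should vanish.
residuals = [y(k + 2) - 5 * y(k + 1) + 6 * y(k) - (f(k + 1) - 5 * f(k))
             for k in range(8)]
```

The closed form also reproduces the auxiliary conditions y[0] = 4, y[1] = 13.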


Exponential Input

As in the case of differential equations, we can show that for the equation

Q[E] y[k] = P[E] f[k]   (1.47)

the particular solution for the exponential input f[k] = r^k is given by

y_p[k] = H[r] r^k,   r ≠ γ_i   (1.48)

where

H[r] = P[r] / Q[r]   (1.49)

The proof follows from the fact that if the input f[k] = r^k, then from Table 1.2 (Pair 4), y_p[k] = β r^k. Therefore,

E^i f[k] = f[k+i] = r^{k+i} = r^i r^k,   so that   P[E] f[k] = P[r] r^k

and

E^j y_p[k] = β r^{k+j} = β r^j r^k,   so that   Q[E] y_p[k] = β Q[r] r^k

Hence, Equation 1.47 reduces to

β Q[r] r^k = P[r] r^k

which yields β = P[r]/Q[r] = H[r]. This result is valid only if r is not a characteristic root. If r is a characteristic root, the particular solution is β k r^k, where β is determined by substituting y_p[k] in Equation 1.47 and equating coefficients of similar terms on the two sides. Observe that the exponential r^k includes a wide variety of signals such as a constant C, a sinusoid cos(Ωk + θ), and an exponentially growing or decaying sinusoid |γ|^k cos(Ωk + θ).

A Constant Input f[k] = C

This is a special case of the exponential C r^k with r = 1. Therefore, from Equation 1.48 we have

y_p[k] = C H[1] = C (P[1]/Q[1])   (1.50)

A Sinusoidal Input

The input e^{jΩk} is an exponential r^k with r = e^{jΩ}. Hence,

y_p[k] = H[e^{jΩ}] e^{jΩk}

Similarly, for the input e^{-jΩk},

y_p[k] = H[e^{-jΩ}] e^{-jΩk}

Consequently, if the input is

f[k] = cos Ωk = (1/2)(e^{jΩk} + e^{-jΩk})

then

y_p[k] = (1/2){H[e^{jΩ}] e^{jΩk} + H[e^{-jΩ}] e^{-jΩk}}

Since the two terms on the right-hand side are conjugates,

y_p[k] = Re{H[e^{jΩ}] e^{jΩk}}

If

H[e^{jΩ}] = |H[e^{jΩ}]| e^{j∠H[e^{jΩ}]}

then

y_p[k] = Re{|H[e^{jΩ}]| e^{j(Ωk + ∠H[e^{jΩ}])}}
       = |H[e^{jΩ}]| cos(Ωk + ∠H[e^{jΩ}])   (1.51)

Using a similar argument, we can show that for the input

f[k] = cos(Ωk + θ)

y_p[k] = |H[e^{jΩ}]| cos(Ωk + θ + ∠H[e^{jΩ}])   (1.52)
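Equation 1.51 is easy to check numerically with complex arithmetic. The sketch below (our own illustration) uses the sample second-order system (E² − 3E + 2)y[k] = (E + 2)f[k] with an arbitrarily chosen frequency Ω = π/3 (any Ω for which Q[e^{jΩ}] ≠ 0 would do), and verifies that the sinusoid |H| cos(Ωk + ∠H) satisfies the equation exactly for the input cos(Ωk):

```python
import cmath
import math

# Sinusoidal particular solution via H[e^{jOmega}] = P/Q for the sample
# system (E^2 - 3E + 2) y[k] = (E + 2) f[k], input f[k] = cos(Omega*k).
Omega = math.pi / 3
z = cmath.exp(1j * Omega)
H = (z + 2) / (z * z - 3 * z + 2)        # H[e^{jOmega}]
amp, phase = abs(H), cmath.phase(H)

def yp(k):
    return amp * math.cos(Omega * k + phase)   # Equation 1.51

def f(k):
    return math.cos(Omega * k)

# The particular solution satisfies the difference equation for every k.
residual = max(abs(yp(k + 2) - 3 * yp(k + 1) + 2 * yp(k)
                   - (f(k + 1) + 2 * f(k)))
               for k in range(20))
```

The residual is at the level of floating-point roundoff, confirming the magnitude/phase recipe.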

    EXAMPLE 1.6:

Solve

(E² - 3E + 2) y[k] = (E + 2) f[k]

for f[k] = (3)^k u[k] and the auxiliary conditions y[0] = 2, y[1] = 1.

In this case

H[r] = P[r]/Q[r] = (r + 2)/(r² - 3r + 2)

and the particular solution to the input (3)^k u[k] is H[3](3)^k; that is,

y_p[k] = (3 + 2)/(9 - 9 + 2) (3)^k = (5/2)(3)^k

The characteristic polynomial is (γ² - 3γ + 2) = (γ - 1)(γ - 2). The characteristic roots are 1 and 2. Hence, the complementary solution is y_c[k] = c_1 + c_2 (2)^k, and the total solution is

y[k] = c_1 + c_2 (2)^k + (5/2)(3)^k

Setting k = 0 and 1 in this equation and substituting the auxiliary conditions yields

2 = c_1 + c_2 + 5/2   and   1 = c_1 + 2c_2 + 15/2

Solution of these two simultaneous equations yields c_1 = 5.5, c_2 = -6. Therefore,

y[k] = 5.5 - 6(2)^k + (5/2)(3)^k,   k ≥ 0
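The closed form can be cross-checked against direct iteration. This sketch (our own) iterates the delay form of the equation, y[k] = 3y[k−1] − 2y[k−2] + f[k−1] + 2f[k−2], starting from the auxiliary conditions y[0] = 2, y[1] = 1:

```python
# Cross-check Example 1.6: iterate the delay form of
# (E^2 - 3E + 2) y[k] = (E + 2) f[k] and compare with the closed form.
def f(k):
    return 3 ** k if k >= 0 else 0          # causal input (3)^k u[k]

def y_closed(k):
    return 5.5 - 6 * 2 ** k + 2.5 * 3 ** k  # closed-form total solution

ys = [2.0, 1.0]                             # auxiliary conditions y[0], y[1]
for k in range(2, 10):
    ys.append(3 * ys[k - 1] - 2 * ys[k - 2] + f(k - 1) + 2 * f(k - 2))

match = all(abs(ys[k] - y_closed(k)) < 1e-6 for k in range(10))
```

Both methods agree for every k computed, which also confirms c_2 = −6 (not −5).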


1.2.3 Method of Convolution

In this method, the input f[k] is expressed as a sum of impulses. The solution is then obtained as the sum of the solutions to all the impulse components. The method exploits the superposition property of linear difference equations. A discrete-time unit impulse function δ[k] is defined as

δ[k] = 1 for k = 0;   δ[k] = 0 for k ≠ 0   (1.53)

Hence, an arbitrary signal f[k] can be expressed in terms of impulse and delayed impulse functions as

f[k] = f[0] δ[k] + f[1] δ[k-1] + f[2] δ[k-2] + ... + f[m] δ[k-m] + ...,   k ≥ 0   (1.54)

The right-hand side expresses f[k] as a sum of impulse components. If h[k] is the solution of Equation 1.30 to the impulse input f[k] = δ[k], then the solution to the input δ[k-m] is h[k-m]. This follows from the fact that, because of its constant coefficients, Equation 1.30 has the time-invariance property. Also, because Equation 1.30 is linear, its solution is the sum of the solutions to each of the impulse components of f[k] on the right-hand side of Equation 1.54. Therefore,

y[k] = f[0] h[k] + f[1] h[k-1] + f[2] h[k-2] + ... + f[k] h[0] + f[k+1] h[-1] + ...   (1.55)

All practical systems with time as the independent variable are causal, that is, h[k] = 0 for k < 0. Hence, all the terms on the right-hand side beyond f[k] h[0] are zero. Thus,

y[k] = f[0] h[k] + f[1] h[k-1] + ... + f[k] h[0] = Σ_{m=0}^{k} f[m] h[k-m]

The general solution is obtained by adding a complementary solution to the above solution. Therefore, the general solution is given by

y[k] = y_c[k] + Σ_{m=0}^{k} f[m] h[k-m]   (1.56)

The last sum on the right-hand side is known as the convolution sum of f[k] and h[k].

The function h[k] appearing in Equation 1.56 is the solution of Equation 1.30 for the impulsive input (f[k] = δ[k]) when all initial conditions are zero, that is, h[-1] = h[-2] = ... = h[-n] = 0. It can be shown [3] that h[k] contains an impulse and a linear combination of characteristic modes as

h[k] = (b_0/a_0) δ[k] + A_1 γ_1^k + A_2 γ_2^k + ... + A_n γ_n^k   (1.57)

where the unknown constants A_i are determined from n values of h[k] obtained by solving the equation Q[E] h[k] = P[E] δ[k] iteratively.
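The iterative determination of h[k] can be automated. The sketch below (our own illustration, using the same second-order system as Example 1.7 below) iterates the delay form h[k] = 3h[k−1] − 2h[k−2] + δ[k−1] + 2δ[k−2] from zero initial conditions and compares the result with a closed-form expression of the form in Equation 1.57:

```python
# Iteratively compute h[k] for (E^2 - 3E + 2) h[k] = (E + 2) delta[k]
# from zero initial conditions h[-1] = h[-2] = 0.
def delta(k):
    return 1 if k == 0 else 0

h = {-2: 0, -1: 0}
for k in range(10):
    h[k] = 3 * h[k - 1] - 2 * h[k - 2] + delta(k - 1) + 2 * delta(k - 2)

# Closed form delta[k] - 3(1)^k + 2(2)^k, matching Equation 1.57.
closed = [delta(k) - 3 + 2 * 2 ** k for k in range(10)]
agree = all(h[k] == closed[k] for k in range(10))
```

The first few values are h[0] = 0, h[1] = 1, h[2] = 5, h[3] = 13, in agreement with the closed form.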

    EXAMPLE 1.7:

Solve Example 1.6 using the convolution method. In other words, solve

(E² - 3E + 2) y[k] = (E + 2) f[k]

for f[k] = (3)^k u[k] and the auxiliary conditions y[0] = 2, y[1] = 1.

The unit impulse solution h[k] is given by Equation 1.57. In this case a_0 = 2 and b_0 = 2. Therefore,

h[k] = δ[k] + A_1 (1)^k + A_2 (2)^k   (1.58)

To determine the two unknown constants A_1 and A_2 in Equation 1.58, we need two values of h[k], for instance h[0] and h[1]. These can be determined iteratively by observing that h[k] is the solution of (E² - 3E + 2) h[k] = (E + 2) δ[k], that is,

h[k+2] - 3h[k+1] + 2h[k] = δ[k+1] + 2δ[k]   (1.59)

subject to the initial conditions h[-1] = h[-2] = 0. We now determine h[0] and h[1] iteratively from Equation 1.59. Setting k = -2 in this equation yields

h[0] - 3(0) + 2(0) = 0 + 0   ⟹   h[0] = 0

Next, setting k = -1 in Equation 1.59 and using h[0] = 0, we obtain

h[1] - 3(0) + 2(0) = 1 + 0   ⟹   h[1] = 1

Setting k = 0 and 1 in Equation 1.58 and substituting h[0] = 0, h[1] = 1 yields

0 = 1 + A_1 + A_2   and   1 = A_1 + 2A_2

Solution of these two equations yields A_1 = -3 and A_2 = 2. Therefore,

h[k] = δ[k] - 3 + 2(2)^k

and from Equation 1.56

y[k] = c_1 + c_2 (2)^k + Σ_{m=0}^{k} (3)^m [δ[k-m] - 3 + 2(2)^{k-m}]

The sums in the above expression are found by using the geometric progression sum formula

Σ_{m=0}^{k} r^m = (r^{k+1} - 1)/(r - 1),   r ≠ 1

which yields

y[k] = c_1 + c_2 (2)^k + 1.5 - 4(2)^k + 2.5(3)^k

Setting k = 0 and 1 and substituting the given auxiliary conditions y[0] = 2, y[1] = 1, we obtain

2 = c_1 + c_2 + 1.5 - 4 + 2.5   and   1 = c_1 + 2c_2 + 1.5 - 8 + 7.5

Solution of these equations yields c_1 = 4 and c_2 = -2. Therefore,

y[k] = 5.5 - 6(2)^k + 2.5(3)^k

which confirms the result obtained by the classical method in Example 1.6.
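Example 1.7 can be reproduced numerically without the geometric-sum algebra by evaluating the convolution sum directly (our own sketch):

```python
# Reproduce Example 1.7: y[k] = c1 + c2*(2)^k + sum_{m=0}^{k} f[m] h[k-m]
# with c1 = 4, c2 = -2, f[k] = 3^k, h[k] = delta[k] - 3 + 2(2)^k.
def delta(k):
    return 1 if k == 0 else 0

def h(k):
    return delta(k) - 3 + 2 * 2 ** k if k >= 0 else 0   # causal impulse response

def f(k):
    return 3 ** k

def y(k):
    conv = sum(f(m) * h(k - m) for m in range(k + 1))   # convolution sum
    return 4 - 2 * 2 ** k + conv                        # add y_c[k]

ok = all(abs(y(k) - (5.5 - 6 * 2 ** k + 2.5 * 3 ** k)) < 1e-6
         for k in range(8))
```

The brute-force convolution matches the closed form 5.5 − 6(2)^k + 2.5(3)^k and the auxiliary conditions y[0] = 2, y[1] = 1.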


Assessment of the Classical Method

The earlier remarks concerning the classical method for solving differential equations also apply to difference equations. A general discussion of difference equations can be found in texts on the subject [2].

References

[1] Birkhoff, G. and Rota, G.C., Ordinary Differential Equations, 3rd ed., John Wiley & Sons, New York, 1978.
[2] Goldberg, S., Introduction to Difference Equations, John Wiley & Sons, New York, 1958.
[3] Lathi, B.P., Linear Systems and Signals, Berkeley-Cambridge Press, Carmichael, CA, 1992.

The Fourier, Laplace, and z-Transforms

Edward W. Kamen
School of Electrical and Computer Engineering

2.1 Introduction
2.2 Fundamentals of the Fourier, Laplace, and z-Transforms
    Laplace Transform • Rational Laplace Transforms • Irrational Transforms • Discrete-Time Fourier Transform • z-Transform • Rational z-Transforms
2.3 Applications and Examples
    Spectrum of a Signal Having a Rational Laplace Transform • Numerical Computation of the Fourier Transform • Solution of Differential Equations • Solution of Difference Equations
Defining Terms
References


-∞ to ∞. The FT F(ω) of f(t) is defined by

F(ω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt   (2.1)

where ω is the frequency variable in radians per second (rad/s), j = √(-1), and e^{-jωt} is the complex exponential given by Euler's formula

e^{-jωt} = cos(ωt) - j sin(ωt)   (2.2)

Inserting Equation 2.2 into Equation 2.1 results in the following expression for the FT:

F(ω) = R(ω) + jI(ω)   (2.3)

where R(ω) and I(ω) are the real and imaginary parts, respectively, of F(ω), given by

R(ω) = ∫_{-∞}^{∞} f(t) cos(ωt) dt
I(ω) = -∫_{-∞}^{∞} f(t) sin(ωt) dt   (2.4)

From Equation 2.3, it is seen that in general the FT F(ω) is a complex-valued function of the frequency variable ω. For any value of ω, F(ω) has a magnitude |F(ω)| and an angle ∠F(ω) given by

|F(ω)| = √(R²(ω) + I²(ω))
∠F(ω) = tan⁻¹[I(ω)/R(ω)]   (2.5)

where again R(ω) and I(ω) are the real and imaginary parts defined by Equation 2.4. The function |F(ω)| represents the magnitude of the frequency components comprising f(t), and thus the plot of |F(ω)| vs. ω is called the magnitude spectrum of f(t). The function ∠F(ω) represents the phase of the frequency components comprising f(t), and thus the plot of ∠F(ω) vs. ω is called the phase spectrum of f(t). Note that F(ω) can be expressed in the polar form

F(ω) = |F(ω)| e^{j∠F(ω)}   (2.6)

whereas the rectangular form of F(ω) is given by Equation 2.3.

The function (or signal) f(t) is said to have a FT in the ordinary sense if the integral in Equation 2.1 exists for all real values of ω. Sufficient conditions that ensure the existence of the integral are that f(t) have only a finite number of discontinuities, maxima, and minima over any finite interval of time and that f(t) be absolutely integrable. The latter condition means that

∫_{-∞}^{∞} |f(t)| dt < ∞   (2.7)

There are a number of functions f(t) of interest for which the integral in Equation 2.1 does not exist; for example, this is the case for the constant function f(t) = c for -∞ < t < ∞, where c is a nonzero real number. Since the integral in Equation 2.1 obviously does not exist in this case, the constant function does not have a FT in the ordinary sense, but it does have a FT in the generalized sense, given by

F(ω) = 2πc δ(ω)   (2.8)

where δ(ω) is the impulse function. If Equation 2.8 is inserted into the inverse FT given by Equation 2.11, the result is the constant function f(t) = c for all t. This observation justifies taking Equation 2.8 as the definition of the (generalized) FT of the constant function.

The FT defined by Equation 2.1 can be viewed as an operator that maps a time function f(t) into a frequency function F(ω). This operation is often written as

F(ω) = 𝓕[f(t)]   (2.9)

where 𝓕 denotes the FT operator. From Equation 2.9, it is clear that f(t) can be recomputed from F(ω) by applying the inverse FT operator, denoted by 𝓕⁻¹; that is,

f(t) = 𝓕⁻¹[F(ω)]   (2.10)

The inverse operation is given by

f(t) = (1/2π) ∫_{-∞}^{∞} F(ω) e^{jωt} dω   (2.11)

The FT satisfies a number of properties that are very useful in applications. These properties are listed in Table 2.1 in terms of functions f(t) and g(t) whose transforms are F(ω) and G(ω), respectively. Appearing in this table is the convolution f(t) * g(t) of f(t) and g(t), defined by

f(t) * g(t) = ∫_{-∞}^{∞} f(τ) g(t - τ) dτ   (2.12)

Also in Table 2.1 is the convolution F(ω) * G(ω) given by

F(ω) * G(ω) = ∫_{-∞}^{∞} F(λ) G(ω - λ) dλ   (2.13)

From the properties in Table 2.1 and the generalized transform given by Equation 2.8, it is possible to determine the FT of many common functions. A list of FTs of some common functions is given in Table 2.2.

2.2.1 Laplace Transform

Given the real-valued function f(t), the two-sided (or bilateral) Laplace transform F(s) of f(t) is defined by

F(s) = ∫_{-∞}^{∞} f(t) e^{-st} dt   (2.14)

where s is a complex variable. The one-sided (or unilateral) Laplace transform of f(t) is defined by

F(s) = ∫_{0}^{∞} f(t) e^{-st} dt   (2.15)

Note that if f(t) = 0 for all t < 0, the one-sided and two-sided Laplace transforms of f(t) are identical. In controls engineering, the one-sided Laplace transform is primarily used, and thus our presentation focuses on only the one-sided Laplace transform, which is referred to as the Laplace transform.
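The defining integral in Equation 2.15 can be approximated numerically to build intuition. The sketch below (our own, not from the chapter) approximates the one-sided Laplace transform of the unit step f(t) = u(t) at a real value of s by a midpoint Riemann sum over [0, T]; the truncated tail contributes roughly e^{-sT}/s, negligible here:

```python
import math

# Midpoint-rule approximation of F(s) = integral_0^inf u(t) e^{-st} dt,
# which should approach the known result 1/s for Re(s) > 0.
def laplace_unit_step(s, T=40.0, n=200000):
    dt = T / n
    return sum(math.exp(-s * (i + 0.5) * dt) for i in range(n)) * dt

approx = laplace_unit_step(2.0)          # exact value: 1/2
```

At s = 2 the numerical value agrees with 1/s to several decimal places, consistent with the region of convergence Re(s) > 0 discussed below.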

  • 2.2. FUNDAMENTALS OF THE FOURIER, LAPLACE, AND Z-TRANSFORMS

TABLE 2.1 Properties of the Fourier Transform.

Property                              Transform/Property
Linearity                             𝓕[a f(t) + b g(t)] = a F(ω) + b G(ω) for any scalars a, b
Right or left shift in t              𝓕[f(t - t_0)] = F(ω) exp(-jωt_0) for any t_0
Time scaling                          𝓕[f(at)] = (1/a) F(ω/a) for any real number a > 0
Time reversal                         𝓕[f(-t)] = F(-ω) = F̄(ω) = complex conjugate of F(ω)
Multiplication by a power of t        𝓕[t^n f(t)] = j^n d^n F(ω)/dω^n, n = 1, 2, ...
Multiplication by exp(jω_0 t)         𝓕[f(t) exp(jω_0 t)] = F(ω - ω_0) for any real number ω_0
Multiplication by sin(ω_0 t)          𝓕[f(t) sin(ω_0 t)] = (j/2)[F(ω + ω_0) - F(ω - ω_0)]
Multiplication by cos(ω_0 t)          𝓕[f(t) cos(ω_0 t)] = (1/2)[F(ω + ω_0) + F(ω - ω_0)]
Differentiation in the time domain    𝓕[d^n f(t)/dt^n] = (jω)^n F(ω), n = 1, 2, ...
Multiplication in the time domain     𝓕[f(t) g(t)] = (1/2π)[F(ω) * G(ω)]
Convolution in the time domain        𝓕[f(t) * g(t)] = F(ω) G(ω)
Duality                               𝓕[F(t)] = 2π f(-ω)
Parseval's theorem                    ∫_{-∞}^{∞} f(t) g(t) dt = (1/2π) ∫_{-∞}^{∞} F(-ω) G(ω) dω
Special case of Parseval's theorem    ∫_{-∞}^{∞} f²(t) dt = (1/2π) ∫_{-∞}^{∞} |F(ω)|² dω

TABLE 2.2 Common Fourier Transforms.

f(t)                                  F(ω)
δ(t) = unit impulse                   1
c, -∞ < t < ∞                         2πc δ(ω)
sin(at)/πt                            1 for -a < ω < a; 0, all other ω
e^{-b|t|}, any b > 0                  2b/(ω² + b²)
e^{-bt²}, any b > 0                   √(π/b) e^{-ω²/4b}
e^{jω_0 t}                            2π δ(ω - ω_0)
cos(ω_0 t + θ)                        π[e^{-jθ} δ(ω + ω_0) + e^{jθ} δ(ω - ω_0)]
sin(ω_0 t + θ)                        jπ[e^{-jθ} δ(ω + ω_0) - e^{jθ} δ(ω - ω_0)]
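A transform pair from Table 2.2 can be verified by approximating Equation 2.1 numerically. This sketch (our own) checks the pair e^{-b|t|} ↔ 2b/(ω² + b²) with a midpoint Riemann sum; since f(t) is even, the transform is real and only the cosine part of e^{-jωt} contributes:

```python
import math

# Numerical check of the Table 2.2 pair  e^{-b|t|}  <->  2b/(w^2 + b^2)
# via a midpoint Riemann sum over [-T, T] (the tail ~ e^{-bT} is negligible).
def ft_double_sided_exp(w, b=1.0, T=40.0, n=200000):
    dt = 2 * T / n
    total = 0.0
    for i in range(n):
        t = -T + (i + 0.5) * dt
        total += math.exp(-b * abs(t)) * math.cos(w * t) * dt
    return total

val = ft_double_sided_exp(0.5)           # exact value: 2/(0.25 + 1) = 1.6
```

The same routine with other ω values traces out the Lorentzian-shaped magnitude spectrum 2b/(ω² + b²).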

Given a function f(t), the set of all complex numbers s such that the integral in Equation 2.15 exists is called the region of convergence of the Laplace transform of f(t). For example, if f(t) is the unit-step function u(t) given by u(t) = 1 for t ≥ 0 and u(t) = 0 for t < 0, the integral in Equation 2.15 exists for any s = σ + jω with real part σ > 0. Hence, the region of convergence is the set of all complex numbers s with positive real part, and, for any such s, the transform of the unit-step function u(t) is equal to 1/s.

Given a function f(t), if the region of convergence of the Laplace transform F(s) includes all complex numbers s = jω for ω ranging from -∞ to ∞ (i.e., the region of convergence includes the imaginary axis of the complex plane), then F(jω) = F(s)|_{s=jω} is well defined (i.e., exists) and is given by

F(jω) = ∫_{0}^{∞} f(t) e^{-jωt} dt   (2.16)

Then if f(t) = 0 for t < 0, the right-hand side of Equation 2.16 is equal to the FT F(ω) of f(t) (see Equation 2.1). Hence, the FT of f(t) is given by

F(ω) = F(s)|_{s=jω}   (2.17)

The Laplace transform defined by Equation 2.15 can be viewed as an operator, denoted by F(s) = L[f(t)], that maps a time function f(t) into the function F(s) of the complex variable s. The inverse Laplace transform operator is often denoted by L⁻¹ and is given by

f(t) = L⁻¹[F(s)] = (1/2πj) ∫_{c-j∞}^{c+j∞} F(s) e^{st} ds   (2.18)

The integral in Equation 2.18 is evaluated along the path s = c + jω in the complex plane from c - j∞ to c + j∞, where c is any real number for which the path c + jω lies in the region of convergence of the transform F(s). It is often possible to determine f(t) without having to use Equation 2.18; for example, this is the case when F(s) is a rational function of s. The computation of the Laplace transform or the inverse transform is often facilitated by using the properties of the Laplace transform, which are listed in Table 2.3. In this table, f(t) and g(t) are two functions