-
SYNCHRONIZATION IN NETWORKS OF LINEAR TIME-INVARIANT SYSTEMS
by
Tian Xia
A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
The Edward S. Rogers Sr. Department of Electrical & Computer Engineering
University of Toronto
© Copyright 2018 by Tian Xia
-
Abstract
Synchronization in Networks of Linear Time-Invariant Systems
Tian Xia
Master of Applied Science
The Edward S. Rogers Sr. Department of Electrical & Computer Engineering
University of Toronto
2018
This thesis studies the synchronization problem for a network of identical, linear time-invariant
systems. A criterion to test network synchronization is derived and used for synchronization de-
sign. Minimum requirements for the solvability of the design problem are provided. A selection
of control protocols for specific classes of systems is presented, and their synchronization prop-
erties are characterized. An algorithm to approximate the root locus of polynomials is developed
to study synchronization properties in the high-gain limit. This analysis yields an algebraic char-
acterization of systems that synchronize with respect to a large set of interconnections, as well as
the conclusion that certain control protocols have poor synchronization properties.
ii
-
Acknowledgements
I would like to express my deepest gratitude to my supervisor Luca Scardovi for his guidance in my
research and his revision of this thesis. I am grateful to my girlfriend and my parents for their
support and encouragement. I would like to thank NSERC, which provided the funding that kept
me afloat during my graduate study.
iii
-
Contents
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Glossary of Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1 Introduction 1
1.1 Literature review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Thesis overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Notation and preliminaries 7
2.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 System theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 System compositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 Controllability and observability . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.3 Passivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.4 Nyquist criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Graph theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Abstract algebra and polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Problem statement 19
3.1 Model of a network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
iv
-
4 Analysis of synchronization and synchronization region 25
4.1 Synchronization criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2 Synchronization region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3 Computation of the synchronization region . . . . . . . . . . . . . . . . . . . . . . 31
5 Synchronizability 38
5.1 Output feedback synchronizability . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6 Designing the synchronization region 45
6.1 Solutions of SDP-S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.1.1 Plants with full actuation or state output . . . . . . . . . . . . . . . . . . . 46
6.1.2 Passive and output feedback passive plants . . . . . . . . . . . . . . . . . . 48
6.1.3 Minimum phase plants of relative degree 1 . . . . . . . . . . . . . . . . . . 52
6.2 Solutions of SDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7 Asymptotic approximation of the root locus of polynomials 56
7.1 Main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.1.1 Treating polynomials with rational powers as functions . . . . . . . . . . . 58
7.1.2 Root locus of a polynomial and its approximation . . . . . . . . . . . . . . 59
7.1.3 Main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.2 Proof of the main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.2.1 Basic properties of APA . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.2.2 Computation of APA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.2.3 Identification of APA1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.2.4 Identification of APAn . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
7.3 Additional properties of APA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
7.3.1 Rotational symmetry of APA . . . . . . . . . . . . . . . . . . . . . . . . . 74
7.3.2 Conjugation symmetry of APA . . . . . . . . . . . . . . . . . . . . . . . . 77
7.3.3 Factorization using APAexa . . . . . . . . . . . . . . . . . . . . . . . . . . 77
v
-
8 High gain analysis of synchronization 80
8.1 Analysis of the asymptotic synchronization region . . . . . . . . . . . . . . . . . . 81
8.1.1 Special properties of APA for systems . . . . . . . . . . . . . . . . . . . . 83
8.1.2 Maximum size of the asymptotic synchronization region . . . . . . . . . . 86
8.1.3 Characterization of large synchronization regions . . . . . . . . . . . . . . 87
8.2 Designing the asymptotic synchronization region . . . . . . . . . . . . . . . . . . 94
8.2.1 General design methodology using APA . . . . . . . . . . . . . . . . . . . 94
8.2.2 SDP-S for minimum phase plants of relative degree 1 . . . . . . . . . . . . 96
8.2.3 Solvability of SDP-D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
9 Conclusion 99
A Matrix theory 101
vi
-
List of Tables
7.1 The APA of p(x,y) = x2 +1+ xy+ y . . . . . . . . . . . . . . . . . . . . . . . . . 61
vii
-
List of Figures
1.1 Network of metronomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 Compositions of systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Block diagram of (A−BHC,B,C) . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 ε-Nyquist contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Closed-loop system in the Nyquist criterion . . . . . . . . . . . . . . . . . . . . . 14
3.1 Block diagram of a network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 LC oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Network of LC oscillators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 SDP block diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.1 Synchronization region of Example 4.11 . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Synchronization regions and Nyquist regions of Example 4.16 . . . . . . . . . . . 37
5.1 Synchronization region of Example 5.7 . . . . . . . . . . . . . . . . . . . . . . . . 44
6.1 ε-Nyquist contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7.1 r-leading polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7.2 Bounding sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
viii
-
Glossary of Symbols
Symbol Description
N Set of natural numbers including zero
N>0 Set of natural numbers excluding zero
C>a Set of complex numbers with real part greater than a
R>a Set of real numbers greater than a
Spec(M) Multiset of eigenvalues of the matrix M
GN Set of graphs of size N
SR(S) Synchronization region of the system S (Definition 4.3)
NR(S) Nyquist region of the system S (Definition 2.3)
ASR(S) Asymptotic synchronization region of the system S (Definition 8.1)
CharS Characteristic polynomial of the system S (Definition 4.9)
C[Qx] Polynomial ring over C with powers in Q (Definition 2.9)
LP(p) Leading power of a polynomial p (Definition 2.10)
LC(p) Leading coefficient of a polynomial p (Definition 2.10)
LT(p) Leading term of a polynomial p (Definition 2.10)
TP(p) Trailing power of a polynomial p (Definition 2.10)
TC(p) Trailing coefficient of a polynomial p (Definition 2.10)
TT(p) Trailing term of a polynomial p (Definition 2.10)
LPx(p) Leading power of a multivariate polynomial p as a polynomial of x
RLp Root locus of a polynomial of two variables p (Definition 7.4)
Ord(σ) Order of a polynomial p (Definition 7.5)
ix
-
APAn(p) nth order asymptotic polynomial approximation of p (Definition 7.6)
Shr p Shift operator of p ∈ C[Nx×Qy] (Definition 7.15)
Tra p Translation operator of p ∈ C[Nx×Qy] (Definition 7.15)
Rotn σ Rotation of a polynomial in C[y : Q] (Definition 7.29)
Ldr(p) r-leading polynomial of p (Definition 7.20)
x
-
Chapter 1
Introduction
Recent years have witnessed an increasing interest in systems that are composed of a large number
of interconnected units. As a whole, these networks can exhibit dynamic features
that cannot be predicted from the properties of the individual parts alone, but rather emerge as a
result of their mutual interaction. The collective behavior of a flock of birds and memory in the
brain are good examples of such properties. Moreover, in an engineering scenario, controlling
the global behavior of interconnected systems is key in emerging applications such as multi-agent
systems coordination, smart grids, and social networks. The archetype of collective behavior is
synchronization.
Synchronization phenomena have been known since the 17th century, when Christiaan Huy-
gens observed that two pendulums swinging in the same room would eventually oscillate in unison
regardless of their initial configuration. A collection of metronomes placed on a mobile platform
(as shown in Figure 1.1) exhibits the same behavior. Synchronization is also ubiquitous in nature:
swarms of fireflies flash in unison; flocks of starlings fly in coordinated patterns; neurons
in the brain fire in synchronized patterns. These phenomena motivate a systematic analysis of
synchronization, which is the topic of this thesis.
The analysis of synchronization has many applications. For example, synchronization of neu-
ral discharges encodes information within the nervous system [1]; and an excess of synchrony is
correlated with Parkinson disease [2]. Therefore, the analysis of synchronization can contribute
1
-
Figure 1.1: Oscillating metronomes on a mobile platform
to a better understanding of the nervous system, and can help to treat neurological diseases. Syn-
chronization theory is also used in engineering to design distributed power generation systems
to maintain phase alignment and to stabilize formations in groups of unmanned vehicles. Other
applications are presented in [3, 4].
This thesis introduces a general framework for the analysis of synchronization in networks of
linear time-invariant systems. A selection of control protocols is presented, and their synchro-
nization properties are studied.
1.1 Literature review
Synchronization has been an active area of research in the past two decades. Particular attention
has been devoted to the study of networks composed of single integrators. The synchronization
problem for single-integrator subunits is also called consensus. It was shown in [5] that consensus
is achieved under very mild conditions on the coupling graph. The study of synchronization was
later generalized to networks consisting of double integrators [6, 7], harmonic oscillators [8], and gen-
2
-
eral subunit dynamics. The literature on general subunit dynamics focuses primarily on designing
controllers that synchronize the network under a class of couplings. In many papers, the assumptions
imposed in this thesis (linear subunits and time-invariant coupling) are not required.
In exchange, however, either the subunits must be passive, or the controllers must access more than
diffusive information (differences between the outputs of subunits).
For example, synchronization is solved for passive nonlinear subunits under symmetric time-
variant coupling in [9], and for passive nonlinear subunits under time-invariant coupling in [10].
However, both papers use a passivity-based argument, and hence require the subunits to be passive and
the coupling to be symmetric. The symmetry assumption is removed in [10] using a rescaling of the
potential functions, though at the expense of restricting to time-invariant coupling. Synchronization
of weakly minimum-phase nonlinear systems is solved in [11], and synchronization of linear subunits
under time-variant coupling is solved in [12]. However, these papers assume that the outputs of the subunits
are inputs of their respective controllers. Controllers of this form are infeasible when only
diffusive information is available. Synchronization of nonlinear subunits is also studied in [13]. However,
the nonlinear subunits therein are approximated by linear systems. In fact, the result in [13] relies
on the synchronization theory for linear systems.
Within the context of linear subunits with time-invariant coupling, the design problem is essen-
tially solved when the controllers are dynamic. [14] showed the existence of a dynamic controller
enforcing synchronization under very mild assumptions. In fact, if the subunits have their states as
outputs, the network can be synchronized even if the coupling is time-variant [12]. However, the
controllers used in these papers access outputs of neighboring controllers. This property is com-
mon to all dynamic controllers presented in the literature (e.g., [15]). However, some applications
forbid such access. For this reason, the design of static controllers, a class of controllers access-
ing only diffusive information from subunits, is also a problem of interest. Unlike its dynamic
counterpart, the static design problem is far from completely solved.
So far, the static design problem has only been solved for specific classes of subunits. In the case
where the subunits have states as their outputs or are fully actuated, [16] and [17] provided algorithms
for designing static controllers. These results were generalized to a larger class of subunits in [18].
3
-
Nonetheless, this class remains a small subset of all linear systems. The static design problem
is also considered in many papers (e.g., [19–21]), but a general method applicable to all linear
subunits is missing. This thesis will present a general design method, applicable to almost all linear
subunits. More importantly, this method not only produces synchronizing controllers, but also
identifies subunits for which such controllers do not exist.
Many papers in the literature also consider synchronization of heterogeneous1 networks (e.g.,
[13, 22, 23]) and synchronization of networks with delayed couplings (e.g., [11, 19]). These topics
are outside the scope of this thesis.
In contrast to the existing literature, the focus of this thesis is not the design of controllers.
Instead, it is a systematic study of network synchronization for linear subunits, without any re-
striction on the system structure or the interconnection topology. We will reduce the network
synchronization problem to one about the “synchronization region” of the subunit. The synchro-
nization region characterizes the synchronization properties of the subunit (the set of graphs with
respect to which it synchronizes), hence it provides a general framework for synchronization anal-
ysis. Using this framework, we recover existing control protocols and develop novel results. A
particular novelty of this thesis is an algorithm to measure the synchronization properties of any
given controller; whereas most results in the literature only guarantee synchronization for specific
controllers.
1.2 Contributions
The following are the notable contributions of this thesis:
1. Theorem 4.4 reduces the synchronization problem for a network to the problem of com-
puting the synchronization region of each subunit. Results on the computation of the syn-
chronization region are presented in Chapter 4. The synchronization region is used in Chapter
6 to characterize the synchronization properties of control protocols in the literature.
1having non-identical subunits
4
-
2. Propositions 5.5 and 5.6 provide characterizations of output feedback synchronizability, the
minimum requirement for the solvability of the design problem.
3. Proposition 6.4 shows that passive and output feedback passive systems have “good” syn-
chronization properties. Furthermore, Proposition 6.6 and Corollary 6.7 establish that among
SISO systems, they are the only systems with “good” synchronization properties.
4. Theorems 7.32, 7.25, and 7.27 provide an algorithm to approximate the root locus of a
polynomial. Other properties of these approximations are presented in Chapter 7. The results
have applications beyond synchronization, such as approximating the eigenvalues of A+ sB
when ‖s‖ is large.
5. Proposition 8.2 provides a numerical method to determine coupling graphs inducing syn-
chronization under a high gain controller.
6. Propositions 8.12 and 8.13 provide algebraic characterizations of systems with a large syn-
chronization region (systems that synchronize with respect to a large set of graphs).
7. Section 8.2.1 illustrates a method to design static controllers, applicable to almost all linear
subunits.
1.3 Thesis overview
Chapter 3 presents a formal definition of the synchronization problems considered in this thesis.
Chapter 4 contains the synchronization criterion (Theorem 4.4), the fundamental result which
establishes the connection between synchronization of a network and the synchronization region of
its subunits. All subsequent analysis of synchronization will be stated in terms of the synchronization
region. The remainder of the chapter presents properties of the synchronization region and methods
for its computation.
Chapter 5 studies synchronizability, the minimum requirement a subunit must satisfy for a network
of such subunits to synchronize; and Chapter 6 presents controllers guaranteeing synchronization
for specific classes of subunits.
5
-
Since the synchronization region is not computable for general systems, Chapter 8 presents a method
to determine the geometry of the synchronization region near infinity. Synchronization properties
and design results are derived using this method. Chapter 7 develops the mathematical tools re-
quired in Chapter 8.
The following diagram shows the dependency of the chapters.
[Diagram: dependencies among Chapters 3–8.]
6
-
Chapter 2
Notation and preliminaries
2.1 Notation
The fields of rational, real, and complex numbers are respectively denoted by Q, R, and C. Given
z ∈ C, its real and imaginary components are denoted by Re(z) and Im(z). The symbol C>a denotes
the set of complex numbers with real part greater than a; the symbols R>a, etc., are defined
analogously. The set of natural numbers (including zero) is denoted by N. The natural numbers
excluding zero are denoted by N>0.
Fn is the set of all ordered n-tuples (x1, . . . ,xn), where the xi are elements of a field F. We denote
the set of m by n matrices with elements in F by Fm×n. We denote by 1n the vector in Cn whose
components are all 1, and by In the identity matrix in Cn×n. The index n is dropped when the
dimension is clear from the context. Given a matrix A ∈ C^{m×n}, its transpose is denoted by A^T and
its Hermitian conjugate by A^∗. Given two matrices A and B, the Kronecker product A⊗B is
\[
A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix},
\]
where a_{ij} are the components of A. The multiset of eigenvalues of a square matrix A is denoted by
7
-
Spec(A). The matrix A is called Hurwitz if Spec(A) ⊆ C_{<0}. Given Hermitian matrices A and B,
we write A > B (resp. A ≥ B) if A−B is positive definite (resp. positive semidefinite).
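As a small numerical illustration of these notations (a sketch, assuming numpy; the matrices are of our choosing), one can check the block structure of A⊗B and the standard fact that Spec(A⊗B) consists of the pairwise products of the eigenvalues of A and B:

```python
import numpy as np

# Upper- and lower-triangular examples, so the eigenvalues are easy to read off.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[4.0, 0.0],
              [1.0, 5.0]])

K = np.kron(A, B)          # block matrix [[a11*B, a12*B], [a21*B, a22*B]]

# Top-left block equals a11 * B.
assert np.allclose(K[:2, :2], A[0, 0] * B)

# Spec(A ⊗ B) = {λμ : λ ∈ Spec(A), μ ∈ Spec(B)} (as multisets).
prod = sorted((lam * mu).real for lam in np.linalg.eigvals(A)
                              for mu in np.linalg.eigvals(B))
assert np.allclose(sorted(np.linalg.eigvals(K).real), prod)
```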
Some notations on multisets are defined below.
Definition 2.1. Let S be a set; a multiset in S is a subset of S, but counting multiplicity. For
example, the roots of the polynomial (x−1)²(x+1) form the multiset {1, 1, −1} in C. Any multiset A
in S corresponds uniquely to a multiplicity function 1_A : S → N. For example, the multiplicity
function of A = {1, 1, −1} is the function mapping 1 to 2, −1 to 1, and everything else to 0. In view
of the correspondence with multiplicity functions, the collection of all multisets of S is denoted by N^S.
Given A ∈ NS, B ∈ NS and n ∈ N, we define
• Subset: A⊆ B if 1A ≤ 1B.
• Union: A∪B is the multiset C such that 1C = max{1A, 1B}.
• Intersection: A∩B is the multiset C such that 1C = min{1A, 1B}.
• Sum: A]B is the multiset C such that 1C = 1A +1B.
• Exclusion: A\B is the multiset C such that 1C = max{1A−1B, 0}.
• Multiplication: n⊗A is the multiset C such that 1C = n1A.
• Cardinality (size): |A| = ∑_{x∈S} 1_A(x).
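These operations translate directly into code on multiplicity functions; a minimal sketch (the representation is ours) using Python's collections.Counter as the multiplicity function 1_A:

```python
from collections import Counter

# A multiset in S is represented by its multiplicity function 1_A : S -> N,
# here a Counter mapping elements to multiplicities.
A = Counter({1: 2, -1: 1})          # the multiset {1, 1, -1}
B = Counter({1: 1, 2: 1})           # the multiset {1, 2}

# Unary + strips zero counts, so equality behaves like multiset equality.
union        = +Counter({x: max(A[x], B[x]) for x in A.keys() | B.keys()})
intersection = +Counter({x: min(A[x], B[x]) for x in A.keys() | B.keys()})
msum         = A + B                                   # 1_C = 1_A + 1_B
exclusion    = +Counter({x: max(A[x] - B[x], 0) for x in A})
card         = sum(A.values())                         # |A| = sum of multiplicities

assert union == Counter({1: 2, -1: 1, 2: 1})
assert intersection == Counter({1: 1})
assert msum == Counter({1: 3, -1: 1, 2: 1})
assert card == 3
```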
2.2 System theory
In the context of this thesis, a system is a linear ODE of the form
\[
S : \begin{cases} \dot{x} = Ax + Bu, \\ y = Cx + Du, \end{cases}
\]
where u ∈ R^m is the input, y ∈ R^p is the output, and x ∈ R^n is the internal state. The system
is denoted by its four matrices: S = (A,B,C,D). The short-form notation S = (A,B,C) means
8
-
S = (A,B,C,0), and S = D means S = (0,0,0,D), i.e., S is a static matrix gain.
A system is square if the dimension of the input space is equal to the dimension of the output
space (i.e. m = p). Two systems S = (A,B,C,D) and S′ = (A′,B′,C′,D′) are equivalent if S′ is
S under a linear coordinate transformation to the state space (i.e. there exists invertible P ∈ Rn×n
such that A′ = PAP−1, B′ = PB, C′ =CP−1, D′ = D).
2.2.1 System compositions
Let S1 = (A1,B1,C1,D1) and S2 = (A2,B2,C2,D2) be two systems; their compositions, shown in
Figure 2.1, are denoted as follows. The composition S2 ◦ S1 denotes the cascade
\[
S_2 \circ S_1 : \begin{cases}
\dfrac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ B_2 C_1 & A_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 D_1 \end{bmatrix} u_1, \\[1ex]
y_2 = \begin{bmatrix} D_2 C_1 & C_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + D_2 D_1 u_1.
\end{cases}
\]
The product S1 × S2 denotes the system
\[
S_1 \times S_2 : \begin{cases}
\dfrac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} B_1 & 0 \\ 0 & B_2 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \\[1ex]
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} C_1 & 0 \\ 0 & C_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} D_1 & 0 \\ 0 & D_2 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}.
\end{cases}
\]
The power S_1^N denotes the product S_1 × ··· × S_1 consisting of N identical copies of S_1.
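The cascade formula above can be sketched numerically; a minimal numpy implementation (the function name and example are ours), checked on two integrators in series:

```python
import numpy as np

def cascade(S1, S2):
    """State matrices of the cascade S2 ∘ S1 (output of S1 feeds S2),
    following the block formulas above. Each system is a tuple (A, B, C, D)."""
    A1, B1, C1, D1 = S1
    A2, B2, C2, D2 = S2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return A, B, C, D

# Example: two integrators in series give a double integrator.
I1 = (np.zeros((1, 1)), np.ones((1, 1)), np.ones((1, 1)), np.zeros((1, 1)))
A, B, C, D = cascade(I1, I1)
assert np.allclose(A, [[0.0, 0.0], [1.0, 0.0]])
```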
2.2.2 Controllability and observability
A system S = (A,B,C,D) is controllable if for every initial condition x(0) and every finite time
T > 0, there exists a continuous input u defined on t ∈ [0,T] such that the solution satisfies
x(T) = 0. The system is observable if for every T > 0, the only initial condition that yields
9
-
Figure 2.1: Compositions of systems: (a) cascade S2 ◦ S1; (b) Cartesian product S1 × S2; (c) S_1^N.
y(t) = 0 for every t ∈ [0,T] under zero input u = 0 is x(0) = 0. In other words, the initial
condition is uniquely determined by the output on any interval containing t = 0. Let n be the
order of S. By classical linear theory, S is controllable if and only if its controllability
matrix C = [B, AB, . . . , A^{n−1}B] has full rank, and S is observable if and only if its
observability matrix O = [C^T, C^T A^T, . . . , C^T (A^T)^{n−1}]^T has full rank. Furthermore, by
decomposing the state space of S into the four subspaces Im(C)∩Ker(O), Im(C)∩Ker(O)⊥,
Im(C)⊥∩Ker(O), and Im(C)⊥∩Ker(O)⊥, the system matrices of S in such a basis are
\[
A' = \begin{bmatrix}
A_{c\bar o} & A_{12} & A_{13} & A_{14} \\
0 & A_{co} & 0 & A_{24} \\
0 & 0 & A_{\bar c \bar o} & A_{34} \\
0 & 0 & 0 & A_{\bar c o}
\end{bmatrix}, \quad
B' = \begin{bmatrix} B_{c\bar o} \\ B_{co} \\ 0 \\ 0 \end{bmatrix}, \quad
C' = \begin{bmatrix} 0 & C_{co} & 0 & C_{\bar c o} \end{bmatrix}, \quad
D' = D,
\]
where the subsystems
\[
\left( \begin{bmatrix} A_{c\bar o} & A_{12} \\ 0 & A_{co} \end{bmatrix}, \begin{bmatrix} B_{c\bar o} \\ B_{co} \end{bmatrix} \right)
\quad \text{and} \quad
\left( \begin{bmatrix} A_{co} & A_{24} \\ 0 & A_{\bar c o} \end{bmatrix}, \begin{bmatrix} C_{co} & C_{\bar c o} \end{bmatrix} \right)
\]
are respectively controllable and observable. Such a basis is called a Kalman decomposition of S.
The subsystem (A_{co}, B_{co}, C_{co}), which is always controllable and observable, is called a controllable
10
-
and observable subsystem of S. System S is called stabilizable if
\[
\begin{bmatrix} A_{\bar c\bar o} & A_{34} \\ 0 & A_{\bar c o} \end{bmatrix}
\]
is Hurwitz (the uncontrollable states have stable dynamics), and S is called detectable if
\[
\begin{bmatrix} A_{c\bar o} & A_{13} \\ 0 & A_{\bar c\bar o} \end{bmatrix}
\]
is Hurwitz (the unobservable states have stable dynamics). Stabilizability is equivalently character-
ized by the existence of a feedback matrix K such that A−BK is Hurwitz. Similarly, detectability
is equivalent to the existence of a matrix H such that A−HC is Hurwitz.
2.2.3 Passivity
A square system S = (A,B,C,D) of order n is called passive if there exists a storage function
V : R^n → [0,∞) such that
\[
V(x(T)) - V(x(0)) \le \int_0^T y^T(t)\, u(t)\, dt
\]
for every T > 0 and every piecewise continuous input u defined on [0,T]. The positive real lemma provides
the following algebraic characterization of passivity.
Lemma 2.2 ([24]). A controllable and observable system S = (A,B,C,D) is passive if there exists
a symmetric and positive definite P such that
\[
\begin{bmatrix} PA + A^T P & PB - C^T \\ B^T P - C & -D - D^T \end{bmatrix} \le 0.
\]
11
-
Figure 2.2: Block diagram of (A−BHC,B,C)
In the case D = 0, the above characterization reduces to
\[
PA + A^T P \le 0, \qquad PB = C^T.
\]
A square system S = (A,B,C) is called output feedback passive if there is a real matrix H
such that (A−BHC, B, C), the closed-loop system shown in Figure 2.2, is passive. It is easy
to verify that passivity of (A−BHC, B, C) is equivalent to the existence of a storage function
V : R^n → [0,∞) for S such that for every T > 0,
\[
V(x(T)) - V(x(0)) \le \int_0^T \left( y^T(t)\, u(t) + y^T(t)\, H\, y(t) \right) dt.
\]
This inequality remains valid if H is replaced by any kI such that kI ≥ H. The above definition
of output feedback passivity is therefore equivalent to the existence of a scalar k such that
(A−kBC, B, C) is passive (which is the definition used in [25]).
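For the case D = 0, the positive real conditions can be verified numerically for a candidate storage matrix P; a sketch (the example system, an undamped oscillator with co-located input and output, and the choice P = I are ours):

```python
import numpy as np

# Candidate system: undamped oscillator, input and output co-located.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

P = np.eye(2)   # candidate storage matrix

# Positive real lemma with D = 0: PA + A^T P <= 0 and PB = C^T.
M = P @ A + A.T @ P
assert np.all(np.linalg.eigvalsh(M) <= 1e-12)   # negative semidefinite
assert np.allclose(P @ B, C.T)                  # PB = C^T
```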
2.2.4 Nyquist criterion
Given the transfer function G of a system, its ε-Nyquist contour is the oriented curve shown in
Figure 2.3. The contour contains half of the disk of radius 1/ε, and bypasses the purely imaginary
poles of G with semicircles of radius ε. The ε-Nyquist plot, denoted by γε, is the image of
this contour under G. Namely, γε = G ◦ τ, where τ is any parametrization of the Nyquist contour.
Informally, the Nyquist plot of G, denoted by γ, is the closed curve defined by the limit lim_{ε→0+} γε.
We stress “informally” since the limit may not exist. However, the winding number of γ at a point
12
-
Figure 2.3: The ε-Nyquist contour of a transfer function with poles marked by ×.
s is formally defined as
\[
\mathrm{Ind}_\gamma(s) := \lim_{\varepsilon\to 0^+} \mathrm{Ind}_{\gamma_\varepsilon}(s),
\]
where Indγε (s) is the standard winding number1 of the oriented curve γε at s. We state below a
complex extension of the standard Nyquist criterion.
Definition 2.3. The Nyquist region of a SISO system S is
\[
NR(S) = \{ s \in \mathbb{C} \mid \mathrm{Ind}_\gamma(s) = \#\ \text{poles of } G \text{ in } \mathbb{C}_{>0} \},
\]
where G is the transfer function and γ is the Nyquist plot of S.
Lemma 2.4 (Nyquist criterion). Assume S = (A,B,C) is a controllable and observable SISO sys-
tem and s ∈ C is nonzero. The matrix A − sBC (i.e., the matrix of the closed-loop system in
Figure 2.4) is Hurwitz if and only if −1/s ∈ NR(S).
1The winding number of an oriented curve at a point is the number of counterclockwise rotations of the curve
around the point.
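A rough numerical illustration of Lemma 2.4 for a first-order SISO example (the contour discretization and the example G(s) = 1/(s+1), i.e. S = (−1, 1, 1), are ours; the sign of the winding number depends on the contour orientation, so only |Ind| is compared):

```python
import numpy as np

def winding(G, w, R=1e3, n=200_000):
    """Approximate winding number of the Nyquist plot of G around w, using
    a closed contour: the imaginary axis from -iR to iR, then a semicircle
    of radius R through the right half-plane (orientation fixed by this choice)."""
    s = np.concatenate([1j * np.linspace(-R, R, n),
                        R * np.exp(1j * np.linspace(np.pi / 2, -np.pi / 2, n))])
    curve = G(s) - w
    dphi = np.angle(curve[1:] / curve[:-1])   # incremental phase changes
    return round(np.sum(dphi) / (2 * np.pi))

G = lambda s: 1.0 / (s + 1.0)   # transfer function of S = (-1, 1, 1); no poles in C>0

# s = 1: A - sBC = -2 is Hurwitz, and -1/s = -1 has winding number 0.
assert winding(G, -1.0) == 0
# s = -2: A - sBC = 1 is not Hurwitz, and -1/s = 0.5 has nonzero winding number.
assert abs(winding(G, 0.5)) == 1
```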
13
-
S
s
−+
Figure 2.4: Closed-loop system in the Nyquist criterion
2.3 Graph theory
A weighted directed graph (or graph) G = (V, E, Σ) is a finite set of nodes V = {1, 2, . . . , N} inter-
connected by edges E ⊆ V×V. Each edge has a nonzero real weight, and the weight of edge (i, j)
is the (i, j)th component of the weighted adjacency matrix Σ ∈ R^{N×N}. Set [Σ]_{i,j} = 0 if (i, j) ∉ E
(so E is uniquely determined by Σ). Furthermore, self-loops are forbidden, which means that [Σ]_{j,j} = 0.
The set of all graphs containing N nodes is denoted by GN .
For any graph G ∈ G_N, its Laplacian matrix L_G ∈ R^{N×N} is defined component-wise by
\[
[L_G]_{i,j} := \begin{cases} \sum_{k=1}^{N} \sigma_{i,k}, & i = j, \\ -\sigma_{i,j}, & i \neq j, \end{cases}
\]
where σ_{i,j} is the (i, j)th component of the weighted adjacency matrix. The Laplacian matrix
always contains 0 and 1_N as an eigenvalue-eigenvector pair. More can be said about the remaining
eigenvalues if the graph satisfies special properties.
The size of a graph is the number of nodes, or |V|. A graph is said to be positive if every
component of Σ is nonnegative, and undirected if Σ is symmetric. Assume v, v′ ∈ V. Node v′ is
called a neighbor of v if (v, v′) ∈ E. Node v′ is said to be reachable from v if there is a sequence
v = v0, v1, v2, . . . , vn = v′ such that each node in the sequence is a neighbor of the previous one.
Node v is called globally reachable if it is reachable from every node in V.
Clearly, if a graph is undirected, its Laplacian matrix is symmetric and has real eigenvalues.
The following are additional properties about the Laplacian spectrum.
Lemma 2.5 (Proposition 1 in [26]). If G ∈ GN is positive, then the eigenvalues of its Laplacian
14
-
matrix are contained in the set \{ re^{i\theta} \mid r \ge 0,\ |\theta| \le \pi/2 - \pi/N \}.
Lemma 2.6 ([27]). A positive graph G ∈ G_N contains a globally reachable node if and only if the
eigenvalue 0 of L_G has algebraic multiplicity one.
In particular, for any positive graph containing a globally reachable node, its Laplacian spec-
trum excluding one instance of zero is contained in C>0.
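The Laplacian definition and the eigenvalue facts above can be illustrated numerically; a sketch (the helper name and example graph are ours) on a positive 3-cycle, in which every node is globally reachable:

```python
import numpy as np

def laplacian(Sigma):
    """Laplacian of a weighted directed graph from its adjacency matrix:
    diagonal entries are row sums of Sigma, off-diagonal entries are -sigma_ij."""
    L = -np.array(Sigma, dtype=float)
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

# A positive directed cycle on 3 nodes.
Sigma = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
L = laplacian(Sigma)

# 0 is always an eigenvalue with eigenvector 1_N ...
assert np.allclose(L @ np.ones(3), 0.0)
# ... and here it is simple, with the remaining eigenvalues in C_{>0}.
eigs = sorted(np.linalg.eigvals(L), key=lambda z: z.real)
assert abs(eigs[0]) < 1e-9 and all(z.real > 0 for z in eigs[1:])
```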
2.4 Abstract algebra and polynomials
This section defines a generalization of polynomial rings which will be used in this thesis. Some
definitions may deviate from the standard definitions found in the literature.
Definition 2.7. A ring is a set R equipped with an addition operation + : R×R → R and a
multiplication operation · : R×R → R satisfying
1. (Additive associativity) (a+b)+ c = a+(b+ c) for all a,b,c ∈ R;
2. (Additive commutativity) a+b = b+a for all a,b ∈ R;
3. (Additive identity) There exists 0 ∈ R satisfying a+0 = a for all a ∈ R;
4. (Additive inverse) For every a ∈ R, there exists b ∈ R such that a+b = 0;
5. (Multiplicative associativity) (ab)c = a(bc) for all a,b,c ∈ R;
6. (Multiplicative commutativity) ab = ba for all a,b ∈ R;
7. (Multiplicative identity) There exists 1 ∈ R satisfying a ·1 = a for all a ∈ R;
8. (Distributivity) a(b+ c) = ab+ac for all a,b,c ∈ R.
Note that a ring is simply a field without multiplicative inverse. A map between two rings f :
R → S is a homomorphism if it preserves addition and multiplication (i.e., f(r1 + r2) = f(r1) + f(r2)
and f(r1 r2) = f(r1) f(r2)). A bijective homomorphism is called an isomorphism. One easily veri-
fies that the inverse of an isomorphism is again an isomorphism. If there is an isomorphism
between two rings, these rings are said to be isomorphic.
The polynomial ring, defined below, will play an important role in this thesis.
15
-
Definition 2.8. Given a ring R and a symbol x (also called an indeterminate), the polynomial ring
R[x] is the set of formal expressions (called polynomials)
\[
\sum_{n\in\mathbb{N}} \lambda_n x^n,
\]
where λ_n ∈ R and all but finitely many are zero. Two polynomials ∑_{n∈N} λ_n x^n and ∑_{n∈N} μ_n x^n are
equal if λ_n = μ_n for all n ∈ N. The set R[x] is given a ring structure with the addition operation
\[
\Big(\sum_{n\in\mathbb{N}} \lambda_n x^n\Big) + \Big(\sum_{n\in\mathbb{N}} \mu_n x^n\Big) = \sum_{n\in\mathbb{N}} (\lambda_n + \mu_n)\, x^n,
\]
and the multiplication operation
\[
\Big(\sum_{n\in\mathbb{N}} \lambda_n x^n\Big)\cdot\Big(\sum_{m\in\mathbb{N}} \mu_m x^m\Big) = \sum_{n\in\mathbb{N}} \underbrace{\Big(\sum_{m=0}^{n} \lambda_m \mu_{n-m}\Big)}_{\in R}\, x^n.
\]
The polynomial ring C[x] consists of polynomials with complex coefficients. Just like complex
polynomial functions, polynomials in R[x] can be regarded as functions from R to R. For example,
the polynomial p = ∑_{n∈N} λ_n x^n ∈ R[x] defines the function mapping r ∈ R to p(r) := ∑_{n∈N} λ_n r^n,
where the summation is evaluated using the addition operation of R. However, polynomials (as
formal expressions) do not always correspond one-to-one to the functions they define. Consider
the case R = Z/2Z = {0,1}: the polynomials p = 0 and q = x² + x both define the trivial function
(mapping 0 and 1 to 0). However, they have distinct coefficients, hence are not equal as formal
expressions. Such a phenomenon does not occur when R is an infinite field.
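The distinction between formal polynomials and the functions they define can be sketched in a few lines, with polynomials stored as exponent-to-coefficient maps (the representation is ours); over R = Z/2Z, q = x² + x is a nonzero polynomial defining the zero function:

```python
# Polynomials over Z/2Z represented as {power: coefficient} dicts,
# with coefficient arithmetic done mod 2.
def poly_mul(p, q, mod=2):
    """Convolution of coefficients, mirroring the multiplication in Definition 2.8."""
    out = {}
    for n, a in p.items():
        for m, b in q.items():
            out[n + m] = (out.get(n + m, 0) + a * b) % mod
    return {n: c for n, c in out.items() if c != 0}

def evaluate(p, r, mod=2):
    return sum(c * pow(r, n, mod) for n, c in p.items()) % mod

q = {2: 1, 1: 1}          # q = x^2 + x, a nonzero formal expression
zero = {}                 # p = 0

# q and 0 define the same function on Z/2Z = {0, 1} ...
assert all(evaluate(q, r) == evaluate(zero, r) for r in (0, 1))
# ... yet they are distinct formal polynomials.
assert q != zero

# The convolution formula in action: (x + 1)^2 = x^2 + 1 over Z/2Z.
assert poly_mul({1: 1, 0: 1}, {1: 1, 0: 1}) == {2: 1, 0: 1}
```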
In the definition above, all powers appearing in a polynomial are natural numbers. In this thesis,
we will need to consider “polynomials” with powers in Q. The definition of polynomial ring will
be generalized to allow powers in any monoid. A monoid is a set S equipped with an associative2
and commutative3 binary operation ∗ : S× S→ S and containing an identity4. The structure of a2(a∗b)∗ c = a∗ (b∗ c) for all a,b,c,∈ S3a∗b = b∗a for all a,b ∈ S4An element 0 ∈ S such that 0∗a = a for all a ∈ S
monoid can be used to define the monoid ring.
Definition 2.9. Given a ring R and a monoid S, the monoid ring R[S] is the set of formal expressions
$$\sum_{s\in S} \lambda_s s,$$
where $\lambda_s \in R$ and all but finitely many are zero. Two elements $\sum_{s\in S}\lambda_s s$ and $\sum_{s\in S}\mu_s s$ are equal if $\lambda_s = \mu_s$ for all $s \in S$.

The set R[S] is equipped with addition operation defined by
$$\Big(\sum_{s\in S}\lambda_s s\Big) + \Big(\sum_{s\in S}\mu_s s\Big) = \sum_{s\in S} (\lambda_s + \mu_s)\, s,$$
and multiplication operation defined by
$$\Big(\sum_{s\in S}\lambda_s s\Big)\cdot\Big(\sum_{t\in S}\mu_t t\Big) = \sum_{s, t\in S} (\lambda_s \mu_t)(s*t).$$
Given $p = \sum_{s\in S}\lambda_s s$, the coefficient $\lambda_s$ in front of s is denoted by $\mathrm{Coef}_s(p)$.

To see that the monoid ring generalizes the polynomial ring, let $\mathbb{N}_x = \{x^n \mid n \in \mathbb{N}\}$ with operation $x^n * x^m = x^{n+m}$ (note that each $x^n$ is a symbol); then $R[\mathbb{N}_x] = R[x]$ as sets and their operations are identical. Furthermore, we may define the polynomial ring over R with powers in the monoid S as $R[S_x]$, where $S_x := \{x^s \mid s \in S\}$ is the monoid with operation $x^s * x^t = x^{s*t}$. However, unlike polynomials in R[x], elements of $R[S_x]$ do not naturally define functions from R to R. One needs an evaluation map assigning a value in R to every $r^s$.
If there is an ordering on S, the following notions are defined on $R[S_x]$.

Definition 2.10. Assume $p \in R[S_x]$ where S is totally ordered. The leading power, leading coefficient, and leading term of p are respectively
$$\mathrm{LP}(p) = a, \qquad \mathrm{LC}(p) = \mathrm{Coef}_{x^a}(p), \qquad \mathrm{LT}(p) = \mathrm{Coef}_{x^a}(p)\, x^a,$$
where a is the maximum element of S such that $\mathrm{Coef}_{x^a}(p)$, the coefficient of $x^a$ in p, is non-zero. Note that LP(p) is exactly the degree of p commonly defined for $p \in R[x]$. The trailing power, trailing coefficient, and trailing term are
$$\mathrm{TP}(p) = b, \qquad \mathrm{TC}(p) = \mathrm{Coef}_{x^b}(p), \qquad \mathrm{TT}(p) = \mathrm{Coef}_{x^b}(p)\, x^b,$$
where b is the minimum element of S such that $\mathrm{Coef}_{x^b}(p) \neq 0$. By convention, we set $\mathrm{LP}(0) = \mathrm{LC}(0) = \cdots = \mathrm{TT}(0) = 0$.
Definition 2.9 can be used to define polynomials in several variables. The case of two variables will be illustrated. Assume R is a ring and S, T are two monoids. The product $S_x \times T_y$ is the monoid containing elements $(x^s, y^t)$ with operation $(x^s, y^t) * (x^{s'}, y^{t'}) = (x^{s*s'}, y^{t*t'})$. For simplicity, denote $(x^s, y^t)$ by $x^s y^t = y^t x^s$. The ring $R[S_x \times T_y]$ is a polynomial ring in the variables x and y. Note that $R[S_x \times T_y]$ is naturally isomorphic to $R[S_x][T_y]$, the polynomial ring over $R[S_x]$ with powers in T, via the identification
$$\sum_{s\in S,\, t\in T} \lambda_{s,t}\, x^s y^t \;\mapsto\; \sum_{t\in T} \underbrace{\Big(\sum_{s\in S} \lambda_{s,t}\, x^s\Big)}_{\in R[S_x]} y^t.$$
Similarly, $R[S_x \times T_y]$ is also isomorphic to $R[T_y][S_x]$, the polynomial ring over $R[T_y]$ with powers in S. Assume S and T are totally ordered; the notions defined in Definition 2.10 are extended as follows. For any $p \in R[S_x \times T_y]$, we use $\mathrm{LC}_x(p)$ to denote $\mathrm{LC}(p)$ where p is regarded as an element of $R[T_y][S_x]$, and $\mathrm{LC}_y(p)$ to denote $\mathrm{LC}(p)$ where p is regarded as an element of $R[S_x][T_y]$. Notations such as $\mathrm{LP}_x$ and $\mathrm{TP}_y$ are defined analogously. They are illustrated in the example below.
Example 2.11. Consider $p \in R[\mathbb{N}_x \times \mathbb{N}_y]$ given by $p(x, y) = x^2 y + 2x + xy + 1$. As an element of $R[\mathbb{N}_y][\mathbb{N}_x]$, $p(x, y) = y x^2 + (2 + y)x + 1$. The coefficient in front of the highest power of x is y, so $\mathrm{LC}_x(p) = y$. On the other hand, as an element of $R[\mathbb{N}_x][\mathbb{N}_y]$, $p(x, y) = (x^2 + x)y + (2x + 1)$. The coefficient in front of the highest power of y is $x^2 + x$, so $\mathrm{LC}_y(p) = x^2 + x$. Note that $\mathrm{LC}_x(p) \in R[\mathbb{N}_y]$ and $\mathrm{LC}_y(p) \in R[\mathbb{N}_x]$.
Chapter 3
Problem statement
3.1 Model of a network
Consider N identical copies of an LTI system S = (A, B, C). Let $x_i$, $u_i$, and $y_i$ denote the state, input, and output of the i-th copy of S:
$$\dot{x}_i = A x_i + B u_i, \qquad y_i = C x_i. \tag{3.1}$$
The systems are mutually coupled by the input-output relation¹
$$u_i = \sum_{j=1}^{N} \sigma_{i,j}(y_j - y_i), \qquad \forall i = 1, \ldots, N, \tag{3.2}$$
with constant coupling coefficients $\sigma_{i,j} \in \mathbb{R}$. The closed-loop system formed by the systems (3.1) and the coupling (3.2) is a network (its block diagram is illustrated in Figure 3.1). Since the network is entirely determined by S and the coupling coefficients, it is denoted by (S, G) where G is a graph on N vertices having adjacency matrix $[\sigma_{i,j}]$. The diagonal terms $\sigma_{i,i}$ are set to zero (the graph has no self-loops).

¹This input-output relation is called diffusive coupling since it is the coupling involved in diffusion processes ($u_i$ only depends on the differences $y_j - y_i$).
(Figure: N copies of S in parallel, with the feedback block $L_G \otimes I$ closing the loop through a negative sign.)

Figure 3.1: System diagram of N = (S, G). The vectors u and y are respectively the concatenations $[u_1^T, \ldots, u_N^T]^T$ and $[y_1^T, \ldots, y_N^T]^T$. The feedback block accounts for the coupling (3.2).
In a network N = (S, G), the number of copies of S is the size of N; each copy is called a subunit; and G is called the interconnection graph (or coupling graph). The network is said to synchronize if the synchronized subspace $\{(x_1, \ldots, x_N) \mid x_1 = \cdots = x_N\}$ is asymptotically stable. Since the synchronized subspace is always invariant under Equations (3.1) and (3.2), synchronization is equivalently defined by
$$\lim_{t\to\infty} x_j(t) - x_i(t) = 0, \qquad \forall i, j \in \{1, \ldots, N\},$$
for all initial conditions. When N = (S, G) synchronizes, we also say that S synchronizes with respect to G.

It is clear from Figure 3.1 that the network N = (S, G) is an LTI system with state $x = [x_1^T, \ldots, x_N^T]^T$. Letting $u = [u_1^T, \ldots, u_N^T]^T$ and $y = [y_1^T, \ldots, y_N^T]^T$, Equations (3.1) and (3.2) can be rewritten as
$$\dot{x} = (I_N \otimes A)x + (I_N \otimes B)u, \qquad y = (I_N \otimes C)x, \qquad u = -(L_G \otimes I)y,$$
(Figure: LC circuit with inductor l carrying current $I_L$, capacitor c with voltage $V_C$, and input current I.)

Figure 3.2: Circuit diagram of the LC oscillator in Example 3.1
where $L_G$ is the Laplacian matrix of G (defined in Section 2.3). Therefore, the equation of N is
$$\dot{x} = (I_N \otimes A - L_G \otimes BC)\, x. \tag{3.3}$$
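Equation (3.3) can be formed numerically with Kronecker products; a minimal sketch (NumPy, with illustrative subunit and graph data that are assumptions of the sketch). It also previews the spectral decoupling used in Chapter 4: the spectrum of $I_N \otimes A - L_G \otimes BC$ is the union of the spectra of $A - \lambda BC$ over $\lambda \in \mathrm{Spec}(L_G)$.

```python
import numpy as np

# Illustrative subunit (harmonic oscillator) and an all-to-all 3-node graph.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

sigma = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # adjacency [sigma_ij]
L = np.diag(sigma.sum(axis=1)) - sigma                            # Laplacian L_G
N = L.shape[0]

# Closed-loop network matrix of Equation (3.3).
M = np.kron(np.eye(N), A) - np.kron(L, B @ C)

# Its spectrum is the union of Spec(A - lam*BC) over lam in Spec(L_G).
lam_net = np.linalg.eigvals(M)
lam_blocks = np.concatenate(
    [np.linalg.eigvals(A - lam * B @ C) for lam in np.linalg.eigvals(L)])
for lam in lam_blocks:
    assert np.min(np.abs(lam_net - lam)) < 1e-8
```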
3.2 Objectives
This thesis aims at analyzing the synchronization property of a given network, and designing dis-
tributed controllers to enforce synchronization. The following toy example illustrates the practical
significance of these objectives and motivates the precise problem statements which will be pre-
sented later in this section.
Example 3.1. Consider N copies of the LC oscillator (Figure 3.2), interconnected via resistive loads (illustrated in Figure 3.3 for N = 3). We would like them to oscillate in unison. This can be modeled as a network synchronization problem.
(Figure: three LC oscillators Osc. 1, Osc. 2, Osc. 3, with states $(I_i, V_i)$, pairwise interconnected through resistors $r_{1,2} = r_{2,1}$, $r_{2,3} = r_{3,2}$, and $r_{1,3} = r_{3,1}$.)

Figure 3.3: A network of three LC oscillators interconnected via resistive loads
Each LC oscillator is modeled as a subunit S given by
$$S: \quad \dot{x} = \begin{bmatrix} 0 & 1/l \\ -1/c & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/c \end{bmatrix} u, \qquad y = \begin{bmatrix} 0 & 1 \end{bmatrix} x,$$
where $x = [I_L, V_C]^T$, $u = I$, and $y = V$. The resistive loads create the coupling
$$u_j = \sum_{i \neq j} \frac{1}{r_{ji}} (y_i - y_j).$$
The resulting network is (S, G) where G is a graph on N vertices given by the adjacency matrix
$$\begin{bmatrix}
0 & \frac{1}{r_{12}} & \frac{1}{r_{13}} & \cdots & \frac{1}{r_{1N}} \\
\frac{1}{r_{21}} & 0 & \frac{1}{r_{23}} & \cdots & \frac{1}{r_{2N}} \\
\frac{1}{r_{31}} & \frac{1}{r_{32}} & 0 & \cdots & \frac{1}{r_{3N}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{1}{r_{N1}} & \frac{1}{r_{N2}} & \frac{1}{r_{N3}} & \cdots & 0
\end{bmatrix}.$$
Our goal is to synchronize this network. This goal is achieved in two stages:

(1) Fixing l, c, and $r_{ij}$, determine whether the network (S, G) synchronizes.

(2) In case (S, G) does not synchronize, modify S such that the modified network (S′, G) synchronizes. In this case, S is treated as a given plant and each subunit of the modified network is the plant combined with a controller².
Example 3.1 motivates the formulation of the following problems. Partial solutions, and in
some cases complete solutions, will be provided in the next chapters.
Problem 3.2 (SAP: Synchronization analysis problem). Given a system S, determine graphs with
respect to which it synchronizes.
Problem 3.3 (SDP: Synchronization design problem). Given a plant P = (A, B, C) and a set of graphs $\mathcal{I}$, design a controller C = (E, F, G, K) and a gain matrix H such that the system S = (H × P) ∘ C synchronizes with respect to every graph in $\mathcal{I}$ (a block diagram of the system S is shown in Figure 3.4).

The general control structure in SDP enables the exchange of controller state by the coupling, since controller state can be exposed to the output via the matrix H. However, this information may not be exchangeable depending on the application. Furthermore, the controller may be required to be memoryless. Therefore, we introduce the following variants of SDP with restricted control structure.

²The controller must be local and distributed (i.e., it can exchange exclusively relative information with its neighbors defined by the communication topology).
(Figure: interconnection of controller C, plant P, and gain H, with input u and output y.)

Figure 3.4: Block diagram of the controlled system
Problem 3.4 (SDP-D: Synchronization design problem - dynamic). Given a plant P = (A, B, C) and a set of graphs $\mathcal{I}$, design a controller C = (E, F, G) such that the cascade S = P ∘ C synchronizes with respect to every graph in $\mathcal{I}$.

Problem 3.5 (SDP-S: Synchronization design problem - static). Given a plant P = (A, B, C) and a set of graphs $\mathcal{I}$, design a gain K such that the cascade S = P ∘ K synchronizes with respect to every graph in $\mathcal{I}$.
SAP and SDP will be the topics of Chapter 4 and Chapter 6 respectively. The SAP problem is
computationally difficult for complex systems S. Another version of SAP will also be considered.
Problem 3.6 (SAP-HG: Synchronization analysis problem with high gain). Given a system S,
determine graphs G such that (S◦ k,G) synchronizes for sufficiently large gain k > 0.
SAP-HG identifies graphs with respect to which S synchronizes under a high gain. A priori, SAP-HG appears to be more difficult than SAP. On the contrary, SAP-HG will turn out to be easier to handle using the mathematical tools developed in Chapter 7. SAP-HG will be addressed in Chapter 8.
Chapter 4
Analysis of synchronization and
synchronization region
This chapter investigates the SAP of a system, the problem of identifying the graphs with respect to which the system synchronizes. The synchronization region defined in this chapter provides a spectral characterization of the set of synchronizing graphs. This reduces SAP to the problem of computing the synchronization region, which is studied later in this chapter.
4.1 Synchronization criterion
The following is a well-known sufficient and necessary condition for the synchronization of a
network. Ideas for this result trace back to [28].
Theorem 4.1 (Synchronization criterion). Let S = (A, B, C) and let (S, G) be a network of size N. The following facts hold true:

(a) (S, G) synchronizes if and only if $A - \lambda BC$ is Hurwitz for every $\lambda \in \mathrm{Spec}(L_G) \setminus \{0\}$. (By Definition 2.1, $\mathrm{Spec}(L_G) \setminus \{0\}$ is the spectrum of $L_G$ excluding one instance of zero.)

(b) Assume 0 is not a simple eigenvalue of $L_G$. If (S, G) synchronizes, then $\lim_{t\to\infty} x(t) = 0$ for every initial condition.
(c) Assume 0 is a simple eigenvalue of $L_G$. Let w be the unique vector satisfying $w^T L_G = 0$ and $w^T \mathbf{1}_N = 1$. If (S, G) synchronizes, then $\lim_{t\to\infty} x(t) - (\mathbf{1}_N w^T \otimes e^{At})x(0) = 0$ for every initial condition.
Proof. Recall that the state of the network satisfies the equation $\dot{x} = (I_N \otimes A - L_G \otimes BC)x$. Write $L_G = P \Lambda P^{-1}$ where $\Lambda$ is in Jordan normal form. Since $L_G \mathbf{1}_N = 0$, we may choose P such that
$$P = \begin{bmatrix} \mathbf{1}_N & Q \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} 0 & M \\ 0 & \Gamma \end{bmatrix}, \tag{4.1}$$
where $Q \in \mathbb{C}^{N\times(N-1)}$ and $\Gamma \in \mathbb{C}^{(N-1)\times(N-1)}$. Under the change of coordinates $z = (P \otimes I_n)^{-1}x$, the equation of the network becomes $\dot{z} = (I_N \otimes A - \Lambda \otimes BC)z$. Let $z_1 \in \mathbb{R}^n$ and $z_2 \in \mathbb{R}^{n(N-1)}$ be the components of z; then
$$\frac{d}{dt}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} A & -M \otimes BC \\ 0 & I_{N-1} \otimes A - \Gamma \otimes BC \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix}.$$

Claim 1: The network synchronizes if and only if $\lim_{t\to\infty} z_2(t) = 0$ for every initial condition.

(Proof of Claim 1) Synchronization is equivalent to $\lim_{t\to\infty} \phi(x(t)) = 0$, where $\phi : \mathbb{C}^{nN} \to [0,\infty)$ is defined by $\phi(x) = \sum_{i\neq j} \|x_i - x_j\|$. But
$$\phi(x) = \phi\big((\mathbf{1}_N \otimes z_1) + (Q \otimes I_n)z_2\big) = \phi\big((Q \otimes I_n)z_2\big) =: \psi(z_2).$$
The function $\psi : \mathbb{C}^{n(N-1)} \to [0,\infty)$ is continuous and satisfies $\psi(k z_2) = |k| \psi(z_2)$ for every $k \in \mathbb{R}$. Let a and b be the minimum and the maximum of $\psi$ on the compact sphere $\{z_2 \in \mathbb{C}^{n(N-1)} \mid \|z_2\| = 1\}$. Both are strictly positive because the image of $(Q \otimes I_n)$ intersects the synchronized subspace $\mathrm{Span}\{\mathbf{1}_N \otimes I_n\}$ only at zero. The scaling property implies $a\|z_2\| \le \psi(z_2) \le b\|z_2\|$. Therefore, $\psi(z_2(t))$ converges to zero if and only if $z_2(t)$ converges to zero.

(Proof of (a)) The component $z_2$ satisfies the equation $\dot{z}_2 = (I_{N-1} \otimes A - \Gamma \otimes BC)z_2$, and the system matrix is a block upper-triangular matrix containing one diagonal block $A - \lambda BC$ for each $\lambda \in \mathrm{Spec}(L_G) \setminus \{0\}$. We conclude that the network synchronizes if and only if $A - \lambda BC$ is Hurwitz for every
$\lambda \in \mathrm{Spec}(L_G) \setminus \{0\}$.

(Proof of (b)) Since $0 \in \mathrm{Spec}(L_G) \setminus \{0\}$, A must be Hurwitz by part (a). So $\lim_{t\to\infty} z(t) = 0$, and hence $\lim_{t\to\infty} x(t) = 0$.

(Proof of (c)) Since $\Lambda$ is in Jordan normal form and 0 is a simple eigenvalue, the matrix M in Equation (4.1) must be zero, and the first row of $P^{-1}$ must be equal to $w^T$. Therefore, $z_1(0) = (w^T \otimes I_n)x(0)$. Since $\dot{z}_1 = A z_1$, we conclude that $z_1(t) = e^{At} z_1(0)$ and
$$x(t) - (\mathbf{1}_N w^T \otimes e^{At})x(0) = (\mathbf{1}_N \otimes z_1(t)) + (Q \otimes I_n)z_2(t) - (\mathbf{1}_N w^T \otimes e^{At})x(0) = (Q \otimes I_n)z_2(t).$$
Since the network synchronizes, the above difference converges to zero. ∎
The synchronization criterion is illustrated on the LC network in Example 3.1.
Example 4.2. Consider the LC network in Example 3.1, which is modeled by (S, G) where
$$S = \left( \begin{bmatrix} 0 & 1/l \\ -1/c & 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1/c \end{bmatrix}, \begin{bmatrix} 0 & 1 \end{bmatrix} \right),$$
and G has weighted adjacency matrix
$$\begin{bmatrix} 0 & \frac{1}{r_{12}} & \frac{1}{r_{13}} \\ \frac{1}{r_{12}} & 0 & \frac{1}{r_{23}} \\ \frac{1}{r_{13}} & \frac{1}{r_{23}} & 0 \end{bmatrix}.$$
We will show that the network synchronizes assuming l > 0, c > 0, and $r_{12} = r_{13} = r_{23} = r > 0$. The eigenvalues of the Laplacian matrix are
$$\mathrm{Spec}(L_G) = \mathrm{Spec}\begin{bmatrix} \frac{2}{r} & -\frac{1}{r} & -\frac{1}{r} \\ -\frac{1}{r} & \frac{2}{r} & -\frac{1}{r} \\ -\frac{1}{r} & -\frac{1}{r} & \frac{2}{r} \end{bmatrix} = \left\{ 0, \frac{3}{r}, \frac{3}{r} \right\}.$$
The only nonzero eigenvalue of $L_G$ is 3/r. Since the matrix
$$A - \frac{3}{r} BC = \begin{bmatrix} 0 & 1/l \\ -1/c & -\frac{3}{rc} \end{bmatrix}$$
is Hurwitz, the network must synchronize.
4.2 Synchronization region
Statement (a) of Theorem 4.1 may be restated using the synchronization region defined below.

Definition 4.3. The synchronization region of a square system S = (A, B, C) is
$$\mathrm{SR}(S) = \{ s \in \mathbb{C} \mid A - sBC \text{ is Hurwitz} \}.$$

Theorem 4.4 (Synchronization criterion). The network (S, G) synchronizes if and only if $\mathrm{Spec}(L_G)\setminus\{0\} \subseteq \mathrm{SR}(S)$. Note that $\mathrm{Spec}(L_G)\setminus\{0\}$ denotes $\mathrm{Spec}(L_G)$ excluding one instance of zero (see Definition 2.1).
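Theorem 4.4 translates directly into a numerical test; a minimal sketch (NumPy, demonstrated on the LC network of Example 4.2 with the illustrative values l = c = 1 and r = 2, which are assumptions of the sketch):

```python
import numpy as np

def synchronizes(A, B, C, L):
    """Theorem 4.4: (S, G) synchronizes iff A - lam*BC is Hurwitz for every
    Laplacian eigenvalue lam, excluding one instance of zero."""
    lams = sorted(np.linalg.eigvals(L), key=abs)[1:]   # drop one (near-)zero eigenvalue
    return all(np.all(np.linalg.eigvals(A - lam * B @ C).real < 0) for lam in lams)

# The LC oscillator with l = c = 1 and the triangle graph with weights 1/r = 0.5.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])
L = (3.0 * np.eye(3) - np.ones((3, 3))) / 2.0          # Laplacian, spectrum {0, 1.5, 1.5}
assert synchronizes(A, B, C, L)

# With no coupling (L = 0) the oscillators keep oscillating independently.
assert not synchronizes(A, B, C, np.zeros((3, 3)))
```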
The restatement above reduces the synchronization problem of (S, G) to the computation of SR(S) and $\mathrm{Spec}(L_G)$. The Laplacian spectrum can often be localized using general properties of the graph, thus circumventing the need to compute the eigenvalues of the Laplacian matrix, as illustrated below.
Example 4.5. We will show that when l > 0 and c > 0, the LC oscillator
$$S = (A, B, C) = \left( \begin{bmatrix} 0 & 1/l \\ -1/c & 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1/c \end{bmatrix}, \begin{bmatrix} 0 & 1 \end{bmatrix} \right)$$
synchronizes with respect to all positive undirected graphs containing a globally reachable node.
For any $s \in \mathbb{C}$, the eigenvalues of $A - sBC$ are
$$\mathrm{Spec}(A - sBC) = \left\{ \frac{-s/c \pm \sqrt{s^2/c^2 - 4/(lc)}}{2} \right\}.$$
The matrix $A - sBC$ is Hurwitz when $s \in \mathbb{C}_{>0}$, so $\mathrm{SR}(S) = \mathbb{C}_{>0}$. By Lemmas 2.5 and 2.6, any positive graph containing a globally reachable node has eigenvalues contained in $\mathbb{C}_{>0}$ except one instance of 0. Therefore, S synchronizes with respect to any such graph. In particular, the LC network in Example 3.1 synchronizes as long as every pair of oscillators is connected via a series of resistors.
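The claim $\mathrm{SR}(S) = \mathbb{C}_{>0}$ can be spot-checked numerically; a minimal sketch (NumPy, assuming the illustrative values l = c = 1):

```python
import numpy as np

# LC oscillator with l = c = 1; Example 4.5 claims SR(S) is the open right half-plane.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

def in_SR(s):
    """Membership test for the synchronization region (Definition 4.3)."""
    return np.all(np.linalg.eigvals(A - s * B @ C).real < 0)

# Sample points in the open right half-plane are in SR(S)...
assert all(in_SR(s) for s in [0.1, 1.0, 1 + 5j, 1 - 5j])
# ...and points with non-positive real part are not.
assert not any(in_SR(s) for s in [0.0, -0.5, -1 + 2j])
```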
As illustrated, the synchronization region of a system S determines the graphs for which the network (S, G) synchronizes. The synchronization region, therefore, provides information about the types of coupling graphs required to achieve synchronization. For example, if the synchronization region does not intersect the real axis, the graph must necessarily be non-symmetric.

Computing the synchronization region of a system is therefore useful to deduce the synchronization properties of a network. The synchronization region is always open¹ and symmetric with respect to the real axis². Furthermore, it depends “only” on the controllable and observable subsystem of S.
Proposition 4.6. Given a system S, let $\tilde{S}$ be a controllable and observable subsystem of S. Then
$$\mathrm{SR}(S) = \begin{cases} \mathrm{SR}(\tilde{S}), & \text{if } S \text{ is stabilizable and detectable;} \\ \emptyset, & \text{otherwise.} \end{cases}$$

¹Eigenvalues of $A - sBC$ vary continuously with respect to the parameter s.
²Eigenvalues of $A - sBC$ and $A - s^*BC$ are complex conjugated.
Proof. Using the Kalman decomposition, S is equivalent to S′ = (A, B, C) of the form
$$A = \begin{bmatrix} A_{c\bar{o}} & A_{12} & A_{13} & A_{14} \\ 0 & A_{co} & 0 & A_{24} \\ 0 & 0 & A_{\bar{c}\bar{o}} & A_{34} \\ 0 & 0 & 0 & A_{\bar{c}o} \end{bmatrix}, \qquad B = \begin{bmatrix} B_{c\bar{o}} \\ B_{co} \\ 0 \\ 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 0 & C_{co} & 0 & C_{\bar{c}o} \end{bmatrix},$$
where the subscripts c, $\bar{c}$, o, $\bar{o}$ mark the controllable, uncontrollable, observable, and unobservable parts respectively. The matrix $A - sBC$ is block upper-triangular with diagonal blocks $A_{c\bar{o}}$, $A_{co} - sB_{co}C_{co}$, $A_{\bar{c}\bar{o}}$, $A_{\bar{c}o}$. Therefore, $A - sBC$ is Hurwitz if and only if $A_{co} - sB_{co}C_{co}$ is Hurwitz and S is stabilizable and detectable. Since $\tilde{S}$ is equivalent to $(A_{co}, B_{co}, C_{co})$, the conclusion follows. ∎
The above proposition proves the intuitive fact that stabilizability and detectability are necessary conditions for synchronization. Assuming detectability, synchronization is in fact equivalent to output synchronization, defined below.
Definition 4.7. A network (S, G) output synchronizes if the outputs of its subunits $y_1, \ldots, y_N$ mutually converge. Formally,
$$\lim_{t\to\infty} y_i(t) - y_j(t) = 0, \qquad \forall i, j \in \{1, \ldots, N\}.$$
Several papers in the synchronization literature study output synchronization instead of (state)
synchronization (e.g., [10, 23]). In the context of LTI systems, the two notions are essentially
equivalent.
Proposition 4.8. Let S be a detectable system; then the network (S, G) synchronizes if and only if it output synchronizes.
Proof. Synchronization implies output synchronization since $y_i - y_j = C(x_i - x_j)$ for every pair (i, j). Assume output synchronization, so every pairwise difference $y_i - y_j$ converges to zero. By detectability, there exists a matrix H such that $A - HC$ is Hurwitz. The difference $x_i - x_j$ satisfies
the equation
$$\frac{d}{dt}(x_i - x_j) = A(x_i - x_j) + B(u_i - u_j) = (A - HC)(x_i - x_j) + H(y_i - y_j) + B(u_i - u_j),$$
where $u_i - u_j = \sum_{k=1}^{N} \sigma_{i,k}(y_k - y_i) - \sum_{k=1}^{N} \sigma_{j,k}(y_k - y_j)$. Since $A - HC$ is Hurwitz, and all terms $y_i - y_j$ converge to zero exponentially fast, $x_i - x_j$ must converge to zero. ∎
4.3 Computation of the synchronization region
In this section, we will present methods for computing the synchronization region, and derive the
synchronization regions of some particular systems.
Definition 4.9. Given a system S = (A, B, C), its characteristic polynomial $\mathrm{Char}_S$ is an element of $\mathbb{C}[\mathbb{N}_\lambda \times \mathbb{N}_s]$ (the polynomial ring in two variables introduced in Section 2.4) defined by
$$\mathrm{Char}_S(\lambda, s) = \det(\lambda I - A + sBC).$$
Fixing $s_0 \in \mathbb{C}$, the roots of $\mathrm{Char}_S(\lambda, s_0)$ as a polynomial in $\lambda$ are the eigenvalues of $A - s_0 BC$. So the synchronization region is the set of $s_0$ such that $\mathrm{Char}_S(\lambda, s_0)$ is Hurwitz. If the degree of the characteristic polynomial is small, one can compute the “boundary” of the synchronization region, and express the synchronization region as connected components of the remaining space.

Proposition 4.10. Given a system S satisfying $\mathrm{LP}_s(\mathrm{Char}_S) \le 1$, write $\mathrm{Char}_S(\lambda, s) = p_0(\lambda) + s\, p_1(\lambda)$ where $p_0, p_1 \in \mathbb{C}[\mathbb{N}_\lambda]$. Then the boundary of $\mathrm{SR}(S)$ is a subset of
$$Z = \left\{ -\frac{p_0(it)}{p_1(it)} \;\middle|\; t \in \mathbb{R} \text{ and } p_1(it) \neq 0 \right\},$$
and Z does not intersect $\mathrm{SR}(S)$. In particular, $\mathrm{SR}(S)$ is a union of zero or more connected components of $\mathbb{C} \setminus Z$.
Proof. Consider the disjoint sets
$$\mathrm{SR}(S) = \{ s \in \mathbb{C} \mid \mathrm{Char}_S(\lambda, s) \in \mathbb{C}[\mathbb{N}_\lambda] \text{ has roots in } \mathbb{C}_{<0} \},$$
(Figure: the curve Z partitioning the plane into components $P_1$, $P_2$, $P_3$, with the synchronization region shaded and the Laplacian spectrum region dashed.)

Figure 4.1: Synchronization region of S in Example 4.11 and Laplacian spectrum of positive graphs of size 3.
connected component, one deduces that SR(S) is the shaded part of Figure 4.1.

The following conclusions are deduced from the synchronization region. Since SR(S) does not intersect the real axis, any synchronizing graph must be directed. By Lemma 2.5, the Laplacian spectrum of any positive graph of size 3 is contained in the dashed part of Figure 4.1, disjoint from SR(S). Therefore, a collection $S^N$ synchronizes under a positive graph only if N > 3.
The requirement $\mathrm{LP}_s(\mathrm{Char}_S) \le 1$ in Proposition 4.10 may be relaxed. The result extends to $\mathrm{LP}_s(\mathrm{Char}_S) \le d$ as long as one can explicitly represent the roots of a polynomial of degree d. The result for d = 2 is stated for illustration.
Proposition 4.12. Given a system S satisfying $\mathrm{LP}_s(\mathrm{Char}_S) \le 2$, write $\mathrm{Char}_S(\lambda, s) = p_0(\lambda) + s\, p_1(\lambda) + s^2 p_2(\lambda)$. Then the boundary of $\mathrm{SR}(S)$ is a subset of
$$Z = \left\{ \frac{-p_1(it) \pm \sqrt{p_1^2(it) - 4 p_0(it) p_2(it)}}{2 p_2(it)} \;\middle|\; t \in \mathbb{R} \text{ and } p_2(it) \neq 0 \right\} \cup \left\{ -\frac{p_0(it)}{p_1(it)} \;\middle|\; t \in \mathbb{R},\; p_2(it) = 0 \text{ and } p_1(it) \neq 0 \right\},$$
and Z does not intersect $\mathrm{SR}(S)$.
Proof. Consider V defined in the proof of Proposition 4.10. The same argument shows that V contains the boundary of SR(S). In this case, V is the set of $s \in \mathbb{C}$ such that $p_0(it) + s\, p_1(it) + s^2 p_2(it) = 0$ for some $t \in \mathbb{R}$. Assume w.l.o.g. that the $p_j(it)$ are never simultaneously zero; then V is equal to Z. ∎
The requirement in Propositions 4.10 and 4.12 is guaranteed if the dimension of the input/output space is low, as quantified by the proposition below.

Proposition 4.13. For any system S = (A, B, C),
$$\mathrm{LP}_s(\mathrm{Char}_S) \le \min\{\mathrm{Rank}(B),\, \mathrm{Rank}(C)\}.$$
Proof. Let $r = \mathrm{Rank}(BC)$. There exist linearly independent vectors $v_1, \ldots, v_r$ that span the image of BC. Let these vectors be the first r columns of an invertible matrix P; then $P^{-1}BCP$ is a matrix with nonzero entries only in the first r rows. Therefore, the power of s in
$$\mathrm{Char}_S(\lambda, s) = \det(\lambda I - P^{-1}AP + s P^{-1}BCP)$$
is at most r. The conclusion follows from $\mathrm{Rank}(BC) \le \min\{\mathrm{Rank}(B),\, \mathrm{Rank}(C)\}$. ∎
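For the SISO case (r = 1), the bound can also be seen directly from the matrix determinant lemma, $\det(M + sBC) = \det(M)\,(1 + sC M^{-1}B)$, which shows $\mathrm{Char}_S$ is affine in s and that $p_1/p_0$ equals the transfer function. A quick numerical check (NumPy, with random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

lam, s = 0.7 + 0.3j, 2.0 - 1.0j           # arbitrary sample points
M = lam * np.eye(n) - A                    # lam*I - A (invertible for generic lam)

# Matrix determinant lemma for the rank-one update s*B*C:
lhs = np.linalg.det(M + s * B @ C)
rhs = np.linalg.det(M) * (1 + s * (C @ np.linalg.inv(M) @ B).item())
assert np.isclose(lhs, rhs)
# Hence Char_S(lam, s) = p0(lam) + s*p1(lam) with p1/p0 = C (lam*I - A)^{-1} B.
```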
As a consequence, the synchronization region of any SISO system can be computed using Proposition 4.10. One can show that for a controllable and observable SISO system S, the ratio $p_1/p_0$ of the polynomials $p_0$ and $p_1$ defined in Proposition 4.10 is exactly the transfer function. Since $p_1(it)/p_0(it)$ traces the Nyquist plot, the set Z in Proposition 4.10 is the image of the Nyquist plot under the map $s \mapsto -1/s$. This observation suggests a connection between the synchronization region and the Nyquist plot.
Proposition 4.14. For every controllable and observable SISO system S,
$$\mathrm{SR}(S) \setminus \{0\} = \left\{ -\frac{1}{s} \;\middle|\; s \in \mathrm{NR}(S) \setminus \{0\} \right\}.$$
Proof. This is a direct consequence of the Nyquist criterion (Lemma 2.4), which states that for $s \neq 0$, $A - sBC$ is Hurwitz iff $-1/s \in \mathrm{NR}(S)$. ∎
In view of the above proposition, Theorem 4.4 specializes to the following SISO counterpart,
which was proven in [29] using a different argument.
Corollary 4.15. Let S be a controllable and observable SISO system. Then the network N = (S, G) synchronizes if and only if $-1/\lambda \in \mathrm{NR}(S)$ for every $\lambda \in \mathrm{Spec}(L_G) \setminus \{0\}$, where $\mathrm{Spec}(L_G) \setminus \{0\}$ is the spectrum of $L_G$ excluding one instance of zero.
Finally, we will illustrate the synchronization regions of some commonly studied systems.
Example 4.16 (SISO system of order 2). Let S = (A, B, C) be any controllable and observable SISO system of order 2. Under a change of coordinates, it can be written as
$$A = \begin{bmatrix} 0 & 1 \\ a & b \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad C = \begin{bmatrix} c & d \end{bmatrix}.$$
The characteristic polynomial is
$$\mathrm{Char}_S(\lambda, s) = (\lambda^2 - b\lambda - a) + s(d\lambda + c) = p_0(\lambda) + s\, p_1(\lambda).$$
By Proposition 4.10, the boundary of $\mathrm{SR}(S)$ is contained in the curve
$$-\frac{p_0(it)}{p_1(it)} = \frac{t^2 + a + ibt}{c + idt} = \frac{(ct^2 + bdt^2 + ac) + i(-dt^3 + bct - adt)}{c^2 + d^2 t^2}, \qquad t \in \mathbb{R}.$$
By plotting this curve and checking each connected component, the synchronization regions are sketched for particular choices of parameters (left of Figure 4.2). The corresponding Nyquist regions are also shown to validate Proposition 4.14.
The synchronization regions of the double integrator (Figure 4.2a) and the harmonic oscillator (Figure 4.2b) both contain the half-plane to the right of a vertical asymptote. Therefore, these systems synchronize with respect to all sufficiently strong couplings. Namely, given any positive graph G containing
a globally reachable node, S synchronizes with respect to a scaled version of G. On the other hand, the harmonic oscillator in Figure 4.2c has a bounded synchronization region; therefore it only synchronizes with respect to weak coupling.
Note that when S is a double integrator (a = b = 0), the synchronization region recovers the criterion presented in Theorem 1 of [7]: a network of double integrators synchronizes iff
$$\frac{\mathrm{Im}(\lambda)^2}{\mathrm{Re}(\lambda)\,\big(\mathrm{Re}(\lambda)^2 + \mathrm{Im}(\lambda)^2\big)} < \frac{d^2}{c}$$
for every $\lambda$ in $\mathrm{Spec}(L_G) \setminus \{0\}$, the spectrum of $L_G$ with one instance of the zero eigenvalue removed.
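The criterion of [7] can be cross-checked against the eigenvalue test of Definition 4.3; a minimal sketch (NumPy, for the illustrative choice c = d = 1, on a small grid of sample points):

```python
import numpy as np

# Double integrator with c = d = 1 (as in Figure 4.2a).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])

def hurwitz(lam):
    """Direct spectral test: lam is in SR(S) iff A - lam*BC is Hurwitz."""
    return bool(np.all(np.linalg.eigvals(A - lam * B @ C).real < 0))

def criterion(lam):
    """Algebraic criterion of Theorem 1 of [7], with d^2/c = 1."""
    re, im = lam.real, lam.imag
    return re > 0 and im**2 / (re * (re**2 + im**2)) < 1.0

# The two tests agree on sample points off the boundary.
for lam in [1 + 0.5j, 1 + 2j, 0.5 + 0.4j, 2 + 3j, 0.1 + 1j, 3 - 1j]:
    assert hurwitz(lam) == criterion(lam)
```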
(a) Double integrator: a = b = 0 and c = d = 1

(b) Harmonic oscillator: a = −1, b = 0 and c = d = 1

(c) Harmonic oscillator: a = −1, b = 0, c = −1, and d = 1

Figure 4.2: Synchronization regions (left) and Nyquist regions (right) of S in Example 4.16 for particular choices of parameters a, b, c, and d. The lines in the synchronization plot are the set Z in Proposition 4.10.
Chapter 5
Synchronizability
The previous chapter provided a partial solution to SAP. We would like to address SDP, the problem
of designing a controller for a plant such that the controlled plant synchronizes with respect to a
set of graphs. However, before tackling this problem, it is worth identifying whether the plant has
any chance of being synchronized. This consideration has an analogous counterpart in the study
of stabilizability. A plant that can be synchronized is called synchronizable. This terminology will
be precisely defined in this chapter.
5.1 Output feedback synchronizability
Given a collection of plants PN (recall from Section 2.2 that PN denotes N copies of P), the
following definition formalizes two notions of synchronizability.
Definition 5.1. Given a collection $P^N$ where P = (A, B, C):

• $P^N$ is dynamically output feedback synchronizable if there exists a controller C = (E, F, G) and a graph $G \in \mathbb{G}_N$ such that the network (P ∘ C, G) synchronizes.

• $P^N$ is statically output feedback synchronizable if there exists a gain K and a graph $G \in \mathbb{G}_N$ such that the network (P ∘ K, G) synchronizes.
The above notions are so named because of their similarities to dynamic output feedback stabilizability and static output feedback stabilizability. Recall that a system P = (A, B, C) is dynamically output feedback stabilizable if the closed-loop system below is asymptotically stable for some dynamic controller C = (E, F, G). Similarly, P is statically output feedback stabilizable if the closed-loop system is asymptotically stable for a static controller C = K.
(Figure: standard negative feedback loop of the controller C around the plant P.)
+
By Theorem 4.4, output feedback synchronizability requires the existence of a controller such
that the synchronization region is non-empty. We will show that the Laplacian spectrum can be
localized to any non-empty synchronization region. This yields the following characterization of
output feedback synchronizability.
Proposition 5.2. Let P = (A, B, C) and N ≥ 2. The following are true:

(a) Assume N is even. $P^N$ is dynamically (resp. statically) output feedback synchronizable if and only if SR(P ∘ C) intersects $\mathbb{R}$ for some C = (E, F, G) (resp. C = (K)).

(b) Assume N is odd. $P^N$ is dynamically (resp. statically) output feedback synchronizable if and only if SR(P ∘ C) is non-empty for some C = (E, F, G) (resp. C = (K)).
Proof. (Necessity of (a)) Choose C and $G \in \mathbb{G}_N$ such that (P ∘ C, G) synchronizes. By Theorem 4.4, $\mathrm{Spec}(L_G)\setminus\{0\} \subseteq \mathrm{SR}(P \circ C)$. Since N is even, $\mathrm{Spec}(L_G)\setminus\{0\}$ contains a real eigenvalue.

(Sufficiency of (a)) Assume SR(P ∘ C) contains $\lambda \in \mathbb{R}$. Let $L \in \mathbb{R}^{N\times N}$ be any matrix having eigenvalues $\{0, \lambda, \ldots, \lambda\}$ and satisfying $L\mathbf{1}_N = 0$. Let $G \in \mathbb{G}_N$ be the graph having edge weights¹ equal to the negatives of the off-diagonal entries of L; then $L_G = L$. The network (P ∘ C, G) synchronizes.

(Necessity of (b)) Follows from the same argument as necessity of (a).

¹The weights are potentially negative.
(Sufficiency of (b)) Assume SR(P ∘ C) contains $\lambda \in \mathbb{C}$. By symmetry, it also contains $\lambda^*$. Let $M \in \mathbb{R}^{(N-1)\times(N-1)}$ be the block diagonal matrix with diagonal blocks
$$\begin{bmatrix} \mathrm{Re}(\lambda) & \mathrm{Im}(\lambda) \\ -\mathrm{Im}(\lambda) & \mathrm{Re}(\lambda) \end{bmatrix}.$$
The spectrum of M is $\{\lambda, \lambda^*, \ldots, \lambda, \lambda^*\}$. Let $Q \in \mathbb{R}^{N\times(N-1)}$ be a matrix whose columns are an orthonormal basis of $\mathrm{Span}\{\mathbf{1}\}^\perp$. The product $L = QMQ^T$ has spectrum $\mathrm{Spec}(M) \uplus \{0\}$ and satisfies $L\mathbf{1}_N = 0$. Therefore, $L = L_G$ for some $G \in \mathbb{G}_N$, and the network (P ∘ C, G) synchronizes. ∎
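The construction in the proof of sufficiency of (b) is easy to reproduce numerically; a minimal sketch (NumPy, with illustrative N and λ that are assumptions of the sketch):

```python
import numpy as np

# For odd N, build a zero-row-sum matrix L with spectrum {0, lam, lam*, ..., lam, lam*}.
N = 5
lam = 1.0 + 2.0j

blk = np.array([[lam.real, lam.imag], [-lam.imag, lam.real]])
M = np.kron(np.eye((N - 1) // 2), blk)      # block diagonal; each block has spectrum {lam, lam*}

# Orthonormal basis of span{1}^perp: the last N-1 columns of an orthogonal matrix
# whose first column is 1/sqrt(N) (obtained via QR factorization).
ones = np.ones((N, 1)) / np.sqrt(N)
Qfull, _ = np.linalg.qr(
    np.hstack([ones, np.random.default_rng(1).standard_normal((N, N - 1))]))
Q = Qfull[:, 1:]

L = Q @ M @ Q.T
assert np.allclose(L @ np.ones(N), 0)       # L * 1_N = 0

got = np.linalg.eigvals(L)
want = np.concatenate([[0], np.linalg.eigvals(M)])
for w in want:                               # Spec(L) = Spec(M) together with {0}
    assert np.min(np.abs(got - w)) < 1e-8
```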
One notable consequence of Proposition 5.2 is that output feedback synchronizability of a
collection PN only depends on the parity of N. Output feedback synchronizability of PN for even
N implies the same property for odd N, but the converse fails in general, as illustrated by Example
5.7 at the end of this section.
Leveraging Proposition 5.2, we will characterize synchronizability of a collection using the
properties of its subunit. The following extension of Proposition 4.6 shows that stabilizability and
detectability are necessary.
Proposition 5.3. Given a plant P = (A, B, C), a controller C = (E, F, G, K), and a gain H, let $\tilde{P}$ be a controllable and observable subsystem of P. Then
$$\mathrm{SR}\big((H \times P) \circ C\big) = \begin{cases} \mathrm{SR}\big((H \times \tilde{P}) \circ C\big), & \text{if } P \text{ is stabilizable and detectable;} \\ \emptyset, & \text{otherwise.} \end{cases}$$

Proof. Use the Kalman decomposition on P, then follow the same argument as the proof of Proposition 4.6. ∎
Just like dynamic output feedback stabilizability, dynamic output feedback synchronizability
is satisfied under mild assumptions.
Proposition 5.4. For N ≥ 2, a collection $P^N$ is dynamically output feedback synchronizable if and only if P is stabilizable and detectable.
Proof. Necessity follows from the previous results. For sufficiency, let P = (A, B, C) and C = (E, F, G). By the definition of the synchronization region, SR(P ∘ C) contains 1 iff the closed-loop system below is asymptotically stable.

(Figure: negative feedback loop of the controller C around the plant P.)

The system matrix of the closed-loop system is
$$\begin{bmatrix} E & -FC \\ BG & A \end{bmatrix} = \begin{bmatrix} I & I \\ 0 & I \end{bmatrix} \begin{bmatrix} E - BG & E - FC - A - BG \\ BG & A + BG \end{bmatrix} \begin{bmatrix} I & -I \\ 0 & I \end{bmatrix}.$$
Asymptotic stability is guaranteed by choosing E = A + FC + BG, F such that A + FC is Hurwitz, and G such that A + BG is Hurwitz. Therefore, SR(P ∘ C) contains 1, and $P^N$ is dynamically output feedback synchronizable. ∎
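The constructive choice E = A + FC + BG can be verified on a concrete plant; a minimal sketch (NumPy, using the double integrator as an illustrative stabilizable and detectable plant, with hand-picked F and G):

```python
import numpy as np

# Illustrative plant: double integrator with position output.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

G = np.array([[-1.0, -2.0]])      # A + BG has characteristic polynomial (s + 1)^2
F = np.array([[-2.0], [-1.0]])    # A + FC has characteristic polynomial (s + 1)^2
E = A + F @ C + B @ G             # the choice from the proof of Proposition 5.4

# Closed-loop matrix of the negative feedback loop of C around P.
M = np.block([[E, -F @ C], [B @ G, A]])
assert np.all(np.linalg.eigvals(M).real < 0)   # hence 1 is in SR(P o C)
```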
The above result shows that dynamic output feedback synchronizability is independent of the
collection size. However, static output feedback synchronizability depends on the parity of the
collection size. Its characterization is presented in Propositions 5.5 and 5.6 for even and odd
collections respectively [30].
Proposition 5.5. For any even N ≥ 2, $P^N$ is statically output feedback synchronizable if and only if P is output feedback stabilizable.

Proof. Let P = (A, B, C) and fix any C = (K). The synchronization region SR(P ∘ C) contains $\lambda \in \mathbb{R}$ if and only if $A - \lambda BKC$ is Hurwitz, which is equivalent to asymptotic stability of the closed-loop system below. ∎
(Figure: negative feedback loop of the static gain $\lambda K$ around the plant P.)
The characterization of static output feedback synchronizability for an odd collection is a generalization of the necessary and sufficient condition for output feedback stabilizability presented in the main theorem of [31].
Proposition 5.6. For any odd N ≥ 2, $P^N$ is statically output feedback synchronizable if and only if

(i) (A, B) is stabilizable;

(ii) (A, C) is detectable;

(iii) there exist a real matrix K, a complex number s, and a Hermitian positive semidefinite matrix P such that
$$(A - sBKC)^* P + P(A - sBKC) + C^T C + C^T K^T K C = 0. \tag{5.1}$$
Proof. (Sufficiency) Choose K, P, and s such that Equation (5.1) is satisfied; then
$$(A - sBKC)^* P + P(A - sBKC) = -M,$$
where $M = C^T C + C^T K^T K C$. Since (A − sBKC, C) is detectable, $\mathrm{Ker}(M) = \mathrm{Ker}(C)$ does not contain any eigenvector of A − sBKC associated to an eigenvalue in $\mathbb{C}_{\ge 0}$. Therefore A − sBKC and M satisfy the assumptions of Lemma A.3. Since P is positive semidefinite, the matrix A − sBKC is Hurwitz. We conclude that $P^N$ is statically output feedback synchronizable.

(Necessity) Since $P^N$ is statically output feedback synchronizable, SR(P ∘ K) ≠ ∅ for some K. By Proposition 5.3, P is stabilizable and detectable. Since A − sBKC is Hurwitz for some $s \in \mathbb{C}$,
and $M = C^T C + C^T K^T K C$ is positive semidefinite, Lemma A.3 implies the existence of P such that Equation (5.1) holds. ∎
The above characterizations of static output feedback synchronizability are far from satisfactory, due to their limited use. However, we can conclude that the static output feedback synchronizability problem is as difficult as the output feedback stabilization problem, which is still an open problem [32].
We conclude this section with an example which shows that static output feedback synchronizability for an even collection is strictly stronger than static output feedback synchronizability for an odd collection.
Example 5.7. Consider the system P = (A, B, C) defined by
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 8 & 9 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 0 & 4 & 6 & 4 \end{bmatrix}.$$
Its synchronization region SR(P) (sketched in Figure 5.1) is non-empty, but does not intersect the real axis. Since P is SISO, the synchronization region of P ∘ K is a scaling of SR(P) for every static gain K. We conclude by Proposition 5.2 that $P^N$ is statically output feedback synchronizable for odd N only. As an interesting consequence, the collection $P^3$ synchronizes under the coupling graph shown in Figure 5.1, but if one subunit is added or removed, the collection would fail to synchronize regardless of the coupling.

It is also worth noting that the non-empty synchronization region implies that P is stabilizable and detectable; hence any collection $P^N$ is dynamically output feedback synchronizable. Synchronization can be achieved using a dynamic controller regardless of the collection size.
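The claim that SR(P) avoids the real axis can be spot-checked numerically. Indeed, $\det(A - \lambda BC) = -1$ for every $\lambda$ (the constant term of the characteristic polynomial is unaffected by the output feedback), so for real $\lambda$ the characteristic polynomial is negative at zero and positive at infinity, forcing a positive real eigenvalue. A minimal sketch (NumPy):

```python
import numpy as np

# Example 5.7: A - lam*B@C always has det = -1, so it is never Hurwitz for real lam.
A = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 8, 9, 4]], dtype=float)
B = np.array([[0.0], [0.0], [0.0], [1.0]])
C = np.array([[0.0, 4.0, 6.0, 4.0]])

for lam in np.linspace(-10, 10, 41):
    M = A - lam * B @ C
    assert np.isclose(np.linalg.det(M), -1.0)      # determinant independent of lam
    assert np.max(np.linalg.eigvals(M).real) > 0   # never Hurwitz on the real axis
```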
Figure 5.1: Left: Synchronization region of P in Example 5.7. Right: Circulant graph that synchronizes the collection P3; the Laplacian spectrum of this graph is labeled by the x-marks.
Chapter 6
Designing the synchronization region
Output feedback synchronizability studies the existence of a control protocol for a plant P such that the controlled system S synchronizes with respect to some graph. This chapter investigates the design of a control protocol such that S synchronizes with respect to a given set of graphs, i.e., the SDP stated in Section 3.2. In view of Theorem 4.4, synchronization of S with respect to the set I is equivalent to

SR(S) ⊇ { Spec(LG) | G ∈ I }.

Therefore, we can restate SDP as the design of a controller such that SR(S) contains a specific subset of C. With an abuse of terminology, we will refer to SDP with respect to D ⊆ C as the problem of finding C and H such that SR((H×P)◦C) ⊇ D. SDP-D and SDP-S with respect to D are defined analogously.
Note that instead of solving SDP with respect to an arbitrary subset D ⊆ C, we will only consider the particular subsets D = C>0 and D = C>1. Solving SDP with respect to C>0 and C>1 is particularly interesting because
• If the controlled plant S satisfies SR(S) ⊇ C>0, then S synchronizes with respect to every positive graph containing a globally reachable node.
• If SR(S) ⊇ C>1, then for every b > 0, we have SR(S ◦ b⁻¹) ⊇ C>b. Therefore, any controller solving SDP with respect to C>1 may be composed with a high gain to solve SDP with respect to C>b for any b > 0. This implies that the plant may be controlled to synchronize with respect to any specific positive graph containing a globally reachable node.
It is worth pointing out that Section 8.1.2 will provide some evidence that SDP is not solvable with
respect to sets much larger than C>0. Therefore, the consideration of SDP with respect to C>0 is
close to the best we can achieve.
As an additional remark, by Proposition 5.3, SDP with respect to any non-trivial set is solvable only if P is stabilizable and detectable. In view of the same proposition, any controller solving SDP for the controllable and observable subsystem of P solves SDP for the plant P. Therefore, assuming that P is controllable and observable does not weaken the applicability of any design result. However, controllability and observability will not be assumed unless explicitly stated.
6.1 Solutions of SDP-S
This section solves SDP-S with respect to C>0 and C>1 for the following classes of plants:
• Plants with full actuation and plants with state output
• Passive or output feedback passive plants
• Minimum phase plants with relative degree 1
Furthermore, we will show that among SISO systems, passivity (resp. output feedback passivity) is also necessary for the solvability of SDP-S with respect to C>0 (resp. C>1).
6.1.1 Plants with full actuation or state output
First, we will present the solutions to SDP-S for plants with full actuation. The static gains solving SDP-S are explicitly constructed in the proofs of the results. The solution to SDP-S with respect to C>1 is implicitly contained in [14].

Proposition 6.1. Let P = (A, I, C) be detectable and b > 0. There exists a static gain K such that SR(P◦K) ⊇ C>b. Specifically, A − sKC is Hurwitz for every s ∈ C>b.
Proof. By detectability, there exists F such that A − FC is Hurwitz. Therefore, there exists P = P^T > 0 such that

P(A − FC) + (A − FC)^T P < 0.

This implies that v^T (PA + A^T P) v < 0 for every v ∈ Ker(C) = Ker(C^T C). By Lemma A.4, there exists α > 0 such that

PA + A^T P − α C^T C < 0.

Setting K = (α/(2b)) P⁻¹ C^T, we have

P(A − sKC) + (A − sKC)^* P = PA + A^T P − (α Re(s)/b) C^T C.

For every s ∈ C>b, the above matrix is negative definite, so A − sKC is Hurwitz.
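The proof is constructive and easy to mirror numerically. The sketch below (numpy/scipy; the specific pair (A, C), the output-injection F, and the doubling search for α are all illustrative assumptions) builds K = (α/(2b)) P⁻¹Cᵀ and spot-checks that A − sKC is Hurwitz at a few points of C>b.

```python
import numpy as np
from scipy.linalg import solve_lyapunov

# Illustrative detectable pair (A, C); full actuation means B = I.
A = np.array([[0., 1.], [0., 0.]])
C = np.array([[1., 0.]])
b = 1.0

# Output injection F making A - F C Hurwitz (both poles at -1).
F = np.array([[2.], [1.]])
Acl = A - F @ C

# P = P^T > 0 solving P(A - FC) + (A - FC)^T P = -I.
P = solve_lyapunov(Acl.T, -np.eye(2))

# Find alpha > 0 with P A + A^T P - alpha C^T C < 0 (Lemma A.4) by doubling.
alpha = 1.0
while np.max(np.linalg.eigvalsh(P @ A + A.T @ P - alpha * C.T @ C)) >= 0:
    alpha *= 2.0

# Static gain from the proof of Proposition 6.1.
K = alpha / (2 * b) * np.linalg.solve(P, C.T)

def is_hurwitz(M):
    return np.max(np.linalg.eigvals(M).real) < 0

# A - s K C should be Hurwitz for every s with Re(s) > b.
samples = [b + 0.1, 2.0, b + 0.5 + 10j, 3.0 - 7j]
print([is_hurwitz(A - s * K @ C) for s in samples])
```

Note that the check covers complex s as well: the Lyapunov inequality in the proof is linear in Re(s), so the same P certifies stability on the whole half-plane.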
To solve SDP-S with respect to C>0, one must impose additional assumptions on P = (A, I, C). Clearly, if the matrix A has an eigenvalue in C>0, then regardless of the static gain K, the matrix A − sKC is not Hurwitz for s in an open neighborhood of zero. Therefore, SDP-S with respect to C>0 is not solvable unless the eigenvalues of A are contained in C≤0. It turns out that a solution exists as long as A satisfies the mildly stronger assumption of being neutrally stable. The solution we present is implicitly contained in [21].
Proposition 6.2. Given a detectable and neutrally stable plant P = (A, I, C), there exists a static gain K such that SR(P◦K) ⊇ C>0. Specifically, A − sKC is Hurwitz for every s ∈ C>0.
Proof. Since A is neutrally stable, there exists a real invertible matrix Q = [Q1 Q2] such that

[Q1 Q2]⁻¹ A [Q1 Q2] = [ T  0
                        0  H ]

where T is anti-symmetric and H is Hurwitz. Since (A, C) is detectable, the pair (T, CQ1) must be detectable. In particular, Ker(CQ1) = Ker(Q1^T C^T C Q1) does not contain any eigenvector of T corresponding to an eigenvalue in C≥0. Lemma A.3 applied to the relation

(T − s Q1^T C^T C Q1) + (T − s Q1^T C^T C Q1)^* = −2 Re(s) Q1^T C^T C Q1 ≤ 0,    s ∈ C>0,

shows that T − s Q1^T C^T C Q1 is Hurwitz for every s ∈ C>0. Since [Q1 Q2]⁻¹ Q1 has the block form [I; 0],

[Q1 Q2]⁻¹ (A − s Q1 Q1^T C^T C) [Q1 Q2]
    = [ T  0 ]  − s [ Q1^T ] C^T C [Q1 Q2]
      [ 0  H ]      [ 0    ]
    = [ T − s Q1^T C^T C Q1    −s Q1^T C^T C Q2 ]
      [ 0                       H              ],

we conclude that A − s Q1 Q1^T C^T C is Hurwitz for every s ∈ C>0.
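For a neutrally stable A the construction is particularly transparent. In the sketch below (an assumed toy oscillator, not from the thesis; since A is already anti-symmetric we may take Q = Q1 = I, so the gain reduces to K = Cᵀ), A − sKC is Hurwitz on all of C>0 but not at s = 0.

```python
import numpy as np

# Anti-symmetric (hence neutrally stable) A with an observable output.
A = np.array([[0., 1.], [-1., 0.]])
C = np.array([[1., 0.]])

# With Q = Q1 = I, the gain of Proposition 6.2 is K = Q1 Q1^T C^T = C^T.
K = C.T

def is_hurwitz(M):
    return np.max(np.linalg.eigvals(M).real) < 0

# Hurwitz on C>0, including points close to the imaginary axis...
inside = [0.01, 0.01 + 5j, 1.0 - 3j, 10.0]
# ...but not at s = 0, where only the neutrally stable A remains.
print([is_hurwitz(A - s * K @ C) for s in inside], is_hurwitz(A))
```

The boundary behaviour illustrates why C>0 (and not its closure) appears in the statement: at s = 0 the damping injected through the output vanishes.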
So far, we have presented the solutions to SDP-S for plants with full actuation. The solutions to SDP-S for plants with state output follow by duality.
Remark 6.3. Given a stabilizable plant P = (A, B, I), its dual P^T := (A^T, I, B^T) is detectable. Propositions 6.1 and 6.2 imply that there exists K such that A^T − s K B^T is Hurwitz for every s ∈ C>1 (or every s ∈ C>0 if P is neutrally stable). By transposition, K^T satisfies SR(P ◦ K^T) ⊇ C>1 (or SR(P ◦ K^T) ⊇ C>0 if P is neutrally stable).
6.1.2 Passive and output feedback passive plants
When the plant is passive, a storage function can be used to establish synchronization, even if the plant is nonlinear and the coupling is time-varying (e.g., [9, 12]). However, the coupling graph must be symmetric or at least balanced. Our result shows that for LTI passive plants, synchronization may be achieved with respect to every positive graph containing a globally reachable node. Furthermore, the result extends to output feedback passive plants.
Proposition 6.4. Let P = (A, B, C) be a controllable and observable square system, and let K be a symmetric and positive definite matrix. The following facts hold true:

(a) If P is passive, then SR(P◦K) ⊇ C>0.

(b) If P is output feedback passive, then there exists b > 0 such that SR(P◦K) ⊇ C>b.

Proof. (Proof of b) Since P is output feedback passive, there exists a real matrix H such that (A − BHC, B, C) is passive. By the KYP Lemma, there exists a real matrix P = P^T > 0 such that

P(A − BHC) + (A − BHC)^T P ≤ 0,
B^T P = C.

For any s ∈ C, we can express P(A − sBKC) = P(A − BHC) + C^T H C − s C^T K C, so

P(A − sBKC) + (A − sBKC)^* P ≤ C^T (H + H^T − 2 Re(s) K) C.

Since K is positive definite, there exists b ≥ 0 such that

2bK ≥ H + H^T,    (6.1)

so

P(A − sBKC) + (A − sBKC)^* P ≤ 2 (b − Re(s)) C^T K C.

For any s ∈ C>b, the right-hand side is negative semidefinite. Observability of (A − sBKC, C) implies that Ker(C) = Ker(C^T K C) does not contain any eigenvector of A − sBKC. Therefore, A − sBKC is Hurwitz by Lemma A.3.

(Proof of a) Following the proof of (b) and assuming additionally that P is passive, we can set H = 0 and b = 0 in Equation (6.1). Therefore, C>0 = C>b ⊆ SR(P◦K).
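As a sanity check, the sketch below (an assumed passive SISO example with storage matrix P = I; the plant and sample points are illustrative, not from the thesis) first verifies the KYP conditions and then samples SR(P◦K) over C>0 with K = 1.

```python
import numpy as np

# Passive SISO plant with transfer function s/(s^2 + s + 1).
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[0., 1.]])

# KYP conditions with storage matrix P = I:
P = np.eye(2)
assert np.max(np.linalg.eigvalsh(P @ A + A.T @ P)) <= 1e-12   # P A + A^T P <= 0
assert np.allclose(B.T @ P, C)                                # B^T P = C

def is_hurwitz(M):
    return np.max(np.linalg.eigvals(M).real) < 0

# Proposition 6.4(a) with K = 1: A - s B C is Hurwitz for every s in C>0.
samples = [0.05, 1.0, 0.2 + 8j, 5.0 - 2j]
print([is_hurwitz(A - s * B @ C) for s in samples])
```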
In general, passivity is only a sufficient condition for the existence of a static gain K such that SR(P◦K) ⊇ C>0. However, we will show that among SISO plants, it is both sufficient and "necessary". To prove this result, we need the following lemma.

Lemma 6.5. Let S = (A, B, C) be a SISO controllable and observable system with transfer function G. If NR(S) ⊇ C<0, then

(a) G(jω) ∈ C≥0 wherever G(jω) is defined;

(b) Spec(A) ⊆ C≤0;

(c) every pole of G on the imaginary axis is simple with a real positive residue.

Proof. (Proof of a) The Nyquist plot does not intersect NR(S). Therefore, G(jω) ∈ C \ NR(S) ⊆ C≥0 wherever G(jω) is defined.

(Proof of b) The poles of G are the eigenvalues of A. By Lemma 2.4, A − εBC is Hurwitz for every ε > 0. By continuity of the eigenvalues, Spec(A) ⊆ C≤0.
(Proof of c) Consider the continuous maps β : (C \ {−1}) → S¹ := { s ∈ C | |s| = 1 } and Φ : S¹ → S¹ defined by

β(s) = (s + 1)/|s + 1|,

Φ(e^{iθ}) = 1 for θ ∈ [−π/2, π/2],    Φ(e^{iθ}) = e^{2i(θ−π/2)} for θ ∈ (π/2, 3π/2).

For sufficiently small ε > 0, the ε-Nyquist plot γ_ε^G does not intersect −1, and

0 = Ind_{γ_ε^G}(−1) = Ind_{β∘γ_ε^G}(0) = Ind_{Φ∘β∘γ_ε^G}(0).

The first equality follows from −1 ∈ NR(S) and (b). The third equality holds because Φ : S¹ → S¹ has mapping degree 1 (each rotation in the domain leads to one rotation in the image with the same orientation).
Figure 6.1: An ε-Nyquist contour
By (a), Φ∘β∘γ_ε^G = 1 on the segments lying on the imaginary axis (A in Figure 6.1). Since G is a strictly proper rational function, Φ∘β∘γ_ε^G = 1 on the semicircle encircling the open right-half plane (B in Figure 6.1) as long as ε is sufficiently small. On the semicircles bypassing the poles (C in Figure 6.1), Φ∘β∘γ_ε^G only rotates counterclockwise when ε is sufficiently small. Therefore, in order for Φ∘β∘γ_ε^G to have zero winding number about zero, it must be equal to 1 identically. This implies that the ε-Nyquist plot γ_ε^G is always contained in C≥−1.

The semicircle of γ_ε^G bypassing a pole at s̄ ∈ C=0 is parametrized by

γ_ε^G(θ) = G(s̄ + ε e^{iθ}) = ε^{−n} e^{−inθ} G₀(s̄ + ε e^{iθ}),    θ ∈ [−π/2, π/2],

where n is the degree of the pole and G(s) = G₀(s)/(s − s̄)^n. For γ_ε^G(θ) to be contained in C≥−1 for every small ε > 0, we must have n = 1 and G₀(s̄) ∈ R>0.
The lemma allows us to establish the following equivalence.
Proposition 6.6. A SISO controllable and observable system S = (A, B, C) is passive if and only if SR(S) ⊇ C>0.
Proof. (Sufficiency) This is a direct consequence of Proposition 6.4.

(Necessity) Proposition 4.14 implies that NR(S) ⊇ C<0. By Lemma 6.5, the transfer function of S is positive real, so S is passive.

Corollary 6.7. A SISO controllable and observable system S = (A, B, C) is output feedback passive if and only if SR(S) ⊇ C>b for some b ∈ R.

Proof. S = (A, B, C) is output feedback passive if and only if there exists b ∈ R such that Sb := (A − bBC, B, C) is passive, i.e., SR(Sb) ⊇ C>0 by Proposition 6.6. The conclusion follows from SR(Sb) = SR(S) − b.
Proposition 6.6 will be used to show that passivity of a SISO plant P is "necessary" for the existence of K such that SR(P◦K) ⊇ C>0. The key observation is that if SR(P◦K) ⊇ C>0 holds for some K, then we may choose K ∈ {1, 0, −1}, due to the identity SR(P◦b) = SR(P)/b for every b > 0. If K ∈ {1, −1} solves SR(P◦K) ⊇ C>0, then either P or −P is passive. If K = 0 solves SR(P◦K) ⊇ C>0, then the plant P is asymptotically stable. Therefore, SDP-S with respect to C>0 is solvable if and only if P is passive, −P is passive, or P is asymptotically stable. The same argument combined with Corollary 6.7 shows that SDP-S with respect to C>1 is solvable if and only if P is output feedback passive, −P is output feedback passive, or P is asymptotically stable. Refer to [33] for more details.
6.1.3 Minimum phase plants of relative degree 1
A square system S = (A, B, C) of the form

S : ẋ = Ax + Bu,    y = Cx,    x ∈ R^n, u ∈ R^m, y ∈ R^m

is said to have relative degree 1 if CB = ∂ẏ/∂u is invertible. For such systems, m ≤ n and B, C are full rank matrices. Most importantly, for any full rank matrix T ∈ R^{(n−m)×n} satisfying TB = 0, the transformation

[ x1 ]   [ C ]
[ x2 ] = [ T ] x

is bijective, and in the new coordinates,

    d  [ x1 ]   [ A11  A12 ] [ x1 ]   [ CB ]
S : —— [ x2 ] = [ A21  A22 ] [ x2 ] + [ 0  ] u,
    dt
    y = [ Im  0 ] [ x1 ]
                  [ x2 ].                                  (6.2)
The system S is called minimum phase if A22 is Hurwitz. Note that this property is independent of
the choice of T (since all choices are related via left multiplication by an invertible matrix). These
definitions are a specialization of their general versions for nonlinear systems presented in Section
13.2 of [34].
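The normal form (6.2) is easy to compute numerically. The sketch below (with an assumed relative-degree-1, minimum phase SISO example; T is taken as an orthonormal basis of the left null space of B via scipy) builds the coordinate change and checks that A22 is Hurwitz.

```python
import numpy as np
from scipy.linalg import null_space

# SISO example: transfer function (s+1)/((s+1)(s+2)); CB = 1 is invertible.
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 1.]])
n, m = A.shape[0], B.shape[1]

assert np.linalg.matrix_rank(C @ B) == m      # relative degree 1

# Full-rank T with T B = 0: rows spanning the left null space of B.
T = null_space(B.T).T
M = np.vstack([C, T])                         # bijective coordinate change
An = M @ A @ np.linalg.inv(M)                 # A in the coordinates of (6.2)
A22 = An[m:, m:]

# Minimum phase <=> A22 Hurwitz; here A22 carries the invariant zero -1.
print(np.linalg.eigvals(A22))
```

Any other admissible T gives a similar A22, which is why the minimum phase property does not depend on the choice of T.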
The main result is that SDP-S with respect to C>1 is solvable for all minimum phase plants of
relative degree 1.
Proposition 6.8. Let P = (A, B, C) be minimum phase with relative degree 1, and let K be a diagonalizable matrix satisfying Spec(K) ⊆ R>0. Then SR(P ◦ (CB)⁻¹K) ⊇ C>b for some b > 0.
Proof. Write K = QΛQ⁻¹ where Λ is diagonal. After a change of coordinates that puts P in the form of Equation (6.2), the synchronization region of P ◦ (CB)⁻¹K is the set of s ∈ C such that

[ A11 − sK   A12 ]   [ Q  0 ] [ Q⁻¹A11Q − sΛ   Q⁻¹A12 ] [ Q⁻¹  0 ]
[ A21        A22 ] = [ 0  I ] [ A21Q           A22    ] [ 0    I ]

is Hurwitz. Since P is minimum phase, A22 is Hurwitz. Let M11 = Q⁻¹A11Q, M12 = Q⁻¹A12, and M21 = A21Q. We will show that there exists b such that

[ M11 − sΛ   M12 ]
[ M21        A22 ]

is Hurwitz for every s ∈ C>b.
By Lemma A.2, there exists P = P^T > 0 such that PA22 + A22^T P = −I. Furthermore, there exists µ > 0 such that Λ ≥ µI. Therefore, whenever Re(s) > 0,

[ I  0 ] [ M11 − sΛ   M12 ]   [ M11 − sΛ   M12 ]* [ I  0 ]    [ H11 − 2µ Re(s) I   H12 ]
[ 0  P ] [ M21        A22 ] + [ M21        A22 ]  [ 0  P ] ≤  [ H21                −I  ]    (6.3)

for H11 = M11 + M11^T, H12 = M12 + M21^T P, and H21 = H12^T. Since

[ v1 ]^T [ H11 − 2µ Re(s) I   H12 ] [ v1 ]
[ v2 ]   [ H21                −I  ] [ v2 ]  ≤  −(‖v2‖ − ‖H12‖‖v1‖)² − (2µ Re(s) − ‖H12‖² − ‖H11‖) ‖v1‖²,

the right-hand side of Equation (6.3) is negative definite as long as 2µ Re(s) > ‖H12‖² + ‖H11‖. This implies that

[ M11 − sΛ   M12 ]
[ M21        A22 ]

is Hurwitz.

Note that the requirement Spec(K) ⊆ R>0 alone, without diagonalizability, is insufficient. Consider

P = ( [ 0  0 ]             [ 1  1 ]
      [ 1  0 ], I2, I2 ) and K = [ 0  1 ].

The synchronization region of P◦K is the set of s such that −s ± √(−s) are both contained in C<0; this set does not contain C>b for any b > 0.
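This failure mode is easy to observe numerically. The sketch below (numpy) checks that A − sK is Hurwitz at the real point s = 2 yet unstable at s = 1 + 10j, so no half-plane C>b fits inside the synchronization region.

```python
import numpy as np

# Spec(K) = {1}, but K is a Jordan block and hence not diagonalizable.
A = np.array([[0., 0.], [1., 0.]])
K = np.array([[1., 1.], [0., 1.]])

def max_re(M):
    """Largest real part among the eigenvalues of M."""
    return np.max(np.linalg.eigvals(M).real)

# Eigenvalues of A - sK are -s ± sqrt(-s): stable for real s > 0 ...
print(max_re(A - 2.0 * K))
# ... but unstable far up the vertical line Re(s) = 1.
print(max_re(A - (1. + 10.j) * K))
```

Along Re(s) = b the term √(−s) grows like √|Im(s)|, eventually overtaking −b, which is exactly why the region excludes every half-plane C>b.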
6.2 Solutions of SDP
The previous section presented solutions of SDP-S for specific classes of plants. In contrast, SDP
can be solved in full generality, owing to the additional flexibility of dynamic control protocols. In
particular, we will show that SDP with respect to C>1 is solvable for all stabilizable and detectable
plants, and that SDP with respect to C>0 is solvable if the plant is also neutrally stable. These
results have been presented in [14, 21].
Proposition 6.9. Let P = (A, B, C) be a stabilizable and detectable system of order n with output space of dimension m, and let b be a strictly positive number. There exists a controller C = (E, F, G) of order n such that the synchronization region of S = (Im × P) ◦ C contains C>b.
Proof. Let F = [F1 F2] and G = [G1; G2], where F1 ∈ R^{n×m} and G1 ∈ R^{m×n}. The equations of S are

    d  [ xC ]   [ E    0 ] [ xC ]   [ F1  F2 ]
S : —— [ xP ] = [ BG2  A ] [ xP ] + [ 0   0  ] u,
    dt
    y = [ G1  0 ] [ xC ]
        [ 0   C ] [ xP ].

The synchronization region of S is the set of s such that the following matrix is Hurwitz:

[ E − sF1G1   −sF2C ]   [ I  I ] [ E − sF1G1 − BG2   −sF2C − sF1G1 + E − A − BG2 ] [ I  −I ]
[ BG2          A    ] = [ 0  I ] [ BG2                A + BG2                    ] [ 0   I ].

Set E = A + BG2, F1 = −F2, G1 = C, and G2 such that A + BG2 is Hurwitz. Then SR(S) is the set of s such that A + sF2C is Hurwitz. By Proposition 6.1, there exists F2 such that A + sF2C is Hurwitz for every s ∈ C>b.
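The proof again doubles as a recipe. The sketch below (numpy; an illustrative double-integrator plant, with F2 = −K where K is a hard-coded gain making A − sKC Hurwitz on C>b, obtained from the construction in the proof of Proposition 6.1) assembles the controller E = A + BG2, F1 = −F2, G1 = C and spot-checks the closed-loop matrix on C>b.

```python
import numpy as np

# Illustrative stabilizable and detectable plant (double integrator, n=2, m=1).
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
b = 1.0

# G2 places the poles of A + B G2 at -1, -1.
G2 = np.array([[-1., -2.]])
# F2 = -K with K from the Proposition 6.1 construction, so that
# A + s F2 C = A - s K C is Hurwitz on C>b.
F2 = -np.array([[1.5], [0.5]])

# Controller of Proposition 6.9: E = A + B G2, F1 = -F2, G1 = C.
E, F1, G1 = A + B @ G2, -F2, C

def closed_loop(s):
    """Closed-loop matrix whose Hurwitz property defines SR(S)."""
    top = np.hstack([E - s * F1 @ G1, -s * F2 @ C])
    bot = np.hstack([B @ G2, A])
    return np.vstack([top, bot])

def is_hurwitz(M):
    return np.max(np.linalg.eigvals(M).real) < 0

print([is_hurwitz(closed_loop(s)) for s in [1.5, 2. + 3.j, 5. - 1.j]])
```

The similarity transformation in the proof makes the closed loop block triangular with diagonal blocks A + sF2C and A + BG2, so the check reduces to the two smaller Hurwitz tests.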
Proposition 6.10. Let P = (A,B,C) be a neutrally stable