

Data-driven Distributed Learning of Multi-agent Systems: A Koopman Operator Approach

Sai Pushpak Nandanoori, Seemita Pal, Subhrajit Sinha, Soumya Kundu, Khushbu Agarwal, Sutanay Choudhury
Pacific Northwest National Laboratory, Richland, WA 99354

Emails: {saipushpak.n, seemita.pal, subhrajit.sinha, soumya.kundu, khushbu.agarwal, sutanay.choudhury}@pnnl.gov

Abstract—Koopman operator theory provides a model-free technique for studying nonlinear dynamical systems purely from data. Since the Koopman operator is infinite-dimensional, researchers have developed several methods that provide a finite-dimensional approximation of the Koopman operator so that it can be applied to practical use cases. A common feature of most of these methods is that their solutions are obtained by solving a centralized minimization problem. In this work, we treat the dynamical system as a multi-agent system and propose an algorithm to compute the finite-dimensional approximation of the Koopman operator in a distributed manner, using knowledge of the topology of the underlying multi-agent system. The proposed distributed approach is shown to be equivalent to the centralized learning problem and results in a sparse Koopman operator whose block structure mimics the Laplacian of the multi-agent system. Extensive simulation studies illustrate the proposed framework on a network of oscillators and the IEEE 68 bus system.

I. INTRODUCTION

Traditional analysis of nonlinear dynamical systems that exhibit rich and complex behavior such as multiple equilibrium points, periodic orbits, chaotic attractors, limit cycles, etc., assumes knowledge of the governing equations [1]. In contrast to the study of linear dynamical systems, the approach to studying each nonlinear system can be unique and hence challenging, especially for higher-order nonlinear dynamical systems. Koopman operator theory (KOT) provides an alternative approach to study nonlinear systems using spatio-temporal time-series data. Instead of studying the evolution of the nonlinear system on the state space, the state-space data is transformed into a linear observable space where the evolution of the observables is given by the linear Koopman operator [2]–[4]. However, there is a trade-off: though the lifted dynamical system is linear, it is typically infinite-dimensional [2]–[5].

Pacific Northwest National Laboratory (PNNL) is a multi-program national laboratory operated by Battelle for the U.S. Department of Energy (DOE) under contract No. DE-AC05-76RL01830. This work was supported by the DOE's Advanced Grid Modeling (AGM) program.

The Koopman operator being infinite-dimensional is computationally intractable. Recent works have proposed numerous methods to identify an approximation of the infinite-dimensional Koopman operator on a finite-dimensional space. The most notable works include dynamic mode decomposition (DMD) [4], [6], [7], extended DMD (EDMD) [8], [9], Hankel DMD [10], naturally structured DMD (NS-DMD) [11] and deep learning based DMD (deepDMD) [12]–[14]. These methods are data-driven, and one or more of them have been successfully applied for system identification [4], [15] including system identification from noisy data [16], data-driven observability/controllability gramians for nonlinear systems [17], [18], control design [19]–[21], data-driven causal inference in dynamical systems [22], [23], data-driven quantification of information transfer [24]–[26], and to identify persistence of excitation conditions for nonlinear systems [27].

The above-mentioned methods are network-topology independent and compute the finite-dimensional Koopman operator in a centralized fashion. Three existing works that are most relevant to our proposed approach are discussed in more detail. The sparse DMD introduced in [28] promotes sparsity of the Koopman operator by imposing an $\ell_1$ regularization on the Koopman learning problem. Although this method succeeds in capturing the underlying system dynamics, it suffers from two drawbacks: (a) the computational complexity of obtaining the sparse Koopman increases exponentially with an increase in the number of system states, and (b) the weight of the sparsity-promoting cost is a user-defined decision variable, and this choice determines the extent of the sparsity of the Koopman operator. The compositional Koopman operator introduced in [29] leverages graph neural networks and imposes additional structural constraints on the Koopman operator on the observable space to obtain a sparse Koopman. Another drawback of the existing works is the need to retrain the Koopman operator when the system undergoes any topological changes.

The Koopman operator method in this paper is developed to perform state predictions. Therefore, its predictive performance is evaluated by applying it to two different dynamical systems.

Problem statement: Develop a distributed approach for computing the Koopman operator for large-scale networks, while maintaining accuracy.

Summary of contributions: The main contributions of this work are summarized as follows.

- Develop methods for learning a distributed Koopman operator for large-scale networks, using system topology and network sparsity properties.

- Our method scales better than state-of-the-art approaches while maintaining model prediction accuracy when applied to use cases.


The paper is organized as follows. The mathematical background for this work is given in Section II, and the main results are discussed in Section III. The proposed distributed computation is illustrated on a large-scale network of oscillators and the IEEE 68 bus system in Section IV. Finally, concluding remarks are provided in Section V.

II. MATHEMATICAL PRELIMINARIES

Consider a multi-agent system evolving over a network/graph $\mathcal{G}$ defined by the Laplacian $L$. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_1, v_2, \ldots, v_s\}$ and $\mathcal{E} = \{e_1, e_2, \ldots, e_m\}$ denote the set of nodes and edges respectively. We know

$$L = D - A = E E^\top$$

where $D \in \mathbb{R}^{s \times s}$ is the degree matrix, which is diagonal, $A \in \mathbb{R}^{s \times s}$ is the adjacency matrix and $E \in \mathbb{R}^{s \times m}$ is the incidence matrix. Hence, $L \in \mathbb{R}^{s \times s}$. The entries of the adjacency matrix $A$ are denoted by $a_{ij}$ and defined as follows:

$$a_{ij} = \begin{cases} 1 & \text{if there exists a link between } i \text{ and } j \\ 0 & \text{otherwise} \end{cases}$$

For an undirected graph, the adjacency matrix is symmetric. The columns of the adjacency matrix are denoted by $\boldsymbol{a}_j$. Let the vector $\boldsymbol{e}_i$ denote the $i$th standard basis vector in $\mathbb{R}^s$, whose only nonzero component is a 1 in the $i$th position:

$$\boldsymbol{e}_i = \begin{bmatrix} 0 & \cdots & 1 & \cdots & 0 \end{bmatrix}^\top$$

Suppose $Q = \begin{bmatrix} Q_1^\top & Q_2^\top & \cdots & Q_\ell^\top \end{bmatrix}^\top$, where $Q_1, Q_2, \ldots, Q_\ell$ are wide rectangular matrices. Then we have

$$\| Q \|_F^2 \equiv \sum_{i=1}^{\ell} \| Q_i \|_F^2 \qquad (1)$$

where Eq. (1) follows from the linearity of the trace: $\| Q \|_F^2 = \operatorname{trace}(Q^\top Q) = \sum_{i=1}^{\ell} \operatorname{trace}(Q_i^\top Q_i) = \sum_{i=1}^{\ell} \| Q_i \|_F^2$.

Consider the multi-agent system described by a discrete-time nonlinear (autonomous) dynamical system

$$\boldsymbol{x}^+ = F(\boldsymbol{x}) \qquad (2)$$

where $\boldsymbol{x} \in \mathbb{R}^N$ is the overall state of the multi-agent system, $\boldsymbol{x}^+$ is the successor value of the state, and $F$ is the discrete-time nonlinear transition mapping.

Suppose at any given time $i$, the overall system state is denoted by $\boldsymbol{x}_i \in \mathbb{R}^N$ such that

$$\boldsymbol{x}_i = \begin{bmatrix} \boldsymbol{x}_{i,1}^\top & \boldsymbol{x}_{i,2}^\top & \cdots & \boldsymbol{x}_{i,n}^\top \end{bmatrix}^\top$$

where the first part of the subscript indicates the time and the second part indicates the agent to which the states belong, such that $\boldsymbol{x}_{i,\alpha} \in \mathbb{R}^{n_\alpha}$ for any agent $\alpha \in \{1, \ldots, n\}$ and $N := \sum_{\alpha=1}^{n} n_\alpha$.

A. DMD and EDMD

Next, we recall the construction of the approximate Koopman operator using the dynamic mode decomposition (DMD) [15] and extended DMD (EDMD) methods [8], [30]. Suppose the system state time-series data from an experiment or simulation of the multi-agent system (2) is given by

$$X_p = \begin{bmatrix} \boldsymbol{x}_1 & \boldsymbol{x}_2 & \cdots & \boldsymbol{x}_{k-2} & \boldsymbol{x}_{k-1} \end{bmatrix}, \qquad X_f = \begin{bmatrix} \boldsymbol{x}_2 & \boldsymbol{x}_3 & \cdots & \boldsymbol{x}_{k-1} & \boldsymbol{x}_k \end{bmatrix} \qquad (3)$$

Let $\{\psi_1, \ldots, \psi_M\}$ be the choice of dictionary functions or observables, where $\psi_i \in L_2(\mathbb{R}^N, \mathcal{B}, \mu)$ and $\psi_i : \mathbb{R}^N \to \mathbb{C}$. Define a vector-valued observable function $\Psi : \mathbb{R}^N \to \mathbb{C}^M$ as

$$\Psi(\boldsymbol{x}) := \begin{bmatrix} \psi_1(\boldsymbol{x}) & \psi_2(\boldsymbol{x}) & \cdots & \psi_M(\boldsymbol{x}) \end{bmatrix}^\top.$$

Then we have the following least-squares problem that minimizes the error:

$$\min_{\mathcal{K}} \| Y_f - \mathcal{K} Y_p \|_F^2 \qquad (4)$$

such that

$$Y_p = \Psi(X_p) = [\Psi(\boldsymbol{x}_1), \Psi(\boldsymbol{x}_2), \cdots, \Psi(\boldsymbol{x}_{k-1})], \qquad Y_f = \Psi(X_f) = [\Psi(\boldsymbol{x}_2), \Psi(\boldsymbol{x}_3), \cdots, \Psi(\boldsymbol{x}_k)]$$

where $\mathcal{K} \in \mathbb{R}^{M \times M}$ is the finite-dimensional approximation of the Koopman operator defined on the space of observables. Hereafter, we refer to the finite-dimensional approximation of the Koopman operator simply as the Koopman operator. The solution to the optimization problem (4) can be obtained analytically, and the approximate Koopman operator is given by

$$\mathcal{K} = Y_f Y_p^{\dagger} \qquad (5)$$

where $Y_p^{\dagger}$ is the pseudo-inverse of $Y_p$. The original states can be recovered from the observables as the solution to the following minimization problem:

$$\min_{C} \| X_p - C Y_p \|_F^2 \qquad (6)$$

whose solution can be obtained analytically as $C = X_p Y_p^{\dagger}$. Note that DMD is a special case of the EDMD algorithm with $\Psi(\boldsymbol{x}) = \boldsymbol{x}$ and $C = I$, where $I$ denotes the identity matrix.
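For concreteness, the following is a minimal numerical sketch (in Python/NumPy; not from the original paper) of the EDMD computation in Eqs. (4)-(6). The dictionary psi and the helper name edmd are illustrative assumptions.

import numpy as np

def edmd(Xp, Xf, psi):
    """Xp, Xf: N x (k-1) snapshot matrices from Eq. (3).
    psi: callable lifting a state in R^N to observables in R^M.
    Returns the Koopman approximation K (Eq. (5)) and projection C (Eq. (6))."""
    Yp = np.column_stack([psi(x) for x in Xp.T])  # M x (k-1) lifted past data
    Yf = np.column_stack([psi(x) for x in Xf.T])  # M x (k-1) lifted future data
    K = Yf @ np.linalg.pinv(Yp)                   # K = Yf Yp^+, Eq. (5)
    C = Xp @ np.linalg.pinv(Yp)                   # C = Xp Yp^+, solution of Eq. (6)
    return K, C

# DMD is recovered with the identity dictionary, psi = lambda x: x (then C = I).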

B. Notations

The following notations are used in this paper. The number of nodes in a graph is denoted by $s$; the number of agents is denoted by $n$; the number of states at agent $\alpha$ is denoted by $n_\alpha$, such that $\sum_{\alpha=1}^{n} n_\alpha := N$; the number of dictionary functions is denoted by $M$; and the number of dictionary functions at agent $\alpha$ is denoted by $m_\alpha$, such that $\sum_{\alpha=1}^{n} m_\alpha := M$. The vectors $\boldsymbol{a}_j$ and $\boldsymbol{e}_j$ respectively denote the $j$th column of the adjacency matrix $A$ and the $j$th standard basis vector in $\mathbb{R}^s$.


III. DISTRIBUTED KOOPMAN OPERATOR FORMULATION

We present the main results of this work in this section. We begin with two necessary assumptions on the topology of the multi-agent system and its evolution. We then present the distributed Koopman operator computation and show that it is equivalent to the centralized Koopman operator computation.

Assumption 1 (Knowledge of Topology): The topology of the network on which the multi-agent system (2) evolves is known.

The following assumption enables the identification of the Koopman operator in a distributed manner.

Assumption 2 (Agent Evolution): For any given agent, the dynamic evolution of its states depends only on the states of that agent and the states of its neighbors.

The time-series data (3) for the multi-agent system (2) corresponding to any agent $\alpha \in \{1, 2, \ldots, n\}$ is denoted by

$$X_{p,\alpha} = \begin{bmatrix} \boldsymbol{x}_{1,\alpha} & \boldsymbol{x}_{2,\alpha} & \cdots & \boldsymbol{x}_{k-2,\alpha} & \boldsymbol{x}_{k-1,\alpha} \end{bmatrix}, \qquad X_{f,\alpha} = \begin{bmatrix} \boldsymbol{x}_{2,\alpha} & \boldsymbol{x}_{3,\alpha} & \cdots & \boldsymbol{x}_{k-1,\alpha} & \boldsymbol{x}_{k,\alpha} \end{bmatrix} \qquad (7)$$

where the first element of the subscript denotes the time snapshot and the second element denotes the agent.

Recall from (5) that $\mathcal{K}$ denotes the global Koopman operator for the system (2). We partition the global Koopman operator into the transition mappings associated with each agent as follows:

$$\mathcal{K} = \begin{bmatrix} \mathcal{K}_1 \\ \mathcal{K}_2 \\ \vdots \\ \mathcal{K}_n \end{bmatrix} = \begin{bmatrix} \mathcal{K}_{11} & \mathcal{K}_{12} & \cdots & \mathcal{K}_{1n} \\ \mathcal{K}_{21} & \mathcal{K}_{22} & \cdots & \mathcal{K}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{K}_{n1} & \mathcal{K}_{n2} & \cdots & \mathcal{K}_{nn} \end{bmatrix} \qquad (8)$$

where the block row of $\mathcal{K}$ denoted by $\mathcal{K}_\alpha$ is the transition mapping associated with agent $\alpha$. Observe that the Koopman operator $\mathcal{K}$ is an $n \times n$ block matrix, where $n$ is the number of agents.

In what follows, we formally establish an algorithm to identify the Koopman operator in a distributed manner.

A. Distributed EDMD for Multi-Agent Systems

Based on Assumptions 1 and 2, it follows that the observable space can be defined for each agent as shown below. Let $\Psi_\alpha : \mathbb{R}^{n_\alpha} \to \mathbb{C}^{m_\alpha}$ be the vector-valued observable function corresponding to agent $\alpha$. Define

$$Y_{p,\alpha} := \Psi_\alpha(X_{p,\alpha}) = \begin{bmatrix} \Psi_\alpha(\boldsymbol{x}_{1,\alpha}) & \Psi_\alpha(\boldsymbol{x}_{2,\alpha}) & \cdots & \Psi_\alpha(\boldsymbol{x}_{k-1,\alpha}) \end{bmatrix}$$

$$Y_{f,\alpha} := \Psi_\alpha(X_{f,\alpha}) = \begin{bmatrix} \Psi_\alpha(\boldsymbol{x}_{2,\alpha}) & \Psi_\alpha(\boldsymbol{x}_{3,\alpha}) & \cdots & \Psi_\alpha(\boldsymbol{x}_{k,\alpha}) \end{bmatrix}$$

for every $\alpha \in \{1, 2, \ldots, n\}$. We then have $\mathcal{K}_{\alpha\alpha} \in \mathbb{R}^{m_\alpha \times m_\alpha}$ and $\mathcal{K}_{\alpha\beta} \in \mathbb{R}^{m_\alpha \times m_\beta}$.

Since the observables for each agent are defined only on that agent's state-space data, the projections from the observable space to the states can be obtained by solving the following minimization problem:

$$\min_{C_\alpha} \| X_{p,\alpha} - C_\alpha Y_{p,\alpha} \|_F^2 \qquad (9)$$

and the projection operator for the entire system is given by $C = \mathrm{blkdiag}(C_1, C_2, \ldots, C_n)$.

Remark 3: The one-step-forward time-series data corresponding to agent $\alpha$ satisfies

$$\mathcal{K}_\alpha Y_p = \begin{bmatrix} \mathcal{K}_{N_\alpha} & \mathcal{K}_{\bar{N}_\alpha} \end{bmatrix} \begin{bmatrix} Y_{p,N_\alpha} \\ Y_{p,\bar{N}_\alpha} \end{bmatrix}$$

where $N_\alpha$ denotes the neighboring agents of agent $\alpha$ together with the agent itself, and $\bar{N}_\alpha$ denotes the non-neighbors of agent $\alpha$, such that $N_\alpha \cup \bar{N}_\alpha$ is the set of agents $\{1, 2, \cdots, n\}$. Hence, $\mathcal{K}_{N_\alpha}$ and $\mathcal{K}_{\bar{N}_\alpha}$ respectively denote the (rectangular) matrices associated with agent $\alpha$ and its neighbors, and with the non-neighbors of agent $\alpha$. Similarly, the time-series data $Y_p$ is partitioned according to the neighbors, denoted by $Y_{p,N_\alpha}$, and the non-neighbors, denoted by $Y_{p,\bar{N}_\alpha}$.

Before presenting the distributed Koopman operator computation, we define the following transformation matrices $T_{f,\alpha}$, $T_{p,\alpha}$ associated with each agent:

$$T_{f,\alpha} := \mathrm{blkdiag}(e^e_1, e^e_2, \ldots, e^e_n), \qquad T_{p,\alpha} := \mathrm{blkdiag}(a^e_1, a^e_2, \ldots, a^e_n)$$

where

$$e^e_i = (\boldsymbol{e}_\alpha)_i \otimes I_{m_i}, \qquad a^e_i = (\boldsymbol{a}_\alpha + \boldsymbol{e}_\alpha)_i \otimes I_{m_i} \qquad \text{for } i \in \{1, 2, \ldots, n\}$$

where $(\cdot)_i$ denotes the $i$th entry of the vector $(\cdot)$, and the matrices $R_{p,\alpha}$ and $R_{f,\alpha}$ denote the transformation matrices that remove zero rows. In the following, we present the distributed Koopman operator computation; the corresponding workflow is shown in Fig. 1.

Algorithm 1 Distributed Koopman Operator Computation
Given an $n$-agent network.
for $\alpha = 1$, $\alpha \le n$, $\alpha = \alpha + 1$ do
  1) Compute $Y_{f,\alpha}$, $Y_{p,N_\alpha}$ associated with agent $\alpha$ as
     $$Y_{f,\alpha} = R_{f,\alpha} T_{f,\alpha} Y_f, \qquad Y_{p,N_\alpha} = R_{p,\alpha} T_{p,\alpha} Y_p$$
  2) Solve the minimization problem
     $$\min_{\mathcal{K}_{N_\alpha}} \| Y_{f,\alpha} - \mathcal{K}_{N_\alpha} Y_{p,N_\alpha} \|_F^2$$
  3) Compute $\mathcal{K}_\alpha = \mathcal{K}_{N_\alpha} R_{p,\alpha}$
end
The distributed Koopman operator is given by
$$\mathcal{K}_D = \begin{bmatrix} \mathcal{K}_1^\top & \mathcal{K}_2^\top & \cdots & \mathcal{K}_n^\top \end{bmatrix}^\top$$
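Below is an illustrative sketch of Algorithm 1 (a sketch under stated assumptions, not the authors' implementation). It assumes every agent has the same number of observables m, so that plain row-index selection plays the role of the transformation matrices $R_{p,\alpha} T_{p,\alpha}$ and $R_{f,\alpha} T_{f,\alpha}$; the helper name is hypothetical.

import numpy as np

def distributed_koopman(Yp, Yf, A, m):
    """Yp, Yf: M x (k-1) lifted data stacked agent-wise, with M = n*m.
    A: n x n adjacency matrix of the known topology (Assumption 1)."""
    n = A.shape[0]
    K_D = np.zeros((n * m, n * m))
    for a in range(n):
        # N_alpha: agent alpha together with its neighbors
        nbrs = [b for b in range(n) if b == a or A[a, b] == 1]
        rows = np.concatenate([np.arange(b * m, (b + 1) * m) for b in nbrs])
        Yp_Na = Yp[rows, :]                  # Y_{p, N_alpha}
        Yf_a = Yf[a * m:(a + 1) * m, :]      # agent-alpha block of Y_f
        K_Na = Yf_a @ np.linalg.pinv(Yp_Na)  # step 2 of Algorithm 1
        K_D[a * m:(a + 1) * m, rows] = K_Na  # step 3: embed into row block K_alpha
    return K_D

Each per-agent least-squares problem involves only the neighbor blocks of $Y_p$, which is what makes the computation distributable and gives $\mathcal{K}_D$ its Laplacian-like sparsity.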

Theorem 4: Under Assumptions 1 and 2, the distributed approximate finite-dimensional Koopman operator (Algorithm 1) is equivalent to the approximate finite-dimensional Koopman operator computed using the EDMD algorithm (4).

Proof. We first show the sufficiency; the necessity follows by reversing the steps.



Fig. 1. Overview of the Koopman operator computation using EDMD and the proposed distributed DMD.

Consider the original EDMD learning problem given in Eq. (4) and rewrite it agent-wise as follows. Consider

$$\| Y_f - \mathcal{K} Y_p \|_F^2 = \left\| \begin{bmatrix} Y_{f,1} - \mathcal{K}_1 Y_p \\ Y_{f,2} - \mathcal{K}_2 Y_p \\ \vdots \\ Y_{f,n} - \mathcal{K}_n Y_p \end{bmatrix} \right\|_F^2$$

$$= \sum_{\alpha=1}^{n} \| Y_{f,\alpha} - \mathcal{K}_\alpha Y_p \|_F^2 \quad \text{(from Eq. (1))}$$

$$= \sum_{\alpha=1}^{n} \| Y_{f,\alpha} - \mathcal{K}_{N_\alpha} Y_{p,N_\alpha} - \mathcal{K}_{\bar{N}_\alpha} Y_{p,\bar{N}_\alpha} \|_F^2 \quad \text{(from Remark 3)}$$

$$= \sum_{\alpha=1}^{n} \| Y_{f,\alpha} - \mathcal{K}_{N_\alpha} Y_{p,N_\alpha} \|_F^2 + \| \mathcal{K}_{\bar{N}_\alpha} Y_{p,\bar{N}_\alpha} \|_F^2 - 2 \operatorname{trace}\!\left( (Y_{f,\alpha} - \mathcal{K}_{N_\alpha} Y_{p,N_\alpha})^\top \mathcal{K}_{\bar{N}_\alpha} Y_{p,\bar{N}_\alpha} \right)$$

$$= \sum_{\alpha=1}^{n} \| Y_{f,\alpha} - \mathcal{K}_{N_\alpha} Y_{p,N_\alpha} \|_F^2 \quad \text{(under Assumption 2, since } \mathcal{K}_{\bar{N}_\alpha} = 0\text{)}$$

Corollary 5: The transition mapping corresponding to each agent $\alpha$ is obtained as the solution to the following minimization problem:

$$\min_{\mathcal{K}_{N_\alpha}} \| Y_{f,\alpha} - \mathcal{K}_{N_\alpha} Y_{p,N_\alpha} \|_F^2 \qquad (10)$$

Then the transition mapping corresponding to agent $\alpha$ is given by

$$\mathcal{K}_\alpha = \mathcal{K}_{N_\alpha} R_{p,\alpha}$$

Proof. Follows directly from Theorem 4.

Remark 6: The distributed Koopman operator $\mathcal{K}_D$ is a block matrix whose zero blocks correspond to the zero entries of the Laplacian matrix $L$; hence the sparsity structure of $\mathcal{K}_D$ depends on the connectivity of the underlying multi-agent system. Furthermore, it is important to reiterate that $\mathcal{K}_D$ is not a symmetric matrix.

The ensuing result establishes that the distributed and centralized learning problems coincide when the network is fully connected.

Corollary 7: For a fully-connected network, the distributed learning problem and the centralized learning problem yield the same Koopman operator.

Proof. The proof follows by noticing that for any agent $\alpha$, the corresponding non-neighbor set $\bar{N}_\alpha$ is empty.

Although the above results were discussed for the EDMD case, under the assumption that $\Psi(\boldsymbol{x}) = \boldsymbol{x}$ and $C = I$, all the above results apply to DMD as well.
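As a quick numerical sanity check of Corollary 7 (again a sketch, reusing the hypothetical edmd and distributed_koopman helpers above with the identity dictionary, i.e., the DMD case):

import numpy as np
# Fully-connected 4-agent network, 2 states per agent (N = 8).
A = np.ones((4, 4)) - np.eye(4)
X = np.random.randn(8, 50)                 # 50 snapshots of synthetic data
Xp, Xf = X[:, :-1], X[:, 1:]
K, _ = edmd(Xp, Xf, lambda x: x)           # centralized DMD
K_D = distributed_koopman(Xp, Xf, A, m=2)  # Algorithm 1
print(np.allclose(K, K_D))                 # expected: True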

IV. NUMERICAL STUDY

In this section, the proposed distributed Koopman operator computation is illustrated on two different dynamical systems, a network of oscillators and a power network, along with comparisons to existing works such as DMD and sparse DMD.

A. Network of oscillators

The dynamical system associated with the network of oscillators is given by

$$\begin{bmatrix} \dot{\theta} \\ \ddot{\theta} \end{bmatrix} = \begin{bmatrix} O_n & I_n \\ -\gamma M^{-1} L & -M^{-1} D \end{bmatrix} \begin{bmatrix} \theta \\ \dot{\theta} \end{bmatrix} \qquad (11)$$

where $\theta \in \mathbb{R}^n$ and $\dot{\theta} \in \mathbb{R}^n$ represent the angles and angular speeds of the linear oscillators respectively, with $n$ being the number of agents. The matrices $M \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{n \times n}$ are diagonal, and their respective entries denote the inertia and damping of each oscillator. The coupling between the agents is captured by the Laplacian $L$, and $\gamma$ represents the strength of the interconnections. The matrices $O_n$ and $I_n$ respectively denote the zero and identity matrices of size $n$. The set of equilibrium points of the system (11) is the synchronization manifold.
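As a data-generation sketch for this example (parameter values below are illustrative assumptions, not those reported in the paper), the snapshot matrices of Eq. (3) can be produced by exactly discretizing the linear dynamics (11):

import numpy as np
from scipy.linalg import expm

def oscillator_data(L, gamma=1.0, dt=0.05, T=30.0, seed=0):
    """Simulate (11) and return (Xp, Xf) as in Eq. (3)."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    M = np.diag(rng.uniform(0.5, 1.5, n))  # random inertia, then kept fixed
    D = np.diag(rng.uniform(0.5, 1.5, n))  # random damping
    Ac = np.block([[np.zeros((n, n)), np.eye(n)],
                   [-gamma * np.linalg.inv(M) @ L, -np.linalg.inv(M) @ D]])
    Ad = expm(Ac * dt)                     # exact discretization of the linear flow
    k = int(T / dt)
    X = np.empty((2 * n, k))
    X[:, 0] = rng.standard_normal(2 * n)   # random initial condition
    for i in range(1, k):
        X[:, i] = Ad @ X[:, i - 1]
    return X[:, :-1], X[:, 1:]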


In a network of few oscillators (around 10 or fewer), DMD and distributed DMD yield no differences; in particular, they result in the same Koopman operator structure, the same eigenspectrum, and the same predictive performance. The superiority of the proposed distributed approach to identifying the finite-dimensional Koopman operator becomes noticeable as the number of agents increases. Hence, we consider a network with 100 oscillators and demonstrate the efficiency of the proposed distributed approach when the network is highly sparse.

To measure the sparsity of the network, we evaluate the network density, defined as

$$\text{Network density} = \frac{\text{total number of edges}}{\text{total possible number of edges}}$$

Essentially, the network density always lies in the range $(0, 1]$: small values indicate a sparse network, and for a fully-connected network it takes the value 1.
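For instance, in a 100-node network where every node has 10 neighbors, there are $100 \cdot 10 / 2 = 500$ edges out of $\binom{100}{2} = 4950$ possible ones, so the network density is $500 / 4950 \approx 0.1$.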

Case study I - Varying number of neighbors: We consider a small-world network with 100 oscillators and gradually increase the number of interconnections such that, for any oscillator, the number of neighboring oscillators varies from 10 (sparse network) to 99 (fully-connected network), as shown in Fig. 2 (top plot). Network parameters such as inertia and damping are chosen randomly and then kept constant for all the different sets of simulations. The time-series data for each scenario is generated by simulating each system for 30 s with a 0.05 s sampling time, with 5 different randomly chosen initial conditions.


Fig. 2. For a small-world network with 100 oscillators, comparison of the training and prediction RMSE using DMD and distributed DMD while the network density is varied.

For each choice of the network density, both the DMD and distributed DMD algorithms are used to compute the Koopman operator, and it is observed that the training RMSE (see Fig. 2, middle plot) for both DMD and distributed DMD remains small (on the order of $10^{-6}$). However, distributed DMD performs better in predicting the future evolution of the states (the prediction window was 30 s for each scenario), especially when the network is sparsely connected, as can be seen in the bottom plot of Fig. 2.

Case study II - Varying number of agents: In the second set of simulations, we consider small-world networks such that, in each scenario, each agent has 10 neighbors. However, the number of agents in the network is increased from 20 to 100, so that the network density decreases gradually, that is, the network is made more sparse, as shown in Fig. 3 (top plot). Similar to the case study on varying the number of neighbors, the training RMSE for both DMD and distributed DMD is very small (see Fig. 3, middle plot). But as the sparsity of the network increases, the distributed DMD outperforms standard DMD as far as prediction performance is concerned (see Fig. 3, bottom plot). The results are consistent with the previous case in the sense that when the network sparsity is high, distributed DMD performs much better than standard DMD for predicting the future evolution of the states.
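The prediction RMSE above is computed by rolling the learned model forward in the observable space; a minimal sketch (assuming the operator K, projection C and dictionary psi from the earlier EDMD sketch; helper names hypothetical):

import numpy as np

def prediction_rmse(K, C, psi, X_true):
    """Roll the lifted dynamics forward from the first true state and
    compare against the ground-truth trajectory X_true (N x k)."""
    k = X_true.shape[1]
    z = psi(X_true[:, 0])          # lift the initial condition once
    X_pred = np.empty_like(X_true)
    X_pred[:, 0] = X_true[:, 0]
    for i in range(1, k):
        z = K @ z                  # linear evolution of the observables
        X_pred[:, i] = C @ z       # project back to the state space
    return np.sqrt(np.mean((X_pred - X_true) ** 2))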


Fig. 3. For a small-world network with 10 neighboring oscillators for each oscillator, comparison of the training and prediction RMSE using DMD and distributed DMD while increasing the number of oscillators.

The analysis shows that when the network is sparse, the DMD approach fails to identify the exact dynamics and network structure of the underlying system. In particular, it fails to capture the sparse topology, and this leads to a degradation in prediction. However, the distributed DMD approach, which relies on knowledge of the underlying topology, captures the correct connectivity and thus has better prediction performance. In particular, since centralized DMD does not incorporate knowledge of the topology of the underlying network, it essentially tries to find $N^2$ numbers (assuming that the Koopman operator $\mathcal{K} \in \mathbb{R}^{N \times N}$) that best fit the data. As such, it has no hard constraint to make any of its $N^2$ elements zero. However, our distributed approach forcefully sets some of the elements of the Koopman operator to zero


(depending on the topology) and then solves the minimization problem.

DMD, sparse DMD and distributed DMD: Sparse DMD, introduced in [28], addresses the problem of identifying a sparse Koopman operator by adding a sparsity-promoting cost to (4); the resulting minimization problem does not have an analytical solution (see the schematic objective below) and is thus computationally expensive. To compare the proposed distributed DMD algorithm with sparsity-promoting DMD, we considered a small-world network of 35 oscillators in which any oscillator has a maximum of 3 neighbors. With a weight of 0.01 for the sparsity-promoting cost in sparse DMD (see [28] for more details), it is observed that sparse DMD and distributed DMD result in a sparse Koopman operator, whereas the DMD operator is only diagonally dominant (see Fig. 4 (a)). Moreover, the eigenspectrum coincides for distributed DMD and sparse DMD, but not for DMD, as shown in Fig. 4 (b). However, when the learned models are used for prediction, the prediction RMSE for distributed DMD is orders of magnitude smaller than for the sparse DMD and DMD methods (see Fig. 4 (c)).
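Schematically, and consistent with the description above (the symbol $\gamma$ for the user-chosen weight is ours), the sparse learning problem augments (4) as

$$\min_{\mathcal{K}} \; \| Y_f - \mathcal{K} Y_p \|_F^2 + \gamma \sum_{i,j} | \mathcal{K}_{ij} |$$

which no longer admits the closed-form solution (5); the $\ell_1$ term is what drives entries of $\mathcal{K}$ toward zero.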

Advantages of distributed DMD: To highlight the advantages of the proposed distributed algorithm and the challenges associated with existing works, we consider four different sparse small-world networks with 15, 35, 60 and 100 oscillators. The network densities of the different networks, the training RMSE, the prediction RMSE, and the computational time for each network under each method are shown in Table I. The DMD, sparse DMD, and distributed DMD computations were performed on a computer with a 2.6 GHz Intel Core i7 processor and 16 GB of RAM. It is clear that the training RMSE for every sparse small-world network is small under DMD, sparse DMD, and distributed DMD. However, the prediction RMSE is orders of magnitude higher for both DMD and sparse DMD compared to distributed DMD. Moreover, the computational aspects of each of the models indicate that DMD and distributed DMD are quite fast even for a 100-oscillator network (200 states), whereas the computational complexity of sparse DMD increases rapidly with the number of states.

Table I. Training and predictive comparison of the different approaches for different sparse small-world networks. The weight for the sparsity-promoting cost is chosen to be 0.01 for all the networks.

Method           | Number of states | Network density | Training RMSE | Prediction RMSE | Computation time [s]
DMD              | 30               | 0.1429          | 6.99E-05      | 0.00013         | 0.001
DMD              | 70               | 0.0588          | 8.98E-05      | 0.27            | 0.001
DMD              | 120              | 0.0399          | 7.52E-05      | 0.91            | 0.003
DMD              | 200              | 0.0202          | 1.51E-04      | 0.65            | 0.01
Sparse DMD       | 30               | 0.1429          | 7.58E-05      | 1.21E-03        | 21.5
Sparse DMD       | 70               | 0.0588          | 9.97E-05      | 0.04            | 238.85
Sparse DMD       | 120              | 0.0399          | 6.53E-04      | 0.31            | 1072.78
Sparse DMD       | 200              | 0.0202          | 6.27E-03      | 0.68            | 4503.23
Distributed DMD  | 30               | 0.1429          | 7.11E-05      | 2.01E-05        | 0.01
Distributed DMD  | 70               | 0.0588          | 9.05E-05      | 1.17E-05        | 0.01
Distributed DMD  | 120              | 0.0399          | 7.95E-05      | 1.75E-05        | 0.23
Distributed DMD  | 200              | 0.0202          | 1.60E-04      | 3.09E-05        | 0.82

It is important to mention here that for sparse DMD, since the level of sparsity of the Koopman operator depends on the choice of the weight for the sparsity-promoting cost, this weight needs to be tuned to obtain the sparsity seen in the case of distributed DMD; for large-scale networks in particular, this is computationally expensive. Overall, distributed DMD combines the computational benefits of DMD with the sparsity of sparse DMD, while ensuring scalability and minimal prediction errors.

B. IEEE 68 Bus System

In this subsection, we consider a 68-bus, 16-machine, 5-area system which is a reduced-order equivalent of the interconnected New England test system (NETS) and New York power system (NYPS), with five geographical regions; NETS and NYPS are represented by groups of generators, whereas the power import from each of the three other neighboring areas is approximated by an equivalent generator model. The one-line diagram of the 68 bus system is shown in Fig. 5 on the left, and its equivalent multi-agent system with 5 agents is shown on the right.

Time-series data for this study is generated using GridSTAGE by considering random load changes in the 68 bus network. GridSTAGE, introduced in [31], is a spatio-temporal adversarial data generation tool that emulates high-frequency PMU and SCADA measurements. PMU measurements such as frequency and voltage magnitudes are considered for training the DMD and distributed DMD models (based on the proposed distributed Algorithm 1).

The trained DMD and distributed DMD models are applied to predict the system states starting from an initial condition corresponding to a disturbance in the network. The dominant eigenvalues of the Koopman operators computed using DMD and distributed DMD are shown in Fig. 6 (a). The prediction RMSEs for the frequency states and the voltage magnitude states are very small and close to each other for both the DMD and distributed DMD approaches (see Fig. 6 (b)). In particular, the prediction performances of the DMD and distributed DMD models are very similar, as can be seen in Figs. 6 (c) and (d).

V. CONCLUSION

This work proposes a distributed algorithm to compute the Koopman operator from time-series data using knowledge of the underlying system's topology. This amounts to learning the evolution of each agent as a function of itself and its neighbors; the global Koopman operator is obtained by stitching together the individual Koopman operators. The main advantage of the proposed framework is that the algorithm scales easily to large-scale networks, especially sparse networks, while maintaining model accuracy throughout. Moreover, simulation studies on a network of oscillators and the IEEE 68 bus system show that the computational time to obtain the Koopman operator using DMD and distributed DMD is in the same range. However, the evaluation of the DMD and distributed DMD models for prediction on a sparse network with 200 states reveals that the prediction RMSE is orders of magnitude higher for DMD than for distributed DMD.


Fig. 4. Comparison of distributed DMD with DMD and sparse DMD for a sparse small-world network with 35 oscillators and a network density of 0.0588: (a) sparsity structure of the Koopman operator; (b) eigenspectrum of the Koopman operator; (c) prediction RMSE starting from a random initial condition.

Fig. 5. New England IEEE 68 bus system, a 16-machine, 5-area network depicted as a multi-agent system where each agent corresponds to an area in the power network. The figure also shows the connection between the NETS part of the system and its agent representation on the right; a similar interpretation holds for the other areas and agents.


REFERENCES

[1] H. K. Khalil and J. W. Grizzle, Nonlinear Systems. Prentice Hall, Upper Saddle River, NJ, 2002, vol. 3.

[2] B. O. Koopman, "Hamiltonian systems and transformation in Hilbert space," Proceedings of the National Academy of Sciences, vol. 17, no. 5, pp. 315–318, 1931.

[3] I. Mezic, "Spectral properties of dynamical systems, model reduction and decompositions," Nonlinear Dynamics, vol. 41, no. 1-3, pp. 309–325, 2005.

[4] C. W. Rowley, I. Mezic, S. Bagheri, P. Schlatter, and D. S. Henningson, "Spectral analysis of nonlinear flows," Journal of Fluid Mechanics, vol. 641, pp. 115–127, 2009.

[5] A. Lasota and M. C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Springer Science & Business Media, 2013, vol. 97.

[6] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz, "On dynamic mode decomposition: Theory and applications," arXiv preprint arXiv:1312.0041, 2013.

[7] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor, Dynamic Mode Decomposition: Data-driven Modeling of Complex Systems. SIAM, 2016.

[8] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, "A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition," Journal of Nonlinear Science, vol. 25, no. 6, pp. 1307–1346, 2015.

[9] Q. Li, F. Dietrich, E. M. Bollt, and I. G. Kevrekidis, "Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator," Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 27, no. 10, p. 103111, 2017.

[10] H. Arbabi and I. Mezic, "Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator," SIAM Journal on Applied Dynamical Systems, vol. 16, no. 4, pp. 2096–2126, 2017.

[11] B. Huang and U. Vaidya, "Data-driven approximation of transfer operators: Naturally structured dynamic mode decomposition," in 2018 Annual American Control Conference (ACC). IEEE, 2018, pp. 5659–5664.

[12] E. Yeung, S. Kundu, and N. Hodas, "Learning deep neural network representations for Koopman operators of nonlinear dynamical systems," in 2019 American Control Conference (ACC). IEEE, 2019, pp. 4832–4839.

[13] B. Lusch, J. N. Kutz, and S. L. Brunton, "Deep learning for universal linear embeddings of nonlinear dynamics," arXiv preprint arXiv:1712.09707, 2017.

[14] N. Takeishi, Y. Kawahara, and T. Yairi, "Learning Koopman invariant subspaces for dynamic mode decomposition," in Advances in Neural Information Processing Systems, 2017, pp. 1130–1140.

[15] P. J. Schmid, "Dynamic mode decomposition of numerical and experimental data," Journal of Fluid Mechanics, vol. 656, pp. 5–28, 2010.

[16] S. Sinha, B. Huang, and U. Vaidya, "On robust computation of Koopman operator and prediction in random dynamical systems," Journal of Nonlinear Science, pp. 1–34, 2019.

[17] U. Vaidya, "Observability gramian for nonlinear systems," in Decision and Control, 2007 46th IEEE Conference on. IEEE, 2007, pp. 3357–3362.

[18] A. Surana and A. Banaszuk, "Linear observer synthesis for nonlinear systems using Koopman operator framework," IFAC-PapersOnLine, vol. 49, no. 18, pp. 716–723, 2016.

[19] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, "Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control," PLoS One, vol. 11, no. 2, p. e0150171, 2016.

[20] B. Huang, X. Ma, and U. Vaidya, "Feedback stabilization using Koopman operator," arXiv preprint arXiv:1810.00089, 2018.

[21] M. Korda and I. Mezic, "Koopman model predictive control of nonlinear dynamical systems," in The Koopman Operator in Systems and Control. Springer, 2020, pp. 235–255.

[22] S. Sinha and U. Vaidya, "Data-driven approach for inferencing causality and network topology," in 2018 Annual American Control Conference (ACC). IEEE, 2018, pp. 436–441.

[23] ——, "On data-driven computation of information transfer for causal inference in discrete-time dynamical systems," Journal of Nonlinear Science, pp. 1–26, 2020.

[24] ——, "Formalism for information transfer in dynamical network," in 2015 54th IEEE Conference on Decision and Control (CDC). IEEE, 2015, pp. 5731–5736.

[25] ——, "Causality preserving information transfer measure for control dynamical system," in 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016, pp. 7329–7334.


Fig. 6. Illustration of the proposed approach on the 5-area, IEEE 68 bus system: (a) dominant eigenvalues whose absolute value is greater than 0.98; (b) prediction RMSE at each bus; (c) the frequency and voltage magnitude predictions using DMD; (d) the frequency and voltage magnitude predictions using distributed DMD.

[26] ——, "On information transfer in discrete dynamical systems," in 2017 Indian Control Conference (ICC). IEEE, 2017, pp. 303–308.

[27] N. Boddupalli, A. Hasnain, S. P. Nandanoori, and E. Yeung, "Koopman operators for generalized persistence of excitation conditions for nonlinear systems," in 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019, pp. 8106–8111.

[28] M. R. Jovanovic, P. J. Schmid, and J. Nichols, "Low-rank and sparse dynamic mode decomposition," Center for Turbulence Research Annual Research Briefs, vol. 2012, pp. 139–152, 2012.

[29] Y. Li, H. He, J. Wu, D. Katabi, and A. Torralba, "Learning compositional Koopman operators for model-based control," arXiv preprint arXiv:1910.08264, 2019.

[30] S. P. Nandanoori, S. Sinha, and E. Yeung, "Data-driven operator theoretic methods for global phase space learning," in 2020 American Control Conference (ACC). IEEE, 2020, pp. 4551–4557.

[31] S. P. Nandanoori, S. Kundu, S. Pal, K. Agarwal, and S. Choudhury, "Model-agnostic algorithm for real-time attack identification in power grid using Koopman modes," in 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). IEEE, 2020, pp. 1–6.