

arXiv:1406.4885v2 [nlin.CD] 28 Aug 2015

SIAM J. APPLIED DYNAMICAL SYSTEMS © xxxx Society for Industrial and Applied Mathematics, Vol. xx, pp. x–x

Periodic eigendecomposition and its application to Kuramoto-Sivashinsky system

Xiong Ding† and Predrag Cvitanović†

Abstract. Periodic eigendecomposition, formulated in this paper, is a numerical method to compute the Floquet spectrum and Floquet vectors along periodic orbits of a dynamical system. It is rooted in advances in numerical algorithms for the computation of ‘covariant vectors’∗ of the linearized flow along an ergodic trajectory in a chaotic system. Recent research on covariant vectors strongly suggests that the physical dimension of the inertial manifold of a dissipative PDE can be characterized by a finite number of ‘entangled modes’, dynamically isolated from the residual set of transient degrees of freedom. We anticipate that Floquet vectors display properties similar to those of covariant vectors. In this paper we incorporate the periodic Schur decomposition into the computation of Floquet vectors, compare it with other methods, and show that the method can yield the full Floquet spectrum of a periodic orbit at every point along the orbit to high accuracy. Its power, and in particular its ability to resolve eigenvalues whose magnitudes differ by hundreds of orders of magnitude, is demonstrated by applying the algorithm to the computation of the full linear stability spectrum of several periodic solutions of the one-dimensional Kuramoto-Sivashinsky flow.

Key words. periodic eigendecomposition, periodic Schur decomposition, periodic Sylvester equation, covariant vectors, Floquet vectors, Kuramoto-Sivashinsky, linear stability, continuous symmetry

AMS subject classifications. 15A18, 35B10, 37L20, 37M25, 65F15, 65H10, 65P20, 65P40, 76F20

1. Introduction. In dissipative chaotic dynamical systems, the decomposition of the tangent space of invariant sets into stable, unstable and center subspaces is important for analyzing the geometrical structure of the solution field [18]. For equilibrium points the task is simple, as it reduces to the eigenproblem of a single stability matrix, but the scenario is much more difficult for complex structures, such as periodic orbits and invariant tori, since the expansion/contraction rates in high-dimensional systems usually span many orders of magnitude. In the literature, two different algorithms, originating from different settings, can partially resolve this problem. The first candidate is the covariant vector algorithm [14, 24, 39]. It is designed to stratify the Oseledets subspaces [32] corresponding to the hierarchy of Lyapunov exponents along a long non-wandering orbit on the attractor. Covariant vectors have attracted much attention in the past few years. They turn out to be a useful tool for physicists investigating the dynamical properties of a system, such as the degree of hyperbolicity [4, 19, 23] and the geometry of the inertial manifold [34, 40, 42]. For our interest in periodic orbits, this algorithm produces the Floquet spectrum and Floquet vectors. The second candidate is the periodic Schur decomposition (PSD) [3], which was introduced to compute the eigenvalues of the product of a sequence of matrices without forming the product explicitly. This is suitable for solving the eigenvalue problem in tangent space because the fundamental matrix in tangent space can be formed as a product of its shorter-time pieces. However, in its original form,

†Center for Nonlinear Science, School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 ([email protected], [email protected]).

∗In the literature, these are termed “covariant Lyapunov vectors”, but we prefer to use covariant vectors in this paper.


2 XIONG DING AND PREDRAG CVITANOVIC

PSD is only capable of computing eigenvalues, not eigenvectors. Also, PSD does not seem to be well known in the physics community.

In this paper, we unify these two methods for computing the Floquet spectrum and Floquet vectors along periodic orbits or invariant tori, and name the result periodic eigendecomposition. Special attention is paid to complex conjugate pairs of Floquet vectors. The algorithm proceeds in two stages, each of which can be accomplished by two different methods, so we study the performance of four different algorithms in all. It also turns out that the covariant vectors algorithm reduces to one of them when applied to periodic orbits.

The paper is organized as follows. Sect. 2 briefly describes the nonlinear dynamics motivation for undertaking this project, and reviews two existing algorithms related to our work. Readers interested only in the algorithms themselves can skip this part. We describe the computational problem in sect. 3. In sect. 4 we deal with the first stage of periodic eigendecomposition, and show that both the periodic QR algorithm and simultaneous iteration are capable of achieving the periodic Schur decomposition. Sect. 5 introduces power iteration and reordering as two practical methods to obtain all eigenvectors. In sect. 6 we compare the computational effort required by the different methods, and in sect. 7 we apply periodic eigendecomposition to the Kuramoto-Sivashinsky equation, an example which illustrates the method’s effectiveness.

2. Dynamics background and existing algorithms. The study of dynamical systems aims to understand the statistical properties of a system and the geometrical structure of its global attractor. As we will see, periodic orbits play an important role in answering both questions. For dissipative systems, orbits typically land on an invariant subset, called the global attractor, after a transient period, and if the system is chaotic, the attractor is a strange attractor which contains a dense set of periodic orbits. The chaotic deterministic flow on a strange attractor can be visualized as a walk chaperoned by a hierarchy of unstable invariant solutions (equilibria, periodic orbits) embedded in the attractor. An ergodic trajectory shadows one such invariant solution for a while, is expelled along its unstable manifold, settles into the neighborhood of another invariant solution for a while, and continues in this way forever. Together, the infinite set of these unstable invariant solutions forms the skeleton of a chaotic attractor, and in fact spatiotemporal averages, such as deterministic diffusion coefficients, energy dissipation rates, Lyapunov exponents, etc., can be accurately calculated as sums over periodic orbits weighted by products of their unstable Floquet multipliers [5, 7]. This is one reason we study algorithms for computing the Floquet spectrum in this paper.

On the other hand, a strange attractor is usually a fractal set with a non-smooth surface, and is not easy to analyze. This motivates the concept of an inertial manifold [35], which contains the global attractor but is integer-dimensional and exponentially attractive. The existence of an inertial manifold has been proved for many dissipative dynamical systems [35], but the mathematical proofs shed little light on its dimension. Although upper bounds have been steadily improved for some systems, such as the Kuramoto-Sivashinsky equation [20, 31], they are far from tight and give limited guidance for suitable mode truncation in numerical simulations. Recently, however, strong numerical evidence [34, 42] has emerged that the long-time chaotic (turbulent) dynamics of at least two spatially extended systems, Kuramoto-Sivashinsky and complex Ginzburg-Landau, is confined to an inertial manifold that is everywhere locally spanned by a finite number of ‘entangled’ modes, dynamically isolated


from the residual set of isolated, transient degrees of freedom. Covariant vectors exhibit an approximate orthogonality between the ‘entangled’ modes and the rest, the ‘isolated’ modes. These results suggest that for a faithful numerical integration of dissipative PDEs, a finite number of entangled modes should suffice, and that increasing the dimensionality beyond that merely increases the number of isolated modes, with no effect on the long-time dynamics. This work has been made possible by advances in algorithms for the computation of large numbers of ‘covariant vectors’ [14, 15, 24, 30, 33, 39]. While these studies offer strong evidence for the finite dimensionality of inertial manifolds of dissipative flows, they are based on numerical simulations of long ergodic trajectories and yield no intuition about the geometry of the attractor. That is attained by studying the hierarchies of unstable periodic orbits, invariant solutions which, together with their Floquet vectors, provide an effective description of both the local hyperbolicity and the global geometry of an attractor embedded in a high-dimensional state space. Motivated by the above studies of covariant vectors, we formulate in this paper a periodic eigendecomposition algorithm suited to the accurate computation of Floquet vectors of unstable periodic orbits.

2.1. Linear stability. Now we turn to the definition of Floquet exponents and Floquet vectors. Let the flow of an autonomous continuous-time system be described by $\dot{x} = v(x)$, $x \in \mathbb{R}^n$, and let the time-forward trajectory starting from $x_0$ be $x(t) = f^t(x_0)$. In the linear approximation, the deformation of an infinitesimal neighborhood of $x(t)$ (the dynamics in tangent space) is governed by the Jacobian matrix (fundamental matrix), $\delta x(x_0, t) = J^t(x_0)\, \delta x(x_0, 0)$, where $J^t(x_0) = J^{t-t_0}(x_0, t_0) = \partial f^t(x_0)/\partial x_0$. The Jacobian matrix satisfies the semi-group multiplicative property (chain rule) along an orbit,

(2.1)  $J^{t-t_0}(x(t_0), t_0) = J^{t-t_1}(x(t_1), t_1)\, J^{t_1-t_0}(x(t_0), t_0)\,.$

For a periodic point $x$ on an orbit $p$ of period $T_p$, $J_p = J^{T_p}(x)$ is called the Floquet matrix (monodromy matrix) and its eigenvalues the Floquet multipliers $\Lambda_j$. A Floquet multiplier is a dimensionless ratio of the final/initial perturbation along the $j$th eigen-direction. It is an intrinsic, local property of a smooth flow, invariant under all smooth coordinate transformations. The associated Floquet vectors $e_j(x)$, $J_p\, e_j = \Lambda_j e_j$, define the invariant directions of the tangent space at the periodic point $x = x(t) \in p$. Evolving a small initial perturbation aligned with a Floquet direction generates the corresponding unstable manifold along the periodic orbit. Floquet multipliers are either real, $\Lambda_j = \sigma_j |\Lambda_j|$, $\sigma_j \in \{1, -1\}$, or form complex pairs, $\{\Lambda_j, \Lambda_{j+1}\} = \{|\Lambda_j| \exp(i\theta_j), |\Lambda_j| \exp(-i\theta_j)\}$, $0 < \theta_j < \pi$. The real parts of the Floquet exponents, $\mu_j = (\ln |\Lambda_j|)/T_p$, describe the mean contraction or expansion rates per one period of the orbit. The Jacobian matrix is naively obtained numerically by integrating the stability matrix

(2.2)  $\dfrac{dJ^t}{dt} = A(x)\, J^t\,, \quad \text{with } A(x) = \dfrac{\partial v(x)}{\partial x}$

along the orbit. However, it is almost certain that this process will overflow or underflow at an exponential rate as the system evolves, or that the resulting Jacobian will be highly ill-conditioned. Thus, accurate calculation of expansion rates is not trivial for nonlinear systems, especially those that evolve in a high-dimensional space. In such cases, the expansion/contraction rates


can easily range over many orders of magnitude, which makes formulating an effective algorithm a challenge. However, the semi-group property (2.1) enables us to factorize the Jacobian matrix into a product of short-time matrices with matrix elements of comparable magnitudes. The problem is thus reduced to calculating the eigenvalues of the product of a sequence of matrices.
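As a minimal illustration of this factorization (assuming nothing beyond the chain rule (2.1)), consider a linear flow $\dot{x} = Ax$ with constant diagonal $A$, for which $J^t = e^{At}$ is known in closed form; the product of short-time pieces reproduces the full-time Jacobian. The rates, total time, and step count below are illustrative:

```python
import math

def jacobian(lam, t):
    """Jacobian of the linear flow dx/dt = diag(lam) x: J^t = diag(e^{lam_i t}).
    Only the diagonal entries are stored."""
    return [math.exp(l * t) for l in lam]

lam = [0.7, -1.3]      # one expanding, one contracting rate (illustrative)
T, m = 10.0, 50        # total time, number of short-time pieces
dt = T / m

# Multiply the m short-time diagonal Jacobians piece by piece, as in (2.1);
# each factor has entries of modest magnitude.
prod = [1.0, 1.0]
for _ in range(m):
    piece = jacobian(lam, dt)
    prod = [p * q for p, q in zip(prod, piece)]

full = jacobian(lam, T)    # full-time Jacobian, computed directly
print(prod, full)          # the two agree to floating-point accuracy
```

For a genuinely nonlinear flow the short-time Jacobians would of course come from integrating (2.2), but the bookkeeping of the product is the same.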

2.2. Covariant vectors. The multiplicative ergodic theorem [29, 32] says that the forward and backward Oseledets matrices

(2.3)  $\Lambda^{\pm}(x) := \lim_{t \to \pm\infty} [J^t(x)^{\top} J^t(x)]^{1/2t}$

both exist for an invertible dynamical system equipped with an invariant measure. The eigenvalues are $e^{\lambda_1^{\pm}(x)} < \cdots < e^{\lambda_s^{\pm}(x)}$, where $\lambda_i^{\pm}(x)$ are the Lyapunov exponents (characteristic exponents) and $s$ is the total number of distinct exponents ($s \le n$). For an ergodic system, Lyapunov exponents are the same almost everywhere, and $\lambda_i^{+}(x) = -\lambda_{s-i+1}^{-}(x) = \lambda_i$. The corresponding eigenspaces $U_1^{\pm}(x), \cdots, U_s^{\pm}(x)$ can be used to construct the forward and backward invariant subspaces: $V_i^{+}(x) = U_1^{+}(x) + \cdots + U_i^{+}(x)$, $V_i^{-}(x) = U_s^{-}(x) + \cdots + U_{s-i+1}^{-}(x)$. So the intersections $W_i(x) = V_i^{+}(x) \cap V_i^{-}(x)$ are dynamically forward and backward invariant: $J^{\pm t}(x)\, W_i(x) \to W_i(f^{\pm t}(x))$, $i = 1, 2, \cdots, s$. The expansion rate in the invariant subspace $W_i(x)$ is given by the corresponding Lyapunov exponent,

(2.4)  $\lim_{t \to \pm\infty} \frac{1}{|t|} \ln \left\| J^t(x)\, u \right\| = \lim_{t \to \pm\infty} \frac{1}{|t|} \ln \left\| [J^t(x)^{\top} J^t(x)]^{1/2} u \right\| = \pm\lambda_i\,, \quad u \in W_i(x)\,.$

If a Lyapunov exponent has degeneracy one, the corresponding subspace $W_i(x)$ reduces to a vector, called a covariant vector. For periodic orbits, these $\lambda_i$ (evaluated numerically as $t \to \infty$ limits of many repeats of the prime period $T$) coincide with the real parts of the Floquet exponents (computed in one period of the orbit). The subspace $W_i(x)$ coincides with a Floquet vector or, if there is degeneracy, a subspace spanned by Floquet vectors.

The reorthonormalization procedure formulated by Benettin et al. [2] is the standard way to calculate the full spectrum of Lyapunov exponents, and it has been shown [10] that the orthogonal vectors produced at the end of the calculation converge to $U_i^{-}$, the eigenvectors of $\Lambda^{-}(x)$, called the GS vectors (backward Lyapunov vectors). Based on this technique, Wolf et al. [39] and Ginelli et al. [14] invented independent methods to recover covariant vectors from GS vectors. Here we should emphasize that GS vectors are not invariant. Except for the leading one, they all depend on the specific inner product imposed by the dynamics. Also, the local expansion rates of covariant vectors are not identical to the local expansion rates of GS vectors. Specifically, for periodic orbits, Floquet vectors depend on no norm, and map forward and backward as $e_j \to J\, e_j$ under time evolution. In contrast, the linearized dynamics does not transport GS vectors into the tangent space computed further downstream. For a more detailed comparison, see [24, 41].

2.3. Covariant vectors algorithm. Here we briefly introduce the method used by Ginelli et al. to extract covariant vectors from GS vectors. The setup is the same as for computing Lyapunov exponents: we follow a long ergodic trajectory, and integrate the linearized dynamics in tangent space (2.2) with periodic orthonormalization, shown as the first two stages in figure 1. Here, $J_i$ is the Jacobian matrix corresponding to the time interval $(t_i, t_{i+1})$, and the diagonal


[Figure 1: Four stages of the covariant vectors algorithm. The black line is a part of a long ergodic trajectory, with marked points $x(t_0), x(t_1), x(t_2), x(t_3)$. Stage 1 (forward transient): $J_i Q_i = Q_{i+1} R_i$. Stage 2 (forward, record $Q_i$): $J_i Q_i = Q_{i+1} R_i$. Stage 3 (backward transient): $C_{i-1} = R_i^{-1} C_i$. Stage 4 (backward, record $C_i$): $C_{i-1} = R_i^{-1} C_i$.]

elements of the upper-triangular matrices $R_i$ store the local Lyapunov exponents, whose long-time averages give the Lyapunov exponents of the system. We assume $Q_i$ converges to the GS vectors after stage 1, and start to record $R_i$ in stage 2. The first $m$ GS vectors span the same subspace as the first $m$ covariant vectors, which means $W_i = Q_i C_i$,† with $C_i$ an upper-triangular matrix giving the expansion coefficients of the covariant vectors in the GS basis. So we have $W_i = J_{i-1} Q_{i-1} R_i^{-1} C_i = J_{i-1} W_{i-1} C_{i-1}^{-1} R_i^{-1} C_i$. Since $W_i$ is invariant in the tangent space, we must have $C_{i-1}^{-1} R_i^{-1} C_i = I$, which gives the backward dynamics of the matrix $C_i$: $C_{i-1} = R_i^{-1} C_i$. Ginelli et al. cleverly uncovered this backward dynamics and showed that $C_i$ converges after a sufficient number of iterations (stage 3 in figure 1). The process is continued in stage 4 of figure 1, and the $C_i$ are recorded in this stage. Finally, we obtain the covariant vectors for the trajectory from $x(t_1)$ to $x(t_2)$ in figure 1.

The covariant vectors algorithm was invented to stratify the tangent spaces along an ergodic trajectory, so degeneracy is hard to observe numerically. However, for periodic orbits it is possible that some Floquet vectors form complex conjugate pairs. When this algorithm is applied to periodic orbits, it reduces to a combination of simultaneous iteration and pure power iteration; consequently, complex conjugate pairs cannot be told apart. This means that we need to pay attention to the two-dimensional rotation when checking the convergence of each stage in figure 1. As shown in later sections, a complex conjugate pair of Floquet vectors can be extracted from this converged two-dimensional subspace.

2.4. Periodic Schur decomposition algorithm. The implicit QR algorithm (Francis’s algorithm) is the standard way of solving the eigenproblem of a single matrix in many numerical packages, such as the eig() function in Matlab. Bojanczyk et al. [3] extended the idea to compute the eigenvalues of the product of a sequence of matrices. Later, Kurt Lust [26] described the implementation details and provided the corresponding Fortran code. As stated before, by use of the chain rule (2.1), the Jacobian matrix can be decomposed into a product of short-time Jacobians of the same dimension, so the periodic Schur decomposition is suitable for computing Floquet

†Here, $W_i$ refers to the matrix formed by the individual covariant vectors at step $i$ of the algorithm; it should not be confused with the $i$th covariant vector.


Figure 2. Two stages of the periodic Schur decomposition algorithm, illustrated with three matrices.

exponents, and we think it is worthwhile to introduce this algorithm to the physics community. As illustrated in figure 2, the periodic Schur decomposition proceeds in two stages. First, the sequence of matrices is transformed to Hessenberg-triangular form, in which one matrix has upper-Hessenberg form while the others are upper-triangular, by a series of Householder reflections. The second stage is the iteration of the periodic QR algorithm, which diminishes the sub-diagonal components of the Hessenberg matrix until it becomes quasi-upper-triangular. The convergence of this second stage is guaranteed by the “Implicit Q Theorem” [12, 38]. After the second stage, the matrices in the sequence are all upper-triangular except one, which is quasi-upper-triangular: there are some [2×2] blocks on its diagonal, corresponding to complex eigenvalues. The eigenvalues are then the products of the diagonal elements. However, the periodic Schur decomposition is not enough for extracting the eigenvectors, except the leading one. Kurt Lust claims to have formulated the corresponding Floquet vector algorithm, but to the best of our knowledge, such an algorithm is not present in the literature. Fortunately, Granat et al. [17] propose a method to reorder the diagonal elements after the periodic Schur decomposition. It provides an elegant way to compute Floquet vectors, as we will see in later sections.

3. Description of the problem. Having introduced the underlying physical motivation, let us turn to the definition of the problem. According to (2.1), the Jacobian matrix can be integrated piece by piece along a state space orbit:

$J^t(x_0) = J^{t_m - t_{m-1}}(x(t_{m-1}), t_{m-1}) \cdots J^{t_2 - t_1}(x(t_1), t_1)\, J^{t_1 - t_0}(x(t_0), t_0)$

with $t_0 = 0$, $t_m = t$ and $x_0$ the initial point. For periodic orbits, $x(t_m) = x_0$. The time sequence $t_i$, $i = 1, 2, \cdots, m-1$, is chosen such that the elements of the Jacobian matrix associated with each small time interval have comparable orders of magnitude. For simplicity, we drop all the parameters above and use a bold letter to denote the product:

(3.1)  $\mathbf{J}^{(0)} = J_m J_{m-1} \cdots J_1\,, \qquad J_i \in \mathbb{R}^{n \times n}\,, \quad i = 1, 2, \cdots, m\,.$

This product can be diagonalized if and only if the sum of the dimensions of the eigenspaces of $\mathbf{J}^{(0)}$ is $n$:

(3.2)  $\mathbf{J}^{(0)} = E^{(0)} \Sigma\, (E^{(0)})^{-1}\,,$

where $\Sigma$ is a diagonal matrix which stores $\mathbf{J}^{(0)}$’s eigenvalues (Floquet multipliers), $\{\Lambda_1, \Lambda_2, \cdots, \Lambda_n\}$, and the columns of matrix $E^{(0)}$ are the eigenvectors (Floquet vectors) of $\mathbf{J}^{(0)}$: $E^{(0)} = [e_1^{(0)}, e_2^{(0)}, \cdots, e_n^{(0)}]$.


In this paper all vectors are written in column form, the transpose of $v$ is denoted $v^{\top}$, and the Euclidean ‘dot’ product by $(v^{\top} u)$. The challenge associated with obtaining the diagonalized form (3.2) is the fact that often $\mathbf{J}^{(0)}$ should not be written explicitly, since the integration process (2.2) may overflow or the resulting matrix may be highly ill-conditioned. Floquet multipliers can easily vary over hundreds of orders of magnitude, depending on the system under study and the period of the orbit; therefore all transformations should be applied to the short-time Jacobian matrices $J_i$ individually, instead of working with the full-time $\mathbf{J}^{(0)}$. Also, in order to characterize the geometry along a periodic orbit, not only the Floquet vectors at the initial point are required, but also the sets at each point on the orbit. Therefore, we also desire the eigendecomposition of the cyclic rotations of $\mathbf{J}^{(0)}$: $\mathbf{J}^{(k)} = J_k J_{k-1} \cdots J_1 J_m \cdots J_{k+1}$ for $k = 1, 2, \ldots, m-1$. The eigendecomposition of all $\mathbf{J}^{(k)}$ is called the periodic eigendecomposition of the matrix sequence $J_m, J_{m-1}, \cdots, J_1$.

The process of implementing eigendecomposition (3.2) proceeds in two stages. First, the periodic real Schur form (PRSF) is obtained by a similarity transformation for each $J_i$,

(3.3)  $J_i = Q_i R_i Q_{i-1}^{\top}\,,$

with $Q_i$ orthogonal and $Q_0 = Q_m$. In the case considered here, $R_m$ is quasi-upper triangular with [1×1] and [2×2] blocks on the diagonal, and the remaining $R_i$, $i = 1, 2, \cdots, m-1$, are upper triangular. The existence of PRSF, proved in ref. [3], provides the periodic QR algorithm that implements the periodic Schur decomposition. Defining $R^{(k)} = R_k R_{k-1} \cdots R_1 R_m \cdots R_{k+1}$, we have

(3.4)  $\mathbf{J}^{(k)} = Q_k R^{(k)} Q_k^{\top}\,,$

with the eigenvectors of matrix $\mathbf{J}^{(k)}$ related to the eigenvectors of the quasi-upper triangular matrix $R^{(k)}$ by the orthogonal matrix $Q_k$. $\mathbf{J}^{(k)}$ and $R^{(k)}$ have the same eigenvalues, stored in the [1×1] and [2×2] blocks on the diagonal of $R^{(k)}$, and their eigenvectors are related by $Q_k$, so the second stage concerns the eigendecomposition of $R^{(k)}$. The eigenvector matrix of $R^{(k)}$ has the same structure as $R_m$. We evaluate it by two distinct algorithms: the first is power iteration, while the second relies on solving a periodic Sylvester equation [17].

As all $R^{(k)}$ have the same eigenvalues, and their eigenvectors are related by similarity transformations,

(3.5)  $R^{(k)} = (R_m \cdots R_{k+1})^{-1}\, R^{(0)}\, (R_m \cdots R_{k+1})\,,$

one may be tempted to calculate the eigenvectors of $R^{(0)}$ and obtain the eigenvectors of $R^{(k)}$ by (3.5). The pitfall of this approach is that numerical errors accumulate when multiplying a sequence of upper triangular matrices, especially for large $k$. Therefore, in the second stage of implementing periodic eigendecomposition, iteration is needed for each $R^{(k)}$ if the power iteration method is chosen at this stage. The periodic Sylvester equation bypasses this problem by giving the eigenvectors of all $R^{(k)}$ simultaneously.

Our work illustrates the connection between the different algorithms in the two stages of implementing periodic eigendecomposition, pays attention to the case when eigenvectors appear as complex pairs, and demonstrates that eigenvectors can be obtained directly from the periodic Sylvester equation without restoring PRSF.


4. Stage 1: periodic real Schur form (PRSF). This is the first stage of implementing periodic eigendecomposition. Eq. (3.4) represents the eigenvalues of matrix $\mathbf{J}^{(k)}$ as real eigenvalues on the diagonal, and complex eigenvalue pairs as [2×2] blocks on the diagonal, of $R^{(k)}$. More specifically, if the $i$th eigenvalue is real, it is given by the product of the $i$th diagonal elements of matrices $R_1, R_2, \cdots, R_m$. In practice, the logarithms of the magnitudes of these numbers are added, in order to avoid numerical overflow. If the $i$th and $(i+1)$th eigenvalues form a complex conjugate pair, all [2×2] matrices at position $(i, i+1)$ on the diagonal of $R_1, R_2, \cdots, R_m$ are multiplied, with normalization at each step, and the two complex eigenvalues of the product are obtained. There is no danger of numerical overflow because all these [2×2] matrices are in the same position and, in our applications, their elements are of similar order of magnitude. Sec. 2.4 introduced the periodic Schur decomposition as a means to achieve PRSF. An alternative is the first two stages of the covariant vectors algorithm in sec. 2.3, which reduce to simultaneous iteration for periodic orbits. These two methods are in fact equivalent [37], but their computational complexity differs.

Simultaneous iteration. The basic idea of simultaneous iteration is to implement QR decomposition in the process of power iteration. Assume all Floquet multipliers are real and non-degenerate, and order them by magnitude: $|\Lambda_1| > |\Lambda_2| > \cdots > |\Lambda_n|$, with corresponding normalized Floquet vectors $e_1, e_2, \cdots, e_n$. For simplicity, we here drop the upper indices of these vectors. An arbitrary initial vector $q_1 = \sum_{i=1}^{n} \alpha_i^{(1)} e_i$ will converge to the first Floquet vector $e_1$ after normalization under power iteration by $\mathbf{J}^{(0)}$,

$\lim_{\ell \to \infty} \frac{(\mathbf{J}^{(0)})^{\ell} q_1}{\| \cdot \|} = \bar{q}_1 = e_1\,.$

Here $\| \cdot \|$ denotes the Euclidean norm of the numerator ($\|x\| = \sqrt{x^{\top} x}$). Let $\langle a, b, \cdots, c \rangle$ represent the space spanned by the vectors $a, b, \cdots, c$ in $\mathbb{R}^n$. Another arbitrary vector $q_2$ is then chosen orthogonal to the subspace $\langle \bar{q}_1 \rangle$ by Gram-Schmidt orthonormalization, $q_2 = \sum_{i=2}^{n} \alpha_i^{(2)} [e_i - (\bar{q}_1^{\top} e_i) \bar{q}_1]$. Note that the index starts from $i = 2$ because $\langle \bar{q}_1 \rangle = \langle e_1 \rangle$. The strategy now is to apply power iteration by $\mathbf{J}^{(0)}$ followed by orthonormalization in each iteration,

$\mathbf{J}^{(0)} q_2 = \sum_{i=2}^{n} \alpha_i^{(2)} [\Lambda_i e_i - \Lambda_1 (\bar{q}_1^{\top} e_i) \bar{q}_1] = \sum_{i=2}^{n} \alpha_i^{(2)} \Lambda_i [e_i - (\bar{q}_1^{\top} e_i) \bar{q}_1] + \sum_{i=2}^{n} \alpha_i^{(2)} (\Lambda_i - \Lambda_1) (\bar{q}_1^{\top} e_i) \bar{q}_1\,.$

The second term in the above expression disappears after performing Gram-Schmidt orthonormalization with respect to $\langle \bar{q}_1 \rangle$, and the first term converges to $\bar{q}_2 = e_2 - (\bar{q}_1^{\top} e_2) \bar{q}_1$ (not normalized) after a sufficient number of iterations, because of the decreasing magnitudes of $\Lambda_i$; we also note that $\langle e_1, e_2 \rangle = \langle \bar{q}_1, \bar{q}_2 \rangle$. The same argument can be applied to $q_i$, $i = 3, 4, \cdots, n$ as well. In this way, after a sufficient number of iterations,

$\lim_{\ell \to \infty} (\mathbf{J}^{(0)})^{\ell} [q_1, q_2, \cdots, q_n] \to [\bar{q}_1, \bar{q}_2, \cdots, \bar{q}_n]\,,$

where

$\bar{q}_1 = e_1\,, \quad \bar{q}_2 = \frac{e_2 - (e_2^{\top} \bar{q}_1) \bar{q}_1}{\| \cdot \|}\,, \quad \cdots\,, \quad \bar{q}_n = \frac{e_n - \sum_{i=1}^{n-1} (e_n^{\top} \bar{q}_i) \bar{q}_i}{\| \cdot \|}\,.$


Let matrix $\bar{Q}_0 = [\bar{q}_1, \bar{q}_2, \cdots, \bar{q}_n]$; then we have $\mathbf{J}^{(0)} \bar{Q}_0 = \bar{Q}_0 R^{(0)}$ with $R^{(0)}$ an upper triangular matrix, because $\langle \bar{q}_1, \bar{q}_2, \cdots, \bar{q}_i \rangle = \langle e_1, e_2, \cdots, e_i \rangle$; this is just $\mathbf{J}^{(0)} = \bar{Q}_0 R^{(0)} \bar{Q}_0^{\top}$, the Schur decomposition of $\mathbf{J}^{(0)}$. The diagonal elements of $R^{(0)}$ are the eigenvalues of $\mathbf{J}^{(0)}$ in decreasing order. Numerically, the process described above can be implemented on an arbitrary initial full-rank matrix $Q_0$, followed by QR decomposition at each step, $J_s Q_{s-1} = Q_s R_s$, with $s = 1, 2, 3, \cdots$ and $J_{s+m} = J_s$. After a sufficient number of iterations, $Q_s$ and $R_s$ converge to the $Q_s$ and $R_s$ of (3.3) for $s = 1, 2, \cdots, m$, so we achieve (3.4), the periodic Schur decomposition of $\mathbf{J}^{(k)}$.
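A sketch of the sweep $J_s Q_{s-1} = Q_s R_s$ (our illustration, not the authors' implementation): a hand-rolled Gram-Schmidt QR applied cyclically to a period-two sequence $J_1 = J_2 = A$ with $A$ symmetric, so that the exact multipliers, the squares of $A$'s eigenvalues, are known for comparison. The helper names `qr2` and `matmul` are ours:

```python
import math

def qr2(M):
    """Gram-Schmidt QR of a 2x2 matrix M with independent columns;
    R has a positive diagonal."""
    a = [M[0][0], M[1][0]]
    b = [M[0][1], M[1][1]]
    na = math.hypot(a[0], a[1])
    q1 = [a[0] / na, a[1] / na]
    r12 = q1[0] * b[0] + q1[1] * b[1]
    b2 = [b[0] - r12 * q1[0], b[1] - r12 * q1[1]]
    nb = math.hypot(b2[0], b2[1])
    q2 = [b2[0] / nb, b2[1] / nb]
    return [[q1[0], q2[0]], [q1[1], q2[1]]], [[na, r12], [0.0, nb]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Period-two sequence J_1 = J_2 = A, A symmetric with eigenvalues (5 ± sqrt(5))/2,
# so the Floquet multipliers of the product are their squares. (Illustrative data.)
A = [[3.0, 1.0], [1.0, 2.0]]
Js = [A, A]
Q = [[1.0, 0.0], [0.0, 1.0]]      # arbitrary full-rank (here orthogonal) Q_0
Rs = [None, None]
for sweep in range(200):          # cyclic sweeps J_s Q_{s-1} = Q_s R_s
    for s in range(2):
        Q, Rs[s] = qr2(matmul(Js[s], Q))

# Multipliers from the products of the diagonal elements of R_1, R_2.
mults = sorted(Rs[0][i][i] * Rs[1][i][i] for i in (0, 1))
```

After convergence the products of diagonal entries approach $(15 \pm 5\sqrt{5})/2 \approx 13.09$ and $1.91$, the eigenvalues of $A^2$; in a real application the $J_s$ would be short-time Jacobians and the log-sum bookkeeping of sect. 4 would replace the direct products.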

We have thus demonstrated that simultaneous iteration converges to the PRSF for real non-degenerate eigenvalues. For complex eigenvalue pairs, the algorithm converges in the sense that the subspace spanned by a complex conjugate vector pair converges. So,

$\mathbf{J}^{(0)} \bar{Q}_0 = \bar{Q}_0' R^{(0)} = \bar{Q}_0 D R^{(0)}\,,$

where $D$ is a block-diagonal matrix with diagonal elements $\pm 1$ (corresponding to real eigenvalues) or [2×2] blocks (corresponding to complex eigenvalue pairs). Absorbing $D$ into $R_m$ makes $R_m$ a quasi-upper triangular matrix, and (3.3) still holds.

5. Stage 2: eigenvector algorithms. Upon achieving PRSF, the eigenvectors of $\mathbf{J}^{(k)}$ are related to the eigenvectors of $R^{(k)}$ by the orthogonal matrix $Q_k$ from (3.3), and the eigenvector matrix of $R^{(k)}$ has the same quasi-upper triangular structure as $R_m$. In addition, if we follow the simultaneous iteration method, or implement periodic Schur decomposition without shift, the eigenvalues are ordered by their magnitudes on the diagonal. Power iteration exploiting this property can easily be implemented to generate the eigenvector matrix. This is the basic idea of the first algorithm for generating the eigenvectors of $R^{(k)}$, corresponding to the 3rd and 4th stages of the covariant vectors algorithm in figure 1. Alternatively, the observation that the first eigenvector of $R^{(k)}$ is trivial if it is real, $v_1 = (1, 0, \cdots, 0)^{\top}$, inspires us to reorder the eigenvalues so that the $j$th eigenvalue occupies the first diagonal position of $R^{(k)}$; in this way, the $j$th eigenvector is obtained. For both methods, attention should be paid to complex conjugate eigenvector pairs. In this section, $v_i^{(k)}$ denotes the $i$th eigenvector of $R^{(k)}$, in contrast to $e_i^{(k)}$, the $i$th eigenvector of $\mathbf{J}^{(k)}$; in most cases the upper indices are dropped when no confusion can occur.

5.1. Iteration method. The prerequisite for the iteration method is that all the eigenvalues are ordered in an ascending or descending way by their magnitude on the diagonal of $R^{(k)}$. Assume that they are in descending order, which is the outcome of simultaneous iteration; the diagonal elements of $R^{(k)}$ are then $\Lambda_1, \Lambda_2, \cdots, \Lambda_n$, with magnitudes from large to small. If the $i$th eigenvector of $R^{(k)}$ is real, then it has the form $v_i = (a_1, a_2, \cdots, a_i, 0, \cdots, 0)^\top$. An arbitrary vector whose first $i$ elements are nonzero, $x = (b_1, b_2, \cdots, b_i, 0, \cdots, 0)^\top$, is a linear combination of the first $i$ eigenvectors: $x = \sum_{j=1}^{i} \alpha_j v_j$. Use it as the initial condition for power iteration by $(R^{(k)})^{-1} = R_{k+1}^{-1} \cdots R_m^{-1} R_1^{-1} R_2^{-1} \cdots R_k^{-1}$; after a sufficient number of iterations,
$$
\lim_{\ell \to \infty} \frac{(R^{(k)})^{-\ell} x}{\| (R^{(k)})^{-\ell} x \|} = v_i .
$$
The property used here is that $(R^{(k)})^{-1}$ and $R^{(k)}$ have the same eigenvectors but inverse eigenvalues.
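This inverse power iteration is easy to sketch for $k = 0$. The triangular factors below are fabricated with known diagonals (illustrative data of our choosing, not the paper's orbits), so that the limit can be checked against the known eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
# Upper-triangular factors R_1..R_m whose product R^{(0)} = R_m R_2 R_1 has
# descending diagonal (eigenvalues) 4^3=64, 3^3=27, 2^3=8, 1.
R = [np.triu(rng.standard_normal((n, n)), k=1) + np.diag([4.0, 3.0, 2.0, 1.0])
     for _ in range(m)]

# Target the 2nd eigenvector: start with a vector supported on the first two
# components, repeatedly apply (R^{(0)})^{-1} factor by factor, and normalize.
x = np.zeros(n)
x[:2] = 1.0
for _ in range(60):
    for Rs in reversed(R):            # (R^{(0)})^{-1} = R_1^{-1} R_2^{-1} ... R_m^{-1}
        x = np.linalg.solve(Rs, x)    # GEPP; a triangular solve would suffice
    x /= np.linalg.norm(x)

Rprod = np.linalg.multi_dot(R[::-1])  # R^{(0)}, formed here only for checking
assert np.allclose(Rprod @ x, 27.0 * x)   # x converged to v_2, eigenvalue 27
```

The contamination from $v_1$ shrinks by the factor $|\Lambda_2|/|\Lambda_1| = 27/64$ per sweep, illustrating the ratio-dependent convergence discussed below.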


For a $[2\times2]$ block on the diagonal of $R^{(k)}$, the corresponding complex conjugate eigenvectors form a two-dimensional subspace. Any real vector selected from this subspace will rotate under power iteration. In this case, power iteration still converges, in the sense that the subspace spanned by the complex conjugate eigenvector pair converges. Suppose the $i$th and $(i+1)$th eigenvectors of $R^{(k)}$ form a complex pair. Two arbitrary vectors $x_1$ and $x_2$ whose first $i+1$ elements are nonzero can be written as linear superpositions of the first $i+1$ eigenvectors,
$$
x_{1,2} = \Big( \sum_{j=1}^{i-1} \alpha_j^{(1,2)} v_j \Big) + \alpha_i^{(1,2)} v_i + \big( \alpha_i^{(1,2)} v_i \big)^* ,
$$
where $(*)$ denotes the complex conjugate. As in the real case, the first $i-1$ components above vanish after a sufficient number of iterations. Denote the two vectors at this stage (corresponding to $x_{1,2}$) by $X_1$ and $X_2$, and form the matrix $X = [X_1, X_2]$. The subspace spanned by $X_{1,2}$ does not change, and $X$ is rotated by another iteration,
$$
(5.1) \qquad (R^{(k)})^{-1} X = X' = X C ,
$$
where $C$ is a $[2\times2]$ matrix with two complex conjugate eigenvectors $v_C$ and $(v_C)^*$. Transformation (5.1) relates the eigenvectors of $R^{(k)}$ to those of $C$: $[v_i, (v_i)^*] = X [v_C, (v_C)^*]$. In practice, the matrix $C$ can be computed by QR decomposition: let $X = Q_X R_X$ be the QR decomposition of $X$; then $C = R_X^{-1} Q_X^\top X'$. On the other hand, complex eigenvectors are not uniquely determined, in the sense that $e^{i\theta} v_i$ is also an eigenvector with the same eigenvalue as $v_i$ for an arbitrary angle $\theta$; so when comparing results from different eigenvector algorithms, we need a constraint to fix the phase of a complex eigenvector, such as requiring the first element to be real.

We should note that the performance of power iteration depends on the ratios of the magnitudes of the eigenvalues, so performance is poor for systems with clustered eigenvalues. We expect that proper modifications, such as shifted iteration or inverse iteration, may help improve the performance; such techniques are beyond the scope of this paper.

5.2. Reordering method. Apart from the performance problem with clustered eigenvalues, power iteration has a more severe issue when applied to dynamical systems: it cannot produce the eigenvectors of $R^{(k)}$ for all $k = 0, 1, 2, \cdots, m$ at the same time. Although the eigenvectors of $R^{(k)}$ and $R^{(0)}$ are related by (3.5), it is not advisable, as pointed out above, to evolve the eigenvectors of $R^{(0)}$ in order to obtain the eigenvectors of $R^{(k)}$, because of the noise introduced during this process. Therefore, iteration is needed for each $k = 0, 1, 2, \cdots, m$.

There exists a direct algorithm to obtain the eigenvectors of every $R^{(k)}$ at once, without iteration. The idea is simple: the eigenvector corresponding to the first diagonal element of an upper-triangular matrix is $v_1 = (1, 0, \cdots, 0)^\top$. By reordering the diagonal elements (or $[2\times2]$ blocks) of $R^{(0)}$, we can find any eigenvector by positioning the corresponding eigenvalue in the first diagonal place. Although in our application only reordering of $[1\times1]$ and $[2\times2]$ blocks is needed, we recapitulate here the general case of reordering two adjacent blocks of a quasi-upper triangular matrix, following Granat [17]. Partition $R_i$ as
$$
R_i =
\begin{pmatrix}
R_i^{00} & * & * & * \\
0 & R_i^{11} & R_i^{12} & * \\
0 & 0 & R_i^{22} & * \\
0 & 0 & 0 & R_i^{33}
\end{pmatrix} ,
$$


where $R_i^{00}$, $R_i^{11}$, $R_i^{22}$, $R_i^{33}$ have sizes $[p_0\times p_0]$, $[p_1\times p_1]$, $[p_2\times p_2]$ and $[p_3\times p_3]$ respectively, and $p_0 + p_1 + p_2 + p_3 = n$. In order to exchange the middle two blocks ($R_i^{11}$ and $R_i^{22}$), we construct a non-singular periodic matrix sequence $S_i$, $i = 0, 1, 2, \cdots, m$ with $S_0 = S_m$,
$$
S_i =
\begin{pmatrix}
I_{p_0} & 0 & 0 \\
0 & \hat{S}_i & 0 \\
0 & 0 & I_{p_3}
\end{pmatrix} ,
$$
where $\hat{S}_i$ is a $[(p_1+p_2)\times(p_1+p_2)]$ matrix, such that $S_i$ transforms $R_i$ as follows:
$$
(5.2) \qquad S_i^{-1} R_i S_{i-1} = \tilde{R}_i =
\begin{pmatrix}
R_i^{00} & * & * & * \\
0 & R_i^{22} & 0 & * \\
0 & 0 & R_i^{11} & * \\
0 & 0 & 0 & R_i^{33}
\end{pmatrix} ,
$$
which is
$$
\hat{S}_i^{-1}
\begin{pmatrix}
R_i^{11} & R_i^{12} \\
0 & R_i^{22}
\end{pmatrix}
\hat{S}_{i-1} =
\begin{pmatrix}
R_i^{22} & 0 \\
0 & R_i^{11}
\end{pmatrix} .
$$
The problem is to find appropriate matrices $\hat{S}_i$ satisfying the above condition. Assume $\hat{S}_i$ has the form
$$
\hat{S}_i =
\begin{pmatrix}
X_i & I_{p_1} \\
I_{p_2} & 0
\end{pmatrix} ,
$$
where the matrix $X_i$ has dimension $[p_1\times p_2]$. We obtain the periodic Sylvester equation [17]
$$
(5.3) \qquad R_i^{11} X_{i-1} - X_i R_i^{22} = -R_i^{12} , \quad i = 0, 1, 2, \cdots, m .
$$

The algorithm to find eigenvectors is based on (5.3). If the $i$th eigenvalue of $R^{(k)}$ is real, we only need to exchange the first $[(i-1)\times(i-1)]$ block of $R_k$, $k = 1, 2, \cdots, m$, with its $i$th diagonal element. If the $i$th and $(i+1)$th eigenvalues form a complex pair, then the first $[(i-1)\times(i-1)]$ block and the following $[2\times2]$ block should be exchanged. Therefore $X_i$ in (5.3) has dimension $[p_1\times1]$ or $[p_1\times2]$. In both cases, $p_0 = 0$.

Real eigenvectors. In this case, the matrix $X_i$ is just a column vector, so (5.3) is equivalent to
$$
(5.4) \qquad
\begin{pmatrix}
R_1^{11} & -R_1^{22} I_{p_1} & & & \\
& R_2^{11} & -R_2^{22} I_{p_1} & & \\
& & R_3^{11} & -R_3^{22} I_{p_1} & \\
& & & \ddots & \ddots \\
-R_m^{22} I_{p_1} & & & & R_m^{11}
\end{pmatrix}
\begin{pmatrix}
X_0 \\ X_1 \\ X_2 \\ \vdots \\ X_{m-1}
\end{pmatrix}
=
\begin{pmatrix}
-R_1^{12} \\ -R_2^{12} \\ -R_3^{12} \\ \vdots \\ -R_m^{12}
\end{pmatrix} ,
$$
where $R_i^{22}$ is the $(p_1+1)$th diagonal element of $R_i$. The accuracy of the eigenvectors is determined by the accuracy of solving the sparse linear equation (5.4). In our application to periodic orbits of the one-dimensional Kuramoto-Sivashinsky equation, Gaussian elimination with partial pivoting (GEPP) suffices. For more technical treatments, such as cyclic reduction or preconditioned conjugate gradients, to name a few, see [1, 11, 16].

Now we have all the vectors $X_i$ from solving the periodic Sylvester equation, but how are they related to the eigenvectors? In analogy to $R^{(0)}$, defining $\tilde{R}^{(0)} = \tilde{R}_m \tilde{R}_{m-1} \cdots \tilde{R}_1$, we get $S_m^{-1} R^{(0)} S_m = \tilde{R}^{(0)}$ by (5.2). Since $p_0 = 0$ and $p_2 = 1$ in (5.2), the first eigenvector of $\tilde{R}^{(0)}$, the one corresponding to eigenvalue $\Lambda_{p_1+1}$, is $e = (1, 0, \cdots, 0)^\top$. Before normalization, the corresponding eigenvector of $R^{(0)}$ is
$$
v_{p_1+1}^{(0)} = S_m e = \big[ X_0^\top, 1, 0, 0, \cdots, 0 \big]^\top .
$$

This is the eigenvector of the matrix $R^{(0)} = R_m R_{m-1} \cdots R_1$ in (3.4) for $k = 0$. For $R^{(1)} = R_1 R_m \cdots R_2$, the corresponding periodic Sylvester equation is cyclically rotated one row up, which means $X_1$ is shifted to the first place in the column vector in (5.4); thus the corresponding eigenvector of $R^{(1)}$ is $v_{p_1+1}^{(1)} = [X_1^\top, 1, 0, \cdots, 0]^\top$. The same argument applies to all the following $R^{(k)}$, $k = 2, 3, \cdots, m-1$. In conclusion, the solution of (5.4) contains the eigenvectors of all $R^{(k)}$, $k = 0, 1, \cdots, m-1$. Another benefit of the reordering method is that we can selectively compute the eigenvectors corresponding to specific eigenvalues. This merit is important in high-dimensional nonlinear systems, for which a subset of the Floquet vectors suffices to characterize the dynamics in tangent space; we thus avoid wasting time calculating the remaining transient, uniformly vanishing modes.

Complex eigenvector pairs. As in the real eigenvalue case, we have $p_0 = 0$, but now $p_2 = 2$, so the matrix $X_i$ has dimension $[p_1\times2]$. Using the same notation as ref. [17], let $v(X_i)$ denote the vector representation of $X_i$, with the columns of $X_i$ stacked on top of each other, and let $A \otimes B$ denote the Kronecker product of two matrices, whose $(i,j)$-block element is $a_{ij} B$. Now the periodic Sylvester equation (5.3) is equivalent to
$$
(5.5) \qquad
\begin{pmatrix}
I_2 \otimes R_1^{11} & -(R_1^{22})^\top \otimes I_{p_1} & & & \\
& I_2 \otimes R_2^{11} & -(R_2^{22})^\top \otimes I_{p_1} & & \\
& & I_2 \otimes R_3^{11} & -(R_3^{22})^\top \otimes I_{p_1} & \\
& & & \ddots & \ddots \\
-(R_m^{22})^\top \otimes I_{p_1} & & & & I_2 \otimes R_m^{11}
\end{pmatrix}
\begin{pmatrix}
v(X_0) \\ v(X_1) \\ v(X_2) \\ \vdots \\ v(X_{m-1})
\end{pmatrix}
=
\begin{pmatrix}
-v(R_1^{12}) \\ -v(R_2^{12}) \\ -v(R_3^{12}) \\ \vdots \\ -v(R_m^{12})
\end{pmatrix} .
$$

After switching $R_i^{11}$ and $R_i^{22}$, we can obtain the first two eigenvectors of $\tilde{R}^{(0)}$ by multiplying the first $[2\times2]$ diagonal blocks of the $\tilde{R}_i$: $R^{22} = R_m^{22} R_{m-1}^{22} \cdots R_1^{22}$. Let the eigenvectors of $R^{22}$ be $v$ and $v^*$, of size $[2\times1]$; then the corresponding eigenvectors of $\tilde{R}^{(0)}$ are $e_1 = (v^\top, 0, 0, \cdots, 0)^\top$ and $e_2 = (e_1)^*$ (the additional zeros pad the eigenvectors to length $n$). Therefore, the corresponding eigenvectors of $R^{(0)}$ are
$$
\big[ v_{p_1+1}^{(0)}, \, v_{p_1+2}^{(0)} \big] = S_m [e_1, e_2] =
\begin{pmatrix}
X_0 \\ I_2 \\ 0
\end{pmatrix}
[v, v^*] .
$$
For the other $R^{(k)}$, the same argument as in the real case applies, so we obtain all the complex eigenvector pairs of $R^{(k)}$, $k = 1, 2, \cdots, m$.
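The Kronecker-product form of (5.5) rests on the identity $v(AXB) = (B^\top \otimes A)\,v(X)$, applied term by term to (5.3). A minimal numerical check of that identity, with arbitrary stand-in blocks of illustrative sizes (not tied to any orbit):

```python
import numpy as np

rng = np.random.default_rng(2)
p1 = 3
A = rng.standard_normal((p1, p1))      # stand-in for R11_i
B = rng.standard_normal((2, 2))        # stand-in for a [2x2] block R22_i
X = rng.standard_normal((p1, 2))
vec = lambda M: M.flatten(order='F')   # v(X): stack the columns of X

# Each Sylvester term maps to a Kronecker block exactly as in (5.5):
lhs = vec(A @ X - X @ B)
rhs = (np.kron(np.eye(2), A) - np.kron(B.T, np.eye(p1))) @ vec(X)
assert np.allclose(lhs, rhs)
```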

6. Computational complexity and convergence analysis. In this paper we make no attempt at a strict error analysis of the algorithms presented. However, for practical applications it is important to understand their computational costs. Periodic eigendecomposition is conducted in two stages: (1) the periodic real Schur form, and (2) the determination of all the eigenvectors. In each stage there are two candidate algorithms, so the efficiency of periodic eigendecomposition depends on the specific algorithm chosen in each stage.

The periodic QR algorithm and simultaneous iteration are both effective in achieving PRSF, for real eigenvalues as well as for complex eigenvalue pairs. The periodic QR algorithm consists of two stages. First, the matrix sequence $J_m, J_{m-1}, \cdots, J_1$ is reduced to Hessenberg-triangular form, with $J_{m-1}, \cdots, J_1$ upper triangular and $J_m$ upper Hessenberg. This requires $O(mn)$ Householder reflections, and the computational cost associated with each reflection is $O(n^2)$ if the transformed matrix is calculated implicitly, without forming the Householder matrix [37]; so the overall computational cost of this stage is $O(mn^3)$. The second stage is the periodic QR iteration, a generalization of the standard, $m = 1$, case [37]. $O(mn)$ Givens rotations are performed in each iteration, with an overall computational cost of $O(mn^2)$. Though the computational effort per iteration in the second stage is smaller than in the first stage, the number of iterations in the second stage usually far exceeds the dimension of the matrices involved; in this sense, the second stage is the heavy part of the periodic QR algorithm. On the other hand, simultaneous iteration conducts $m$ QR decompositions, $O(mn^3)$, and $m$ matrix-matrix multiplications, $O(mn^3)$, in each iteration, giving a total computational cost of $O(mn^3)$ per iteration. The convergence of either algorithm depends linearly on the ratios of adjacent eigenvalues of $R^{(0)}$, $|\Lambda_i|/|\Lambda_{i+1}|$, without shift [12]. Therefore the ratio of per-iteration costs is approximately of order $O(mn^3)/O(mn^2) = O(n)$, implying that the periodic QR algorithm is much cheaper than simultaneous iteration if the dimension of the matrices involved is large enough.

The second stage of periodic eigendecomposition is to find all the eigenvectors of $J^{(k)}$ via the quasi-upper triangular matrices $R^{(k)}$. The first candidate is the combination of power iteration and shifted power iteration; the computational cost of one iteration for the $i$th eigenvector is $O(mi^2)$. The second candidate, the reordering method, relies on an effective method for solving the periodic Sylvester equation (5.3). For example, GEPP is suitable for the well-conditioned matrices (5.4) and (5.5), with a computational cost of $O(mn^2)$. On the other hand, the iteration method, as pointed out earlier, cannot produce the eigenvectors of $R^{(k)}$ for all $k = 1, 2, \cdots, m$ accurately at the same time, due to the noise introduced during the transformation process (3.4), especially when the magnitudes of the eigenvalues span a large range. In contrast, the reordering algorithm is not iterative, and it gives all the eigenvectors simultaneously.

In summary, considering computational effort alone, the combination of the periodic QR algorithm and the reordering method is preferable for periodic eigendecomposition.

7. Application to Kuramoto-Sivashinsky equation. Our ultimate goal in implementing periodic eigendecomposition is to analyze the stability of periodic orbits and the associated stable/unstable manifolds in dynamical systems, in the hope of gaining a better understanding of pattern formation and turbulence. As an example, we focus on the one-dimensional Kuramoto-Sivashinsky equation

$$
(7.1) \qquad u_t + \tfrac{1}{2}(u^2)_x + u_{xx} + u_{xxxx} = 0 , \quad x \in [0, L]
$$

on a periodic spatial domain of size $L = 22$, large enough to exhibit complex spatiotemporal dynamics. This equation was formulated independently by Kuramoto in the context of angular phase turbulence in reaction-diffusion systems [25], and by Sivashinsky in the study of hydrodynamic instability in laminar flames [28]. The periodic boundary condition enables us to transform this partial differential equation into a set of ODEs in Fourier space,

$$
(7.2) \qquad \dot{a}_k = (q_k^2 - q_k^4)\, a_k - \frac{i q_k}{2} \sum_{m=-\infty}^{\infty} a_m a_{k-m} ,
$$
where $q_k = 2\pi k/L$, and the coefficients are complex, $a_k = b_k + i c_k$. In our simulations, the discrete Fourier transform is used with $N = 64$ modes ($k = -N/2+1$ up to $N/2$ in (7.2)).
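The linear coefficient $q_k^2 - q_k^4$ in (7.2) already determines which modes can grow: growth requires $q_k < 1$, i.e. $k < L/2\pi \approx 3.5$ for $L = 22$. A quick check, assuming nothing beyond (7.2):

```python
import numpy as np

L = 22.0
k = np.arange(1, 8)
q = 2 * np.pi * k / L
growth = q**2 - q**4                  # linear growth rate of mode k in (7.2)
unstable = k[growth > 0].tolist()     # only modes k = 1, 2, 3 are linearly unstable
```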

Since $u(x,t)$ is real, $a_k(t) = a_{-k}^*(t)$; thus only half of the Fourier modes are independent. As $\dot{a}_0 = 0$ from (7.2), we can set $a_0 = 0$, corresponding to zero mean velocity, without loss of generality. Also, the nonlinear term of $a_{N/2}$ in fact has coefficient $q_{N/2} + q_{-N/2} = 0$ by symmetry [36]; thus $a_{N/2}$ is decoupled from the other modes, and it can be set to zero as well. The number of independent variables is therefore $N - 2$,
$$
(7.3) \qquad u = (b_1, c_1, b_2, c_2, \cdots, b_{N/2-1}, c_{N/2-1})^\top .
$$

This is the ‘state space’ in the discussion that follows. An exponential time-differencing scheme combined with RK4 [6, 21] is implemented to integrate (7.2). The combination of the periodic QR algorithm and the reordering algorithm is used to obtain all exponents and eigenvectors. In addition, Gaussian elimination with partial pivoting (GEPP) is stable for (5.4) and (5.5) if the time step in the Kuramoto-Sivashinsky integrator is not too large, as GEPP only uses addition and subtraction operations.
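For readers who want to reproduce the time stepping, here is a minimal ETDRK4 integrator for (7.2) in the style of the Kassam-Trefethen recipe [6, 21]. This is a sketch: the initial condition, step size, and step count are our own illustrative choices, not the authors' production code, and the $\varphi$-function coefficients are evaluated by the standard complex contour averaging:

```python
import numpy as np

N, L = 64, 22.0
x = L * np.arange(N) / N
u = 0.1 * np.cos(2 * np.pi * x / L) * (1 + np.sin(2 * np.pi * x / L))  # smooth start
v = np.fft.fft(u)

k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers 0..N/2-1, -N/2..-1
k[N // 2] = 0                        # drop the Nyquist mode, cf. a_{N/2} = 0 above
q = 2 * np.pi * k / L
Lin = q**2 - q**4                    # linear part of (7.2)
g = -0.5j * q                        # nonlinear part: -(i q_k / 2) F[u^2]

h = 0.25                             # time step chosen for this sketch
E, E2 = np.exp(h * Lin), np.exp(h * Lin / 2)
M = 16                               # contour points for the phi-functions
r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
LR = h * Lin[:, None] + r[None, :]
Q  = h * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
f1 = h * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3*LR + LR**2)) / LR**3, axis=1))
f2 = h * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
f3 = h * np.real(np.mean((-4 - 3*LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))

nl = lambda w: g * np.fft.fft(np.real(np.fft.ifft(w))**2)
for _ in range(100):                 # integrate to t = 25
    Nv = nl(v)
    a = E2 * v + Q * Nv; Na = nl(a)
    b = E2 * v + Q * Na; Nb = nl(b)
    c = E2 * a + Q * (2 * Nb - Nv); Nc = nl(c)
    v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3

u = np.real(np.fft.ifft(v))          # solution well onto the chaotic attractor
```

Because the stiff linear operator is treated exactly, the step size is limited only by the mild nonlinear dynamics, which is what makes the short-time Jacobian segments used below cheap to accumulate.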

The Kuramoto-Sivashinsky equation is equivariant under reflection and space translation: $-u(-x,t)$ and $u(x+l,t)$ are also solutions if $u(x,t)$ is a solution. This corresponds to equivariance of (7.3) under the group operations $R = \mathrm{diag}(-1, 1, -1, 1, \cdots)$ and $g(l) = \mathrm{diag}(r_1, r_2, \cdots, r_{N/2-1})$, where
$$
r_k =
\begin{pmatrix}
\cos(q_k l) & -\sin(q_k l) \\
\sin(q_k l) & \cos(q_k l)
\end{pmatrix} ,
\quad k = 1, 2, \cdots, N/2-1 .
$$

Based on these symmetries, there are three types of invariant orbits in the Kuramoto-Sivashinsky system: periodic orbits in the $b_k = 0$ invariant antisymmetric subspace, preperiodic orbits which are self-dual under reflection, and relative periodic orbits with a shift along the group orbit after one period. As shown in ref. [8], the first type is absent for domains as small as $L = 22$, and thus we focus on the last two types of orbits. For preperiodic orbits, $u(0) = R u(T_p)$, we only need to evolve the system for a prime period $T_p$, which is half of the whole period, with the Floquet matrix given by $J_p(u) = R J^{T_p}(u)$. A relative periodic orbit, $u(0) = g_p u(T_p)$, returns after one period $T_p$ to the initial state upon the group transformation $g_p = g(l_p)$, so the corresponding Floquet matrix is $J_p(u) = g_p J^{T_p}(u)$. Here we show how periodic eigendecomposition works by applying it to one representative preperiodic orbit, $pp_{10.25}$, and two relative periodic orbits, $rp_{16.31}$ and $rp_{57.60}$ (the subscript indicates the period of the orbit), described in ref. [8].

Figure 3. (Color online) (a) Preperiodic orbit $pp_{10.25}$ and (b) relative periodic orbit $rp_{16.31}$, for total evolution times $4T_{pp}$ and $2T_{rp}$, respectively. The phase shift for $rp_{16.31}$ after one prime period is $\simeq -2.863$. (c) The real parts of the Floquet exponents, paired for a given $k$ as $(k, \mu_{2k-1})$ and $(k, \mu_{2k})$, for $pp_{10.25}$ with truncation number $N = 64$. The dashed line (green) is $q_k^2 - q_k^4$. The inset is a magnification of the region containing the 8 leading entangled modes.

Figure 3 shows the time evolution of $pp_{10.25}$ and $rp_{16.31}$, and the Floquet spectrum of $pp_{10.25}$. At each repeat of the prime period, $pp_{10.25}$ is invariant under reflection about $x = L/2$, figure 3(a), while $rp_{16.31}$ drifts along the $x$ direction as time goes on, figure 3(b). Since $pp_{10.25}$ and $rp_{16.31}$ are both time invariant and equivariant under the SO(2) group transformation $g(l)$, there should be two marginal Floquet exponents, corresponding to the velocity field $v(x)$ and the group tangent $t(x) = Tx$ respectively, where $T$ is the generator of SO(2) rotation:

$$
T = \mathrm{diag}(t_1, t_2, \cdots, t_{N/2-1}) , \quad
t_k =
\begin{pmatrix}
0 & -q_k \\
q_k & 0
\end{pmatrix} .
$$

Table 1 shows that the 2nd and 3rd exponents of $rp_{16.31}$, and the 3rd and 4th exponents of $pp_{10.25}$, are marginal, vanishing to within $10^{-12}$; the error in the closure of the orbit itself also contributes to this residual inaccuracy. Table 1 and figure 3(c) show that periodic Schur decomposition is capable of resolving Floquet multipliers differing by thousands of orders of magnitude: for $N = 64$, the smallest Floquet multiplier of $pp_{10.25}$ is $|\Lambda_{62}| \simeq e^{-6080.4 \times 10.25}$. This could not be achieved by computing a single Jacobian matrix for the whole orbit. Figure 3(c) and table 1 also show that for $k \geq 9$ the Floquet exponents almost lie on the curve $q_k^2 - q_k^4$. This is a consequence of the strong dissipation caused by the linear term of (7.2) for large Fourier mode indices. This feature is observed for all the other periodic orbits we have experimented with, and it is a good indicator of the existence of a finite-dimensional inertial manifold. The Floquet exponents also appear in pairs for large indices, simply because the real and imaginary parts of high Fourier modes have similar contraction rates by (7.2).

Table 1. The first 10 and last four Floquet exponents, and Floquet multiplier phases, $\Lambda_i = \exp(T\mu_i \pm i\theta_i)$, for the orbits $pp_{10.25}$ and $rp_{16.31}$, respectively. The $\theta_i$ column lists either the phase, if the Floquet multiplier is complex, or '-1' if the multiplier is real but inverse hyperbolic. Truncation number $N = 64$. The 8 leading exponents correspond to the entangled modes: note the sharp drop in the value of the 9th and subsequent exponents, corresponding to the isolated modes.

            pp10.25                           rp16.31
    i       mu_i         theta_i       i       mu_i         theta_i
   1,2     0.033209     ±2.0079        1      0.327913
    3     -4.1096e-13                  2      2.8679e-12
    4     -3.3524e-14     -1           3      2.3559e-13
    5     -0.21637                     4     -0.13214        -1
   6,7    -0.26524      ±2.6205       5,6    -0.28597      ±2.7724
    8     -0.33073        -1           7     -0.32821        -1
    9     -1.9605                      8     -0.36241
   10     -1.9676         -1          9,10   -1.9617       ±2.2411
   ...       ...          ...         ...      ...           ...
   59     -5313.6         -1          59     -5314.4
   60     -5317.6                     60     -5317.7
   61     -6051.8         -1          61     -6059.2
   62     -6080.4                     62     -6072.9

Figure 4. (Color online) (a)-(d): the 1st (real part), 5th, 10th and 30th Floquet vectors along $pp_{10.25}$ for one prime period. (e)-(h): the 1st, 4th (real part), 10th (imaginary part) and 30th (imaginary part) Floquet vectors along $rp_{16.31}$ for one prime period. Axes and color scale are the same as in figure 3.

Figure 4 shows a few selected Floquet vectors along $pp_{10.25}$ and $rp_{16.31}$, each for one prime period. We remind the reader that the Floquet vectors over a whole period are obtained by solving (5.4) or (5.5), not by evolving the Floquet vectors from one time instant to later ones, because that evolution procedure is not numerically stable for Floquet vectors. We can see that the leading few Floquet vectors have turbulent structures containing only long waves, for both $pp_{10.25}$ and $rp_{16.31}$, but for Floquet vectors corresponding to strongly contracting directions, the configurations are pure sinusoidal curves. The power spectra in figure 5 demonstrate this point too. The leading 8 Floquet vectors have large components in the first 5 Fourier modes, and their spectra are entangled with each other, while the remaining Floquet vectors are almost concentrated on a single Fourier mode and are decoupled from each other; more specifically, the $i$th Floquet vector with $i \geq 9$ peaks at the $\lceil i/2 \rceil$th mode in figure 5. Takeuchi et al. [34, 42] observe similar features in covariant vectors along ergodic trajectories and, by measuring the tangency between these two groups of covariant vectors, reach a reasonable conclusion about the dimension of the inertial manifold of the Kuramoto-Sivashinsky and complex Ginzburg-Landau equations. We therefore anticipate that analyzing the tangencies of Floquet vectors along different periodic orbits can lead to the same conclusion; this is the subject of our future research.

Figure 5. (Color online) The power spectra of the first 30 Floquet vectors for $pp_{10.25}$ (left) and $rp_{16.31}$ (right) at $t = 0$. Red lines correspond to the leading 8 Floquet vectors, while the blue lines correspond to the remaining 22 Floquet vectors, with the $i$th one localized at index $i$. Power is defined as the modulus squared of the Fourier coefficients of the Floquet vectors. The x-axis is labeled by the Fourier mode indices. Only the $k > 0$ part is shown; the negative $k$ follow by reflection. For complex Floquet vectors, the power spectra of the real and imaginary parts are calculated separately. Since almost all contracting Floquet vectors of $rp_{16.31}$ form complex conjugate pairs, their power peaks are far less than 1.

We have noted above that the group property of Jacobian matrix multiplication (2.1) enables us to factorize $J^{(k)}$ into a product of short-time matrices with matrix elements of comparable magnitudes. In practice, caution should be exercised in determining the optimal number of time increments into which the orbit is divided. If the number of time increments $m$ is too large, then, by the estimates of sect. 6, the computation may be too costly. If $m$ is too small, then the elements of the Jacobian matrix corresponding to a given time increment may range over too many orders of magnitude, causing periodic eigendecomposition to fail to resolve the most contracting Floquet vector along the orbit. One might also vary the time step according to the velocity at a given point on the orbit. Here we determined satisfactory $m$'s by the numerical experimentation shown in figure 6. Since a larger time step means fewer time increments of the orbit, a very small time step ($h_0 \approx 0.001$) is chosen as the base case, and it is then increased to test whether the corresponding Floquet exponents change substantially. As shown in figure 6(a), up to $6h_0$ the whole Floquet spectrum varies within $10^{-12}$ for both $pp_{10.25}$ and $rp_{57.60}$. These two orbits represent two different types of invariant solutions, with short and long periods, so we presume that a time step of $6h_0$ is good enough for other short or long orbits too. On the other hand, if only the first few Floquet exponents are desired, the time step can be increased further. As shown in figure 6(b), if we are only interested in the first 35 Floquet exponents, then a time step of $30h_0$ is small enough. In high-dimensional nonlinear systems we are often not interested in the dynamics along the most contracting directions, because they are usually decoupled from the physical modes and shed little insight into the system's properties. A large time step can therefore be used in order to save time.

Figure 6. (Color online) Relative error of the real parts of the Floquet exponents for different time steps with which the Floquet matrix is integrated. The two orbits $pp_{10.25}$ and $rp_{57.60}$ are used as examples, with the base case $h_0 \approx 0.001$. (a) The maximal relative difference of the whole set of Floquet exponents with increasing time step (decreasing number of ingredient segments of the orbit). (b) The same, restricted to the first 35 Floquet exponents.

The two marginal directions have a simple geometrical interpretation and provide a metric by which to measure the convergence of periodic eigendecomposition. Figure 7(a) depicts the two marginal vectors of $pp_{10.25}$ projected onto the subspace spanned by $[a_1, b_1, a_2]$ (the real and imaginary parts of the first Fourier mode, and the real part of the second). The first marginal eigendirection (the 3rd Floquet vector in table 1) is aligned with the velocity field along the orbit, and the second marginal direction (the 4th Floquet vector) is aligned with the group tangent. The numerical difference between the unit vectors along these two marginal directions and the corresponding physical directions is shown in figure 7(b). The difference is below $10^{-9}$ and $10^{-11}$ for the two directions respectively, which demonstrates the accuracy of the algorithm. As shown in table 1, for a preperiodic orbit such as $pp_{10.25}$, the trajectory tangent and the group tangent have eigenvalues $+1$ and $-1$ respectively, and are thus distinct. However, the two marginal directions are degenerate for a relative periodic orbit such as $rp_{16.31}$; the two directions are then not fixed, but the plane that they span is uniquely determined. Figure 8 shows that the velocity field and the group tangent along the orbit $rp_{16.31}$ indeed lie in the subspace spanned by these two marginal directions.

Figure 7. (Color online) Marginal vectors and the associated errors. (a) $pp_{10.25}$ in one period, projected onto the $[a_1, b_1, a_2]$ subspace (blue curve), and its counterpart (green line) generated by a small group transformation $g(\ell)$, here arbitrarily set to $\ell = L/(20\pi)$. Magenta and black arrows represent the first and second marginal Floquet vectors $e_3(x)$ and $e_4(x)$ along the prime orbit. (b) The solid red curve is the magnitude of the difference between $e_3(x)$ and the velocity field $v(x)$ along the orbit; the blue dashed curve is the difference between $e_4(x)$ and the group tangent $t(x) = Tx$.

8. Conclusion and future work. In this paper, as well as in the forthcoming publication, ref. [9], we use the one-dimensional Kuramoto-Sivashinsky system to illustrate the effectiveness, and the potential wide applicability, of periodic eigendecomposition for stability analysis in dissipative nonlinear systems.

On a longer time scale, we hope to apply the method to the study of orbits of much longer periods, as well as to high-dimensional, numerically exact, time-recurrent unstable solutions of the full Navier-Stokes equations. Currently, up to 30 Floquet vectors of plane Couette invariant solutions can be computed [13], but many more will be needed, and to a higher accuracy, in order to determine the physical dimension of a turbulent Navier-Stokes flow. We are nowhere near there yet; we anticipate the need to optimize and parallelize such algorithms. There is also an opportunity to apply periodic eigendecomposition to Hamiltonian systems; additional tests are needed to demonstrate its ability to preserve the symmetries of the Floquet spectrum imposed by Hamiltonian dynamics.


Figure 8. (Color online) Projection of the relative periodic orbit $rp_{16.31}$ onto the Fourier mode subspace $[b_2, c_2, b_3]$ (red curve). The dotted curve (lime) is the group orbit connecting the initial and final points. Blue and magenta arrows represent the velocity field and the group tangent along the orbit, respectively. Two-dimensional planes (cyan) are spanned by the two marginal Floquet vectors at each point (yellow) along the orbit.

Acknowledgments. We are grateful to L. Dieci for introducing us to complex periodic Schur decomposition, to K.A. Takeuchi for providing detailed documentation of his previous work on covariant vectors, to R.L. Davidchack for his database of periodic orbits of the Kuramoto-Sivashinsky equation, which is the basis of our numerical experiments, and to N.B. Budanur, E. Siminos, M.M. Farazmand and H. Chate for many spirited exchanges. X.D. was supported by NSF grant DMS-1028133. P.C. thanks G. Robinson, Jr. for support.

REFERENCES

[1] P. Amodio, J. R. Cash, G. Roussos, R. W. Wright, G. Fairweather, I. Gladwell, G. L. Kraut,

and M. Paprzycki, Almost block diagonal linear systems: sequential and parallel solution techniques,and applications, Numer. Linear Algebra Appl., 7 (2000), pp. 275–317.

[2] G. Benettin, L. Galgani, A. Giorgilli, and J. M. Strelcyn, Lyapunov characteristic exponents forsmooth dynamical systems; a method for computing all of them. Part 1: theory, Meccanica, 15 (1980),pp. 9–20.

[3] A. Bojanczyk, G. H. Golub, and P. V. Dooren, The periodic Schur decomposition. Algorithms andapplications, in Proc. SPIE Conference, vol. 1770, 1992, pp. 31–42.

[4] H. Bosetti, H. A. Posch, C. Dellago, and W. G. Hoover, Time-reversal symmetry and covariantlyapunov vectors for simple particle models in and out of thermal equilibrium, Phys. Rev. E, 82 (2010),pp. 1–10.

[5] F. Christiansen, P. Cvitanovic, and V. Putkaradze, Spatiotemporal chaos in terms of unstablerecurrent patterns, Nonlinearity, 10 (1997), pp. 55–70. arXiv:chao-dyn/9606016.

[6] S. M. Cox and P. C. Matthews, Exponential time differencing for stiff systems, J. Comput. Phys.,176 (2002), pp. 430–455.

PERIODIC EIGENDECOMPOSITION 21

[7] P. Cvitanovic, R. Artuso, R. Mainieri, G. Tanner, and G. Vattay, Chaos: Classical and Quantum,Niels Bohr Inst., Copenhagen, 2014. ChaosBook.org.

[8] P. Cvitanovic, R. L. Davidchack, and E. Siminos, On the state space geometry of theKuramoto-Sivashinsky flow in a periodic domain, SIAM J. Appl. Dyn. Syst., 9 (2010), pp. 1–33.arXiv:0709.2944.

[9] X. Ding, P. Cvitanovic, K. A. Takeuchi, H. Chate, E. Siminos, and R. L. Davidchack, Deter-mination of the physical dimension of a dissipative system by periodic orbits. In preparation, 2015.

[10] S. V. Ershov and A. B. Potapov, On the concept of stationary Lyapunov basis, Physica D, 118 (1998),pp. 167–198.

[11] G. Fairweather and I. Gladwell, Algorithms for almost block diagonal linear systems, SIAM Rev.,46 (2004), pp. 49–58.

[12] J. G. F. Francis, The QR transformation: A unitary analogue to the LR transformation. I, Comput.J., 4 (1961), pp. 265–271.

[13] J. F. Gibson, J. Halcrow, and P. Cvitanovic, Visualizing the geometry of state-space in plane Couetteflow, J. Fluid Mech., 611 (2008), pp. 107–130. arXiv:0705.3957.

[14] F. Ginelli, H. Chate, R. Livi, and A. Politi, Covariant Lyapunov vectors, J. Phys. A, 46 (2013),p. 254005. arXiv:1212.3961.

[15] F. Ginelli, P. Poggi, A. Turchi, H. Chate, R. Livi, and A. Politi, Characterizing dynamics withcovariant Lyapunov vectors, Phys. Rev. Lett., 99 (2007), p. 130601. arXiv:0706.0510.

[16] R. Granat, I. Jonsson, and B. Kagstrom, Recursive blocked algorithms for solving periodic triangularSylvester-type matrix equations, in Proc. 8th Intern. Conf. Applied Parallel Computing: State of theArt in Scientific Computing, PARA’06, 2007, pp. 531–539.

[17] R. Granat and B. Kagstrom, Direct eigenvalue reordering in a product of matrices in periodic Schurform, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 285–300.

[18] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer, New York, 1983.

[19] M. Inubushi, M. U. Kobayashi, S.-i. Takehiro, and M. Yamada, Covariant Lyapunov analysis of chaotic Kolmogorov flows, Phys. Rev. E, 85 (2012), p. 016331.

[20] M. Jolly, R. Rosa, and R. Temam, Evaluating the dimension of an inertial manifold for the Kuramoto-Sivashinsky equation, Advances in Differential Equations, 5 (2000), pp. 31–66.

[21] A.-K. Kassam and L. N. Trefethen, Fourth-order time stepping for stiff PDEs, SIAM J. Sci. Comput., 26 (2005), pp. 1214–1233.

[22] M. Krupa, Bifurcations of relative equilibria, SIAM J. Math. Anal., 21 (1990), pp. 1453–1486.

[23] P. V. Kuptsov, Violation of hyperbolicity via unstable dimension variability in a chain with local hyperbolic chaotic attractors, J. Phys. A, 46 (2013), p. 254016.

[24] P. V. Kuptsov and U. Parlitz, Theory and computation of covariant Lyapunov vectors, J. Nonlin. Sci., 22 (2012), pp. 727–762. arXiv:1105.5228.

[25] Y. Kuramoto and T. Tsuzuki, On the formation of dissipative structures in reaction-diffusion systems, Prog. Theor. Phys., 54 (1975), pp. 687–699.

[26] K. Lust, Improved numerical Floquet multipliers, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 11 (2001), pp. 2389–2410.

[27] A. Lyapunov, Problème général de la stabilité du mouvement, Ann. of Math. Studies, 17 (1977). Russian original Kharkow, 1892.

[28] D. M. Michelson and G. I. Sivashinsky, Nonlinear analysis of hydrodynamic instability in laminar flames - II. Numerical experiments, Acta Astronaut., 4 (1977), pp. 1207–1221.

[29] V. I. Oseledec, A multiplicative ergodic theorem. Liapunov characteristic numbers for dynamical systems, Trans. Moscow Math. Soc., 19 (1968), pp. 197–221.

[30] A. Politi, F. Ginelli, S. Yanchuk, and Y. Maistrenko, From synchronization to Lyapunov exponents and back, Physica D, 224 (2006), p. 90. arXiv:nlin/0605012.

[31] J. C. Robinson, Inertial manifolds for the Kuramoto-Sivashinsky equation, Phys. Lett. A, 184 (1994), pp. 190–193.

[32] D. Ruelle, Ergodic theory of differentiable dynamical systems, Publ. Math. IHES, 50 (1979), pp. 27–58.

[33] K. A. Takeuchi, F. Ginelli, and H. Chaté, Lyapunov analysis captures the collective dynamics of large chaotic systems, Phys. Rev. Lett., 103 (2009), p. 154103. arXiv:0907.4298.


[34] K. A. Takeuchi, H. Yang, F. Ginelli, G. Radons, and H. Chaté, Hyperbolic decoupling of tangent space and effective dimension of dissipative systems, Phys. Rev. E, 84 (2011), p. 046214. arXiv:1107.2567.

[35] R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer, New York, 1988.

[36] L. N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000.

[37] L. N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1997.

[38] D. S. Watkins, Francis's algorithm, Amer. Math. Monthly, 118 (2011), pp. 387–403.

[39] C. L. Wolfe and R. M. Samelson, An efficient method for recovering Lyapunov vectors from singular vectors, Tellus A, 59 (2007), pp. 355–366.

[40] H. Yang and G. Radons, Geometry of inertial manifold probed via Lyapunov projection method, Phys. Rev. Lett., 108 (2012), p. 154101.

[41] H.-l. Yang and G. Radons, Comparison between covariant and orthogonal Lyapunov vectors, Phys. Rev. E, 82 (2010), p. 046204. arXiv:1008.1941.

[42] H.-l. Yang, K. A. Takeuchi, F. Ginelli, H. Chaté, and G. Radons, Hyperbolicity and the effective dimension of spatially-extended dissipative systems, Phys. Rev. Lett., 102 (2009), p. 074102. arXiv:0807.5073.