Arch. Math., Vol. 40, 332--338 (1983) 0003-889X/83/4004-0332 $ 2.90/0 © 1983 Birkhäuser Verlag, Basel

Matrix functions and two-sided linear operator equations

By

HANS-J. RUNCKEL and UWE PITTELKOW

1. Introduction. The main goal of this paper is to determine in Section 3 explicit solutions of linear operator equations whose coefficients are functions of two not necessarily commuting matrices A, B. This involves the following procedure. Let a single linear equation F(λ, Ty, μ, t) = 0 (where T is a linear operator) together with a solution y(t) = y(λ, μ, t) (a complex-valued function), both depending on parameters λ, μ, be given. Then by means of analytic functions of a matrix the system of operator equations F(A, TY, B, t) = 0 is defined, and it is shown that a solution matrix Y(t) is given by Y(t) = y(A, B, t). In Section 4 special equations are treated and generalizations of known results are obtained.

2. Definition and computation of matrix functions. Without proof we state several known formulas defining an analytic function of a matrix. Let A ∈ C^{u×u} be given and let I ∈ C^{u×u} be the identity matrix. Assume that c(A) = 0, where

$$ c(\lambda) = \prod_{i=1}^{k} (\lambda - \lambda_i)^{m_i}, \qquad m_i \ge 1, \qquad m_1 + \cdots + m_k = m . $$

For instance, c(λ) can be the minimal or the characteristic polynomial of A. Then for every complex function f(λ) one defines (assuming that the derivatives of f on the right side exist)

$$ (1)\qquad f(A) := \sum_{i=1}^{k} \sum_{r=0}^{m_i-1} \frac{1}{r!}\, f^{(r)}(\lambda_i)\, C_{ir}, \qquad \text{where } C_{ir} = (A - \lambda_i I)^{r} C_{i0} $$

and

$$ C_{i0} = p_i(A)\, q_i(A), \qquad q_i(\lambda) := c(\lambda)/(\lambda - \lambda_i)^{m_i}, \qquad \frac{1}{c(\lambda)} = \sum_{i=1}^{k} \frac{p_i(\lambda)}{(\lambda - \lambda_i)^{m_i}}, \quad \deg p_i < m_i , $$

p_i being the polynomial determined by the partial fraction decomposition of 1/c(λ), and where the matrices C_{ir} have the following properties:

$$ (2)\qquad C_{ir} C_{js} = \delta_{ij}\, C_{i,r+s}, \qquad C_{i m_i} = 0 \quad \text{for all } i, j, r, s, \qquad \text{and} \qquad \sum_{i=1}^{k} C_{i0} = I . $$

For other definitions of matrix functions see [4] and [13]. For practical computations see [13].
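As a numerical illustration of (1) and (2) (this sketch is not from the paper; the function names and the test matrix are illustrative), consider the generic case where c(λ) has only simple roots, i.e. m_i = 1 for all i. The covariants then reduce to C_{i0} = q_i(A)/q_i(λ_i), and the result can be checked against scipy.linalg.expm for f = exp:

```python
# Minimal sketch of formula (1) for the case m_i = 1 (simple roots of c):
# f(A) = sum_i f(lambda_i) C_i0  with  C_i0 = q_i(A)/q_i(lambda_i),
# q_i(lambda) = prod_{j != i} (lambda - lambda_j).
import numpy as np
from scipy.linalg import expm

def component_matrices(A, eigvals):
    """Covariants C_i0 of (1)-(2) for pairwise distinct eigenvalues."""
    I = np.eye(A.shape[0])
    C = []
    for i, li in enumerate(eigvals):
        P = I.copy()
        for j, lj in enumerate(eigvals):
            if j != i:
                P = P @ (A - lj * I) / (li - lj)      # builds q_i(A)/q_i(lambda_i)
        C.append(P)
    return C

def matfun(f, A, eigvals):
    """f(A) according to (1) when all m_i = 1."""
    return sum(f(li) * Ci for li, Ci in zip(eigvals, component_matrices(A, eigvals)))

A = np.array([[2.0, 1.0], [0.0, -3.0]])               # distinct eigenvalues 2, -3
print(np.allclose(matfun(np.exp, A, [2.0, -3.0]), expm(A)))              # True
print(np.allclose(sum(component_matrices(A, [2.0, -3.0])), np.eye(2)))   # sum_i C_i0 = I, cf. (2)
```

For multiple roots (m_i > 1) the terms with the derivatives f^{(r)}(λ_i) and C_{ir} = (A - λ_i I)^r C_{i0} would have to be added; practical aspects are the subject of [13].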


Next, let B ∈ C^{v×v} be given such that d(B) = 0, where

$$ d(\mu) = \prod_{j=1}^{l} (\mu - \mu_j)^{n_j}, \qquad n_j \ge 1, \qquad n_1 + \cdots + n_l = n . $$

The matrices D_{js} are defined by B in an analogous way as the C_{ir} are defined by A. If f(λ, μ) is a complex function or if f(λ, μ) ∈ C^{u×v}, then we use the following abbreviation

$$ \partial_1^{r}\partial_2^{s} f(\lambda_i, \mu_j) := \frac{1}{r!\, s!}\,\frac{\partial^{r+s}}{\partial\lambda^{r}\,\partial\mu^{s}}\, f(\lambda, \mu)\Big|_{\lambda=\lambda_i,\ \mu=\mu_j}, $$

provided the right side exists. Given a complex function f(λ, μ) and generalizing (1) we can now introduce the following linear operator

$$ (3)\qquad f(A, B)X := \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} \bigl(\partial_1^{r}\partial_2^{s} f(\lambda_i, \mu_j)\bigr)\, C_{ir}\, X\, D_{js}, \qquad \text{defined for } X \in C^{u\times v}, $$

where we assume that all derivatives on the right side exist.

Finally, we remark that with appropriate modifications all considerations in this paper remain valid if A, B belong to an arbitrary topological algebra over C with identity 1 such that c(A) = 0, d(B) = 0.
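A corresponding sketch for the two-sided operator (3), again only for the generic case m_i = n_j = 1 and with illustrative names (not from the paper): writing A = V diag(λ_i) V^{-1} and B = W diag(μ_j) W^{-1}, the covariants are C_{i0} = V e_i e_i^T V^{-1} and D_{j0} = W e_j e_j^T W^{-1}, so (3) becomes an entrywise multiplication in the two eigenbases. As a sanity check, f(λ, μ) = λ + μ reproduces the Sylvester operator X ↦ AX + XB.

```python
# Sketch of the two-sided operator (3) for diagonalizable A, B (all m_i = n_j = 1).
import numpy as np

def two_sided(f, A, B, X):
    """f(A, B)X = sum_{i,j} f(lambda_i, mu_j) C_i0 X D_j0  (simple eigenvalues)."""
    lam, V = np.linalg.eig(A)
    mu, W = np.linalg.eig(B)
    coeff = np.array([[f(l, m) for m in mu] for l in lam])    # f(lambda_i, mu_j)
    return V @ (coeff * np.linalg.solve(V, X @ W)) @ np.linalg.inv(W)

A = np.array([[2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0, 0.0], [3.0, 4.0]])
X = np.array([[1.0, 2.0], [3.0, 4.0]])
# f(lambda, mu) = lambda + mu gives the Sylvester operator A X + X B
print(np.allclose(two_sided(lambda l, m: l + m, A, B, X), A @ X + X @ B))   # True
```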

3. Linear operator equations. Let T_ν : V → V, ν = 1, 2, ..., be linear operators, where V is a vector space over C of complex-valued functions of t = (t_1, ..., t_p) ∈ R, a region in ℝ^p or C^p. For example, for p = 1, T_ν may be d/dt or E, where Ey(t) = y(t + 1), or T_ν y(t) = ∫_a^b K(λ, μ, s, t) y(s) ds. Let L, M be subsets of ℝ or C with λ_i ∈ L, μ_j ∈ M, 1 ≤ i ≤ k, 1 ≤ j ≤ l. Each T_ν also may depend on the parameters λ ∈ L, μ ∈ M.

Next we define for N ∈ ℕ and y(t) ∈ V

$$ F(\lambda, Ty, \mu, t) = F(\lambda, T(\lambda, \mu)y, \mu, t) := \sum_{\nu=1}^{N} f_\nu(\lambda, t)\,\bigl(T_\nu(\lambda, \mu)\, y(t)\bigr)\, g_\nu(\mu, t), $$

where f_ν(λ, t), g_ν(μ, t) are complex functions of t ∈ R, λ ∈ L, μ ∈ M. We assume that for all ν and each t ∈ R the matrices f_ν(A, t) and g_ν(B, t) exist according to (1).

For Y = Y(t) = (y_{αβ}(t)) ∈ V^{u×v} we then define

$$ (4)\qquad F(A, TY, B, t) = F(A, T(A, B)Y, B, t) := \sum_{\nu=1}^{N} f_\nu(A, t)\,\bigl(T_\nu(A, B)\, Y(t)\bigr)\, g_\nu(B, t), $$

where

$$ (5)\qquad T_\nu(A, B)\, Y(t) := \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} C_{ir}\,\bigl(\partial_1^{r}\partial_2^{s}\bigl(T_\nu(\lambda_i, \mu_j)\, Y(t)\bigr)\bigr)\, D_{js}, $$

assuming the existence of the derivatives on the right side for each Y(t) ∈ V^{u×v}, and where T_ν(λ, μ) Y(t) := (T_ν(λ, μ) y_{αβ}(t)) ∈ V^{u×v}. If T_ν is independent of λ and μ, then (5) reduces to T_ν Y(t) = (T_ν y_{αβ}(t)) because of (2).

We now want to give (4) another form which is more convenient for the following considerations. Substituting

$$ f_\nu(A, t) = \sum_{i=1}^{k}\sum_{\rho=0}^{m_i-1} \bigl(\partial_1^{\rho} f_\nu(\lambda_i, t)\bigr)\, C_{i\rho}, \qquad g_\nu(B, t) = \sum_{j=1}^{l}\sum_{\sigma=0}^{n_j-1} \bigl(\partial_2^{\sigma} g_\nu(\mu_j, t)\bigr)\, D_{j\sigma} $$

and (5) into (4) and using (2) yields

$$ F(A, TY, B, t) = \sum_{\nu=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{l}\;\sum_{r,\rho\ge 0}\;\sum_{s,\sigma\ge 0} C_{i,r+\rho}\,\bigl(\partial_1^{\rho} f_\nu(\lambda_i, t)\bigr)\bigl(\partial_1^{r}\partial_2^{s}\bigl(T_\nu(\lambda_i, \mu_j)\, Y(t)\bigr)\bigr)\bigl(\partial_2^{\sigma} g_\nu(\mu_j, t)\bigr)\, D_{j,s+\sigma} $$

$$ = \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} C_{ir}\Bigl(\sum_{\nu=1}^{N}\sum_{\rho=0}^{r}\sum_{\sigma=0}^{s} \bigl(\partial_1^{\rho} f_\nu(\lambda_i, t)\bigr)\bigl(\partial_1^{r-\rho}\partial_2^{s-\sigma}\bigl(T_\nu(\lambda_i, \mu_j)\, Y(t)\bigr)\bigr)\bigl(\partial_2^{\sigma} g_\nu(\mu_j, t)\bigr)\Bigr) D_{js}, $$

and with Leibniz' differentiation rule we obtain

$$ (6)\qquad F(A, T(A, B)Y, B, t) = \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} C_{ir}\,\bigl(\partial_1^{r}\partial_2^{s}\bigl(F(\lambda_i, TY, \mu_j, t)\bigr)\bigr)\, D_{js} . $$

After these considerations we can state the following

Theorem. Let

$$ \Phi(t) = \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} \varphi_{ijrs}(t)\, C_{ir}\, X\, D_{js} $$

with fixed X ∈ C^{u×v} and complex functions φ_{ijrs}, defined on R, be given, and let

$$ Y(t) = \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} y_{ijrs}(t)\, C_{ir}\, X\, D_{js}, \qquad t \in R, $$

where all y_{ijrs}(t) ∈ V. In order that Y(t) satisfies

$$ (7)\qquad F(A, T(A, B)Y, B, t) = \Phi(t), \qquad t \in R, $$

it is sufficient and, if all C_{ir} X D_{js} ≠ 0, also necessary that

$$ (8)\qquad \sum_{\rho=0}^{r}\sum_{\sigma=0}^{s} \partial_1^{\rho}\partial_2^{\sigma}\bigl(F(\lambda_i, T(\lambda_i, \mu_j)\, y_{ij,r-\rho,s-\sigma}(t), \mu_j, t)\bigr) = \varphi_{ijrs}(t) $$

holds for all t ∈ R, 0 ≤ r < m_i, 0 ≤ s < n_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l.

In particular, in order that Y(t) satisfies F(A, TY, B, t) = 0, t ∈ R, with an arbitrary fixed X ∈ C^{u×v}, it is sufficient and, if all C_{ir} X D_{js} ≠ 0, also necessary that

$$ \sum_{\rho=0}^{r}\sum_{\sigma=0}^{s} \partial_1^{\rho}\partial_2^{\sigma}\bigl(F(\lambda_i, T\, y_{ij,r-\rho,s-\sigma}(t), \mu_j, t)\bigr) = 0 $$

for all t ∈ R, 0 ≤ r < m_i, 0 ≤ s < n_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l.


Proof. Let Y(t) be a solution of (7). Then substituting Y(t) in (6) and using (2) yields

$$ F(A, TY, B, t) = \sum_{i=1}^{k}\sum_{j=1}^{l}\;\sum_{r,\rho\ge 0}\;\sum_{s,\sigma\ge 0} \bigl(\partial_1^{r}\partial_2^{s}\bigl(F(\lambda_i, T\, y_{ij\rho\sigma}(t), \mu_j, t)\bigr)\bigr)\, C_{i,r+\rho}\, X\, D_{j,s+\sigma} . $$

If all C_{ir} X D_{js} ≠ 0, then they are linearly independent because of (2), and (8) follows by comparing coefficients of C_{ir} X D_{js} in the preceding equation. If, conversely, (8) holds, then the preceding calculation shows that Y(t) satisfies (7).

Condition (8) will be simplified in

Corollary 1. Let Φ(t) be the same as in the Theorem and let y(λ, μ, t) be a complex function of λ ∈ L, μ ∈ M, t ∈ R such that y(A, B, t)_X exists for each t ∈ R according to (3), where X is the same as in Φ(t). Assume also that (∂/∂λ)^r (∂/∂μ)^s y(λ, μ, t) ∈ V for all r, s and λ ∈ L, μ ∈ M which occur below, and that all T_ν satisfy "Leibniz' rule"

$$ (9)\qquad \partial_1^{r}\partial_2^{s}\bigl(T_\nu(\lambda_i, \mu_j)\, y(\lambda_i, \mu_j, t)\bigr) = \sum_{\rho=0}^{r}\sum_{\sigma=0}^{s} \bigl(\partial_1^{\rho}\partial_2^{\sigma} T_\nu(\lambda_i, \mu_j)\bigr)\,\bigl(\partial_1^{\,r-\rho}\partial_2^{\,s-\sigma} y(\lambda_i, \mu_j, t)\bigr) $$

for t ∈ R, 0 ≤ r < m_i, 0 ≤ s < n_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l, assuming that all expressions in (9) exist; on the left side ∂_1^r ∂_2^s acts on the complete dependence on λ, μ (through T_ν and y), while ∂_1^ρ ∂_2^σ T_ν denotes the normalized derivative of the operator with respect to its parameters. ((9) holds if all T_ν are independent of λ, μ and commute with (∂/∂λ)^ρ, (∂/∂μ)^σ.) Then, in order that Y(t) := y(A, B, t)_X is a solution of (7) it is sufficient and, if all C_{ir} X D_{js} ≠ 0, also necessary that

$$ (10)\qquad \partial_1^{r}\partial_2^{s}\bigl(F(\lambda_i, T(\lambda_i, \mu_j)\, y(\lambda_i, \mu_j, t), \mu_j, t)\bigr) = \varphi_{ijrs}(t) $$

holds for 0 ≤ r < m_i, 0 ≤ s < n_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l, t ∈ R.

In particular, in order that Y(t) = y(A, B, t)_X satisfies F(A, TY, B, t) = 0 for all t ∈ R with arbitrary fixed X ∈ C^{u×v}, it is sufficient and, if all C_{ir} X D_{js} ≠ 0, also necessary that

$$ \partial_1^{r}\partial_2^{s}\bigl(F(\lambda_i, T\, y(\lambda_i, \mu_j, t), \mu_j, t)\bigr) = 0 $$

for all t ∈ R, 0 ≤ r < m_i, 0 ≤ s < n_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l.

Proof. Put y_{ijrs}(t) := ∂_1^r ∂_2^s y(λ_i, μ_j, t). Then (9) together with the usual Leibniz differentiation rule reduces (8) to (10).

Condition (10) is further simplified in

Corollary 2. Let φ(λ, μ, t), y(λ, μ, t) be complex functions of λ ∈ L, μ ∈ M, t ∈ R, where L and M shall be open sets, and assume that φ(A, B, t)_X and y(A, B, t)_X exist for each t ∈ R according to (3), both involving the same X. Let also (∂/∂λ)^r (∂/∂μ)^s y(λ, μ, t) ∈ V for all r, s and λ ∈ L, μ ∈ M which occur below, and assume that (9) holds. Then

$$ (11)\qquad F(\lambda, T(\lambda, \mu)\, y(\lambda, \mu, t), \mu, t) = \varphi(\lambda, \mu, t) $$

for all λ ∈ L, μ ∈ M, t ∈ R implies

$$ (12)\qquad F(A, T(A, B)\, y(A, B, t)_X, B, t) = \varphi(A, B, t)_X, \qquad t \in R . $$

If, in particular, F(λ, T(λ, μ) y(λ, μ, t), μ, t) ≡ 0 for all λ ∈ L, μ ∈ M, t ∈ R, then F(A, T(A, B) y(A, B, t)_X, B, t) = 0 for t ∈ R and each X ∈ C^{u×v}.

Proof. Applying (∂/∂λ)^r (∂/∂μ)^s to (11) at λ = λ_i, μ = μ_j yields (10) with

$$ \varphi_{ijrs}(t) := \partial_1^{r}\partial_2^{s}\, \varphi(\lambda_i, \mu_j, t) . $$

Then (12) follows from Corollary 1.

Remarks. 1. All considerations of this paper remain valid if all g_ν and all T_ν are independent of μ. One only has to drop all operations concerning μ.

2. If m_i = 1, n_j = 1 for 1 ≤ i ≤ k = m, 1 ≤ j ≤ l = n, then L, M in Corollary 2 can be replaced by {λ_1, ..., λ_m}, {μ_1, ..., μ_n}, respectively.

3. Assume that p = 1 and T_ν = T^{ν-1}, ν ∈ ℕ, for some linear operator T which is independent of λ, μ such that all T^ν commute with all (∂/∂λ)^r, (∂/∂μ)^s which occur below. Next, assume that for 0 ≤ α ≤ a the functions y_α(λ, μ, t) satisfy the same conditions as y(λ, μ, t) does in Cor. 2 and that Y_α(t) := y_α(A, B, t)_{X_α} satisfies F(A, T Y_α(t), B, t) = 0. If T^ν y_α(λ, μ, t_0) = δ_{αν} holds for 0 ≤ α, ν ≤ a, λ ∈ L, μ ∈ M and some t_0 ∈ R, then applying (∂/∂λ)^r, (∂/∂μ)^s to T^ν y_α(λ, μ, t_0) = δ_{αν} at λ = λ_i, μ = μ_j yields ∂_1^r ∂_2^s T^ν y_α(λ_i, μ_j, t_0) = 0 for r > 0 or s > 0. This and (2) yield

$$ T^{\nu} Y_\alpha(t_0) = \delta_{\alpha\nu} \sum_{i=1}^{k}\sum_{j=1}^{l} C_{i0}\, X_\alpha\, D_{j0} = \delta_{\alpha\nu}\, X_\alpha \qquad \text{for } 0 \le \alpha, \nu \le a . $$

4. Examples. We now assume that p = 1, and L, M are open. We use Corollary 2 in the following examples.

1. Y′(t) = f(A, t) Y(t) g(B, t) + φ(A, B, t)_X. Here

$$ T = \frac{d}{dt}, \qquad F(\lambda, Ty, \mu, t) = y'(t) - f(\lambda, t)\, g(\mu, t)\, y(t) $$

and

$$ y(\lambda, \mu, t) = \exp\Bigl(\int_{t_0}^{t} f(\lambda, \tau)\, g(\mu, \tau)\, d\tau\Bigr) $$

(assuming the existence for t, t_0 ∈ R, λ ∈ L, μ ∈ M) with y(λ, μ, t_0) = 1 for all λ ∈ L, μ ∈ M. Hence by Cor. 2 and Remark 3, Y(t) = y(A, B, t)_X is the required solution


with Y(t_0) = X ∈ C^{u×v}. If φ(λ, μ, t) ≢ 0, then

$$ Y(t) = \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} \Bigl(\partial_1^{r}\partial_2^{s}\Bigl(y(\lambda_i, \mu_j, t)\Bigl(1 + \int_{t_0}^{t} \frac{\varphi(\lambda_i, \mu_j, \tau)}{y(\lambda_i, \mu_j, \tau)}\, d\tau\Bigr)\Bigr)\Bigr)\, C_{ir}\, X\, D_{js} $$

is a solution with Y(t_0) = X, since for each λ ∈ L, μ ∈ M the scalar function y(λ, μ, t)(1 + ∫_{t_0}^{t} φ(λ, μ, τ)/y(λ, μ, τ) dτ) solves y′ − f(λ, t) g(μ, t) y = φ(λ, μ, t) (Cor. 2). See also [3, 6, 7, 8, 11, 12, 13, 14, 17] and [4, pp. 116--129].
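A numerical sanity check of Example 1 (not from the paper) for the constant-coefficient special case f(λ, t) = λ, g(μ, t) = μ, φ ≡ 0, i.e. Y′ = AYB with Y(t_0) = X: then y(λ, μ, t) = exp(λμ(t − t_0)), and Y(t) = y(A, B, t)_X can be built with the two_sided sketch given after (3). The closed form is compared with a direct numerical integration; the matrices and tolerances are illustrative.

```python
# Example 1 with f(lambda,t) = lambda, g(mu,t) = mu, phi = 0:  Y' = A Y B, Y(t0) = X.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0, 0.0], [3.0, 4.0]])
X = np.array([[1.0, 2.0], [3.0, 4.0]])
t0, t1 = 0.0, 0.3

# closed form Y(t1) = y(A, B, t1)_X with y(lambda, mu, t) = exp(lambda*mu*(t - t0))
Y_closed = two_sided(lambda l, m: np.exp(l * m * (t1 - t0)), A, B, X)

# reference: integrate Y' = A Y B as a flat 4-dimensional system
rhs = lambda t, y: (A @ y.reshape(2, 2) @ B).ravel()
Y_num = solve_ivp(rhs, (t0, t1), X.ravel(), rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)

print(np.allclose(Y_closed, Y_num, atol=1e-6))   # True
```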

2. Σ_{ν=1}^{N} f_ν(A, t) Y(t) g_ν(B, t) = φ(A, B, t)_X for some X ∈ C^{u×v}. Here all T_ν are the identity transformation and we put

$$ F(\lambda, \mu, t) = \sum_{\nu=1}^{N} f_\nu(\lambda, t)\, g_\nu(\mu, t), $$

assuming that all f_ν, g_ν and φ(λ, μ, t) are defined for λ ∈ L, μ ∈ M, t ∈ R. Then y(λ, μ, t) = φ(λ, μ, t)/F(λ, μ, t), provided F ≠ 0 for all λ ∈ L, μ ∈ M, t ∈ R, which we want to assume, and

$$ Y(t) = \sum_{i=1}^{k}\sum_{j=1}^{l}\sum_{r=0}^{m_i-1}\sum_{s=0}^{n_j-1} \bigl(\partial_1^{r}\partial_2^{s}\bigl(\varphi(\lambda_i, \mu_j, t)/F(\lambda_i, \mu_j, t)\bigr)\bigr)\, C_{ir}\, X\, D_{js} $$

is a solution. Observe that φ(λ, μ, t) ≡ 1 implies φ(A, B, t)_X = X. See also [9, 15, 16], [4, pp. 215--227].
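For a concrete instance of Example 2 (again an illustrative sketch, not from the paper), take N = 2, f_1(λ, t) = λ, g_1 ≡ 1, f_2 ≡ 1, g_2(μ, t) = μ, φ ≡ 1 and X = C: the equation becomes the Sylvester equation AY + YB = C, with F(λ, μ) = λ + μ and y(λ, μ) = 1/(λ + μ), provided λ_i + μ_j ≠ 0 for all i, j (cf. [9, 15]). The matrices below satisfy that condition.

```python
# Example 2 specialised to the Sylvester equation A Y + Y B = C:
# F(lambda, mu) = lambda + mu,  y(lambda, mu) = 1/(lambda + mu).
import numpy as np

A = np.array([[2.0, 1.0], [0.0, -3.0]])   # eigenvalues 2, -3
B = np.array([[1.0, 0.0], [3.0, 4.0]])    # eigenvalues 1, 4; lambda_i + mu_j != 0
C = np.eye(2)

Y = two_sided(lambda l, m: 1.0 / (l + m), A, B, C)   # two_sided: sketch after (3)
print(np.allclose(A @ Y + Y @ B, C))                 # True
```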

3. Y″(t) + A Y(t) B = 0. Here

$$ y_0(\lambda, \mu, t) = \cos\bigl((\lambda\mu)^{1/2}(t - t_0)\bigr), \qquad y_1(\lambda, \mu, t) = (\lambda\mu)^{-1/2}\sin\bigl((\lambda\mu)^{1/2}(t - t_0)\bigr) $$

satisfy

$$ (d/dt)^{2}\, y_\alpha(\lambda, \mu, t) + \lambda\mu\, y_\alpha(\lambda, \mu, t) = 0 \qquad \text{and} \qquad (d/dt)^{\nu}\, y_\alpha(\lambda, \mu, t)\big|_{t=t_0} = \delta_{\alpha\nu} $$

for 0 ≤ α, ν ≤ 1 and λ, μ, t, t_0 ∈ C. Hence by Cor. 2 and Remark 3

$$ Y_\alpha(t) := y_\alpha(A, B, t)_{X_\alpha}, \qquad \alpha = 0, 1, $$

are solutions of the given equation with (d/dt)^ν Y_α(t)|_{t=t_0} = δ_{αν} X_α, 0 ≤ α, ν ≤ 1, and arbitrary fixed X_0, X_1 ∈ C^{u×v}. If X_0, X_1 ≠ 0, then Y_α(t), α = 0, 1, are linearly independent. See also [1, 8], [4, pp. 123--124].
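A numerical check of Example 3 (an illustrative sketch, again built on the two_sided helper from after (3)): Y_0(t) = y_0(A, B, t)_X with y_0(λ, μ, t) = cos((λμ)^{1/2}(t − t_0)) should satisfy Y_0″ + AY_0B = 0 and Y_0(t_0) = X; a central second difference stands in for the second derivative.

```python
# Check of Example 3:  Y0'' + A Y0 B = 0  with  Y0(t) = y0(A, B, t)_X.
import numpy as np

A = np.array([[2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0, 0.0], [3.0, 4.0]])
X = np.array([[1.0, 2.0], [3.0, 4.0]])
t0 = 0.0

def Y0(t):
    # complex square root keeps the formula valid when lambda*mu < 0
    y0 = lambda l, m: np.cos(np.sqrt(complex(l * m)) * (t - t0))
    return two_sided(y0, A, B, X)

print(np.allclose(Y0(t0), X))                                   # initial value, cf. Remark 3

t, h = 0.7, 1e-4
second_diff = (Y0(t + h) - 2 * Y0(t) + Y0(t - h)) / h**2        # approximates Y0''(t)
print(np.allclose(second_diff + A @ Y0(t) @ B, 0, atol=1e-4))   # True
```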

For practical computations of the solutions in the preceding examples see [13].

References

[1] T. M. APOSTOL, Explicit formulas for solutions of the second-order matrix differential equation Y″ = AY. Amer. Math. Monthly 82, 159--162 (1975).

[2] L. BITTNER, Linear equations with an operator polynomial on the left side. Z. Angew. Math. Mech. 53, 397--408 (1973).

[3] S. DEARDS, On the evaluation of e^{At}. Matrix Tensor Quart. 23, 141--142 (1973).

[4] F. R. GANTMACHER, The theory of matrices I. New York 1977.

[5] W. GAUTSCHI, Über eine Klasse von linearen Systemen mit konstanten Koeffizienten. Comment. Math. Helv. 28, 186--196 (1954).

[6] R. B. KIRCHNER, An explicit formula for e^{At}. Amer. Math. Monthly 74, 1200--1204 (1967).

[7] C. KLUCZNY, A certain form of the matrix e^{At}. Zeszyty Nauk. Politech. Śląsk. Mat.-Fiz. Zeszyt 25, 3--10 (1974).

[8] I. I. KOLODNER, On exp(tA) with A satisfying a polynomial. J. Math. Anal. Appl. 52, no. 3, 514--524 (1975).

[9] P. LANCASTER, Explicit solutions of linear matrix equations. SIAM Rev. 12, no. 4, 544--566 (1970).

[10] B. Z. LINFIELD, On the explicit solution of simultaneous linear difference equations with constant coefficients. Amer. Math. Monthly 47, 552--554 (1940).

[11] J. PARIZET, Détermination de l'exponentielle et recherche du « logarithme » d'un élément d'une algèbre de Banach unitaire engendrant une sous-algèbre de dimension finie. C. R. Acad. Sci. Paris Sér. A--B 278, A971--A974 (1974).

[12] E. J. PUTZER, Avoiding the Jordan canonical form in the discussion of linear systems with constant coefficients. Amer. Math. Monthly 73, 2--7 (1966).

[13] H.-J. RUNCKEL and U. PITTELKOW, Practical computation of matrix functions. To appear.

[14] M. N. S. SWAMY, On a formula for evaluating e^{At} when the eigenvalues of A are not necessarily distinct. Matrix Tensor Quart. 23, 67--72 (1972).

[15] H. WIMMER and A. D. ZIEBUR, Solving the matrix equation Σ_{ρ=1}^{r} f_ρ(A) X g_ρ(B) = C. SIAM Rev. 14, no. 2, 318--323 (1972).

[16] H. WIMMER and A. D. ZIEBUR, Blockmatrizen und lineare Matrizengleichungen. Math. Nachr. 59, 213--219 (1974).

[17] A. WRAGG and C. DAVIES, Computation of the exponential of a matrix, I, II. J. Inst. Math. Appl. 11, 369--375 (1973), and 15, 273--278 (1975).

Received 1 September 1980 *)

Authors' address:

Hans-J. Runckel and Uwe Pittelkow
Abteilungen Mathematik IV und I
Universität Ulm
D-7900 Ulm

*) A new version was received on 20 May 1981. A modified version was received on 7 January 1983.