

SIAM REVIEW © 1989 Society for Industrial and Applied Mathematics, Vol. 31, No. 4, pp. 586-613, December 1989

KRONECKER PRODUCTS, UNITARY MATRICES AND SIGNAL PROCESSING APPLICATIONS*

PHILLIP A. REGALIA† AND SANJIT K. MITRA‡

Abstract. Discrete unitary transforms are extensively used in many signal processing applications, and in the development of fast algorithms Kronecker products have proved quite useful. In this semitutorial paper, we briefly review properties of Kronecker products and direct sums of matrices, which provide a compact notation in treating patterned matrices. A generalized matrix product, which inherits some useful algebraic properties from the standard Kronecker product and allows a large class of discrete unitary transforms to be generated from a single recursion formula, is then introduced. The notation is intimately related to sparse matrix factorizations, and examples are included illustrating the utility of the new notation in signal processing applications. Finally, some novel characteristics of Hadamard transforms and polyadic permutations are derived in the framework of Kronecker products.

Key words. Kronecker products, unitary matrices, fast transform algorithms, filter banks

AMS(MOS) subject classifications. 15A23, 15A24, 94A99

1. Introduction. The development of a fast Fourier transform algorithm by Cooley and Tukey [1] made a significant impact on the field of digital signal processing. Motivated by their success, many researchers have since developed fast transform implementations corresponding to a wide variety of discrete unitary transforms [2]-[10], and at present fast algorithms are known for a variety of unitary transforms such as the Hadamard, Haar, Slant, Discrete Cosine, and Hartley transforms.

Each unitary transform family has its own characteristic properties depending on its basis functions, which in some sense match a transform to an application. For example, the discrete Fourier transform is well suited to frequency domain analysis and filtering [11], the Discrete Cosine transform to data compression [12], the Slant transform to image coding [13], the Hadamard and Haar transforms to dyadic-invariant signal processing [7], [14], [15], and these and others to generalized spectral analysis. Such widespread applications have rendered the research into the structure and properties of discrete unitary transforms a legitimate study of its own.

The suitability of a unitary transform in a given application depends not only on its basis functions, but also on the existence of an efficient computational algorithm. To this end, it becomes helpful to better understand the "structure" of various unitary matrices, and the connection of such matrix structures with fast transform realizations. For example, fast transform algorithms are typically developed by recognizing various patterns of the elements of a discrete unitary transform matrix. The existence of such patterns implies some redundancy among the matrix elements which can be exploited in developing sparse matrix factorizations. The product of sparse matrices can result

*Received by the editors April 18, 1988; accepted for publication (in revised form) June 15, 1989. This work was supported by the National Science Foundation under grant DCI 85-08017.

†Département Électronique et Communications, Institut National des Télécommunications, 9, rue Charles Fourier, 91011 Évry Cedex, France. This work was performed while the author was visiting the Center for Information Processing Research, Department of Electrical and Computer Engineering, University of California, Santa Barbara, California 93106.

‡Center for Information Processing Research, Department of Electrical and Computer Engineering, University of California, Santa Barbara, California 93106.


in a much simplified computational algorithm compared to the direct implementation of a matrix equation.

The utility of Kronecker products as a compact notation for representing matrix patterns has been recognized by various researchers [16]-[23]. Andrews and Kane [16] have shown how Kronecker product representations lead to efficient computer implementations of numerous discrete unitary transforms and have indicated how Kronecker products can be defined in terms of matrix factorizations. The application of such results to generalized spectral analysis is outlined in [17]. Fino and Algazi [18] have shown that a wide class of discrete unitary transforms can be generated using recursion formulas of generalized Kronecker products with matrix permutations (i.e., element reordering). The authors have shown that many fast transform flowgraphs may be obtained as structural interpretations of their matrix notation. Likewise, Vlasenko and Rao [19] have illustrated the Kronecker product decomposition of various unitary transforms, and its utility in developing fast transform algorithms. The use of Kronecker products arises in a variety of other mathematical applications as well. A survey of Kronecker products with respect to vec-permutation operations can be found in [20], [21], and related statistical applications can be found in [22], [23]. A review of Kronecker product applications in linear system theory, with related extensions, has been presented in [24].

This paper focuses on a newly proposed generalization of the Kronecker product [25], and outlines its utility with some examples from the field of signal processing. The paper is reasonably self-contained so as to be accessible to a general audience, and as such semitutorial background material is included at appropriate places, in addition to various new results. The organization is as follows. Section 2 begins by reviewing briefly some properties of Kronecker products and direct sums of matrices, which together provide a compact notation in treating patterned matrices. Section 3 then introduces a new matrix product which inherits some useful algebraic properties from the standard Kronecker product and allows a large class of discrete unitary transforms to be developed from a single recursion formula. Closed-form expressions are derived for sparse matrix factorizations in terms of this generalized matrix product. When applied to discrete unitary transform matrices, fast transform algorithms can be developed directly upon recognizing patterns in the matrices, without resorting to tedious matrix factorization steps. Some examples illustrating the utility of the new notation, both from the perspective of systematic sparse matrix factorizations and certain signal processing applications, are included in §4.

Section 5 derives some apparently novel properties of Hadamard transformations and polyadic permutations in the context of Kronecker products. In particular, we show the invariance of Hadamard matrices under a bit-permuted ordering similarity transformation, and establish a simple result of the Kronecker decomposability of any polyadic permutation matrix. It is hoped that these results may have some benefits in polyadic-invariant signal processing applications.

2. Notation. In this paper, vectors appear as bold lowercase letters, while matrices appear as bold capital letters. For convenience, row and column indices of vectors and matrices are numbered starting from zero rather than one. We denote by I the identity matrix, by R the relevant unitary matrix of each section, and by P the relevant permutation matrix of each section. A subscript on I or R will denote its dimension (e.g., I_N for the (N × N) identity matrix), a subscript on P will denote the type of permutation matrix used (such a subscript will always consist of lowercase letters), while in all other cases a subscript is used strictly for indexing purposes.


The elements of vectors and matrices may be analytic functions of the complex variable z, denoted as, e.g., H(z). Each element of H(z) may be interpreted as the z-transform of an appropriate time series [11]. The tilde notation [e.g., H̃(z)] denotes the para-conjugate transpose operation. That is, given an analytic matrix H(z), we have

    H̃(z) = [H*(1/z*)]^t,

where "*" denotes complex conjugation. If H(z) is evaluated along the unit circle z = e^{jω}, the tilde operator returns the conjugate transpose of H(e^{jω}). In particular, for any constant matrix the tilde operator reduces to the conjugate transpose operation.

To familiarize the reader with the notation used throughout this paper, the re- mainder of this section reviews the properties of Kronecker products and direct sums of matrices in a tutorial fashion.

2.1. Kronecker product of matrices. Given two matrices, A (m × n) and B (k × l), the Kronecker product (or direct product) is defined as the (mk × nl) matrix

(2.1)  A ⊗ B = [ A b_{00}  A b_{01}  ···  A b_{0,l-1} ; ··· ; A b_{k-1,0}  A b_{k-1,1}  ···  A b_{k-1,l-1} ],

where b_{rs} is the (r, s)-th element of B. This definition is in fact a left Kronecker product [23], to which we adhere in this paper. Note in general that A ⊗ B ≠ B ⊗ A. Rather, A ⊗ B = P(B ⊗ A)Q, where P and Q are termed "vec-permutation" matrices [20], [21]. The following properties of Kronecker products are summarized for the convenience of the reader. Proofs can be found in Bellman [26] or Graybill [23].
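These vec-permutation matrices are easy to construct explicitly. The sketch below is our own illustration, not notation from the text: NumPy's np.kron implements the conventional right Kronecker product (the left product of (2.1) corresponds to np.kron(B, A)), comm builds the standard commutation matrix, and the test matrices are arbitrary.

```python
import numpy as np

def comm(a, b):
    """Commutation ("vec-permutation") matrix K of size (ab x ab):
    K @ vec(X) = vec(X.T) for any (a x b) matrix X (column-major vec)."""
    K = np.zeros((a * b, a * b))
    for i in range(a):
        for j in range(b):
            K[i * b + j, j * a + i] = 1.0
    return K

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # (m x n)
B = rng.standard_normal((3, 2))   # (s x q)
m, n = A.shape
s, q = B.shape

# Kronecker products do not commute in general ...
assert not np.allclose(np.kron(A, B), np.kron(B, A))

# ... but are related by fixed permutation matrices P and Q:
P = comm(m, s)
Q = comm(q, n)
assert np.allclose(np.kron(A, B), P @ np.kron(B, A) @ Q)
```

The identity used is the standard one, K_{m,s}(B ⊗ A)K_{q,n} = A ⊗ B, which follows from vec(AXB) = (Bᵗ ⊗ A)vec(X).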

Property 2.1.

(2.2)  (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C).

Thus, the expression A ⊗ B ⊗ C is unambiguous.
Property 2.2. Given matrices C (n × q) and D (l × s), then

(2.3)  (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).

One useful identity which follows from Property 2.2 is

(2.4)  A ⊗ D = (A I_n) ⊗ (I_l D) = (A ⊗ I_l)(I_n ⊗ D).

Note that the matrix I_n ⊗ D is sparse, and that the matrix A ⊗ I_l is block diagonal.
Property 2.3.

(2.5)  (A ⊗ B)^t = A^t ⊗ B^t.

This property generalizes in a straightforward manner to the conjugate transpose and para-conjugate transpose operations.
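Properties 2.2 and the factorization (2.4) are easy to check numerically. The sketch below uses NumPy's np.kron, which implements the conventional right Kronecker product; both identities hold in either convention, although under the right product the roles of the block-diagonal and sparse factors in (2.4) are interchanged. The matrix shapes are arbitrary test choices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # (m x n)
C = rng.standard_normal((3, 4))   # (n x q)
B = rng.standard_normal((4, 2))   # (k x l)
D = rng.standard_normal((2, 5))   # (l x s)

# Property 2.2 (mixed product): (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Identity (2.4): A ⊗ D splits into two structured factors.
m, n = A.shape
k, l = D.shape
lhs = np.kron(A, D)
rhs = np.kron(A, np.eye(k)) @ np.kron(np.eye(n), D)
assert np.allclose(lhs, rhs)
```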

Property 2.4. If A is (m × m) and B is (k × k), then

(2.6)  det(A ⊗ B) = (det A)^k (det B)^m.


Property 2.5. If A and B are orthogonal, then A ⊗ B is orthogonal. This is easily seen with the aid of the above properties:

    (A ⊗ B)^t (A ⊗ B) = (A^t ⊗ B^t)(A ⊗ B) = (A^t A) ⊗ (B^t B) = I ⊗ I = I.

Property 2.5 holds true if "orthogonal" is replaced with "unitary" or "para-unitary." In a similar manner, it can be shown that (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, which shows that if A and B each have full rank, then A ⊗ B has full rank. More generally, if A has rank r_1 and B has rank r_2, then A ⊗ B has rank r_1 r_2.
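The determinant, orthogonality, and rank statements above can be spot-checked numerically. A sketch with arbitrary test matrices follows; np.kron is the conventional right product, for which these properties hold equally.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))   # (m x m), m = 2
B = rng.standard_normal((3, 3))   # (k x k), k = 3

# Property 2.4: det(A ⊗ B) = (det A)^k (det B)^m
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** 3 * np.linalg.det(B) ** 2)

# Property 2.5: the Kronecker product of orthogonal matrices is orthogonal.
Qa, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Qb, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Qk = np.kron(Qa, Qb)
assert np.allclose(Qk.T @ Qk, np.eye(12))

# rank(A ⊗ B) = rank(A) * rank(B)
R1 = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # rank 2
R2 = rng.standard_normal((3, 1)) @ rng.standard_normal((1, 3))  # rank 1
assert np.linalg.matrix_rank(np.kron(R1, R2)) == 2 * 1
```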

2.2. Direct sum of matrices. Given N matrices A_i, i = 0, 1, ..., N-1 (not necessarily of the same dimension), their direct sum is defined [23] as the block diagonal matrix

(2.8)  A_0 ⊕ A_1 ⊕ ··· ⊕ A_{N-1} = diag( A_0, A_1, ..., A_{N-1} ).

The matrix of (2.8) can be expressed in the compact notation [20]

(2.9)  ⊕_{i=0}^{N-1} A_i.

The following algebra for direct sums is easily verified.
Property 2.6. (i)

    ( ⊕_{i=0}^{N-1} A_i )( ⊕_{i=0}^{N-1} B_i ) = ⊕_{i=0}^{N-1} A_i B_i,

where it is assumed that all matrix multiplications are defined. (ii)

(iii) If each Ai is a square matrix, then

det( ⊕_{i=0}^{N-1} A_i ) = ∏_{i=0}^{N-1} det A_i.

(iv) If A_0 and A_1 are orthogonal, then A_0 ⊕ A_1 is orthogonal.
In expressions containing both Kronecker products and direct sums, the Kronecker product operation takes precedence.

3. A generalized Kronecker product. In this section we propose a generalized matrix product, as developed in [25], which inherits some useful algebraic properties from the standard Kronecker product of matrices. This generalized matrix product allows a wide family of unitary matrices to be developed from a single recursion


formula. Closed form expressions relating this matrix product to sparse matrix factorizations are obtained, which is particularly useful in factorizing "patterned" matrices. Many unitary matrices, for example, are patterned matrices, for which the sparse matrix factorization equivalent of this matrix product directly yields fast transform algorithms, without resorting to tedious matrix factorization steps. Section 4 illustrates some applications of the new notation, from developing fast transform algorithms to developing equivalent filter bank representations. We begin by defining a generalized Kronecker product.

DEFINITION 3.1. Given a set of N (m × r) matrices A_i, i = 0, 1, ..., N-1, denoted by {A}_N, and an (N × l) matrix B, we define the (mN × rl) matrix ({A}_N ⊗ B) as¹

(3.1)  {A}_N ⊗ B = [ A_0 ⊗ b_0 ; A_1 ⊗ b_1 ; ··· ; A_{N-1} ⊗ b_{N-1} ]  (the blocks stacked vertically),

where b_i denotes the i-th row vector of B. If each matrix A_i is identical, then (3.1) reduces to the usual Kronecker product of matrices.

EXAMPLE 3.1. Let

    {A}_2 = { [ 1  1 ; 1  -1 ],  [ 1  -j ; 1  j ] },    B = [ 1  1 ; 1  -1 ].

The definition of (3.1) yields

    {A}_2 ⊗ B = [ 1  1  1  1 ; 1  -1  1  -1 ; 1  -j  -1  j ; 1  j  -1  -j ],

which is recognized as a (4 × 4) DFT matrix with the rows arranged in bit-reversed order.
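Definition 3.1 is straightforward to implement. In the sketch below, the left Kronecker product of A_i with a row vector b_i is np.kron(b_i, A_i) in NumPy's right-product convention; the particular 2 × 2 matrices used for {A}_2 and B are our assumption, chosen to be consistent with the DFT recursion (4.7), and the result is checked against the 4-point DFT matrix with rows in bit-reversed order.

```python
import numpy as np

def gen_kron(A_set, B):
    """{A}_N ⊗ B of Definition 3.1: stack the left Kronecker products
    A_i ⊗ b_i, where b_i is the i-th row of B.  In NumPy's (right)
    convention, the left product of A_i with row b_i is np.kron(b_i, A_i)."""
    return np.vstack([np.kron(B[i:i + 1, :], A_i)
                      for i, A_i in enumerate(A_set)])

# Assumed matrices for Example 3.1 (consistent with the recursion (4.7)):
A0 = np.array([[1, 1], [1, -1]], dtype=complex)
A1 = np.array([[1, -1j], [1, 1j]])
B = np.array([[1, 1], [1, -1]], dtype=complex)

R4 = gen_kron([A0, A1], B)

# Compare with the 4-point DFT matrix, rows in bit-reversed order 0,2,1,3.
n = np.arange(4)
F4 = np.exp(-2j * np.pi * np.outer(n, n) / 4)
assert np.allclose(R4, F4[[0, 2, 1, 3]])

# With all A_i identical, (3.1) reduces to an ordinary (left) Kronecker
# product, i.e., np.kron(B, A) in NumPy's convention:
assert np.allclose(gen_kron([A0, A0], B), np.kron(B, A0))
```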

In the definition of (3.1) and Example 3.1 we have assumed, of course, that the number of matrices in the set {A}_N matches the number of rows in the matrix B. At a more abstract level, however, we may drop the subscript N, and the set {A} may be considered an infinite, though periodic, sequence of matrices, which thus admits a finite representation corresponding to one period. Likewise, B need not be a single matrix, but may also be a periodic sequence of (k × l) matrices (and hence a periodic sequence of row vectors). The matrix product {A} ⊗ {B} is obtained as the Kronecker product of each matrix in {A} with each row vector in {B}, and is easily verified to yield a periodic sequence of (mk × rl) matrices, which thus admits a finite representation in one period. To fix notation, a set of matrices will always be indicated using the bracket notation (e.g., {A}), whereas if the set consists of a single matrix, the brackets will be omitted. A subscript on the right bracket, if present, will

¹Although the bracket notation used here is similar to that proposed by Fino and Algazi [18] in their generalized Kronecker product, their definition is distinct from that proposed here.


indicate the number of matrices in the set (or equivalently the period of the infinite sequence of matrices). We illustrate with the following example.

EXAMPLE3.2. Let

Then

Note that each matrix in the resulting set has dimensions (4 × 4).

By comparing Examples 3.1 and 3.2, we observe that this matrix product can result in a single matrix only if the rightmost multiplicand is a single matrix. The following algebraic properties of this generalized Kronecker product are summarized and proved.

Property 3.1.

    ( {A} ⊗ {B} ) ⊗ {C} = {A} ⊗ ( {B} ⊗ {C} ).

Proof. Let each matrix in {A} have dimension (m_1 × n_1), each matrix in {B} have dimension (m_2 × n_2), and each matrix in {C} have dimension (m_3 × n_3). Then for ({A} ⊗ {B}) ⊗ {C}, we find the l-th row vector is

(3.7)  a_{l_1} ⊗ b_{l_2} ⊗ c_{l_3},

where a_{l_1} is the l_1-th row vector of {A}, b_{l_2} is the l_2-th row vector of {B}, c_{l_3} is the l_3-th row vector of {C}, and the indices l_i can be verified to satisfy

where ⌊·⌋ denotes the integer part of the argument. In a similar fashion, we find the l-th row vector of {A} ⊗ ({B} ⊗ {C}) is given by


In view of Property 2.1, the row vectors of (3.7) and (3.9) are the same, which establishes the proof.

DEFINITION 3.2. Let {A}_N be a sequence of (m × n) matrices and E be a single (n × r) matrix. Then

(3.10)  {A}_N E ≜ { A_0 E, A_1 E, ..., A_{N-1} E }_N,

where each matrix in the sequence is (m × r). This leads to the next property.
Property 3.2.

(3.11)  ( {A}_N ⊗ B )( E ⊗ F ) = ( {A}_N E ) ⊗ ( B F ).

Proof. Using Definition 3.2 we obtain

(3.12)  ( {A}_N ⊗ B )( E ⊗ F ) = [ (A_0 ⊗ b_0)(E ⊗ F) ; (A_1 ⊗ b_1)(E ⊗ F) ; ··· ; (A_{N-1} ⊗ b_{N-1})(E ⊗ F) ]
                               = [ (A_0 E) ⊗ (b_0 F) ; (A_1 E) ⊗ (b_1 F) ; ··· ; (A_{N-1} E) ⊗ (b_{N-1} F) ]
                               = ( {A}_N E ) ⊗ ( B F ),

where the second equality follows from Property 2.2.

Property 3.2 may be understood as a generalization of Property 2.2. The next two identities are useful in developing sparse matrix factorizations.
Identity 3.1.

(3.13)  {A}_N ⊗ I_N = ⊕_{i=0}^{N-1} A_i.

The proof is easily established by noting that both sides of (3.13) yield a block-diagonal matrix containing the matrices A_i.

Identity 3.2. Let p sets of matrices be denoted by {A^{(k)}}, k = 0, 1, ..., p-1,² where each matrix is (m × n), and the k-th set has N_k = m^k matrices. Consider the matrix R formed as

(3.14)  R = {A^{(p-1)}}_{N_{p-1}} ⊗ {A^{(p-2)}}_{N_{p-2}} ⊗ ··· ⊗ {A^{(1)}}_{N_1} ⊗ A^{(0)}.

Note that, by construction, the last matrix A^{(0)} is a single matrix, as is R. The matrix R admits the sparse matrix factorization

(3.15)  R = [ ⊕_{i=0}^{N_{p-1}-1} A_i^{(p-1)} ] · [ I_n ⊗ ( ⊕_{i=0}^{N_{p-2}-1} A_i^{(p-2)} ) ] ··· [ I_{n^{p-1}} ⊗ A^{(0)} ].

²The superscript (k) indicates that the matrix A_i^{(k)} belongs to the k-th set.
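Identity 3.2 can be exercised numerically on the bit-reversed 8-point DFT. The sketch below builds R_8 from the recursion (4.5) with the matrix sets of (4.7), then forms the stage product: a direct sum of the finest set followed by identity-interleaved direct sums. This stage form is our reading of the factorization for m = n = 2, p = 3; in the left-product convention of the text, I_a ⊗ X corresponds to np.kron(X, np.eye(a)).

```python
import numpy as np

def gen_kron(A_set, B):
    """{A}_N ⊗ B (Definition 3.1), left-product convention."""
    return np.vstack([np.kron(B[i:i + 1, :], A_i)
                      for i, A_i in enumerate(A_set)])

def bdiag(mats):
    """Direct sum ⊕ A_i (block diagonal)."""
    rows = sum(M.shape[0] for M in mats)
    cols = sum(M.shape[1] for M in mats)
    out = np.zeros((rows, cols), dtype=complex)
    i = j = 0
    for M in mats:
        out[i:i + M.shape[0], j:j + M.shape[1]] = M
        i += M.shape[0]
        j += M.shape[1]
    return out

def bit_rev(i, bits):
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def B_set(N):
    """The N/2 matrices of (4.7) for a bit-reversed N-point DFT."""
    W = np.exp(-2j * np.pi / N)
    bits = int(np.log2(N // 2))
    return [np.array([[1, W ** bit_rev(i, bits)],
                      [1, -W ** bit_rev(i, bits)]], dtype=complex)
            for i in range(N // 2)]

H = np.array([[1, 1], [1, -1]], dtype=complex)

# R_8 from the recursion (4.5):  R_8 = {B}_4 ⊗ ({B}_2 ⊗ R_2)
R4 = gen_kron(B_set(4), H)
R8 = gen_kron(B_set(8), R4)

# Stage-by-stage sparse factorization (Identity 3.2, m = n = 2, p = 3):
F = (bdiag(B_set(8))
     @ np.kron(bdiag(B_set(4)), np.eye(2))   # I_2 ⊗ (A_0 ⊕ A_1), left convention
     @ np.kron(H, np.eye(4)))                # I_4 ⊗ A^(0), left convention
assert np.allclose(R8, F)

# Sanity check: R_8 is the 8-point DFT with rows in bit-reversed order.
n = np.arange(8)
F8 = np.exp(-2j * np.pi * np.outer(n, n) / 8)
assert np.allclose(R8, F8[[bit_rev(i, 3) for i in range(8)]])
```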


Proof. For convenience, we show the identity for m = n = 2 and p = 3, as the identity can be established for other dimensions by following essentially the same pattern of steps. Let R = {A^{(2)}}_4 ⊗ {A^{(1)}}_2 ⊗ A^{(0)}, where for convenience we set {D} = {A^{(2)}}_4 ⊗ {A^{(1)}}_2.

where for convenience we have set {D) = {A(2))48 {A(l)I2. Using Property 3.2 we have

Now, to solve for {D} ⊗ I_2, note from Identity 3.1 that

where we have also used Properties 3.2 and 2.6. By combining (3.17) and (3.18), we obtain the sparse matrix factorization

(3.19)  R = [ ⊕_{i=0}^{3} A_i^{(2)} ] · [ I_2 ⊗ ( A_0^{(1)} ⊕ A_1^{(1)} ) ] · [ I_4 ⊗ A^{(0)} ],

which agrees with (3.15) for m = 2 = n and p = 3.

From the expression of (3.15), we have the following properties.
Property 3.3. If each matrix A_i^{(k)} is (para-)unitary, then R is (para-)unitary. This result follows from (3.15) by noting that the Kronecker product, direct sum,

or matrix product of (para-)unitary matrices is a (para-)unitary matrix.
Property 3.4. If m = n, such that each matrix A_i^{(k)}, i = 0, ..., m^k - 1, k = 0, ..., p - 1, is square, then

(3.20)  det R = ∏_{k=0}^{p-1} ∏_{i=0}^{m^k-1} ( det A_i^{(k)} )^{m^{p-1-k}}.

This relation can be developed from the expression of (3.15) by successive application of the algebra of determinants.

4. Examples. In this section we provide some examples illustrating the utility of the notation proposed in the previous section. We consider first the representation of fast transform algorithms corresponding to a wide family of unitary transform


matrices. This exposition allows us to present some relations between the proposed generalized Kronecker product and multidimensional raster scan mappings of data. The section closes by presenting certain applications of the proposed Kronecker product notation in the context of single rate and multirate filter bank systems. We begin with the following example:

Example 4.1. Find a fast transform algorithm for an (8 x 8) complex BIFORE [27] matrix.

Solution. Let R_8 denote the (8 × 8) complex BIFORE matrix, which is given by

Observe that (4.1) can be partitioned as

which is, in terms of our present notation,

Note that the (4 × 4) matrix in (4.2b) is a 4-point complex BIFORE transform. Denoting this matrix by R_4 and continuing the process, we obtain


Thus, by combining (4.2b) and (4.3) and invoking Identity 3.2, we obtain the sparse matrix factorization

A flowgraph of (4.4) appears as Fig. 4.1, which agrees with known fast transform implementations [3].

From (4.2b) and (4.3) we observe that a recursion formula for the family of complex BIFORE matrices is easily expressed in the present notation. In particular, we have

(4.5)  R_N = {B}_{N/2} ⊗ R_{N/2},

where N is a power of two, and {B}_{N/2} has N/2 matrices in the set, with the i-th matrix (counting from i = 0 to N/2 - 1) given by

(4.6)  B_i = [ 1  -j ; 1  j ],  i = 1;    B_i = [ 1  1 ; 1  -1 ],  otherwise.

DFT matrices may also be expressed using (4.5), provided the rows are permuted into bit-reversed order (which, in a practical implementation, requires the output samples to be sorted if they are desired in normal order). In this case the i-th matrix of {B}_{N/2} is given by

(4.7)  B_i = [ 1  W_N^{⟨i⟩} ; 1  -W_N^{⟨i⟩} ],

where W_N = e^{-j2π/N} and ⟨i⟩ is the decimal number obtained from a bit reversal of the log_2(N/2)-bit binary representation of i [4, p. 157].³ The matrix sets in Example 3.2

³Since this notation is perhaps unfamiliar, we point out that for any power-of-two N, W_N^{⟨i⟩} produces the sequence {1, -j, e^{-jπ/4}, e^{-j3π/4}, ..., W_N^{N/2-1}}.


are recognized as {B}_4 and {B}_2, for example. Although (4.7) corresponds to radix-2 FFT algorithms, a similar recursion can be developed for any prime radix. For example, to obtain a radix-3 FFT recursion, in (4.5) N/2 can be replaced everywhere with N/3, and the i-th matrix in {B}_{N/3} can be defined accordingly, where now ⟨i⟩ is the decimal number obtained from a trit reversal of the log_3(N/3)-trit ternary representation of i, and so on. Nonprime radices can be split into prime factors, each of which may be treated in a manner analogous to that above.
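The bit- and trit-reversal index maps used above can be sketched as a single digit-reversal routine (the function name is ours, for illustration):

```python
def digit_reverse(i, ndigits, radix=2):
    """Reverse the ndigits-digit radix-r representation of i
    (bit reversal for radix 2, "trit" reversal for radix 3)."""
    r = 0
    for _ in range(ndigits):
        i, d = divmod(i, radix)
        r = r * radix + d
    return r

# Bit-reversed orderings used by the radix-2 recursion (4.7):
assert [digit_reverse(i, 2) for i in range(4)] == [0, 2, 1, 3]
assert [digit_reverse(i, 3) for i in range(8)] == [0, 4, 2, 6, 1, 5, 3, 7]

# Trit reversal for the radix-3 recursion:
assert [digit_reverse(i, 2, 3) for i in range(9)] == [0, 3, 6, 1, 4, 7, 2, 5, 8]
```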

Modified Walsh Hadamard Transform (MWHT) [4] matrices may also be expressed with the recursion of (4.5), with a suitable choice of the i-th matrix of {B}_{N/2}. We point out that the Haar transform may be derived from the MWHT through zonal bit-reversed ordering permutations of the input and output data [4], [18].

Finally, Hadamard matrices are known to satisfy the recursion [4], [5], [7]

(4.10)  R_N = [ 1  1 ; 1  -1 ] ⊗ R_{N/2},

which may be understood as the special case of (4.5) in which each B_i is identical. With the recursion of (4.5), the matrix R_N in all cases defined above satisfies

(4.11)  R_N R̃_N = N I_N,

so that R_N/√N is unitary. And, by appealing to Identity 3.2, fast transform algorithms in all cases are immediately available.
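A minimal sketch of the Hadamard recursion (4.10) and the normalization (4.11), using NumPy (for an all-identical set, the generalized product reduces to an ordinary Kronecker product, which is what np.kron computes):

```python
import numpy as np

def hadamard(N):
    """Build R_N from the recursion (4.10): R_N = [[1,1],[1,-1]] ⊗ R_{N/2},
    with N a power of two."""
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    R = np.array([[1.0]])
    while R.shape[0] < N:
        R = np.kron(H2, R)
    return R

R8 = hadamard(8)

# (4.11): R_N R_N^t = N I_N, so R_N / sqrt(N) is unitary.
assert np.allclose(R8 @ R8.T, 8 * np.eye(8))
```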


4.1. Relation to multidimensional raster scans. A useful conceptual interpretation of Kronecker products can be understood in terms of multidimensional raster scan mappings of data. Let us illustrate the connection by returning briefly to Example 4.1, where we developed the flowgraph of Fig. 4.1 by direct inspection of the matrix (4.1). Referring to Fig. 4.1, suppose the index of each input sample is represented in binary format, i.e., the indices range from 000, 001, ..., 111. The "address" implied by each index can be associated with the vertex of a cube in three-dimensional space. For example, the binary representation of 5 is 101, so that the input sample in(5) in Fig. 4.1 can be placed at the vector location (x, y, z) = (1, 0, 1) in three-dimensional space. The first set of butterfly operations (reading Fig. 4.1 from the left) are seen to operate pairwise between points whose addresses differ only in the x component. (For example, the sample in(1) is located at address 001, while the sample in(5) is located at address 101; the y and z components of the addresses agree, while the x components differ.) The results of the first stage of butterfly computations can overwrite the storage locations at the vertices of the cube. The second stage of the computations is seen to operate pairwise between points whose addresses differ in the y component, and the last stage is seen to operate between points whose addresses differ in the z component. We have thus given an alternate geometric interpretation of the computations shown in Fig. 4.1.

We should point out that a similar type of geometric interpretation can always be developed for the sparse matrix factorization of Identity 3.2. The product in (3.15) can be understood as applying a set of transformations to data with a p-dimensional rectangular support, with each data point assigned an address of the form (add_0, ..., add_{p-1}). The matrix A^{(0)} operates along the add_0 direction, the matrices in the set {A^{(1)}} operate along the add_1 direction, with potentially m distinct transformations (corresponding to the m matrices in the set), and so forth. By construction of Identity 3.2, the transformations along the add_0 direction are all the same (namely A^{(0)}); progressively more degrees of freedom are obtained until the last application along direction add_{p-1}, where m^{p-1} distinct transformations can be applied. A generalization, from the multidimensional raster mapping point of view, would be to allow different transformations along all directions. For example, the restriction that the transformations along the add_0 direction all be the same could be lifted; the matrix A^{(0)} could be effectively replaced by a set of matrices. A similar extension can obviously be applied to the other directions as well. If unitary transformations are applied at each stage, the overall transformation can be verified to be unitary, either by using intuitive arguments based on the norm-preservation property of unitary matrices, or by formal arguments based on closure properties of unitary matrices under various matrix products.

This further extension, based on geometric raster scan interpretations, retains all the attractive features of fast implementations, such as in-place computations, reduced computational load compared to a full matrix-vector multiply, access to intermediate products, etc. The corresponding Kronecker product formalism can, with liberal enough interpretation, be shown to be very much in the spirit of the work of Fino and Algazi [18]. The multidimensional raster scan interpretations, however, leave one very important question unanswered: under what conditions can a full matrix be factored into a multidimensional raster scan (or generalized Kronecker product) representation, and how do we proceed systematically with such factorizations? The important contribution of §3 is to present one such systematic method of developing sparse matrix factorizations, using a modest extension of the existing Kronecker product formalism. The generalized product as developed here retains some useful algebraic properties


of standard Kronecker products, and the factorizations obtained from Identity 3.2 all lend themselves to multidimensional raster scan implementations.

4.2. Filter bank applications. We consider first a simple DFT filter bank obtained as

(4.12)  h(z) = R_N a(z),

where R_N is an (N × N) DFT matrix, and a(z) is a "delay" vector obtained as

(4.13)  a(z) = [ 1  z^{-1}  z^{-2}  ···  z^{-(N-1)} ]^t.

The word "delay" is used for a(z), as the term z^{-1} denotes the backward delay operator [11]. Thus if a time series {u(k)} is applied to a(z), its output becomes [ u(n)  u(n-1)  ···  u(n-N+1) ]^t, where n is the time index of the most recent sample from the series.

The utility of this filter bank in signal processing stems from a modest signal decorrelation property obtained at the filter bank outputs. According to (4.12) and (4.13), we obtain

which can be written for z = e^{jω} as

(4.14)  Σ_{i=0}^{N-1} |H_i(e^{jω})|² = 1,  for all ω.

The filters H_k(z) are seen to be magnitude square complementary along the unit circle in the z-plane; in particular, if the gain of one of the filters approaches unity for some value of ω, the gain of the other filters must approach zero. The relation (4.14) is thus seen to constrain the degree of spectral overlap between the filters in the vector h(z); to the extent that this spectral overlap is small, the output signals from the separate filters will be approximately uncorrelated with one another. This decorrelation property can be exploited in adaptive filtering [28]-[30] and in transform coding applications [12], for example.
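The power-complementary relation (4.14) can be checked on a frequency grid. In the sketch below, each H_k(e^{jω}) = Σ_n R[k,n] e^{-jωn} is evaluated directly, with the DFT matrix scaled by 1/N so that the sum of squared magnitudes equals exactly one; this scaling convention is our assumption.

```python
import numpy as np

N = 8
n = np.arange(N)
R = np.exp(-2j * np.pi * np.outer(n, n) / N) / N   # DFT matrix, scaled by 1/N

for w in np.linspace(0.0, 2 * np.pi, 17):
    a = np.exp(-1j * w * n)          # delay vector a(e^{jw}) of (4.13)
    H = R @ a                        # filter responses H_k(e^{jw}), per (4.12)
    assert np.isclose(np.sum(np.abs(H) ** 2), 1.0)
```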

The choice of R_N as a DFT matrix in (4.12) is for convenience, since this choice also allows the output samples obtained from the filter bank to be interpreted as a short-time Fourier transform of a sliding window of the input samples. The elements H_k(z), k = 0, 1, ..., N-1, of h(z) are shown in this case to satisfy

Thus, the system can be understood as a bank of bandpass filters, where the frequency response of each band is a frequency shifted version of that of the adjacent band; the system offers a modest degree of decorrelation between the respective output signals.


Let us choose N = 8 and derive two different realizations. First, express⁴

where we have used the recursion formula of (4.5) and have appealed directly to Identity 3.2. A flowgraph representation of the filter bank appears as Fig. 4.2(a).

To obtain an alternate realization, observe that the delay vector can be factored as

FIG. 4.2(a). Simple DFT filter bank.

^4 This is actually an 8-point DFT matrix with its rows arranged in bit-reversed order.


600 PHILLIP A. REGALIA AND SANJIT K. MITRA

As such, the filter bank can be expressed as

where we have used Property 3.2. By using Identity 3.2 with m = 2, n = 1, and p = 3, we obtain

The corresponding signal flowgraph appears as Fig. 4.2(b). Although more delay elements are required, the computational requirements have been reduced to those of a pruned FFT algorithm [31], [32]. For transforms of larger dimension, this can represent a substantial saving in computational complexity. For example, with larger N (taken to be a power of two), a direct implementation of (4.12) as in Fig. 4.2(a) would require

Number of delays = N - 1

Number of butterflies = (N/2) log2 N,

whereas the equivalent realization as in Fig. 4.2(b) would require

Number of delays = (N/2) log2 N

Number of butterflies = N - 1,

where each butterfly operation may require one complex multiplication. In effect, we have traded computational complexity for storage requirements.
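The trade between storage and arithmetic can be tabulated directly from the two counts above; a small illustrative sketch (the function names are ours, not the paper's):

```python
import math

def direct_counts(N):
    """Resource counts for the direct realization, as in Fig. 4.2(a)."""
    return {"delays": N - 1, "butterflies": (N // 2) * int(math.log2(N))}

def pruned_counts(N):
    """Resource counts for the equivalent realization, as in Fig. 4.2(b)."""
    return {"delays": (N // 2) * int(math.log2(N)), "butterflies": N - 1}

for N in (8, 64, 1024):
    print(N, direct_counts(N), pruned_counts(N))
```

For N = 1024, the pruned form replaces 5120 butterflies with 1023, at the cost of 5120 delays instead of 1023, which is the storage-for-computation exchange described above.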

We consider finally the application of the Kronecker product notation in the con- text of multirate filter banks [34], which are a common element in subband coding systems [12]. The practical motivation behind signal coding in transmission/reception systems is the need to obtain high signal quality at the receiver using a modest bit transmission rate. Roughly speaking, this is achieved by extracting any redundancy



from the signal and compressing this redundancy into a finite set of parameters, the transmission of which is often more efficient than a direct transmission of the signal itself. One common example of such a data compression technique is the linear prediction of time series [33]; the set of finite parameters in this case corresponds to the coefficients of a linear prediction error filter. Other data compression strategies can be found in [12].

FIG. 4.2(b). Equivalent DFT filter bank.

In many practical applications the coding of a wideband signal may offer less than satisfactory performance. This is particularly true of speech signals, since the statistical and perceptual characteristics of human speech and hearing vary markedly from one frequency region to another. By first separating a signal into its various subband components, the signal coding operations can be specifically tailored to the separate subband signals. This technique is known as subband coding, and its successful application results in improved perceptual qualities of the received signal [12].

FIG. 4.3. N-band maximally decimated multirate system.

The practical implementation of subband coding requires the use of multirate signal processing systems; the connection is explained briefly here. The basic N-channel maximally decimated multirate filter bank system is shown in Fig. 4.3. The first component is the analysis filter bank, whose function is to split the input signal into its separate subband components, each of which is processed by a separate coding



stage. It should be noted that the analysis filter bank computes N output samples for each input sample; this increased data rate would appear to obscure any possible reduction in bit rate afforded by the coding stage. To circumvent this problem, the output sequences from the analysis bank filters are each subsampled by a factor of N, as indicated by the "down arrow" boxes. This subsampling operation consists of retaining only every Nth sample, and is commonly known as a "decimation" operation [34]. The coding stage operates at a lower sampling rate, which accounts for the label "multirate." At the receiver end, the (decoded) signals are upsampled by a factor of N (using the "up arrow" box), and then filtered into a composite output signal \hat{X}(z). The upsampling operation consists of inserting N-1 zero-valued samples between each decoded sample. The filters in the synthesis bank effectively interpolate the signal value between the nonzero samples.
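The two sampling-rate operations just described can be sketched in a few lines (the function names are illustrative, not from the paper):

```python
import numpy as np

def decimate(x, N):
    """Retain only every Nth sample (the 'down arrow' box)."""
    return x[::N]

def upsample(v, N):
    """Insert N-1 zero-valued samples between consecutive samples (the 'up arrow' box)."""
    y = np.zeros(len(v) * N, dtype=v.dtype)
    y[::N] = v
    return y

x = np.arange(12)
v = decimate(x, 4)       # keeps samples 0, 4, 8
y = upsample(v, 4)       # zeros inserted between the retained samples
assert np.array_equal(v, [0, 4, 8])
assert np.array_equal(y, [0, 0, 0, 0, 4, 0, 0, 0, 8, 0, 0, 0])
```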

In the absence of any coding errors (i.e., using perfect transmission quality), the multirate system of Fig. 4.3 may be interpreted in terms of classical sampling theorems [35]; we see that the input sequence {x(n)} is split into N subsampled sequences by the analysis filter bank, and it is the job of the synthesis filter bank to reconstruct as closely as possible the original input signal. In the following development we address a specific aspect of this reconstructability problem.

The sampling rate changers between the analysis and synthesis filter banks introduce aliasing distortion into each subband (to be clarified shortly), and in the absence of coding errors, the output signal \hat{X}(z) may be written as [36]

(4.19)    \hat{X}(z) = \frac{1}{N} \sum_{l=0}^{N-1} \Bigl[ \sum_{k=0}^{N-1} H_k(z W_N^l) G_k(z) \Bigr] X(z W_N^l),

where again W_N = e^{-j2\pi/N}. Each term X(zW_N^l), l ≠ 0, may be interpreted as the z-transform of a signal whose spectrum relates to that of the input signal by a rotation of 2πl/N radians in the complex z-plane. These components are called aliased versions of the input signal, and their presence in the output signal constitutes alias distortion, which is considered an objectionable by-product of the multirate operations.
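The appearance of the aliased terms X(zW_N^l) can be checked numerically: decimating by N and then upsampling by N multiplies the signal by a periodic sampling comb, which in the frequency domain superposes N rotated copies of the spectrum, each weighted by 1/N. A small sketch, using the length-L DFT as a stand-in for the z-transform evaluated on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 4, 32                  # decimation factor, signal length (N divides L)
x = rng.standard_normal(L)

y = np.zeros(L)
y[::N] = x[::N]               # decimate by N, then upsample by N

X = np.fft.fft(x)
# superposition of N shifted (rotated) spectral copies, each weighted by 1/N
Y_alias = sum(np.roll(X, l * L // N) for l in range(N)) / N
assert np.allclose(np.fft.fft(y), Y_alias)
```

The assertion holds because the sampling comb has DFT value L/N at every multiple of L/N and zero elsewhere, so the circular convolution collapses to the N shifted copies.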

By writing (4.19) in matrix form, it is clear that aliasing distortion is absent from the output signal \hat{X}(z) for all possible inputs X(z), if and only if the following set of relations holds:

\sum_{k=0}^{N-1} H_k(z W_N^l) G_k(z) = 0,    l = 1, 2, ..., N-1,

in the form

(4.20)    H(z) g(z) = T(z) e_1,

where e_1 is the unit vector with a "1" in the first position, and T(z)/N is the overall transfer function in the absence of any aliasing distortion. The matrix H(z) is termed the aliasing component (AC) matrix corresponding to h(z) [36]. Note that H(z) is (N × N), where N is the decimation factor. For a given analysis vector h(z), (4.20) provides the formula for the synthesis bank vector g(z) to provide alias-free reconstruction of the maximally decimated subband signals, provided the inverse of



the AC matrix can be found. One instance in which this is always possible is when the AC matrix is para-unitary [37]:

(4.21)    \tilde{H}(z) H(z) = I  for all z.

In this case a synthesis filter bank can be obtained as

(4.22)    g(z) = T(z) \tilde{H}(z) e_1.

Note that if h(z) is a stable transfer vector (stability here implying that the poles of its scalar transfer function entries lie strictly inside the unit circle in the z-plane), g(z) in (4.22) is in general unstable, since the term \tilde{H}(z) = [H^*(1/z^*)]^t will introduce poles outside the unit circle. In practice this is not limiting, since T(z) can be chosen as an all-pass function.^5 The zeros of the all-pass function T(z) are chosen to cancel the unstable poles of the synthesis bank; this has the effect of reflecting the unstable poles back inside the unit circle in the z-plane. The resulting filter bank retains the aliasing cancellation properties, and provides an overall transfer function which is all-pass.
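The pole-zero reflection underlying the all-pass choice of T(z) is easy to confirm for a first-order section (the pole location a below is arbitrary, chosen inside the unit circle):

```python
import numpy as np

a = 0.5 + 0.3j                                 # pole strictly inside the unit circle
w = np.linspace(0, 2 * np.pi, 64, endpoint=False)
z = np.exp(1j * w)

T = (z**-1 - np.conj(a)) / (1 - a * z**-1)     # first-order all-pass section
assert np.allclose(np.abs(T), 1.0)             # |T(e^{jw})| = 1 for all w

zero = 1 / np.conj(a)                          # zero at the reflected location 1/a*
assert abs(zero) > 1 and np.isclose(zero**-1, np.conj(a))
```

The magnitude is identically one because the numerator and denominator are conjugate reciprocals of each other on the unit circle, while the zero at 1/a* mirrors the pole at a through the unit circle, which is exactly the reflection used to stabilize the synthesis bank.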

FIG. 4.4. Tree-structured cascade of maximally decimated filter banks.

We investigate here the tree-structured cascade of maximally decimated filter bank stages, to see how the AC matrix of the overall filter bank relates to those of the individual sections in the tree. To this end, consider the tree structure of maximally decimated filter banks as per Fig. 4.4. For simplicity in the subsequent notation, we have shown a system where the first stage of the tree is a two-channel filter bank, each channel of which is split into N/2 channels, where for now N is assumed to be a power of 2. These restrictions are not crucial to our development, but serve to fix the notation in what follows.

Now, using multirate identities [34], the decimators in Fig. 4.4 can all be moved to the output of the overall filter bank, while replacing z by z^2 in the second stage of the tree, which leads to the conceptually equivalent filter bank of Fig. 4.5. The transfer vector obtained prior to the decimators is found as

(4.23)    h(z) = [F_0(z)  F_1(z)]^t ⊗ a(z^2).

Next, we introduce the AC matrices corresponding to each stage. For example, we can interpret

(4.24)    F(z) = [ F_0(z)   F_1(z)
                   F_0(-z)  F_1(-z) ]

^5 In filter theory, an all-pass function is defined as a function which satisfies |T(e^{jω})| = 1 for all real ω; one sees that no frequencies are attenuated by such a filter. If z = z_0 is a pole location of the filter, then z = 1/z_0^* is necessarily a zero location.



as the (2 × 2) AC matrix obtained if the transfer vector [F_0(z) F_1(z)]^t were to stand alone in a maximally decimated system, with a decimation factor of 2. Likewise, we can let A(z) denote the (N/2 × N/2) AC matrix corresponding to a(z) if it were to stand alone, with a decimation factor of N/2:

Our aim is to find the (N × N) AC matrix H(z) corresponding to the transfer vector h(z) in terms of the above two AC matrices. For notational convenience, let us define the permuted AC matrices

where P_b is the bit-reversed ordering permutation matrix of appropriate dimension, i.e., (N × N) in (4.26a), and (N/2 × N/2) in (4.26b). The following can then be shown.

FIG. 4.5. Equivalent system to Fig. 4.4 with all decimators moved to the filter bank output.

Property 4.1. Let h(z) be Kronecker decomposable as per (4.23). The AC matrix H'(z) may be Kronecker decomposed as

where the ith matrix B_i(z) in the set {B}_{N/2} is

(4.28)    B_i(z) = F(z W_N^i),    i = 0, 1, ..., N/2 - 1.

This relation thus shows the structure of the overall AC matrix in terms of the AC matrices of each section of the tree. In the remainder of this section, we examine conditions for para-unitariness of the various AC matrices.

Suppose first that the (2 × 2) matrix F(z) is para-unitary. Noting that each matrix B_i(z) in (4.28) is just a "frequency shifted" version of F(z), it follows that if one matrix in the set is para-unitary, so are the others. Now, let (4.27) be rewritten as



where we have used Property 3.2 and Identity 3.1. From the assumption that each matrix in the set {B}_{N/2} is para-unitary, it follows that the direct sum

B_0(z) ⊕ B_1(z) ⊕ ⋯ ⊕ B_{N/2-1}(z)

is para-unitary. Hence, (4.29) then reveals

\tilde{H}'(z) H'(z) = [I_2 ⊗ \tilde{A}'(z^2)][I_2 ⊗ A'(z^2)]

(4.30)             = I_2 ⊗ [\tilde{A}'(z^2) A'(z^2)],

which shows that H'(z) is para-unitary if and only if A'(z) is para-unitary. The physical interpretation is that if [F_0(z) F_1(z)]^t has a para-unitary AC matrix, then the augmented filter bank of Fig. 4.4 yields a para-unitary AC matrix if and only if the added section has a para-unitary AC matrix. In a similar fashion, it can be shown that if the added section a(z) has a para-unitary AC matrix, the overall filter bank has a para-unitary AC matrix if and only if the preceding stage has a para-unitary AC matrix. Thus, we have shown directly that if the AC matrix corresponding to each stage of a tree-structured filter bank is para-unitary, the AC matrix corresponding to the overall filter bank is also para-unitary, owing to the Kronecker decomposability of the overall AC matrix.
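At any fixed frequency, the algebraic core of this argument is the fact that the Kronecker product of unitary matrices is unitary, via the mixed-product rule (A ⊗ B)(C ⊗ D) = AC ⊗ BD. A quick numerical check of both facts:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(n):
    """QR factorization of a random complex matrix yields a unitary Q."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(m)
    return q

U, V = random_unitary(2), random_unitary(4)
W = np.kron(U, V)

# mixed-product rule: (U ⊗ V)^H (U ⊗ V) = (U^H U) ⊗ (V^H V) = I
assert np.allclose(W.conj().T @ W, np.eye(8))
assert np.allclose(np.kron(U, V) @ np.kron(U.conj().T, V.conj().T),
                   np.kron(U @ U.conj().T, V @ V.conj().T))
```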

5. Kronecker decompositions and Hadamard transformations. In this section, we derive some results concerning Hadamard matrices and polyadic permutation matrices. Namely, we show that the family of Hadamard matrices remains invariant under any bit-permuted ordering similarity transformation (to be clarified subsequently), and that any polyadic permutation matrix admits a Kronecker product decomposition into circulant permutation matrices.

The family of Hadamard matrices can be generated recursively according to

(5.1)    R_{2N} = R_2 ⊗ R_N,

where R_2 is the (2 × 2) Hadamard matrix:

R_2 = [ 1   1
        1  -1 ].

In practice, the dimension N of R_N in (5.1) is restricted to be a power of 2.

We begin by reviewing some relations between Kronecker product representations

and certain vector and matrix permutations. A general theory connecting Kronecker products and vec-permutations is well known [20]; we restrict ourselves here to Kronecker products between matrices of equal dimensions, and derive the following results in a simple fashion using a novel approach of index vectors.


If the order of appearance of terms in the Kronecker product is reversed to form x_new, i.e.,

(5.7)    x_new = x_{n-1} ⊗ ⋯ ⊗ x_1 ⊗ x_0,

then the elements of x_new can be obtained from those of x by a kit-reversed ordering permutation. (By kit we imply a base-k numbering system [15].) Note that rearranging the terms in a less trivial fashion would result in a kit-permuted ordering of x.

The same argument applies to a row vector by using transposition arguments. Hence

(5.9)    x_new^t = x_{n-1}^t ⊗ ⋯ ⊗ x_1^t ⊗ x_0^t,

and again the elements of x_new^t relate to those of x^t through a kit-reversed ordering permutation. Putting row and column arguments together, a matrix generalization follows. Thus, let C be the Kronecker product of (k × l) matrices:

(5.10)    C = C_0 ⊗ C_1 ⊗ ⋯ ⊗ C_{n-1},

and let C_new be defined by

(5.11)    C_new = C_{n-1} ⊗ ⋯ ⊗ C_1 ⊗ C_0.

The matrix C_new can be obtained from C by permuting the rows into kit-reversed order, and then permuting the columns into lit-reversed order. Mathematically, we have

(5.12)    C_new = P_k C P_l^t,

where P_k is the kit-reversed ordering permutation matrix and P_l is the lit-reversed ordering permutation matrix, both assumed to be of appropriate dimensions. (Observe that a kit-reversed ordering permutation mapping is its own inverse, so that P_k = P_k^t = P_k^{-1}.) The generalization to kit-permuted (or lit-permuted) orderings follows in a straightforward fashion.

From the above observations, we obtain the following property.

Property 5.1. The Hadamard family of matrices is invariant under any bit-permuted ordering similarity transformation.

Proof. The Hadamard family of matrices may be Kronecker decomposed as

(5.13)    R = R_2 ⊗ R_2 ⊗ ⋯ ⊗ R_2,

where R_2 is the (2 × 2) Hadamard matrix. It is clear that the order of terms in the Kronecker product of (5.13) may be rearranged arbitrarily without affecting the overall matrix, since each term is identical. Hence if P_bp denotes any bit-permuted ordering permutation matrix, R satisfies

(5.14)    R = P_bp R P_bp^t.

Since P_bp^t = P_bp^{-1}, (5.14) shows the desired property.



Property 5.2. Let the vectors x and y form a Hadamard transform pair, and let P_bp denote any bit-permuted ordering permutation matrix. Then P_bp x and P_bp y also form a Hadamard transform pair.

Proof. We are given

(5.15)    y = R x.

Since (5.14) can be rearranged as R = P_bp^{-1} R P_bp, substitution into (5.15) gives

y = P_bp^{-1} R P_bp x.

This in turn implies

P_bp y = R P_bp x,

showing the desired result. This property may be interpreted as a scrambling property, in that by scrambling a given vector x in a bit-permuted fashion, its Hadamard transform is scrambled in an identical fashion.

Having established the first aim of this section, we show next that any polyadic permutation matrix can be decomposed systematically into the Kronecker product of circulant permutation matrices. The proof is established first for dyadic permutation matrices, owing to simplicity of the steps, and then generalized to polyadic permutation matrices. First, let us review a dyadic shift [4], [7], [15]. A vector z is said to be dyadically shifted from x by l places if the elements of the two vectors satisfy

z_i = x_{(i) XOR (l)},

where (i) is the binary representation of the decimal number i, and "XOR" denotes the bitwise exclusive-OR operation. The concept of a dyadic shift is very important in the study of Hadamard transforms, as it plays a role analogous to that of a circular shift for the discrete Fourier transform. For example, the power spectrum of a Hadamard transform is invariant to a dyadic shift of the input data. Likewise, dyadic convolution and dyadic correlation results for the Hadamard transform parallel those of circular convolution and circular correlation for the DFT.

Since any permutation matrix can be characterized by a reordering of indices, we consider the (2^n × 1) index vector expressed as

i = [(0) (1) ⋯ (2^n - 1)]^t.

Let the dyadic shift index l be such that (l) has the mth bit as a 1 (counting the least significant bit as the zeroth bit), and all others as zero. It is clear that for any (i), (i) XOR (l) is obtained by inverting the mth bit. Hence a dyadic shift of l places would result in the new index vector

i_new = i_{n-1} ⊗ ⋯ ⊗ (P_c i_m) ⊗ ⋯ ⊗ i_0,

where

(5.19)    P_c = [ 0  1
                  1  0 ]

is the (2 × 2) elementary circular shift permutation matrix.



Since the above reasoning applies for any bit, we find for any shift index l, 0 ≤ l ≤ 2^n - 1, expanded in binary format as

(5.20)    (l) = b_{n-1} ⋯ b_1 b_0,

the dyadically shifted index vector takes the form

i_new = (P_c^{b_{n-1}} i_{n-1}) ⊗ ⋯ ⊗ (P_c^{b_1} i_1) ⊗ (P_c^{b_0} i_0),

where b_m is the mth bit of the binary representation of l as per (5.20). Observe that P_c^{b_m} = P_c of (5.19) if b_m = 1, and P_c^{b_m} = I if b_m = 0. By rearranging, we obtain

i_new = [P_c^{b_{n-1}} ⊗ ⋯ ⊗ P_c^{b_1} ⊗ P_c^{b_0}] i = P_d i,

whence P_d is recognized as the dyadic permutation matrix corresponding to a dyadic shift of l places, which shows the desired Kronecker product decomposability.
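The Kronecker decomposability of P_d is easy to verify numerically: building P_d as a Kronecker product of I_2 and the 2 × 2 exchange matrix, one factor per bit of l, reproduces the XOR reindexing z_i = x_{(i) XOR (l)}:

```python
import numpy as np

n = 3
N = 2**n
I2 = np.eye(2, dtype=int)
X2 = np.array([[0, 1], [1, 0]])            # 2x2 exchange (elementary circular shift)

def dyadic_perm(l):
    """P_d = P_c^{b_{n-1}} ⊗ ... ⊗ P_c^{b_0}, with b_m the bits of l."""
    P = np.eye(1, dtype=int)
    for m in reversed(range(n)):           # most significant bit leftmost
        P = np.kron(P, X2 if (l >> m) & 1 else I2)
    return P

x = np.arange(N)
for l in range(N):
    assert np.array_equal(dyadic_perm(l) @ x, x[np.arange(N) ^ l])
```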

The extension to base-k number systems (so-called k-adic shifts [15]) now follows with only modest effort; we outline the development as follows. Let (i) denote a base-k (k-ary) representation of an integer:

(5.22)    (i) = b_{n-1} ⋯ b_1 b_0,  where  i = \sum_{m=0}^{n-1} b_m k^m,

with 0 ≤ b_m ≤ k - 1 for all m. From this we form the index vector

(5.23)    i = [(0) (1) ⋯ (N-1)]^t,

with N = k^n. This index vector admits the Kronecker product factorization

where, as before, the subscript in each vector denotes the corresponding kit position, and the product of terms implied in (5.24) is to be taken notationally.

Now, given two vectors z and x, the elements of the two are said to be related through a polyadic shift of l places (with respect to a base k) if the elements of the two vectors satisfy

z_i = x_{(i) XOR (l)},

where (i) denotes the base-k representation of the integer i, and now "XOR" denotes the kit-wise modulo-k addition operation. Consider a k-adic shift of l places applied to the index vector of (5.24), with l expanded in a base-k decomposition as



Analogous to the reasoning leading to (5.20), the k-adically shifted index vector i_new takes the form

where P_c is the (k × k) elementary circular shift permutation matrix:

(5.28)    P_c = [ 0^t      1
                  I_{k-1}  0 ],

with 0 the ((k-1) × 1) vector of zeros. By rearranging (5.27), we obtain

whence P_c^{b_0} ⊗ P_c^{b_1} ⊗ ⋯ ⊗ P_c^{b_{n-1}} is recognized as the k-adic permutation matrix corresponding to a shift of l places. We then have the following property.

Property 5.3. A (k^n × k^n) permutation matrix is a k-adic permutation matrix if and only if it may be decomposed into a Kronecker product involving only powers of the (k × k) matrix P_c of (5.28). The exponent of each factor may be obtained from a base-k representation of the polyadic shift index.

The Kronecker product structure of a polyadic permutation matrix is intimately related to supercirculant matrices. For example, it is known that the eigenvectors of such matrices are Kronecker product decomposable [15]. Similarly, Zalcstein [38] uses a Kronecker product of circulant permutation matrices to rotate a supercirculant matrix into a circulant matrix. However, the definition of a polyadic permutation matrix as per Property 5.3 is, to our knowledge, novel.
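Property 5.3 can be exercised numerically for k = 3, n = 2. One caveat: the code below fixes one orientation for the elementary circular shift; the matrix in (5.28) may correspond to its transpose, which merely reverses the direction of the shift. The digit-manipulation helpers are ours, introduced only for the check.

```python
import numpy as np

k, n = 3, 2
N = k**n
# One orientation of the elementary circular shift (the paper's (5.28) may be its transpose).
Pc = np.roll(np.eye(k, dtype=int), -1, axis=0)

def digits(p):
    return [(p // k**m) % k for m in range(n)]        # b_0, b_1, ...

def kadic_perm(l):
    """Kronecker product of powers of Pc; exponents are the base-k digits of l."""
    P = np.eye(1, dtype=int)
    for b in reversed(digits(l)):                     # most significant digit leftmost
        P = np.kron(P, np.linalg.matrix_power(Pc, b))
    return P

def kadic_index(i, l):
    """Digit-wise modulo-k addition of (i) and (l)."""
    return sum(((di + dl) % k) * k**m
               for m, (di, dl) in enumerate(zip(digits(i), digits(l))))

x = np.random.default_rng(6).standard_normal(N)
for l in range(N):
    z = kadic_perm(l) @ x
    assert all(np.isclose(z[i], x[kadic_index(i, l)]) for i in range(N))
```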

We close this section by examining some consequences of Property 5.3. We show first a simple proof of the power spectral invariance property of Hadamard transforms with respect to a dyadic shift in the input vector. Let x be a (2^n × 1) vector, and let y and z be obtained, respectively, as the Hadamard transforms of x and a dyadically shifted version of x:

y = R x,

z = R P_d x,

where P_d denotes a dyadic permutation matrix. Since R^{-1} = R/2^n, we can solve for z in terms of y as

z = R P_d R^{-1} y = D y,

where, owing to the Kronecker decomposability of both R and P_d, we have

(5.30)    D = (1/2^n) R P_d R = [(1/2) R_2 P_c^{b_{n-1}} R_2] ⊗ ⋯ ⊗ [(1/2) R_2 P_c^{b_0} R_2].



Since P_c^{b_m} is either P_c^0 = I_2 or P_c of (5.19), each term in (5.30) is a diagonal matrix of elements ±1:

(1/2) R_2 P_c^{b_m} R_2 = I_2  or  (1/2) R_2 P_c R_2 = [ 1   0
                                                         0  -1 ].

The overall Kronecker product of (5.30) is thus also a diagonal matrix of elements ±1. This implies for the ith elements of y and z that

(5.31)    y_i^2 = z_i^2,

which shows the power spectral invariance of the Hadamard transformation under a dyadic shift.
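The invariance (5.31) can be confirmed numerically, using the Sylvester construction of R and an XOR reindexing in place of the explicit matrix P_d x:

```python
import numpy as np

n = 3
N = 2**n
R = np.array([[1]])
for _ in range(n):                           # Sylvester construction: R2 ⊗ R2 ⊗ R2
    R = np.kron(R, np.array([[1, 1], [1, -1]]))

x = np.random.default_rng(5).standard_normal(N)
y = R @ x                                    # Hadamard transform of x
for l in range(N):
    z = R @ x[np.arange(N) ^ l]              # transform of the dyadically shifted x
    assert np.allclose(y**2, z**2)           # (5.31): power spectrum unchanged
```

Every shift index l passes because each z differs from y only by the ±1 diagonal matrix D of (5.30), which squares away.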

Finally, we exploit a sparse matrix factorization of a polyadic permutation matrix, with application to a programmable polyadic shift structure. Such structures are useful, for example, in linear polyadic-invariant systems [14], [15]. Although dyadic (base-2) shifts can be implemented efficiently via direct manipulation of the binary representations intrinsic to a base-2 computer, polyadic shifts corresponding to other bases are not always so systematically implemented. For illustration purposes, we consider a (9 × 9) triadic permutation matrix P, which can always be generated according to

(5.32)    P = P_c^{b_0} ⊗ P_c^{b_1},

where P_c is the (3 × 3) elementary circular shift permutation matrix of (5.28) with k = 3, and the shift index l is given by

(5.33)    l = b_0 + b_1 · 3.

By applying Identity 3.2 to (5.32), we obtain

Note that the index m serves only as a counter in the direct summation above. A sketch of (5.34) appears as Fig. 5.1, and allows a programmable triadic shift by adjusting the powers of P_c at each stage, in direct correspondence with the ternary representation of the triadic shift index l as per (5.33). The technique can be extended to polyadic permutation matrices of larger dimensions in a straightforward fashion.

FIG. 5.1. Programmable triadic shift realization. The triadic shift index is adjustable by changing the exponent of the rotator in each stage.



6. Concluding remarks. The Kronecker product notation provides a useful formalism to treat patterned matrices and is intimately connected with efficient computation algorithms for unitary transformations. In this spirit, a generalized matrix product was proposed in §3 that inherits some useful algebraic properties from the standard Kronecker product. The notation allows a wide family of unitary matrices to be generated using simple variations of a single recursion formula. Closed-form expressions were presented for sparse matrix factorizations in terms of this generalized matrix product, which is useful in developing fast transform algorithms. Section 4 presented an example demonstrating the development of a sparse matrix factorization for a complex BIFORE matrix. Additional applications of the new notation were also shown for single-rate and multirate filter banks, as used in signal processing applications. Finally, §5 showed a simple result on the invariance of Hadamard matrices under a bit-permuted ordering similarity transformation, which allowed us to derive a simple scrambling property of the Hadamard transform. Also, it was shown that polyadic permutation matrices can be defined systematically as the Kronecker product of circulant permutation matrices.

7. Acknowledgments. The authors would like to thank the anonymous review- ers for their constructive criticisms.

REFERENCES

[1] J. W. COOLEY AND J. W. TUKEY, An algorithm for the machine calculation of complex Fourier series, Math. Comp., 19 (1965), pp. 297-301.
[2] N. AHMED AND S. M. CHENG, On matrix partitioning and a class of algorithms, IEEE Trans. Education, 13 (1970), pp. 103-105.
[3] N. AHMED, K. R. RAO, AND R. B. SCHULTZ, Fast complex BIFORE transform by matrix partitioning, IEEE Trans. Comput., 20 (1971), pp. 707-710.
[4] N. AHMED AND K. R. RAO, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, Berlin, New York, 1975.
[5] D. F. ELLIOTT AND K. R. RAO, Fast Transforms, Academic Press, New York, 1982.
[6] Z. WANG, New algorithm for the slant transform, IEEE Trans. Pattern Analysis and Machine Intelligence, 4 (1982), pp. 551-555.
[7] K. G. BEAUCHAMP, Applications of Walsh and Related Functions, Academic Press, New York, 1984.
[8] Z. WANG, Fast algorithms for the discrete W transform and for the discrete Fourier transform, IEEE Trans. Acoust. Speech Signal Process., 32 (1984), pp. 803-816.
[9] R. STORN, Fast algorithms for the discrete Hartley transform, Archiv Elektronik und Übertragungstechnik, 40 (1986), pp. 233-240.
[10] H. S. HOU, The fast Hartley transform algorithm, IEEE Trans. Comput., 36 (1987), pp. 147-156.
[11] A. V. OPPENHEIM AND R. W. SCHAFER, Digital Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1975.
[12] N. S. JAYANT AND P. NOLL, Digital Coding of Waveforms, Prentice-Hall, Englewood Cliffs, NJ, 1984.
[13] W. K. PRATT, W.-H. CHEN, AND L. R. WELCH, Slant transform image coding, IEEE Trans. Comm., 22 (1974), pp. 1075-1093.
[14] J. PEARL, Optimal dyadic models of time-invariant systems, IEEE Trans. Comput., 24 (1975), pp. 598-603.
[15] B. G. AGRAWAL AND K. SHENOI, m-adic (polyadic) invariance and its applications, IEEE Trans. Acoust. Speech Signal Process., 29 (1981), pp. 207-213.
[16] H. C. ANDREWS AND J. KANE, Kronecker matrices, computer implementation, and generalized spectra, J. Assoc. Comput. Mach., 17 (1970), pp. 260-268.
[17] H. C. ANDREWS AND K. L. CASPARI, A generalized technique for spectral analysis, IEEE Trans. Comput., 19 (1970), pp. 16-25.
[18] B. J. FINO AND V. R. ALGAZI, A unified treatment of discrete fast unitary transforms, SIAM J. Comput., 6 (1977), pp. 700-717.
[19] V. VLASENKO AND K. R. RAO, Unified matrix treatment of discrete transforms, IEEE Trans. Comput., 28 (1979), pp. 934-938.
[20] H. V. HENDERSON AND S. R. SEARLE, The vec-permutation matrix, the vec operator and Kronecker products: a review, Linear and Multilinear Algebra, 9 (1981), pp. 271-288.
[21] J. R. MAGNUS AND H. NEUDECKER, The commutation matrix: some properties and applications, Ann. Statist., 7 (1979), pp. 381-394.
[22] H. NEUDECKER, Some theorems on matrix differentiation with special reference to Kronecker matrix products, J. Amer. Statist. Assoc., 64 (1969), pp. 953-963.
[23] F. A. GRAYBILL, Matrices with Applications in Statistics, Wadsworth International Group, Belmont, CA, 1983.
[24] J. W. BREWER, Kronecker products and matrix calculus in system theory, IEEE Trans. Circuits and Systems, 25 (1978), pp. 772-781.
[25] P. A. REGALIA, Tree-structured complementary filter banks, M.Sc. thesis, University of California, Santa Barbara, CA, April 1987.
[26] R. E. BELLMAN, Matrix Analysis, McGraw-Hill, New York, 1978.
[27] K. R. RAO AND N. AHMED, Complex BIFORE transform, Internat. J. Systems Sci., 2 (1971), pp. 149-162.
[28] S. S. NARAYAN AND A. M. PETERSON, Frequency domain least-mean-square algorithm, Proc. IEEE, 69 (1981), pp. 124-126.
[29] S. S. NARAYAN, A. M. PETERSON, AND M. J. NARASIMHA, Transform domain LMS algorithm, IEEE Trans. Acoust. Speech Signal Process., 31 (1983), pp. 609-615.
[30] J. C. LEE AND C. K. UN, Performance of transform-domain LMS adaptive filters, IEEE Trans. Acoust. Speech Signal Process., 34 (1986), pp. 499-510.
[31] J. D. MARKEL, FFT pruning, IEEE Trans. Audio Electroacoust., 19 (1971), pp. 305-311.
[32] D. P. SKINNER, Pruning the decimation in-time FFT algorithm, IEEE Trans. Acoust. Speech Signal Process., 24 (1976), pp. 193-194.
[33] J. MAKHOUL, Linear prediction: a tutorial review, Proc. IEEE, 63 (1975), pp. 561-580.
[34] R. E. CROCHIERE AND L. R. RABINER, Multirate Digital Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1983.
[35] P. P. VAIDYANATHAN AND V. C. LIU, Classical sampling theorems in the context of multirate and polyphase digital filter bank structures, IEEE Trans. Acoust. Speech Signal Process., 36 (1988), pp. 1480-1495.
[36] M. J. T. SMITH AND T. P. BARNWELL III, A unifying framework for analysis/synthesis systems based on maximally decimated filter banks, Proc. IEEE Internat. Conf. Acoust. Speech Signal Process., Tampa, FL, March 1985, pp. 521-524.
[37] P. P. VAIDYANATHAN, Theory and design of M-channel maximally decimated quadrature mirror filters with arbitrary M, having the perfect reconstruction property, IEEE Trans. Acoust. Speech Signal Process., 35 (1987), pp. 476-492.
[38] Y. ZALCSTEIN, A note on fast cyclic convolution, IEEE Trans. Comput., 20 (1971), pp. 665-666.