
  • 7/27/2019 Lecture Notes on orthogonal polynomials of several variables by Dr Yaun Xu.pdf


Inzell Lectures on Orthogonal Polynomials
W. zu Castell, F. Filbir, B. Forster (eds.)
Advances in the Theory of Special Functions and Orthogonal Polynomials
Nova Science Publishers
Volume 2, 2004, pages 135-188

Lecture notes on orthogonal polynomials of several variables

    Yuan Xu

    Department of Mathematics, University of Oregon

    Eugene, Oregon 97403-1222, U.S.A.

    [email protected]

Summary: These lecture notes provide an introduction to orthogonal polynomials of several variables. They cover the basic theory but deal mostly with examples, paying special attention to those orthogonal polynomials associated with classical type weight functions supported on the standard domains, for which fairly explicit formulae exist. There are few prerequisites for these lecture notes; a working knowledge of classical orthogonal polynomials of one variable suffices.

    Contents

1. Introduction
1.1 Definition: one variable vs several variables
1.2 Example: orthogonal polynomials on the unit disc
1.3 Orthogonal polynomials for classical type weight functions
1.4 Harmonic and h-harmonic polynomials
1.5 Fourier orthogonal expansion
1.6 Literature
2. General properties
2.1 Three-term relations
2.2 Common zeros of orthogonal polynomials
3. h-harmonics and orthogonal polynomials on the sphere
3.1 Orthogonal polynomials on the unit ball and on the unit sphere
3.2 Orthogonal polynomials for the product weight functions
3.3 h-harmonics for a general reflection group
4. Orthogonal polynomials on the unit ball
4.1 Differential-difference equation
4.2 Orthogonal bases and reproducing kernels
4.3 Rotation invariant weight function
5. Orthogonal polynomials on the simplex
6. Classical type product orthogonal polynomials
6.1 Multiple Jacobi polynomials
6.2 Multiple Laguerre polynomials
6.3 Multiple generalized Hermite polynomials
7. Fourier orthogonal expansions
7.1 h-harmonic expansions
7.2 Orthogonal expansions on $B^d$ and on $T^d$
7.3 Product type weight functions
8. Notes and Literature
References

    1. Introduction

1.1. Definition: one variable vs several variables. Let $\mu$ be a positive Borel measure on $\mathbb{R}$ with finite moments. For $n \in \mathbb{N}_0$, a nonnegative integer, the number $\mu_n = \int_{\mathbb{R}} t^n d\mu$ is the $n$-th moment of $d\mu$. The standard Gram-Schmidt process applied to the sequence $\{\mu_n\}$ with respect to the inner product
$$\langle f, g\rangle = \int_{\mathbb{R}} f(t)g(t)\,d\mu(t)$$
of $L^2(d\mu)$ gives a sequence of orthogonal polynomials $\{p_n\}_{n=0}^\infty$, which satisfies $\langle p_n, p_m\rangle = 0$ if $n \ne m$, and $p_n$ is a polynomial of degree exactly $n$. The orthogonal polynomials are unique up to a constant multiple. They are called orthonormal if, in addition, $\langle p_n, p_n\rangle = 1$, and we assume that the measure is normalized by $\int d\mu = 1$ when dealing with orthonormality. If $d\mu = w(t)dt$, we say that the $p_n$ are associated with the weight function $w$. The orthogonal polynomials enjoy many properties, which make them a useful tool in various applications and a rich source of research problems. A starting point of orthogonal polynomials of several variables is to extend those properties from one to several variables.
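This Gram-Schmidt construction from moments is easy to carry out numerically. The sketch below assumes the normalized Lebesgue measure $d\mu = dt/2$ on $[-1,1]$, for which the orthonormal $p_n$ are multiples of the Legendre polynomials; the helper name `gram_schmidt_op` is ours, not from the notes.

```python
import numpy as np

def gram_schmidt_op(moments, n):
    """Orthonormalize 1, t, ..., t^n against <t^i, t^j> = moments[i+j].
    Returns the coefficient vectors of p_0, ..., p_n in the monomial basis."""
    # Gram matrix of the monomials: G[i, j] = <t^i, t^j>.
    G = np.array([[moments[i + j] for j in range(n + 1)]
                  for i in range(n + 1)], float)
    basis = []
    for k in range(n + 1):
        v = np.zeros(n + 1)
        v[k] = 1.0                      # start from the monomial t^k
        for p in basis:                 # subtract projections onto lower p_j
            v -= (p @ G @ v) * p
        v /= np.sqrt(v @ G @ v)         # normalize so <p_k, p_k> = 1
        basis.append(v)
    return np.array(basis)

# Normalized Lebesgue measure dt/2 on [-1, 1]: mu_k = 1/(k+1) for even k, else 0.
moments = [(1.0 / (k + 1) if k % 2 == 0 else 0.0) for k in range(9)]
P = gram_schmidt_op(moments, 4)
# p_2 is proportional to the Legendre polynomial (3t^2 - 1)/2.
print(P[2])
```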


To deal with polynomials in several variables we use the standard multi-index notation. A multi-index is denoted by $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}_0^d$. For $\alpha \in \mathbb{N}_0^d$ and $x \in \mathbb{R}^d$ a monomial in the variables $x_1, \ldots, x_d$ of index $\alpha$ is defined by
$$x^\alpha = x_1^{\alpha_1} \cdots x_d^{\alpha_d}.$$
The number $|\alpha| = \alpha_1 + \cdots + \alpha_d$ is called the total degree of $x^\alpha$. We denote by $\mathcal{P}_n^d := \mathrm{span}\{x^\alpha : |\alpha| = n, \alpha \in \mathbb{N}_0^d\}$ the space of homogeneous polynomials of degree $n$, by $\Pi_n^d := \mathrm{span}\{x^\alpha : |\alpha| \le n, \alpha \in \mathbb{N}_0^d\}$ the space of polynomials of (total) degree at most $n$, and we write $\Pi^d = \mathbb{R}[x_1, \ldots, x_d]$ for the space of all polynomials in $d$ variables. It is well known that
$$r_n^d := \dim \mathcal{P}_n^d = \binom{n+d-1}{n} \qquad\text{and}\qquad \dim \Pi_n^d = \binom{n+d}{n}.$$
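Both dimension formulas can be confirmed by brute-force enumeration of multi-indices; a minimal sketch (the helper name is ours):

```python
from itertools import product
from math import comb

def dim_homogeneous(d, n):
    """Count multi-indices alpha in N_0^d with |alpha| = n by enumeration."""
    return sum(1 for a in product(range(n + 1), repeat=d) if sum(a) == n)

for d in (1, 2, 3, 4):
    for n in range(6):
        # dim P_n^d = C(n+d-1, n)
        assert dim_homogeneous(d, n) == comb(n + d - 1, n)
        # dim Pi_n^d is the sum of the homogeneous dimensions up to n
        assert sum(dim_homogeneous(d, k) for k in range(n + 1)) == comb(n + d, n)
print("dimension formulas verified")
```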

Let $\mu$ be a positive Borel measure on $\mathbb{R}^d$ with finite moments. For $\alpha \in \mathbb{N}_0^d$, denote by $\mu_\alpha = \int_{\mathbb{R}^d} x^\alpha\, d\mu(x)$ the moments of $\mu$. We can apply the Gram-Schmidt process to the monomials with respect to the inner product
$$\langle f, g\rangle = \int_{\mathbb{R}^d} f(x)g(x)\,d\mu(x)$$
of $L^2(d\mu)$ to produce a sequence of orthogonal polynomials of several variables. One problem, however, appears immediately: orthogonal polynomials of several variables are not unique. In order to apply the Gram-Schmidt process, we need to give a linear order to the moments $\mu_\alpha$, which means an order amongst the multi-indices of $\mathbb{N}_0^d$. There are many choices of well-defined total order (for example, the lexicographic order or the graded lexicographic order); but there is no natural choice, and different orders will give different sequences of orthogonal polynomials. Instead of fixing a total order, we shall say that $P \in \Pi_n^d$ of degree $n$ is an orthogonal polynomial of degree $n$ with respect to $d\mu$ if $\langle P, Q\rangle = 0$ for all $Q \in \Pi^d$ with $\deg Q < n$. The space of orthogonal polynomials of degree exactly $n$ is denoted by $\mathcal{V}_n^d$.


For example, this allows us to derive a proper analogue of the three-term relation for orthogonal polynomials in several variables and to prove a Favard-type theorem. We adopt this point of view and discuss results of this nature in Section 2.

1.2. Example: orthogonal polynomials on the unit disc. Before we go on with the general theory, let us consider an example of orthogonal polynomials with respect to the weight function
$$W_\mu(x,y) = \frac{2\mu+1}{2\pi}\,(1 - x^2 - y^2)^{\mu - 1/2}, \qquad \mu > -1/2, \quad (x,y) \in B^2,$$
on the unit disc $B^2 = \{(x,y) : x^2 + y^2 \le 1\}$. The weight function is normalized so that its integral over $B^2$ is 1. Among all possible choices of orthogonal bases for $\mathcal{V}_n^2$, we are interested in those for which fairly explicit formulae exist. Several families of such bases are given below.

For polynomials of two variables, the monomials of degree $n$ can be ordered by $\{x^n, x^{n-1}y, \ldots, xy^{n-1}, y^n\}$. Instead of using the notation $P_\alpha$, $|\alpha| = \alpha_1 + \alpha_2 = n$, to denote a basis for $\mathcal{V}_n^2$, we sometimes use the notation $P_k^n$ with $k = 0, 1, \ldots, n$. The orthonormal bases given below are in terms of the classical Jacobi and Gegenbauer polynomials. The Jacobi polynomials are denoted by $P_n^{(a,b)}$, which are orthogonal polynomials with respect to $(1-x)^a(1+x)^b$ on $[-1,1]$ and normalized by $P_n^{(a,b)}(1) = \binom{n+a}{n}$, and the Gegenbauer polynomials are denoted by $C_n^\lambda$, which are orthogonal with respect to $(1-x^2)^{\lambda-1/2}$ on $[-1,1]$, and
$$C_n^\lambda(x) = \frac{(2\lambda)_n}{(\lambda+1/2)_n}\, P_n^{(\lambda-1/2,\,\lambda-1/2)}(x),$$
where $(c)_n = c(c+1)\cdots(c+n-1)$ is the Pochhammer symbol.
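As a sanity check, the Gegenbauer-Jacobi relation above can be verified numerically with SciPy, whose `eval_gegenbauer` and `eval_jacobi` use the same standard normalizations:

```python
import numpy as np
from scipy.special import eval_gegenbauer, eval_jacobi, poch

# Check C_n^lam(x) = ((2 lam)_n / (lam + 1/2)_n) P_n^(lam-1/2, lam-1/2)(x)
# on a grid of x, for a few degrees n and parameters lam.
x = np.linspace(-1.0, 1.0, 7)
for lam in (0.3, 1.0, 2.5):
    for n in range(6):
        lhs = eval_gegenbauer(n, lam, x)
        rhs = (poch(2 * lam, n) / poch(lam + 0.5, n)
               * eval_jacobi(n, lam - 0.5, lam - 0.5, x))
        assert np.allclose(lhs, rhs)
print("Gegenbauer-Jacobi relation verified")
```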

1.2.1. First orthonormal basis. Consider the family
$$P_k^n(x,y) = h_{k,n}\, C_{n-k}^{k+\mu+1/2}(x)\,(1-x^2)^{k/2}\, C_k^\mu\!\left(\frac{y}{\sqrt{1-x^2}}\right), \qquad 0 \le k \le n,$$
where the $h_{k,n}$ are normalization constants.

Since $C_k^\mu(x)$ is an even function if $k$ is even and an odd function if $k$ is odd, the $P_k^n$ are indeed polynomials in $\Pi_n^2$. The orthogonality of these polynomials can be verified using the formula
$$\int_{B^2} f(x,y)\,dxdy = \int_{-1}^{1}\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} f(x,y)\,dydx = \int_{-1}^{1}\int_{-1}^{1} f\big(x, \sqrt{1-x^2}\,t\big)\,\sqrt{1-x^2}\,dxdt.$$
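The substitution $y = \sqrt{1-x^2}\,t$ behind the second equality can be confirmed numerically; a sketch with SciPy, where the integrand `f` is an arbitrary test function of our choosing:

```python
import numpy as np
from scipy.integrate import dblquad

# Arbitrary test integrand on the disc B^2.
f = lambda x, y: x**2 * np.cos(y) + 1.0

# Direct integral over B^2 (dblquad's integrand is func(y, x),
# with y the inner variable running between gfun(x) and hfun(x)).
lhs, _ = dblquad(lambda y, x: f(x, y),
                 -1.0, 1.0,
                 lambda x: -np.sqrt(1.0 - x**2),
                 lambda x: np.sqrt(1.0 - x**2))

# Substituted integral: y = sqrt(1-x^2) t, so dy = sqrt(1-x^2) dt.
rhs, _ = dblquad(lambda t, x: f(x, np.sqrt(1.0 - x**2) * t) * np.sqrt(1.0 - x**2),
                 -1.0, 1.0, lambda x: -1.0, lambda x: 1.0)

print(lhs, rhs)  # the two integrals agree
```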


1.2.2. Second orthonormal basis. Using polar coordinates $x = r\cos\theta$, $y = r\sin\theta$, we define
$$h_{j,1}^n\, P_j^{(\mu-1/2,\, n-2j+\frac{d-2}{2})}(2r^2-1)\, r^{n-2j}\cos\big((n-2j)\theta\big), \qquad 0 \le 2j \le n,$$
$$h_{j,2}^n\, P_j^{(\mu-1/2,\, n-2j+\frac{d-2}{2})}(2r^2-1)\, r^{n-2j}\sin\big((n-2j)\theta\big), \qquad 0 \le 2j \le n-1,$$
where the $h_{j,i}^n$ are normalization constants.

For each $n$ these give exactly $n+1$ polynomials. That they are indeed polynomials in $(x,y)$ of degree $n$ can be verified using the relations $r = \|x\|$,
$$\cos m\theta = T_m(x/\|x\|) \qquad\text{and}\qquad \sin m\theta/\sin\theta = U_{m-1}(x/\|x\|),$$
where $T_m$ and $U_m$ are the Chebyshev polynomials of the first and the second kind. The orthogonality of these polynomials can be verified using the formula
$$\int_{B^2} f(x,y)\,dxdy = \int_0^1 \int_0^{2\pi} f(r\cos\theta, r\sin\theta)\, r\,d\theta dr.$$

1.2.3. An orthogonal basis. A third set is given by
$$P_k^n(x,y) = C_n^{\mu+1/2}\!\left(x\cos\frac{k\pi}{n+1} + y\sin\frac{k\pi}{n+1}\right), \qquad 0 \le k \le n.$$
In particular, if $\mu = 1/2$, then the polynomials
$$P_k^n(x,y) = \frac{1}{\sqrt{\pi}}\, U_n\!\left(x\cos\frac{k\pi}{n+1} + y\sin\frac{k\pi}{n+1}\right), \qquad 0 \le k \le n,$$
form an orthonormal basis with respect to the Lebesgue measure on $B^2$. The case $\mu = 1/2$ first appeared in [22] in connection with a problem in computer tomography.
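For $\mu = 1/2$ the stated orthonormality can be tested by direct numerical integration; a sketch (the helpers `P` and `inner` are ours):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import eval_chebyu

n = 2  # degree

def P(k):
    """P_k^n for mu = 1/2: (1/sqrt(pi)) U_n(x cos(k pi/(n+1)) + y sin(k pi/(n+1)))."""
    a = k * np.pi / (n + 1)
    return lambda x, y: eval_chebyu(n, x * np.cos(a) + y * np.sin(a)) / np.sqrt(np.pi)

def inner(f, g):
    """Inner product over the unit disc with Lebesgue measure."""
    val, _ = dblquad(lambda y, x: f(x, y) * g(x, y),
                     -1.0, 1.0,
                     lambda x: -np.sqrt(1.0 - x**2),
                     lambda x: np.sqrt(1.0 - x**2))
    return val

print(inner(P(0), P(0)))  # close to 1
print(inner(P(0), P(1)))  # close to 0
```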

1.2.4. Appell's monomial and biorthogonal bases. The polynomials in these bases are denoted by $V_k^n$ and $U_k^n$ for $0 \le k \le n$ (cf. [2]). The polynomials $V_k^n$ are defined by the generating function
$$\big(1 - 2(b_1x + b_2y) + \|b\|^2\big)^{-\mu-1/2} = \sum_{n=0}^\infty \sum_{k=0}^n b_1^k b_2^{n-k}\, V_k^n(x,y), \qquad b = (b_1, b_2),$$
and they are called the monomial orthogonal polynomials since
$$V_k^n(x,y) = x^k y^{n-k} + q(x,y),$$
where $q \in \Pi_{n-1}^2$. The polynomials $U_k^n$ are defined by
$$U_k^n(x,y) = (1-x^2-y^2)^{-\mu+1/2}\, \frac{\partial^n}{\partial x^k \partial y^{n-k}}\, (1-x^2-y^2)^{n+\mu-1/2}.$$


Both $V_k^n$ and $U_k^n$ belong to $\mathcal{V}_n^2$, and they are biorthogonal in the sense that
$$\int_{B^2} V_k^n(x,y)\, U_j^n(x,y)\, W_\mu(x,y)\, dxdy = 0, \qquad k \ne j.$$
The orthogonality follows from a straightforward computation using integration by parts.

1.3. Orthogonal polynomials for classical type weight functions. In the ideal situation, one would like to have fairly explicit formulae for orthogonal polynomials and their various structural constants (such as the $L^2$-norm). The classical orthogonal polynomials of one variable are good examples. These polynomials include the Hermite polynomials $H_n(t)$ associated with the weight function $e^{-t^2}$ on $\mathbb{R}$, the Laguerre polynomials $L_n^a(t)$ associated with $t^a e^{-t}$ on $\mathbb{R}_+ = [0,\infty)$, and the Jacobi polynomials $P_n^{(a,b)}(t)$ associated with $(1-t)^a(1+t)^b$ on $[-1,1]$. Up to an affine linear transformation, they are the only families of orthogonal polynomials (with respect to a positive measure) that are eigenfunctions of a second order differential operator.

One obvious extension to several variables is via tensor products. For $1 \le j \le d$ let $w_j$ be a weight function on the interval $I_j \subset \mathbb{R}$ and denote by $p_{n,j}$ the orthogonal polynomials of degree $n$ with respect to $w_j$. Then for the product weight function
$$W(x) = w_1(x_1)\cdots w_d(x_d), \qquad x \in I_1 \times \cdots \times I_d,$$
the product polynomials $P_\alpha(x) = \prod_{j=1}^d p_{\alpha_j,j}(x_j)$ are orthogonal with respect to $W$. Hence, as extensions of the classical orthogonal polynomials, we have the product Hermite polynomials associated with
$$W_H(x) = e^{-\|x\|^2}, \qquad x \in \mathbb{R}^d,$$
the product Laguerre polynomials associated with
$$W_L(x) = x^\kappa e^{-|x|}, \qquad x \in \mathbb{R}_+^d, \quad \kappa_i > -1,$$
and the product Jacobi polynomials associated with
$$W_{a,b}(x) = \prod_{i=1}^d (1-x_i)^{a_i}(1+x_i)^{b_i}, \qquad x \in [-1,1]^d, \quad a_i, b_i > -1,$$
as well as mixed products of these polynomials. Throughout these lectures, the notation $\|x\|$ stands for the Euclidean norm and $|x| = |x_1| + \cdots + |x_d|$ stands for the $\ell^1$-norm of $x \in \mathbb{R}^d$. The product basis in this case is also the monomial basis. There are other interesting orthogonal bases; for example, for the product Hermite weight function, another orthonormal basis can be given in polar coordinates.
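The tensor-product construction is easy to illustrate numerically in the Hermite case; the sketch below uses Gauss-Hermite quadrature, which integrates polynomials exactly against $e^{-x^2}$ in each variable (the setup is ours, using SciPy's physicists' Hermite polynomials):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite

# Gauss-Hermite nodes/weights: exact for polynomials against e^{-t^2}.
nodes, weights = hermgauss(20)
X, Y = np.meshgrid(nodes, nodes)
W = np.outer(weights, weights)  # product quadrature weights on R^2

def product_hermite(a1, a2, x, y):
    """P_alpha(x, y) = H_{a1}(x) H_{a2}(y), orthogonal w.r.t. e^{-x^2 - y^2}."""
    return eval_hermite(a1, x) * eval_hermite(a2, y)

# Distinct multi-indices give orthogonal polynomials.
ip = np.sum(W * product_hermite(2, 1, X, Y) * product_hermite(1, 2, X, Y))
print(abs(ip) < 1e-8)  # True
```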

More interesting, however, are extensions that are not of product type. The geometry of $\mathbb{R}^d$ is rich; there are other attractive regular domains: the unit ball $B^d = \{x : \|x\| \le 1\}$ and the simplex $T^d = \{x \in \mathbb{R}^d : x_1 \ge 0, \ldots, x_d \ge 0,\ 1 - |x| \ge 0\}$ are two examples. There are orthogonal polynomials on $B^d$ and on $T^d$ for which explicit formulae exist, and their study goes back at least as far as Hermite (cf. [2] and [11, Vol. 2, Chapt. 12]). The weight functions are
$$W_\mu^B(x) = (1 - \|x\|^2)^{\mu-1/2}, \qquad x \in B^d, \quad \mu > -1/2,$$
and
$$W_\kappa^T(x) = \prod_{i=1}^d |x_i|^{\kappa_i - 1/2}\, (1 - |x|)^{\kappa_{d+1} - 1/2}, \qquad x \in T^d, \quad \kappa_i > -1/2.$$
In both cases, there are explicit orthonormal bases that can be given in terms of Jacobi polynomials. In Sections 4 and 5 we discuss these two cases and their extensions.

There is no general agreement on what should be called classical orthogonal polynomials of several variables. For $d = 2$, Krall and Sheffer [18] gave a classification of orthogonal polynomials that are eigenfunctions of a second order partial differential operator, which shows that only five such families are orthogonal with respect to a positive measure: product Hermite, product Laguerre, product Hermite-Laguerre, orthogonal polynomials with respect to $W_\mu^B$ on the disc, and those with respect to $W_\kappa^T$ on the triangle $T^2$. Clearly these families should be called classical, but perhaps equally entitled are the product Jacobi polynomials and a score of others.

1.4. Harmonic and h-harmonic polynomials. Another classical example of orthogonal polynomials of several variables is the spherical harmonics. The Laplace operator on $\mathbb{R}^d$ is defined by
$$\Delta = \partial_1^2 + \cdots + \partial_d^2,$$
where $\partial_i = \partial/\partial x_i$. Harmonic polynomials are polynomials that satisfy $\Delta P = 0$, and spherical harmonics are the restrictions of homogeneous harmonic polynomials to the sphere $S^{d-1}$. Let $\mathcal{H}_n^d$ be the set of homogeneous harmonic polynomials of degree $n$; $\mathcal{H}_n^d = \mathcal{P}_n^d \cap \ker\Delta$. It is known that
$$P \in \mathcal{H}_n^d \quad\text{if and only if}\quad \int_{S^{d-1}} P\,Q\, d\omega = 0 \quad \forall Q \in \Pi^d,\ \deg Q < n,$$
where $d\omega$ is the surface measure of $S^{d-1}$. An orthonormal basis for spherical harmonics can also be given in terms of Jacobi polynomials. The fact that the Lebesgue measure $d\omega$ is invariant under the orthogonal group $O(d)$ plays an important role in the theory of spherical harmonics.

An important extension of harmonic polynomials are the h-harmonics introduced by Dunkl [7], in which the role of the rotation group is replaced by a reflection group. The h-harmonics are homogeneous polynomials that satisfy $\Delta_h P = 0$,


where $\Delta_h$ is a second order differential-difference operator. This h-Laplacian can be decomposed as
$$\Delta_h = \mathcal{D}_1^2 + \cdots + \mathcal{D}_d^2,$$
where the $\mathcal{D}_i$ are first order differential-difference operators (Dunkl's operators) which commute, that is, $\mathcal{D}_i\mathcal{D}_j = \mathcal{D}_j\mathcal{D}_i$. The h-harmonics are orthogonal with respect to $h^2 d\omega$, where $h$ is a weight function invariant under the underlying reflection group. Examples include $h(x) = \prod_{i=1}^d |x_i|^{\kappa_i}$, invariant under $\mathbb{Z}_2^d$, and $h(x) = \prod_{i<j} |x_i - x_j|^\kappa$, invariant under the symmetric group.


1.6. Literature. The main early references on orthogonal polynomials of several variables are Appell and de Fériet [2] and Chapter 12 of Erdélyi et al. [11], as well as the influential survey of Koornwinder on orthogonal polynomials of two variables [15]. There is also a more recent book by Suetin [30] on orthogonal polynomials of two variables, which is in the spirit of the above references. We follow the presentation in the recent book of Dunkl and Xu [10]. However, our main development for orthogonal polynomials with respect to the classical type weight functions follows a line that does not require background in reflection groups, and we also include some more recent results. We will not give references for every single result in the main body of the text; the main references and the historical notes will appear at the end of the lecture notes.

    2. General properties

By general properties we mean those properties that hold for orthogonal polynomials associated with weight functions that satisfy some mild conditions but are not any more specific.

2.1. Three-term relations. For orthogonal polynomials of one variable, one important property is the three-term relation, which states that every system of orthogonal polynomials $\{p_n\}_{n=0}^\infty$ with respect to a positive measure satisfies a three-term relation
$$x p_n(x) = a_n p_{n+1}(x) + b_n p_n(x) + c_n p_{n-1}(x), \qquad n \ge 0, \tag{2.1}$$
where $p_{-1}(x) = 0$ by definition, $a_n, b_n, c_n \in \mathbb{R}$ and $a_n c_{n+1} > 0$; if the $p_n$ are orthonormal polynomials, then $c_n = a_{n-1}$. Furthermore, Favard's theorem states that every sequence of polynomials that satisfies such a relation must be orthogonal.

Let $\{P_\alpha : \alpha \in \mathbb{N}_0^d\}$ be a sequence of orthogonal polynomials in $d$ variables and assume that $\{P_\alpha : |\alpha| = n\}$ is a basis of $\mathcal{V}_n^d$. The orthogonality clearly implies that $x_i P_\alpha(x)$ is orthogonal to all polynomials of degree at most $n-2$ and at least $n+2$, so that it can be written as a linear combination of orthogonal polynomials of degree $n-1$, $n$, $n+1$, although there are many terms for each of these three degrees. Clearly this can be viewed as a three-term relation in terms of $\mathcal{V}_{n-1}^d$, $\mathcal{V}_n^d$, $\mathcal{V}_{n+1}^d$. This suggests to introduce the following vector notation:
$$\mathbb{P}_n(x) = \big(P_\alpha^n(x)\big)_{|\alpha|=n} = \big(P_{\alpha^{(1)}}(x), \ldots, P_{\alpha^{(r_n^d)}}(x)\big)^T = G_n x^n + \ldots,$$
where $\alpha^{(1)}, \ldots, \alpha^{(r_n^d)}$ is the arrangement of the elements in $\{\alpha \in \mathbb{N}_0^d : |\alpha| = n\}$ according to the lexicographical order (or any other fixed order), and $x^n = (x^{\alpha^{(1)}}, \ldots, x^{\alpha^{(r_n^d)}})^T$ is the vector of the monomials of degree $n$; the matrix $G_n$ of size $r_n^d \times r_n^d$ is called the leading coefficient of $\mathbb{P}_n$, and it is invertible. We note that if $S$ is a nonsingular matrix of size $r_n^d$, then the components of $S\mathbb{P}_n$ also form a basis for $\mathcal{V}_n^d$. In terms of $\mathbb{P}_n$, we have the three-term relation:


Theorem 2.1. Let $\{P_\alpha\}$ be orthogonal polynomials. For $n \ge 0$, there exist unique matrices $A_{n,i}$ of size $r_n^d \times r_{n+1}^d$, $B_{n,i}$ of size $r_n^d \times r_n^d$, and $C_{n,i}$ of size $r_n^d \times r_{n-1}^d$, such that
$$x_i \mathbb{P}_n = A_{n,i}\mathbb{P}_{n+1} + B_{n,i}\mathbb{P}_n + C_{n,i}\mathbb{P}_{n-1}, \qquad 1 \le i \le d, \tag{2.2}$$
where we define $\mathbb{P}_{-1} = 0$ and $C_{-1,i} = 0$. If the $P_\alpha$ are orthonormal polynomials, then $C_{n,i} = A_{n-1,i}^T$.

Proof. Looking at the components, the three-term relation is evident. The coefficient matrices satisfy $A_{n,i} H_{n+1} = \int x_i \mathbb{P}_n \mathbb{P}_{n+1}^T\, d\mu$, $B_{n,i} H_n = \int x_i \mathbb{P}_n \mathbb{P}_n^T\, d\mu$, and $A_{n,i} H_{n+1} = H_n C_{n+1,i}^T$, where $H_n = \int \mathbb{P}_n \mathbb{P}_n^T\, d\mu$ is an invertible matrix. Hence the coefficient matrices are unique. If the $P_\alpha$ are orthonormal, then $H_n$ is an identity matrix and $C_{n,i} = A_{n-1,i}^T$.

For orthonormal polynomials, $A_{n,i} = \int x_i \mathbb{P}_n \mathbb{P}_{n+1}^T\, d\mu$ and $B_{n,i} = \int x_i \mathbb{P}_n \mathbb{P}_n^T\, d\mu$, which can be used to compute the coefficient matrices.

Example. For the first orthonormal basis on the unit disc in the previous section, we have $B_{n,1} = B_{n,2} = 0$,
$$A_{n,1} = \begin{pmatrix} a_{0,n} & & & 0 \\ & a_{1,n} & & \vdots \\ & & \ddots & \vdots \\ & & & a_{n,n} \ \ 0 \end{pmatrix}
\qquad\text{and}\qquad
A_{n,2} = \begin{pmatrix} e_{0,n} & d_{0,n} & & & 0 \\ c_{1,n} & e_{1,n} & d_{1,n} & & \\ & \ddots & \ddots & \ddots & \\ & & c_{n,n} & e_{n,n} & d_{n,n} \end{pmatrix},$$
where the coefficients can all be computed explicitly.

For $d = 1$ the relation reduces to the classical three-term relation. Moreover, let $A_n = (A_{n,1}^T \,|\, \ldots \,|\, A_{n,d}^T)^T$ denote the joint matrix of $A_{n,1}, \ldots, A_{n,d}$; then the following is an analog of the condition $a_n c_{n+1} > 0$:

Theorem 2.2. For $n \ge 0$ and $1 \le i \le d$, $\operatorname{rank} A_{n,i} = \operatorname{rank} C_{n+1,i} = r_n^d$. Moreover, for the joint matrix $A_n$ of the $A_{n,i}$ and the joint matrix $C_{n+1}^T$ of the $C_{n+1,i}^T$,
$$\operatorname{rank} A_n = r_{n+1}^d \qquad\text{and}\qquad \operatorname{rank} C_{n+1}^T = r_{n+1}^d.$$

Proof. Comparing the leading coefficients of both sides of (2.2) shows that $A_{n,i} G_{n+1} = G_n L_{n,i}$, where $L_{n,i}$ is the transformation matrix defined by $L_{n,i} x^{n+1} = x_i x^n$, which implies that $\operatorname{rank} L_{n,i} = r_n^d$. Hence $\operatorname{rank} A_{n,i} = r_n^d$, as $G_n$ is invertible. Furthermore, let $L_n$ be the joint matrix of $L_{n,1}, \ldots, L_{n,d}$. Then the components of $L_n x^{n+1}$ contain every $x^\alpha$, $|\alpha| = n+1$. Hence $L_n$ has full rank, $\operatorname{rank} L_n = r_{n+1}^d$. Furthermore, $A_n G_{n+1} = \operatorname{diag}\{G_n, \ldots, G_n\} L_n$, from which it follows that $\operatorname{rank} A_n = r_{n+1}^d$. The statement on $C_{n+1,i}^T$ and $C_{n+1}^T$ follows from the relation $A_{n,i} H_{n+1} = H_n C_{n+1,i}^T$.

Just as in the one variable case, the three-term relation and the rank conditions on the coefficients characterize orthogonality. A linear functional $\mathcal{L}$ is said to be positive definite if $\mathcal{L}(p^2) > 0$ for all nonzero polynomials $p \in \Pi^d$. The following is the analog of Favard's theorem, which we state only for the case $C_{n,i} = A_{n-1,i}^T$ and $\{P_\alpha\}$ orthonormal.

Theorem 2.3. Let $\{\mathbb{P}_n\}_{n=0}^\infty = \{P_\alpha^n : |\alpha| = n,\ n \in \mathbb{N}_0\}$, $P_0 = 1$, be an arbitrary sequence in $\Pi^d$. Then the following statements are equivalent.

(1) There exists a linear functional $\mathcal{L}$ which defines a positive definite linear functional on $\Pi^d$ and which makes $\{\mathbb{P}_n\}_{n=0}^\infty$ an orthogonal basis in $\Pi^d$.

(2) For $n \ge 0$, $1 \le i \le d$, there exist matrices $A_{n,i}$ and $B_{n,i}$ such that
(a) the polynomials $\mathbb{P}_n$ satisfy the three-term relation (2.2) with $C_{n,i} = A_{n-1,i}^T$,
(b) the matrices in the relation satisfy the rank conditions in Theorem 2.2.

The proof follows roughly the line that one uses to prove Favard's theorem in one variable. The orthogonality in the theorem is given with respect to a positive definite linear functional. Further conditions are needed in order to show that the linear functional is given by a nonnegative Borel measure with finite moments. For example, if $\mathcal{L}f = \int f\, d\mu$ for a measure $\mu$ with compact support in (1) of Theorem 2.3, then the theorem holds with one more condition,
$$\sup_{k \ge 0} \|A_{k,i}\|_2 < \infty \qquad\text{and}\qquad \sup_{k \ge 0} \|B_{k,i}\|_2 < \infty, \qquad 1 \le i \le d,$$
in (2). The known proof of such refined results uses the spectral theory of self-adjoint operators.

Although Favard's theorem shows that the three-term relation characterizes orthogonality, it should be pointed out that the relation is not as strong as in the case of one variable. In one variable, the coefficients of the three-term relation (2.1) can be any real numbers satisfying $a_n > 0$ (in the case of orthonormal polynomials, $c_n = a_{n-1}$). In several variables, however, the coefficients of the three-term relation have to satisfy additional conditions.


Theorem 2.4. The coefficients of the three-term relation of a sequence of orthonormal polynomials satisfy
$$A_{k,i}A_{k+1,j} = A_{k,j}A_{k+1,i},$$
$$A_{k,i}B_{k+1,j} + B_{k,i}A_{k,j} = B_{k,j}A_{k,i} + A_{k,j}B_{k+1,i},$$
$$A_{k-1,i}^T A_{k-1,j} + B_{k,i}B_{k,j} + A_{k,i}A_{k,j}^T = A_{k-1,j}^T A_{k-1,i} + B_{k,j}B_{k,i} + A_{k,j}A_{k,i}^T,$$
for $i \ne j$, $1 \le i, j \le d$, and $k \ge 0$, where $A_{-1,i} = 0$.

Proof. The relations are obtained by computing the matrices $\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_{k+2}^T)$, $\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_k^T)$, and $\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_{k+1}^T)$ in two different ways, using the three-term relation (2.2) to replace $x_i\mathbb{P}_n$ and $x_j\mathbb{P}_n$, respectively.

These equations are called the commuting conditions. Since they are necessary for the polynomials to be orthogonal, we cannot choose arbitrary matrices to generate a family of polynomials satisfying the three-term relation and hope to get orthogonal polynomials.

As an application, let us mention that the three-term relation implies a Christoffel-Darboux formula. Recall that the reproducing kernel $K_n(x,y)$ is defined in Section 1.5.

Theorem 2.5. (The Christoffel-Darboux formula) For $n \ge 0$,
$$K_n(x,y) = \frac{[A_{n,i}\mathbb{P}_{n+1}(x)]^T \mathbb{P}_n(y) - \mathbb{P}_n^T(x)\, A_{n,i}\mathbb{P}_{n+1}(y)}{x_i - y_i}, \qquad 1 \le i \le d,$$
for $x \ne y$, and
$$K_n(x,x) = \mathbb{P}_n^T(x)\, A_{n,i}\, \partial_i\mathbb{P}_{n+1}(x) - [A_{n,i}\mathbb{P}_{n+1}(x)]^T\, \partial_i\mathbb{P}_n(x).$$
The proof follows just as in the case of one variable. Note that the right-hand side depends on $i$, but the left-hand side is independent of $i$.

2.2. Common zeros of orthogonal polynomials. If $\{p_n\}$ is a sequence of orthogonal polynomials of one variable, then all zeros of $p_n$ are real and distinct, and these zeros are the eigenvalues of the truncated Jacobi matrix
$$J_n = \begin{pmatrix} b_0 & a_0 & & & \\ a_0 & b_1 & a_1 & & \\ & \ddots & \ddots & \ddots & \\ & & a_{n-2} & b_{n-1} & a_{n-1} \\ & & & a_{n-1} & b_n \end{pmatrix},$$
where the $a_n$ and $b_n$ are the coefficients of the three-term relation satisfied by the orthonormal polynomials. This fact has important applications in a number of problems.
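This eigenvalue characterization can be sketched for the Legendre case (orthonormal with respect to $dt/2$ on $[-1,1]$, for which $b_n = 0$ and $a_n = (n+1)/\sqrt{(2n+1)(2n+3)}$); here we take the $n \times n$ truncation whose eigenvalues are the zeros of $p_n$:

```python
import numpy as np

def legendre_a(n):
    """Off-diagonal recurrence coefficient for orthonormal Legendre polynomials."""
    return (n + 1) / np.sqrt((2 * n + 1) * (2 * n + 3))

n = 6
# Truncated Jacobi matrix: diagonal b_0..b_{n-1} (all zero here),
# off-diagonal a_0..a_{n-2}; its eigenvalues are the zeros of p_n.
off = [legendre_a(k) for k in range(n - 1)]
J = np.diag(off, 1) + np.diag(off, -1)
zeros = np.sort(np.linalg.eigvalsh(J))

# Compare with the Gauss-Legendre nodes, i.e. the zeros of the Legendre p_n.
nodes, _ = np.polynomial.legendre.leggauss(n)
print(np.allclose(zeros, np.sort(nodes)))  # True
```

This is exactly the mechanism behind the Golub-Welsch algorithm for Gaussian quadrature.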


The zero set of a polynomial in several variables can be a point, a curve, or, in general, an algebraic variety, which is a difficult object to study. However, using the three-term relation, it is possible to study the common zeros of $\mathbb{P}_n(x)$; that is, the common zeros of all $P_\alpha$, $|\alpha| = n$. Note that this means the common zeros of all polynomials in $\mathcal{V}_n^d$, which are independent of the choice of basis. Throughout this subsection we assume that $\{P_\alpha\}$ is a sequence of orthonormal polynomials with respect to a positive measure $\mu$.

Using the coefficient matrices of the three-term relation (2.2), we define the truncated block Jacobi matrices $J_{n,i}$ as follows:
$$J_{n,i} = \begin{pmatrix} B_{0,i} & A_{0,i} & & & \\ A_{0,i}^T & B_{1,i} & A_{1,i} & & \\ & \ddots & \ddots & \ddots & \\ & & A_{n-3,i}^T & B_{n-2,i} & A_{n-2,i} \\ & & & A_{n-2,i}^T & B_{n-1,i} \end{pmatrix}, \qquad 1 \le i \le d.$$
Then $J_{n,i}$ is a square matrix of size $N \times N$ with $N = \dim \Pi_{n-1}^d$. We say that $\Lambda = (\lambda_1, \ldots, \lambda_d)^T \in \mathbb{R}^d$ is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$ if there is a $\xi \ne 0$, $\xi \in \mathbb{R}^N$, such that $J_{n,i}\xi = \lambda_i \xi$ for $i = 1, \ldots, d$; the vector $\xi$ is called a joint eigenvector associated to $\Lambda$.

Theorem 2.6. A point $\Lambda = (\lambda_1, \ldots, \lambda_d)^T \in \mathbb{R}^d$ is a common zero of $\mathbb{P}_n$ if and only if it is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$; moreover, a joint eigenvector of $\Lambda$ is $(\mathbb{P}_0^T(\Lambda), \ldots, \mathbb{P}_{n-1}^T(\Lambda))^T$.

Proof. If $\mathbb{P}_n(\Lambda) = 0$, then the three-term relation for $\mathbb{P}_k$, $0 \le k \le n-1$, is the same as $J_{n,i}\xi = \lambda_i \xi$ with $\xi = (\mathbb{P}_0^T(\Lambda), \ldots, \mathbb{P}_{n-1}^T(\Lambda))^T$. On the other hand, suppose $\Lambda = (\lambda_1, \ldots, \lambda_d)$ is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$ with a joint eigenvector $\xi = (x_0^T, \ldots, x_{n-1}^T)^T$, $x_j \in \mathbb{R}^{r_j^d}$. Let us define $x_n = 0$. Then $J_{n,i}\xi = \lambda_i \xi$ implies that $\{x_k\}_{k=0}^n$ satisfies the same (first $n-1$ equations of the) three-term relation as $\{\mathbb{P}_k(\Lambda)\}_{k=0}^n$ does. The rank condition on the $A_{n,i}$ shows inductively that if $x_0 = 0$ then $\xi$ is zero. But $\xi \ne 0$, being an eigenvector, so we can assume that $x_0 = 1 = \mathbb{P}_0$. Then $\{y_k\}_{k=0}^n$ with $y_k = x_k - \mathbb{P}_k(\Lambda)$ satisfies the same three-term relation. Since $y_0 = 0$, it follows from the rank condition that $y_k = 0$ for all $1 \le k \le n$. In particular, $y_n = \mathbb{P}_n(\Lambda) = 0$.

The main properties of the common zeros of $\mathbb{P}_n$ are as follows:

Corollary 2.1. All common zeros of $\mathbb{P}_n$ are real distinct points and they are simple. The polynomials in $\mathbb{P}_n$ have at most $N = \dim \Pi_{n-1}^d$ common zeros, and $\mathbb{P}_n$ has $N$ common zeros if and only if
$$A_{n-1,i}A_{n-1,j}^T = A_{n-1,j}A_{n-1,i}^T, \qquad 1 \le i, j \le d. \tag{2.3}$$


Proof. Since the matrices $J_{n,i}$ are symmetric, the joint eigenvalues are real. If $x$ is a common zero, then the Christoffel-Darboux formula shows that
$$-[A_{n,i}\mathbb{P}_{n+1}(x)]^T\,\partial_i\mathbb{P}_n(x) = K_n(x,x) > 0,$$
so that at least one of the partial derivatives of $\mathbb{P}_n$ is not zero at $x$; that is, the common zero is simple. Since $J_{n,i}$ is an $N \times N$ square matrix, there are at most $N$ joint eigenvalues, and $\mathbb{P}_n$ has at most $N$ common zeros. Moreover, $\mathbb{P}_n$ has $N$ distinct common zeros if and only if $J_{n,1}, \ldots, J_{n,d}$ can be simultaneously diagonalized, which holds if and only if $J_{n,1}, \ldots, J_{n,d}$ commute,
$$J_{n,i}J_{n,j} = J_{n,j}J_{n,i}, \qquad 1 \le i, j \le d.$$
From the definition of $J_{n,i}$ and the commuting conditions in Theorem 2.4, the above equation is equivalent to the condition
$$A_{n-2,i}^T A_{n-2,j} + B_{n-1,i}B_{n-1,j} = A_{n-2,j}^T A_{n-2,i} + B_{n-1,j}B_{n-1,i}.$$
The third equation of the commuting conditions leads to the desired result.

The zeros of orthogonal polynomials of one variable are the nodes of the Gaussian quadrature formula. A similar result can be stated in several variables for the common zeros of $\mathbb{P}_n$. However, it turns out that the condition (2.3) rarely holds; for example, it does not hold for those weight functions that are centrally symmetric (the support set $\Omega$ of $W$ is symmetric with respect to the origin and $W(x) = W(-x)$ for all $x \in \Omega$). Consequently, $\mathbb{P}_n$ does not have $N$ common zeros in general, and the straightforward generalization of Gaussian quadrature usually does not exist. The relation between common zeros of orthogonal polynomials and quadrature formulae in several variables is quite complicated. One needs to study common zeros of subsets of (quasi-)orthogonal polynomials, and the main problem is to characterize or identify subsets that have a large number of common zeros. In the language of polynomial ideals and varieties, the problem essentially comes down to characterizing or identifying those polynomial ideals generated by (quasi-)orthogonal polynomials whose varieties are large sets of points, where the size of the variety should equal the codimension of the ideal. Although some progress has been made in this area, the problem remains open for the most part.

    3. h-harmonics and orthogonal polynomials on the sphere

After a first subsection on the relation between orthogonal polynomials on the sphere and those on the ball, we discuss h-harmonics in two steps. The main effort is in a long subsection devoted to the case of the product weight function, which can be developed without any background in reflection groups, and it is this case that offers the most explicit formulae. The theory of h-harmonics associated with general reflection groups is summarized in the third subsection.


3.1. Orthogonal polynomials on the unit ball and on the unit sphere. If a measure is supported on the unit sphere $S^{d-1} = \{x \in \mathbb{R}^d : \|x\| = 1\}$ of $\mathbb{R}^d$, then the integral of the positive polynomial $(1 - \|x\|^2)^{2n}$ over $S^{d-1}$ is zero, so that the general properties of orthogonal polynomials from the previous section no longer hold. There is, however, a close relation between the orthogonal structure on the sphere and that on the ball, which can be used to study the general properties of orthogonal polynomials on the sphere.

As a motivating example, recall that in polar coordinates $y_1 = r\cos\theta$ and $y_2 = r\sin\theta$, the spherical harmonics of degree $n$ on $S^1$ are given by
$$Y_n^{(1)}(y) = r^n\cos n\theta = r^n T_n(y_1/r), \qquad Y_n^{(2)}(y) = r^n \sin n\theta = r^{n-1} y_2\, U_{n-1}(y_1/r),$$
where $T_n$ and $U_n$ are the Chebyshev polynomials, which are orthogonal with respect to $1/\sqrt{1-x^2}$ and $\sqrt{1-x^2}$ on $B^1 = [-1,1]$, respectively. This relation can be extended to higher dimensions. In the following we work with $S^d$ instead of $S^{d-1}$.

Let $H$ be a weight function defined on $\mathbb{R}^{d+1}$ and assume that $H$ is nonzero almost everywhere when restricted to $S^d$, even with respect to $y_{d+1}$, and centrally symmetric with respect to the variables $y' = (y_1, \ldots, y_d)$; for example, $H$ is even in each of its variables, $H(y) = W(y_1^2, \ldots, y_{d+1}^2)$. Associated with $H$ define a weight function $W_H^B$ on $B^d$ by
$$W_H^B(x) = H\big(x, \sqrt{1 - \|x\|^2}\big), \qquad x \in B^d. \tag{3.1}$$
We use the notation $\mathcal{V}_n^d(W)$ to denote the space of orthogonal polynomials of degree $n$ with respect to $W$. Let $\{P_\alpha\}$ and $\{Q_\beta\}$ denote systems of orthonormal polynomials with respect to the weight functions
$$W_1^B(x) = 2\,W_H^B(x)/\sqrt{1 - \|x\|^2} \qquad\text{and}\qquad W_2^B(x) = 2\,W_H^B(x)\,\sqrt{1 - \|x\|^2},$$
respectively. We adopt the following notation for polar coordinates: for $y \in \mathbb{R}^{d+1}$ write $y = (y_1, \ldots, y_d, y_{d+1}) = (y', y_{d+1})$ and
$$y = r(x, x_{d+1}), \qquad\text{where}\quad r = \|y\|, \quad (x, x_{d+1}) \in S^d.$$
For $|\alpha| = n$ and $|\beta| = n-1$ we define the following polynomials:
$$Y_\alpha^{(1)}(y) = r^n P_\alpha(x) \qquad\text{and}\qquad Y_\beta^{(2)}(y) = r^n x_{d+1} Q_\beta(x), \tag{3.2}$$

and for $n = 0$ we define $Y^{(2)}(y) = 0$. These are in fact homogeneous orthogonal polynomials with respect to $H\,d\omega$ on $S^d$:

Theorem 3.1. Let $H(x) = W(x_1^2, \ldots, x_{d+1}^2)$ be defined as above. Then $Y_\alpha^{(1)}$ and $Y_\beta^{(2)}$ in (3.2) are homogeneous polynomials of degree $n$ on $\mathbb{R}^{d+1}$ and they satisfy
$$\int_{S^d} Y_\alpha^{(i)}(y)\, Y_\beta^{(j)}(y)\, H(y)\, d\omega(y) = \delta_{\alpha,\beta}\,\delta_{i,j}, \qquad i, j = 1, 2.$$


Proof. Since both weight functions $W_1^B$ and $W_2^B$ are even in each of their variables, it follows that $P_\alpha$ and $Q_\beta$ are sums of monomials of even degree if $|\alpha|$ is even and sums of monomials of odd degree if $|\alpha|$ is odd. This is used to show that the $Y^{(i)}(y)$ are homogeneous polynomials of degree $n$ in $y$. Since $Y_\alpha^{(1)}$, when restricted to $S^d$, is independent of $x_{d+1}$, and $Y_\beta^{(2)}$ contains a single factor $x_{d+1}$, it follows that $Y_\alpha^{(1)}$ and $Y_\beta^{(2)}$ are orthogonal with respect to $H\,d\omega_d$ on $S^d$ for any $\alpha$ and $\beta$. Since $H$ is even with respect to its last variable, the elementary formula
\[
\int_{S^d} f(x)\, d\omega_d(x) = 2 \int_{B^d} f\bigl(x, \sqrt{1-\|x\|^2}\bigr)\, \frac{dx}{\sqrt{1-\|x\|^2}},
\]
valid for functions even in $x_{d+1}$, shows that the orthogonality of $\{Y^{(i)}\}$ follows from that of the polynomials $P_\alpha$ (for $i = 1$) or $Q_\beta$ (for $i = 2$), respectively. $\square$

Let us denote by $\mathcal{H}_n^{d+1}(H)$ the space of homogeneous orthogonal polynomials of degree $n$ with respect to $H\,d\omega$ on $S^d$. Then the relation (3.2) defines a one-to-one correspondence between an orthonormal basis of $\mathcal{H}_n^{d+1}(H)$ and an orthonormal basis of $\mathcal{V}_n^d(W_1^B) \cup x_{d+1}\mathcal{V}_{n-1}^d(W_2^B)$. Therefore, we can derive certain properties of orthogonal polynomials on the sphere from those on the ball. An immediate consequence is
\[
\dim \mathcal{H}_n^{d+1}(H) = \binom{n+d}{d} - \binom{n+d-2}{d} = \dim \mathcal{P}_n^{d+1} - \dim \mathcal{P}_{n-2}^{d+1}.
\]
Furthermore, we also have the following orthogonal decomposition:

Theorem 3.2. For each $n \in \mathbb{N}_0$ and $P \in \mathcal{P}_n^{d+1}$, there is a unique decomposition
\[
P(y) = \sum_{k=0}^{[n/2]} \|y\|^{2k} P_{n-2k}(y), \qquad P_{n-2k} \in \mathcal{H}_{n-2k}^{d+1}(H).
\]

The classical example is the Lebesgue measure $H(x) = 1$ on the sphere, which gives the ordinary spherical harmonics. We discuss a family of more general weight functions in detail in the following section.
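The two-dimensional example at the beginning of this section is easy to test numerically. The following sketch (the degree, grid size, and tolerances are arbitrary choices) checks that $\cos n\theta = T_n(\cos\theta)$ and that $T_n$ is orthogonal to the lower-degree Chebyshev polynomials with respect to $1/\sqrt{1-x^2}$, using the substitution $x = \cos\theta$:

```python
import numpy as np

# Check the S^1 example: cos(n*theta) = T_n(cos(theta)), and orthogonality of
# T_n with respect to 1/sqrt(1 - x^2) on [-1, 1].  The substitution x = cos(theta)
# turns the weighted integral into int_0^pi cos(n t) cos(m t) dt.
n = 5
theta = np.linspace(0.0, np.pi, 200001)
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)

# cos(n*theta) agrees with T_n(cos(theta))
max_err = np.max(np.abs(np.cos(n * theta) - Tn(np.cos(theta))))

# trapezoidal rule for int_0^pi cos(n t) cos(m t) dt, m < n
dt = theta[1] - theta[0]
inners = []
for m in range(n):
    f = np.cos(n * theta) * np.cos(m * theta)
    inners.append((f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dt)

print(max_err)                        # essentially machine precision
print(max(abs(v) for v in inners))    # essentially zero: T_n is orthogonal to T_m, m < n
```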

3.2. Orthogonal polynomials for the product weight functions. We consider orthogonal polynomials in $\mathcal{H}_n^{d+1}(h_\kappa^2)$ with $h_\kappa$ being the product weight function
\[
h_\kappa(x) = \prod_{i=1}^{d+1} |x_i|^{\kappa_i}, \qquad \kappa_i \ge 0, \quad x \in \mathbb{R}^{d+1}.
\]
Because of the previous subsection, we consider $S^d$ instead of $S^{d-1}$. The elements of $\mathcal{H}_n^{d+1}(h_\kappa^2)$ are called $h$-harmonics; they are a special case of Dunkl's $h$-harmonics for reflection groups. The function $h_\kappa$ is invariant under the group $\mathbb{Z}_2^{d+1}$, which simply means that it is invariant under sign changes of each


variable. We shall work with this case first, without referring to general reflection groups. An exposition of the theory of $h$-harmonics is given in the next section.

3.2.1. An orthonormal basis. Since the weight function $h_\kappa$ is of product type, an orthonormal basis with respect to $h_\kappa^2$ can be given using the spherical coordinates
\[
\begin{aligned}
x_1 &= r \cos\theta_d,\\
x_2 &= r \sin\theta_d \cos\theta_{d-1},\\
&\;\;\vdots\\
x_d &= r \sin\theta_d \cdots \sin\theta_2 \cos\theta_1,\\
x_{d+1} &= r \sin\theta_d \cdots \sin\theta_2 \sin\theta_1,
\end{aligned}
\]
with $r \ge 0$, $0 \le \theta_1 \le 2\pi$ and $0 \le \theta_j \le \pi$ for $j \ge 2$. We use the following notation: associated to $\kappa = (\kappa_1, \ldots, \kappa_{d+1})$, define
\[
\kappa^j = (\kappa_j, \ldots, \kappa_{d+1}), \qquad 1 \le j \le d+1.
\]
Since $\kappa^{d+1}$ consists of only the last element of $\kappa$, write $\kappa^{d+1} = \kappa_{d+1}$. Similarly define $\alpha^j$ for $\alpha \in \mathbb{N}_0^{d+1}$.


Theorem 3.3. Let $d \ge 1$. In spherical coordinates an orthonormal basis of $\mathcal{H}_n^{d+1}(h_\kappa^2)$ is given by
\[
Y_{\alpha,i}^{n}(x) = [A_\alpha^n]^{-1}\, r^n \prod_{j=1}^{d-1} C_{\alpha_j}^{(a_j, \kappa_j)}(\cos\theta_{d-j+1})\, (\sin\theta_{d-j+1})^{|\alpha^{j+1}|}\; Y_{\alpha_d}^{i}(\cos\theta_1, \sin\theta_1),
\]
where $\alpha \in \mathbb{N}_0^{d+1}$, $|\alpha| = n$, $a_j = |\alpha^{j+1}| + |\kappa^{j+1}| + \frac{d-j}{2}$, the $Y_{\alpha_d}^{i}$ with $i = 1, 2$ are the two-dimensional $h$-harmonics with parameters $(\kappa_{d+1}, \kappa_d)$, and
\[
[A_\alpha^n]^2 = \frac{1}{(|\kappa| + \frac{d+1}{2})_n} \prod_{j=1}^{d} \Bigl( |\alpha^{j+1}| + |\kappa^{j}| + \frac{d-j+2}{2} \Bigr)_{\alpha_j}.
\]

This can be verified by a straightforward computation, using the integral
\[
\int_{S^d} f(x)\, d\omega_d(x) = \int_0^\pi \int_{S^{d-1}} f(\cos\theta, \sin\theta\, x')\, d\omega_{d-1}(x')\, (\sin\theta)^{d-1}\, d\theta
\]
inductively, together with the orthogonality of the generalized Gegenbauer polynomials $C_n^{(\lambda,\mu)}$.

3.2.2. $h$-harmonics. There is another way of describing the space $\mathcal{H}_n^{d+1}(h_\kappa^2)$, using a second-order differential-difference operator that plays the role of the Laplace operator for the ordinary harmonics. For $\kappa_i \ge 0$, define Dunkl's operators $\mathcal{D}_j$ by
\[
\mathcal{D}_j f(x) = \partial_j f(x) + \kappa_j\, \frac{f(x) - f(x_1, \ldots, -x_j, \ldots, x_{d+1})}{x_j}, \qquad 1 \le j \le d+1.
\]
It is easy to see that these first-order differential-difference operators map $\mathcal{P}_n^d$ into $\mathcal{P}_{n-1}^d$. A remarkable fact is that these operators commute, which can be verified by an easy computation.

Theorem 3.4. The operators $\mathcal{D}_i$ commute: $\mathcal{D}_i \mathcal{D}_j = \mathcal{D}_j \mathcal{D}_i$, $1 \le i, j \le d+1$.

Proof. Let $x\sigma_j = (x_1, \ldots, -x_j, \ldots, x_{d+1})$. A simple computation shows that
\[
\mathcal{D}_i \mathcal{D}_j f(x) = \partial_i \partial_j f(x) + \frac{\kappa_i}{x_i}\bigl( \partial_j f(x) - \partial_j f(x\sigma_i) \bigr) + \frac{\kappa_j}{x_j}\bigl( \partial_i f(x) - \partial_i f(x\sigma_j) \bigr) + \frac{\kappa_i \kappa_j}{x_i x_j}\bigl( f(x) - f(x\sigma_j) - f(x\sigma_i) + f(x\sigma_i \sigma_j) \bigr),
\]
from which $\mathcal{D}_i \mathcal{D}_j = \mathcal{D}_j \mathcal{D}_i$ is evident. $\square$

The operator $\mathcal{D}_i$ plays the role of $\partial_i$. The $h$-Laplacian is defined by
\[
\Delta_h = \mathcal{D}_1^2 + \cdots + \mathcal{D}_{d+1}^2.
\]


It is a second-order differential-difference operator. If all $\kappa_i = 0$, then $\Delta_h$ becomes the classical Laplacian $\Delta$. A quick computation shows that
\[
\Delta_h f(x) = \Delta f(x) + 2\sum_{j=1}^{d+1} \frac{\kappa_j}{x_j}\, \partial_j f(x) - \sum_{j=1}^{d+1} \kappa_j\, \frac{f(x) - f(x_1, \ldots, -x_j, \ldots, x_{d+1})}{x_j^2}.
\]
Let us write $\Delta_h = L_h + D_h$, where $L_h$ is the differential part and $D_h$ is the difference part of the above expression. The following theorem shows that $h$-harmonics are homogeneous polynomials $P$ satisfying $\Delta_h P = 0$.
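The commutativity asserted in Theorem 3.4 can be checked symbolically. Below is a minimal sketch for $d+1 = 3$ using sympy; the multiplicity values and the test polynomial are arbitrary choices:

```python
import sympy as sp

# Dunkl operators for G = Z_2^{d+1} (here d + 1 = 3) with multiplicities kappa_j:
#   D_j f = d f / d x_j + kappa_j * (f(x) - f(..., -x_j, ...)) / x_j
x = sp.symbols('x1 x2 x3')
kappa = [sp.Rational(1, 2), sp.Rational(3, 2), sp.Rational(1, 3)]

def D(f, j):
    reflected = f.subs(x[j], -x[j])
    return sp.expand(sp.diff(f, x[j]) + kappa[j] * sp.cancel((f - reflected) / x[j]))

f = x[0]**3 * x[1]**2 + x[1] * x[2]**4       # an arbitrary test polynomial
commutator = sp.expand(D(D(f, 0), 1) - D(D(f, 1), 0))
print(commutator)    # 0, as Theorem 3.4 asserts
```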

Theorem 3.5. Suppose $f$ and $g$ are homogeneous polynomials of different degrees satisfying $\Delta_h f = 0$ and $\Delta_h g = 0$; then
\[
\int_{S^d} f(x)\, g(x)\, h_\kappa^2(x)\, d\omega = 0.
\]

Proof. Assume $\kappa_i \ge 1$ and use analytic continuation to extend the range of validity to $\kappa_i \ge 0$. The following formula can be proved using Green's identity:
\[
\int_{S^d} \frac{\partial f}{\partial n}\, g\, h_\kappa^2\, d\omega = \int_{B^{d+1}} \bigl( g L_h f + \langle \nabla f, \nabla g \rangle \bigr)\, h_\kappa^2\, dx,
\]
where $\partial f/\partial n$ denotes the normal derivative of $f$. If $f$ is homogeneous, then Euler's equation shows that $\partial f/\partial n = (\deg f) f$. Hence,
\[
(\deg f - \deg g) \int_{S^d} f\, g\, h_\kappa^2\, d\omega = \int_{B^{d+1}} ( g L_h f - f L_h g )\, h_\kappa^2\, dx = -\int_{B^{d+1}} ( g D_h f - f D_h g )\, h_\kappa^2\, dx = 0,
\]
since the explicit formula of $D_h$ shows that it is a symmetric operator. $\square$

There is a linear operator $V$, called the intertwining operator, which acts between ordinary harmonics and $h$-harmonics. It is defined by the properties
\[
\mathcal{D}_i V = V \partial_i, \qquad V1 = 1, \qquad V \mathcal{P}_n \subset \mathcal{P}_n.
\]
It follows that $\Delta_h V = V \Delta$, so that if $P$ is an ordinary harmonic polynomial, then $VP$ is an $h$-harmonic. In the case of $\mathbb{Z}_2^{d+1}$, $V$ is given by an integral operator:

Theorem 3.6. For $\kappa_i \ge 0$,
\[
Vf(x) = \int_{[-1,1]^{d+1}} f(x_1 t_1, \ldots, x_{d+1} t_{d+1}) \prod_{i=1}^{d+1} c_{\kappa_i} (1+t_i)(1-t_i^2)^{\kappa_i - 1}\, dt,
\]
where $c_\kappa = \Gamma(\kappa + 1/2)/(\Gamma(1/2)\Gamma(\kappa))$, and if any one of the $\kappa_i = 0$, the formula holds under the limit
\[
\lim_{\kappa \to 0} c_\kappa \int_{-1}^1 f(t)(1-t^2)^{\kappa - 1}\, dt = \frac{f(1) + f(-1)}{2}.
\]


Proof. Denote the difference part of $\mathcal{D}_i$ by $\widetilde{\mathcal{D}}_i$, so that $\mathcal{D}_i = \partial_i + \widetilde{\mathcal{D}}_i$. Clearly,
\[
\partial_j V f(x) = \int_{[-1,1]^{d+1}} \partial_j f(x_1 t_1, \ldots, x_{d+1} t_{d+1})\, t_j \prod_{i=1}^{d+1} c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i - 1}\, dt.
\]
Taking into account the parity of the integrand, an integration by parts shows that
\[
\widetilde{\mathcal{D}}_j V f(x) = \int_{[-1,1]^{d+1}} \partial_j f(x_1 t_1, \ldots, x_{d+1} t_{d+1})\, (1 - t_j) \prod_{i=1}^{d+1} c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i - 1}\, dt.
\]
Adding the last two equations gives $\mathcal{D}_j V = V \partial_j$. $\square$

As one important application, a compact formula for the reproducing kernel can be given in terms of $V$. The reproducing kernel $P_n(h_\kappa^2; x, y)$ of $\mathcal{H}_n^{d+1}(h_\kappa^2)$ is defined uniquely by the property
\[
\int_{S^d} P_n(h_\kappa^2; x, y)\, Q(y)\, h_\kappa^2(y)\, d\omega(y) = Q(x), \qquad Q \in \mathcal{H}_n^{d+1}(h_\kappa^2).
\]
If $\{Y_\alpha\}$ is an orthonormal basis of $\mathcal{H}_n^{d+1}(h_\kappa^2)$, then $P_n(h_\kappa^2; x, y) = \sum_\alpha Y_\alpha(x) Y_\alpha(y)$.

Theorem 3.7. For $\kappa_i \ge 0$ and $\|y\| \le \|x\| = 1$,
\[
P_n(h_\kappa^2; x, y) = \frac{n + |\kappa| + \frac{d-1}{2}}{|\kappa| + \frac{d-1}{2}}\, V\Bigl[ C_n^{|\kappa| + \frac{d-1}{2}}\Bigl( \Bigl\langle \cdot\,, \frac{y}{\|y\|} \Bigr\rangle \Bigr) \Bigr](x)\, \|y\|^n.
\]

Proof. Let $K_n(x, y) = V\bigl[\langle \cdot\,, y\rangle^n\bigr](x)/n!$. Using the defining property of $V$, it is easy to see that $K_n(x, \mathcal{D}^{(y)}) f(y) = f(x)$ for $f \in \mathcal{P}_n^{d+1}$. Fixing $y$, let $p(x) = K_n(x, y)$; then $P_n(h_\kappa^2; x, y) = 2^n (|\kappa| + \frac{d}{2})_n \operatorname{proj}_n p(x)$, where $\operatorname{proj}_n$ is the projection operator from $\mathcal{P}_n^{d+1}$ onto $\mathcal{H}_n^{d+1}(h_\kappa^2)$. The projection operator can be computed explicitly, which gives
\[
P_n(h_\kappa^2; x, y) = \sum_{0 \le j \le n/2} \frac{(|\kappa| + \frac{d}{2})_n\, 2^{n-2j}}{(2 - n - |\kappa| - \frac{d}{2})_j\, j!}\, \|x\|^{2j} \|y\|^{2j}\, K_{n-2j}(x, y).
\]
When $\|x\| = 1$, we can write the right-hand side as $V\bigl[L_n(\langle \cdot\,, y/\|y\|\rangle)\bigr](x)$, where the polynomial $L_n$ is a constant multiple of the Gegenbauer polynomial. $\square$

For the classical harmonics ($\kappa = 0$), these are the so-called zonal harmonics,
\[
P_n(x, y) = \frac{n + \frac{d-1}{2}}{\frac{d-1}{2}}\, C_n^{\frac{d-1}{2}}(\langle x, y\rangle), \qquad x, y \in S^d.
\]

As one more application of the intertwining operator, we mention an analogue of the Funk-Hecke formula for ordinary harmonics. Denote by $w_\lambda$ the normalized weight function
\[
w_\lambda(t) = \frac{\Gamma(\lambda + 1)}{\Gamma(\frac12)\Gamma(\lambda + \frac12)}\, (1 - t^2)^{\lambda - 1/2}, \qquad t \in [-1, 1],
\]
whose corresponding orthogonal polynomials are the Gegenbauer polynomials $C_n^\lambda$.

Theorem 3.8. Let $f$ be a continuous function on $[-1, 1]$ and let $Y_n^h \in \mathcal{H}_n^{d+1}(h_\kappa^2)$. Then
\[
\int_{S^d} V\bigl[f(\langle x, \cdot\rangle)\bigr](y)\, Y_n^h(y)\, h_\kappa^2(y)\, d\omega(y) = \lambda_n(f)\, Y_n^h(x), \qquad x \in S^d,
\]
where $\lambda_n(f)$ is a constant defined by
\[
\lambda_n(f) = \frac{1}{C_n^{|\kappa| + (d-1)/2}(1)} \int_{-1}^1 f(t)\, C_n^{|\kappa| + \frac{d-1}{2}}(t)\, w_{|\kappa| + (d-1)/2}(t)\, dt.
\]
The case $\kappa = 0$ is the classical Funk-Hecke formula.
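The defining property $\mathcal{D}_j V = V\partial_j$ of the integral representation in Theorem 3.6 can also be checked symbolically. A one-variable sketch with sympy (the multiplicity $\kappa = 1$ and the test polynomial are arbitrary choices):

```python
import sympy as sp

# One-variable case of the Z_2 intertwining operator of Theorem 3.6:
#   (V f)(x) = c_kappa * int_{-1}^{1} f(x t) (1 + t) (1 - t^2)^(kappa - 1) dt,
# checked against the defining property D V f = V f'.
t, X = sp.symbols('t x')
kap = sp.Integer(1)
c = sp.gamma(kap + sp.Rational(1, 2)) / (sp.gamma(sp.Rational(1, 2)) * sp.gamma(kap))

def V(f):
    return sp.integrate(c * f.subs(t, X * t) * (1 + t) * (1 - t**2)**(kap - 1), (t, -1, 1))

def D(g):
    # Dunkl operator on R: D g = g' + kappa * (g(x) - g(-x)) / x
    return sp.expand(sp.diff(g, X) + kap * sp.cancel((g - g.subs(X, -X)) / X))

f = t**3 + t**2
lhs = sp.expand(D(V(f)))
rhs = sp.expand(V(sp.diff(f, t)))
print(lhs - rhs)    # 0: D V f = V f'
```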

3.2.3. Monomial basis. Another interesting orthogonal basis of $\mathcal{H}_n^{d+1}(h_\kappa^2)$ can be given explicitly. Let $V$ be the intertwining operator.

Definition 3.1. Let $\widetilde{P}_\alpha$ be the polynomials defined by
\[
V\Bigl[ \bigl(1 - 2\langle b, \cdot\rangle + \|b\|^2\bigr)^{-|\kappa| - \frac{d-1}{2}} \Bigr](x) = \sum_{\alpha \in \mathbb{N}_0^{d+1}} b^\alpha\, \widetilde{P}_\alpha(x), \qquad b \in \mathbb{R}^{d+1}.
\]

The polynomials $\widetilde{P}_\alpha$ are indeed homogeneous polynomials, and they can be given explicitly in terms of the Lauricella function of type B, defined by
\[
F_B(\alpha, \beta; c; x) = \sum_{\gamma} \frac{(\alpha)_\gamma\, (\beta)_\gamma}{(c)_{|\gamma|}\, \gamma!}\, x^\gamma, \qquad \alpha, \beta \in \mathbb{N}_0^{d+1},\ c \in \mathbb{R},
\]
where the summation is taken over $\gamma \in \mathbb{N}_0^{d+1}$. For $\alpha \in \mathbb{N}_0^{d+1}$, let $[\alpha/2]$ denote the multi-index whose elements are $[\alpha_i/2]$, where $[a]$ denotes the integer part of $a$. The notation $(\kappa)_\alpha$ abbreviates the product $(\kappa_1)_{\alpha_1} \cdots (\kappa_{d+1})_{\alpha_{d+1}}$.

Theorem 3.9. For $\alpha \in \mathbb{N}_0^{d+1}$,
\[
\widetilde{P}_\alpha(x) = \frac{2^{|\alpha|}\, (|\kappa| + \frac{d-1}{2})_{|\alpha|}}{\alpha!}\, \frac{(\frac12)_{[\frac{\alpha+1}{2}]}}{(\kappa + \frac12)_{[\frac{\alpha+1}{2}]}}\, Y_\alpha(x),
\]
where the $Y_\alpha$ are given by
\[
Y_\alpha(x) = x^\alpha\, F_B\Bigl( -\Bigl[\frac{\alpha}{2}\Bigr],\ -\kappa - \Bigl[\frac{\alpha+1}{2}\Bigr] + \frac12;\ -|\alpha| - |\kappa| - \frac{d-3}{2};\ \frac{1}{x_1^2}, \ldots, \frac{1}{x_{d+1}^2} \Bigr).
\]

Proof. Let $\lambda = |\kappa| + \frac{d-1}{2}$. The multinomial and binomial formulae show that
\[
\bigl(1 - 2\langle b, x\rangle + \|b\|^2\bigr)^{-\lambda} = \sum_{\alpha} b^\alpha \sum_{2\gamma \le \alpha} \frac{(-1)^{|\gamma|}\, 2^{|\alpha| - 2|\gamma|}\, (\lambda)_{|\alpha| - |\gamma|}}{\gamma!\, (\alpha - 2\gamma)!}\, x^{\alpha - 2\gamma}.
\]


Applying the intertwining operator and using the formula
\[
V x^\beta = \frac{(\frac12)_{[\frac{\beta+1}{2}]}}{(\kappa + \frac12)_{[\frac{\beta+1}{2}]}}\, x^\beta
\]
completes the proof. $\square$

Since $(-a)_k = 0$ if $a < k$, $Y_\alpha(x)$ is a homogeneous polynomial of degree $|\alpha|$. Furthermore, when restricted to $S^d$, $Y_\alpha(x) = x^\alpha + $ lower-order terms. The following theorem says that we may call the $Y_\alpha$ monomial orthogonal polynomials.

Theorem 3.10. For $\alpha \in \mathbb{N}_0^{d+1}$, the $Y_\alpha$ are elements of $\mathcal{H}_{|\alpha|}^{d+1}(h_\kappa^2)$.

Proof. We show that $Y_\alpha$ is orthogonal to $x^\beta$ for $\beta \in \mathbb{N}_0^{d+1}$ and $|\beta| \le n - 1$, where $n = |\alpha|$. If one of the components of $\alpha - \beta$ is odd, then the orthogonality follows from changing the sign of that component in the integral. Hence we can assume that all components of $\alpha - \beta$ are even, and we only have to work with $x^\beta$ for $|\beta| = |\alpha| - 2$, since every polynomial of degree $n - 2$ on the sphere is the restriction of a homogeneous polynomial of degree $n - 2$. Using the beta integral on the sphere, a tedious computation reduces $\int_{S^d} \widetilde{P}_\alpha(x)\, x^\beta\, h_\kappa^2(x)\, d\omega(x)$ to a product of terminating ${}_2F_1$ sums evaluated at $1$, one factor for each variable. Since at least one $\beta_i < \alpha_i$, the corresponding factor is zero by the Chu-Vandermonde identity for ${}_2F_1$. $\square$

A standard Hilbert space argument shows that, among all polynomials of the form $x^\alpha + $ (polynomials of lower degree), $Y_\alpha$ has the smallest $L^2(h_\kappa^2\, d\omega)$-norm, and that $Y_\alpha$ is the orthogonal projection of $x^\alpha$ onto $\mathcal{H}_n^{d+1}(h_\kappa^2)$. Let us mention another expression for $Y_\alpha$. For any $\alpha \in \mathbb{N}_0^{d+1}$, define the homogeneous polynomials $H_\alpha$ (cf. [44]) by
\[
H_\alpha(x) = \|x\|^{2|\alpha| + 2|\kappa| + d - 1}\, \mathcal{D}^\alpha\, \|x\|^{-2|\kappa| - d + 1},
\]
where $\mathcal{D}^\alpha = \mathcal{D}_1^{\alpha_1} \cdots \mathcal{D}_{d+1}^{\alpha_{d+1}}$.

Theorem 3.11. For $\alpha \in \mathbb{N}_0^{d+1}$ and $|\alpha| = n$,
\[
H_\alpha(x) = (-1)^n\, 2^n\, \Bigl(|\kappa| + \frac{d-1}{2}\Bigr)_n\, Y_\alpha(x).
\]

For $\kappa = 0$, the polynomials $H_\alpha$ are called Maxwell's representation. That they are constant multiples of the monomial polynomials follows from the recursive relation
\[
H_{\alpha + \varepsilon_i}(x) = -(2|\alpha| + 2|\kappa| + d - 1)\, x_i H_\alpha(x) + \|x\|^2\, \mathcal{D}_i H_\alpha(x),
\]


where $\varepsilon_i = (0, \ldots, 1, \ldots, 0)$ denotes the $i$-th standard unit vector of $\mathbb{R}^{d+1}$.

3.3. $h$-harmonics for a general reflection group. The theory of $h$-harmonics is established for general reflection groups ([6, 7, 8]). For a nonzero vector $v \in \mathbb{R}^{d+1}$ define the reflection $\sigma_v$ by $x\sigma_v := x - 2\langle x, v\rangle v/\|v\|^2$, $x \in \mathbb{R}^{d+1}$, where $\langle x, y\rangle$ denotes the usual Euclidean inner product. A finite reflection group $G$ is described by its root system $R$, which is a finite set of nonzero vectors in $\mathbb{R}^{d+1}$ such that $u, v \in R$ implies $u\sigma_v \in R$, and $G$ is the subgroup of the orthogonal group generated by the reflections $\{\sigma_u : u \in R\}$. If $R$ is not the union of two nonempty orthogonal subsets, the corresponding reflection group is called irreducible. Note that $\mathbb{Z}_2^{d+1}$ is reducible, a product of $d+1$ copies of the irreducible group $\mathbb{Z}_2$. There is a complete classification of the irreducible finite reflection groups. The list consists of the root systems of the infinite families $A_{d-1}$, with $G$ the symmetric group of $d$ objects; $B_d$, with $G$ the symmetry group of the hyper-octahedron $\{\pm\varepsilon_1, \ldots, \pm\varepsilon_d\}$; $D_d$, with $G$ a subgroup of the hyper-octahedral group, for $d \ge 4$; the dihedral systems $I_2(m)$, with $G$ the symmetry group of the regular $m$-gon in $\mathbb{R}^2$, for $m \ge 3$; and several other individual systems, $H_3$, $H_4$, $F_4$ and $E_6$, $E_7$, $E_8$.

Fix $u_0 \in \mathbb{R}^{d+1}$ such that $\langle u, u_0\rangle \ne 0$ for all $u \in R$. The set of positive roots $R_+$ with respect to $u_0$ is defined by $R_+ = \{u \in R : \langle u, u_0\rangle > 0\}$, so that $R = R_+ \cup (-R_+)$. A multiplicity function $v \mapsto \kappa_v$ is a function defined on $R_+$ with the property that $\kappa_u = \kappa_v$ if $u$ is conjugate to $v$; in other words, the function is $G$-invariant. Fix a positive root system $R_+$. Then the function $h_\kappa$ defined by
\[
h_\kappa(x) = \prod_{v \in R_+} |\langle x, v\rangle|^{\kappa_v}, \qquad x \in \mathbb{R}^{d+1},
\]
is a positive homogeneous function of degree $\gamma_\kappa := \sum_{v \in R_+} \kappa_v$, and $h_\kappa(x)$ is invariant under $G$. The $h$-harmonics are homogeneous orthogonal polynomials on $S^d$ with respect to $h_\kappa^2\, d\omega_d$. Besides the product weight function $h_\kappa = \prod_{i=1}^{d+1} |x_i|^{\kappa_i}$, the most interesting cases for us are the case $A_d$, for which $R_+ = \{\varepsilon_i - \varepsilon_j : i > j\}$ and
\[
h_\kappa(x) = \prod_{1 \le i < j \le d+1} |x_i - x_j|^{\kappa}, \qquad \kappa \ge 0,
\]
which is invariant under the symmetric group $S_{d+1}$, and the case $B_{d+1}$, for which $R_+ = \{\varepsilon_i - \varepsilon_j,\ \varepsilon_i + \varepsilon_j : i < j\} \cup \{\varepsilon_i : 1 \le i \le d+1\}$ and
\[
h_\kappa(x) = \prod_{i=1}^{d+1} |x_i|^{\kappa_0} \prod_{1 \le i < j \le d+1} |x_i^2 - x_j^2|^{\kappa_1}, \qquad \kappa_0 \ge 0, \ \kappa_1 \ge 0,
\]
which is invariant under the hyper-octahedral group.


For a finite reflection group $G$ with positive roots $R_+$ and a multiplicity function $v \mapsto \kappa_v$, Dunkl's operators are defined by
\[
\mathcal{D}_i f(x) := \partial_i f(x) + \sum_{v \in R_+} \kappa_v\, \frac{f(x) - f(x\sigma_v)}{\langle x, v\rangle}\, \langle v, \varepsilon_i\rangle, \qquad 1 \le i \le d+1,
\]
where $\varepsilon_1, \ldots, \varepsilon_{d+1}$ are the standard unit vectors of $\mathbb{R}^{d+1}$. The remarkable fact that these are commuting operators, $\mathcal{D}_i \mathcal{D}_j = \mathcal{D}_j \mathcal{D}_i$, holds for every reflection group. The $h$-Laplacian is again defined by $\Delta_h = \mathcal{D}_1^2 + \cdots + \mathcal{D}_{d+1}^2$, and it plays the role of the Laplacian in the theory of ordinary harmonics. The $h$-harmonics are the homogeneous polynomials satisfying the equation $\Delta_h P = 0$, and Theorem 3.5 also holds for a general reflection group.
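For a concrete instance beyond $\mathbb{Z}_2^{d+1}$, the following sketch implements Dunkl's operators for the symmetric group $S_3$ (root system $A_2$, roots $\varepsilon_i - \varepsilon_j$, $i < j$, acting on $\mathbb{R}^3$ by permuting coordinates) in sympy and checks commutativity; the multiplicity value and the test polynomial are arbitrary choices:

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
k = sp.Rational(1, 2)        # one conjugacy class of roots for A_2
roots = [(0, 1), (0, 2), (1, 2)]   # positive roots e_a - e_b, a < b

def D(f, i):
    # D_i f = d f / d x_i + k * sum_v (f - f o sigma_v) / <x, v> * <v, e_i>
    out = sp.diff(f, x[i])
    for (a, b) in roots:
        swapped = f.subs({x[a]: x[b], x[b]: x[a]}, simultaneous=True)
        vi = (1 if i == a else 0) - (1 if i == b else 0)   # <e_a - e_b, e_i>
        if vi:
            out += k * vi * sp.cancel((f - swapped) / (x[a] - x[b]))
    return sp.expand(out)

f = x[0]**2 * x[1] + x[2]**3        # an arbitrary test polynomial
commutator = sp.expand(D(D(f, 0), 1) - D(D(f, 1), 0))
print(commutator)    # 0: the operators commute for every reflection group
```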

There again exists an intertwining operator $V$ between the algebra of differential operators and the commuting algebra of Dunkl's operators; it is the unique linear operator defined by
\[
V \mathcal{P}_n \subset \mathcal{P}_n, \qquad V1 = 1, \qquad \mathcal{D}_i V = V \partial_i, \quad 1 \le i \le d+1.
\]
The representation of the reproducing kernel in Theorem 3.7 in terms of the intertwining operator holds for a general reflection group. It was proved by Rösler [27] that $V$ is a nonnegative operator; that is, $Vp \ge 0$ if $p \ge 0$. However, unlike the case of $G = \mathbb{Z}_2^{d+1}$, few explicit formulae are known for $h$-harmonics with respect to a general reflection group. In fact, no orthonormal basis of $\mathcal{H}_n^{d+1}(h_\kappa^2)$ is known for $d > 1$. Recall that the basis given in Theorem 3.3 depends on the product nature of the weight function there. For an orthogonal basis, one can still show that the $H_\alpha$ defined in the previous section are $h$-harmonics and that $\{H_\alpha : |\alpha| = n,\ \alpha \in \mathbb{N}_0^{d+1},\ \alpha_{d+1} = 0 \text{ or } 1\}$ is an orthogonal basis, but explicit formulae for $H_\alpha$ and its $L^2(h_\kappa^2\, d\omega)$-norm are unknown. Another basis for $\mathcal{H}_n^{d+1}(h_\kappa^2)$ consists of the $VY_\alpha$, where $\{Y_\alpha\}$ is a basis for the space $\mathcal{H}_n^{d+1}$ of ordinary harmonics. However, other than for $\mathbb{Z}_2^{d+1}$, an explicit formula for $V$ is known only in the case of the symmetric group $S_3$ on $\mathbb{R}^3$, and that formula is complicated and likely not in its final form. In fact, even in the case of the dihedral groups on $\mathbb{R}^2$ the formula for $V$ is unknown. The first nontrivial case should be the group $I_2(4) = B_2$, for which $h_\kappa(x) = |x_1 x_2|^{\kappa_1}\, |x_1^2 - x_2^2|^{\kappa_2}$.

4. Orthogonal polynomials on the unit ball

As we have seen in Section 3.1, orthogonal polynomials on the unit ball are closely related to orthogonal polynomials on the sphere. This allows us to derive, working with $G \times \mathbb{Z}_2$ on $\mathbb{R}^{d+1}$, various properties of orthogonal polynomials with respect to the weight function $h_\kappa^2(x)(1 - \|x\|^2)^{\mu - 1/2}$, where $h_\kappa$ is a reflection-invariant function defined on $\mathbb{R}^d$. Again we will work with the case of $h_\kappa$ being a


product weight function,
\[
W_{\kappa,\mu}^B(x) = \prod_{i=1}^{d} |x_i|^{2\kappa_i}\, (1 - \|x\|^2)^{\mu - 1/2}, \qquad \kappa_i \ge 0, \quad \mu > -1/2,
\]
for which various explicit formulae can be derived from the results in Section 3.2. In the case $\kappa = 0$, $W_\mu^B = W_{0,\mu}^B$ is the classical weight function on $B^d$, which is invariant under rotations.

4.1. Differential-difference equation. For $y \in \mathbb{R}^{d+1}$ we use the polar coordinates $y = r(x, x_{d+1})$, where $r = \|y\|$ and $(x, x_{d+1}) \in S^d$. The relation in Section 3.1 states that if $P_\alpha$ are orthogonal polynomials with respect to $W_{\kappa,\mu}^B$, then $Y_\alpha(y) = r^n P_\alpha(x)$ are $h$-harmonics with respect to $h_{\kappa,\mu}(y) = \prod_{i=1}^d |y_i|^{\kappa_i}\, |y_{d+1}|^\mu$. Since the polynomials $Y_\alpha$ defined above are even in $y_{d+1}$, we only need to deal with the upper half space $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$. In order to write the operator in terms of $x \in B^d$, we choose the following mapping:
\[
y \mapsto (r, x): \quad y_1 = r x_1,\ \ldots,\ y_d = r x_d,\ y_{d+1} = r\sqrt{1 - x_1^2 - \cdots - x_d^2},
\]
which is one-to-one from $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$ to itself. We rewrite the $h$-Laplacian in terms of the new coordinates $(r, x)$. Let $\Delta_h^{\kappa,\mu}$ denote the $h$-Laplacian associated with the weight function $h_{\kappa,\mu}$, and keep the notation $\Delta_h$ for the $h$-Laplacian associated with the weight function $h_\kappa(x) = \prod_{i=1}^d |x_i|^{\kappa_i}$ for $x \in \mathbb{R}^d$.

Proposition 4.1. Acting on functions on $\mathbb{R}^{d+1}$ that are even in $y_{d+1}$, the operator $\Delta_h^{\kappa,\mu}$ takes the form
\[
\Delta_h^{\kappa,\mu} = \frac{\partial^2}{\partial r^2} + \frac{d + 2|\kappa| + 2\mu}{r}\, \frac{\partial}{\partial r} + \frac{1}{r^2}\, \Delta_{h,0}^{\kappa,\mu}
\]
in terms of the coordinates $(r, x)$ on $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$, where the spherical part $\Delta_{h,0}^{\kappa,\mu}$, acting on functions of the variables $x$, is given by
\[
\Delta_{h,0}^{\kappa,\mu} = \Delta_h - \langle x, \nabla\rangle^2 - (2|\kappa| + 2\mu + d - 1)\langle x, \nabla\rangle,
\]
in which the operators $\Delta_h$ and $\nabla = (\partial_1, \ldots, \partial_d)$ all act on the $x$ variables.

Proof. Since $\Delta_h^{\kappa,\mu}$ is just $\Delta_h^{(y)}$ acting on functions defined on $\mathbb{R}^{d+1}$, for $h(y) = \prod_{i=1}^{d+1} |y_i|^{\kappa_i}$ with $\kappa_{d+1} = \mu$, its formula is given in Section 3.2. Writing $r$ and $x_i$ in terms of $y$ under the change of variables $y \mapsto (r, x)$ and computing the partial derivatives $\partial x_i/\partial y_i$, the chain rule implies that
\[
\frac{\partial}{\partial y_i} = x_i \frac{\partial}{\partial r} + \frac{1}{r}\Bigl( \frac{\partial}{\partial x_i} - x_i \langle x, \nabla_{(x)}\rangle \Bigr), \qquad 1 \le i \le d+1,
\]


where for $i = d+1$ we use the convention that $x_{d+1} = \sqrt{1 - \|x\|^2}$ and define $\partial/\partial x_{d+1} = 0$. A tedious computation gives the second-order derivatives
\[
\frac{\partial^2}{\partial y_i^2} = x_i^2\, \frac{\partial^2}{\partial r^2} + \frac{1 - x_i^2}{r}\, \frac{\partial}{\partial r} + \frac{2 x_i}{r}\Bigl( \frac{\partial}{\partial x_i} - x_i \langle x, \nabla_{(x)}\rangle \Bigr)\frac{\partial}{\partial r} + \frac{1}{r^2}\Bigl( \frac{\partial}{\partial x_i} - x_i \langle x, \nabla_{(x)}\rangle \Bigr)^2 - \frac{x_i}{r^2}\Bigl( \frac{\partial}{\partial x_i} - x_i \langle x, \nabla_{(x)}\rangle \Bigr).
\]
Using these formulae and the fact that $f(y_1, \ldots, -y_{d+1}) - f(y_1, \ldots, y_{d+1}) = 0$ for $f$ even in $y_{d+1}$, the stated equation follows from a straightforward computation. $\square$

We use the notation $\mathcal{V}_n^d(W)$ to denote the space of orthogonal polynomials of degree exactly $n$ with respect to the weight function $W$.

Theorem 4.1. The orthogonal polynomials in $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ satisfy the differential-difference equation
\[
\bigl( \Delta_h - \langle x, \nabla\rangle^2 - (2|\kappa| + 2\mu + d - 1)\langle x, \nabla\rangle \bigr) P = -n(n + d + 2|\kappa| + 2\mu - 1) P.
\]

Proof. Let $P \in \mathcal{V}_n^d(W_{\kappa,\mu}^B)$. The formula in Proposition 4.1 applied to the homogeneous polynomial $Y(y) = r^n P(x)$ gives
\[
0 = \Delta_h^{\kappa,\mu} Y(y) = r^{n-2}\bigl[ n(n + d + 2|\kappa| + 2\mu - 1) P(x) + \Delta_{h,0}^{\kappa,\mu} P(x) \bigr].
\]
The stated result follows from the formula for $\Delta_{h,0}^{\kappa,\mu}$. $\square$

For $\kappa = 0$, $\Delta_h$ becomes the ordinary Laplacian and the equation becomes a differential equation, which is the classical differential equation in [2]; note that in this case the weight function $W_\mu^B$ is rotation invariant. A similar differential-difference equation holds for the weight functions of the form $h_\kappa^2(x)(1 - \|x\|^2)^{\mu - 1/2}$, with $h_\kappa$ associated with a general reflection group.

4.2. Orthogonal bases and reproducing kernels. From the correspondence $Y_\alpha(y) = r^n P_\alpha(x)$, with $y = r(x, x_{d+1}) \in \mathbb{R}^{d+1}$ and $x \in B^d$, several orthogonal bases for $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ follow from the bases for the $h$-harmonics in Section 3.2. All orthonormal bases are with respect to the weight function normalized to have unit integral. Associated with $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$, define $\mathbf{x}_j = (x_1, \ldots, x_j)$ for $1 \le j \le d$ and $\mathbf{x}_0 = 0$.

Theorem 4.2. An orthonormal basis of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ is given by $\{P_\alpha : \alpha \in \mathbb{N}_0^d,\ |\alpha| = n\}$, defined by
\[
P_\alpha(x) = [h_\alpha^B]^{-1} \prod_{j=1}^{d} \bigl(1 - \|\mathbf{x}_{j-1}\|^2\bigr)^{\frac{\alpha_j}{2}}\, C_{\alpha_j}^{(a_j, \kappa_j)}\Bigl( \frac{x_j}{\sqrt{1 - \|\mathbf{x}_{j-1}\|^2}} \Bigr),
\]


where $a_j = \mu + |\kappa^{j+1}| + |\alpha^{j+1}| + \frac{d-j}{2}$ and $h_\alpha^B$ is given by
\[
[h_\alpha^B]^2 = \frac{1}{(|\kappa| + \mu + \frac{d+1}{2})_n} \prod_{j=1}^{d} \Bigl( \mu + |\alpha^{j+1}| + |\kappa^{j}| + \frac{d-j+2}{2} \Bigr)_{\alpha_j}.
\]

Proof. In the spherical coordinates of $(x, x_{d+1}) \in S^d$, $\cos\theta_{d-j} = x_{j+1}/\sqrt{1 - \|\mathbf{x}_j\|^2}$ and $\sin\theta_{d-j} = \sqrt{1 - \|\mathbf{x}_{j+1}\|^2}\,/\sqrt{1 - \|\mathbf{x}_j\|^2}$. Hence, this basis is obtained from the $h$-harmonic basis in Theorem 3.3 using the correspondence. $\square$

    f(x)dx=

    10

    rd1Sd1

    f(rx)d(x)dr,

    the verification is a straightforward computation.

    Theorem 4.3. For 0 j n/2 let{Shn2j,} denote an orthonormal basis ofHdn2j(h2); then the polynomials

    P,j(x) = [cBj,n]

    1 C(,n2j+||+ d12 )2j (x)Sh,n2j(x)form an orthonormal basis ofVdn(WB,), in which the constants are given by

    [cBj,n]2 =

    (|| + + d+12 )(n 2j+ || + d2)(|| + d

    2)(n 2j+ || + + d+1

    2 )

    .

    Another interesting basis, the monomial basis, can be derived from the monomial

    h-harmonics in Section 3.2.3. This is a basis for which P(x) = cx + . . .,corresponding tod+1= 0 of the basis in Definition 3.1. We denote this basis byPB, they are defined by the generating function

    [1,1]d1

    (1 2(b1x1t1+ . . .+ bdxdtd) + b2)d

    i=1

    ci(1 + ti)(1 t2i )i1dt

    =Nd0

    bPB(x),

    where=|| ++ d12 . This corresponds to the case ofbd+1 = 0 in Definition3.1. The explicit formulae of these polynomials follow from Theorem 3.9,

    PB(x) =2||(|| + + d12 )

    !

    (12)[+12 ]

    ( + 12

    )[+12 ]

    x

    FB

    +

    + 12

    , + 1

    2

    +1

    2; || || d 3

    2 ;

    1

    x21, . . . ,

    1

    x2d+1

    .


Clearly the highest-degree term of $P_\alpha^B(x)$ is a multiple of $x^\alpha$, so these are monomial polynomials. In the case of the classical weight function $W_\mu^B$, $\kappa = 0$, and using the duplication identity
\[
\Bigl( -\frac{\alpha}{2} \Bigr)_\gamma \Bigl( \frac{1 - \alpha}{2} \Bigr)_\gamma = \frac{(-\alpha)_{2\gamma}}{2^{2\gamma}},
\]
the formula for $P_\alpha^B$ can be rewritten as
\[
V_\alpha(x) = P_\alpha^B(x) = \frac{2^{|\alpha|}\, (\mu + \frac{d-1}{2})_{|\alpha|}}{\alpha!}\, x^\alpha\, F_B\Bigl( -\frac{\alpha}{2},\ \frac{1 - \alpha}{2};\ -|\alpha| - \mu - \frac{d-3}{2};\ \frac{1}{x_1^2}, \ldots, \frac{1}{x_d^2} \Bigr),
\]

which are Appell's monomial orthogonal polynomials. In this case, there is another basis, defined by
\[
U_\alpha(x) = \frac{(-1)^{|\alpha|}\, (2\mu)_{|\alpha|}}{2^{|\alpha|}\, (\mu + \frac12)_{|\alpha|}\, \alpha!}\, (1 - \|x\|^2)^{-\mu + \frac12}\, \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}\, (1 - \|x\|^2)^{|\alpha| + \mu - \frac12},
\]
which is biorthogonal to the monomial basis in the following sense:

Theorem 4.4. The polynomials $\{U_\alpha\}$ and $\{V_\alpha\}$ are biorthogonal:
\[
w_\mu^B \int_{B^d} V_\alpha(x)\, U_\beta(x)\, W_\mu^B(x)\, dx = \frac{\mu + \frac{d-1}{2}}{|\alpha| + \mu + \frac{d-1}{2}}\, \frac{(2\mu)_{|\alpha|}}{\alpha!}\, \delta_{\alpha,\beta}.
\]

Proof. Since the $\{V_\alpha\}$ form an orthogonal basis, we only need to consider the case $|\beta| \ge |\alpha|$. Using the Rodrigues-type formula and integration by parts yields
\[
w_\mu^B \int_{B^d} V_\alpha(x)\, U_\beta(x)\, W_\mu^B(x)\, dx = \frac{w_\mu^B\, (2\mu)_{|\beta|}}{2^{|\beta|}\, (\mu + \frac12)_{|\beta|}\, \beta!} \int_{B^d} \frac{\partial^{|\beta|} V_\alpha(x)}{\partial x_1^{\beta_1} \cdots \partial x_d^{\beta_d}}\, (1 - \|x\|^2)^{|\beta| + \mu - \frac12}\, dx.
\]
However, since $V_\alpha$ is a constant multiple of a monomial orthogonal polynomial, $\partial_x^{|\beta|} V_\alpha(x) = 0$ for $|\beta| > |\alpha|$, which proves the orthogonality. A simple computation gives the constant for the case $\alpha = \beta$. $\square$

Among other explicit formulae that we get, the compact formula for the reproducing kernel is of particular interest. Let us denote the reproducing kernel of $\mathcal{V}_n^d(W)$ by $P_n(W; x, y)$, as defined in Section 1.5.


Theorem 4.5. For $x, y \in B^d$, the reproducing kernel can be written as an integral,
\[
P_n(W_{\kappa,\mu}^B; x, y) = \frac{n + |\kappa| + \mu + \frac{d-1}{2}}{|\kappa| + \mu + \frac{d-1}{2}} \int_{-1}^1 \int_{[-1,1]^d} C_n^{|\kappa| + \mu + \frac{d-1}{2}}\Bigl( t_1 x_1 y_1 + \cdots + t_d x_d y_d + s\sqrt{1 - \|x\|^2}\sqrt{1 - \|y\|^2} \Bigr) \prod_{i=1}^d c_{\kappa_i}(1 + t_i)(1 - t_i^2)^{\kappa_i - 1}\, dt\; c_\mu (1 - s^2)^{\mu - 1}\, ds.
\]

Proof. Let $h(y) = \prod_{i=1}^{d+1} |y_i|^{\kappa_i}$ with $\kappa_{d+1} = \mu$. Then the correspondence in Section 3.1 can be used to show that
\[
P_n(W_{\kappa,\mu}^B; x, y) = \frac12\Bigl[ P_n\bigl(h^2; x, (y, \sqrt{1 - \|y\|^2})\bigr) + P_n\bigl(h^2; x, (y, -\sqrt{1 - \|y\|^2})\bigr) \Bigr],
\]
from which the stated formula follows from Theorem 3.7 together with the explicit formula for $V$ in Theorem 3.6. $\square$

In particular, taking the limit $\kappa_i \to 0$ for $i = 1, \ldots, d$, we conclude that for the classical weight function $W_\mu^B$,
\[
P_n(W_\mu^B; x, y) = c_\mu\, \frac{n + \mu + \frac{d-1}{2}}{\mu + \frac{d-1}{2}} \int_{-1}^1 C_n^{\mu + \frac{d-1}{2}}\Bigl( \langle x, y\rangle + t\sqrt{1 - \|x\|^2}\sqrt{1 - \|y\|^2} \Bigr)(1 - t^2)^{\mu - 1}\, dt. \tag{4.1}
\]
Even in this case the formula was discovered only recently. For $d = 1$, it reduces to the classical product formula for the Gegenbauer polynomials:
\[
\frac{C_n^\mu(x)\, C_n^\mu(y)}{C_n^\mu(1)} = c_\mu \int_{-1}^1 C_n^\mu\bigl( xy + t\sqrt{1 - x^2}\sqrt{1 - y^2} \bigr)(1 - t^2)^{\mu - 1}\, dt.
\]
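The product formula just displayed is classical and easy to confirm numerically; a sketch with scipy (the parameter values are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gamma

# Check C_n(x) C_n(y) / C_n(1) = c int_{-1}^{1} C_n(xy + t sqrt((1-x^2)(1-y^2))) (1-t^2)^(lam-1) dt
lam, n = 1.5, 4
x, y = 0.3, -0.7
c = gamma(lam + 0.5) / (gamma(0.5) * gamma(lam))   # normalizes (1 - t^2)^(lam - 1)

lhs = (eval_gegenbauer(n, lam, x) * eval_gegenbauer(n, lam, y)
       / eval_gegenbauer(n, lam, 1.0))
arg = lambda t: x * y + t * np.sqrt((1 - x * x) * (1 - y * y))
rhs, _ = quad(lambda t: eval_gegenbauer(n, lam, arg(t)) * (1 - t * t) ** (lam - 1), -1, 1)
rhs *= c
print(abs(lhs - rhs))   # essentially zero
```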

There is also an analogue of the Funk-Hecke formula for orthogonal polynomials on $B^d$. The most interesting case is the formula for the classical weight function:

Theorem 4.6. Let $f$ be a continuous function on $[-1, 1]$ and let $P \in \mathcal{V}_n^d(W_\mu^B)$. Then
\[
\int_{B^d} f(\langle x, y\rangle)\, P(y)\, W_\mu^B(y)\, dy = \lambda_n(f)\, P(x), \qquad \|x\| = 1,
\]
where $\lambda_n(f)$ is the same as in Theorem 3.8 with $|\kappa|$ replaced by $\mu$.

As a consequence, it follows that the polynomial $C_n^{(\mu + (d-1)/2)}(\langle x, \xi\rangle)$, with $\xi$ satisfying $\|\xi\| = 1$, is an element of $\mathcal{V}_n^d(W_\mu^B)$. Furthermore, if $\eta$ also satisfies $\|\eta\| = 1$, then
\[
\int_{B^d} C_n^{\mu + \frac{d-1}{2}}(\langle x, \xi\rangle)\, C_n^{\mu + \frac{d-1}{2}}(\langle x, \eta\rangle)\, W_\mu^B(x)\, dx = \lambda_n\, C_n^{\mu + \frac{d-1}{2}}(\langle \xi, \eta\rangle),
\]


where $\lambda_n = \bigl(\mu + (d-1)/2\bigr)/\bigl(n + \mu + (d-1)/2\bigr)$. The basis in Subsection 1.2.3 is derived from this integral.

Several results given above hold for the weight functions $h_\kappa^2(x)(1 - \|x\|^2)^{\mu - 1/2}$, where $h_\kappa$ is one of the weight functions in Section 3.3 that are invariant under reflection groups. Most notable is Theorem 4.3, which holds with $|\kappa|$ replaced by $\gamma_\kappa = \sum_{v \in R_+} \kappa_v$. Analogues of Theorems 4.4 and 4.5, in which the intertwining operator is used, also hold, but the formulae are not really explicit, since neither an explicit basis for $\mathcal{H}_n^d(h_\kappa^2)$ nor a formula for $V$ is known for $h_\kappa^2$ associated with the general reflection groups.

4.3. Rotation invariant weight functions. If $\psi(t)$ is a nonnegative even function on $\mathbb{R}$ with finite moments, then the weight function $W(x) = \psi(\|x\|)$ is a rotation invariant weight function on $\mathbb{R}^d$. Such a function is called a radial function. The classical weight function $W_\mu^B(x)$ corresponds to $\psi(t) = (1 - t^2)^{\mu - 1/2}$ for $|t| < 1$ and $\psi(t) = 0$ for $|t| > 1$. The orthonormal basis in Theorem 4.3 can be extended to such weight functions.

Theorem 4.7. For $0 \le j \le n/2$, let $\{S_{n-2j,\beta}\}$ denote an orthonormal basis for $\mathcal{H}_{n-2j}^d$ of ordinary spherical harmonics, and let $p_{2j}^{(2n-4j+d-1)}$ denote the orthonormal polynomials with respect to the weight function $|t|^{2n-4j+d-1}\psi(t)$. Then the polynomials
\[
P_{\beta,j}(x) = p_{2j}^{(2n-4j+d-1)}(\|x\|)\, S_{n-2j,\beta}(x)
\]
form an orthonormal basis of $\mathcal{V}_n^d(W)$ with $W(x) = \psi(\|x\|)$.

Since $|t|^{2n-4j+d-1}\psi(t)$ is an even function, the polynomials $p_{2j}^{(2n-4j+d-1)}(t)$ are even polynomials. Hence, the $P_{\beta,j}(x)$ are indeed polynomials of degree $n$ in $x$. The proof is a simple verification upon writing the integral in polar coordinates.

As one application, we consider the partial sums of the Fourier orthogonal expansion $S_n(W; f)$ defined in Section 1.5. Let $s_n(w; g)$ denote the $n$-th partial sum of the Fourier orthogonal expansion with respect to the weight function $w(t)$ on $\mathbb{R}$. The following theorem states that the partial sums of a radial function with respect to a radial weight function are also radial.

Theorem 4.8. Let $W(x) = \psi(\|x\|)$ be as above and let $w(t) = |t|^{d-1}\psi(t)$. If $g: \mathbb{R} \to \mathbb{R}$ and $f(x) = g(\|x\|)$, then
\[
S_n(W; f, x) = s_n(w; g, \|x\|), \qquad x \in \mathbb{R}^d.
\]


Proof. The orthonormal basis in the previous theorem gives a formula for the reproducing kernel,
\[
P_n(x, y) = \sum_{0 \le 2j \le n} p_{2j}^{(2n-4j+d-1)}(\|x\|)\, p_{2j}^{(2n-4j+d-1)}(\|y\|)\, \frac{n - 2j + \frac{d-2}{2}}{\frac{d-2}{2}}\, \|x\|^{n-2j} \|y\|^{n-2j}\, C_{n-2j}^{\frac{d-2}{2}}\bigl( \langle x', y'\rangle \bigr), \qquad x' = \frac{x}{\|x\|},\ y' = \frac{y}{\|y\|},
\]
where we have used the fact that the $S_{n-2j,\beta}$ are homogeneous and that $\sum_\beta S_{m,\beta}(x') S_{m,\beta}(y')$, for $x', y' \in S^{d-1}$, is a zonal polynomial. Using polar coordinates,
\[
S_n(W; f, x) = \int_0^\infty g(r) \int_{S^{d-1}} P_n(x, r y)\, d\omega(y)\, r^{d-1} \psi(r)\, dr.
\]
Since $C_m^{\frac{d-2}{2}}(\langle x', y\rangle)$ is a zonal harmonic in $y$, its integral over $S^{d-1}$ is zero for $m > 0$ and equals the surface area $\omega_{d-1}$ of $S^{d-1}$ for $m = 0$. Hence we get
\[
S_n(W; f, x) = \omega_{d-1} \int_0^\infty g(r)\, p_n^{(d-1)}(r)\, w(r)\, dr\; p_n^{(d-1)}(\|x\|).
\]
Since orthonormal bases are assumed to be with respect to the normalized weight function, setting $g = 1$ shows that there is no constant in front of $s_n(w; g)$. $\square$

5. Orthogonal polynomials on the simplex

Orthogonal polynomials on the simplex $T^d = \{x \in \mathbb{R}^d : x_1 \ge 0, \ldots, x_d \ge 0,\ 1 - |x| \ge 0\}$, where $|x| = x_1 + \cdots + x_d$, are closely related to those on the unit ball. The relation depends on the basic formula
\[
\int_{B^d} f(y_1^2, \ldots, y_d^2)\, dy = \int_{T^d} f(x_1, \ldots, x_d)\, \frac{dx}{\sqrt{x_1 \cdots x_d}}.
\]
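For $d = 1$ the basic formula is just the substitution $x = y^2$, which can be confirmed numerically; a sketch (the test function is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

# d = 1 case of the basic formula: int_{-1}^{1} f(y^2) dy = int_0^1 f(x) x^(-1/2) dx
f = lambda u: np.cos(u) + u**2
lhs, _ = quad(lambda y: f(y * y), -1.0, 1.0)
rhs, _ = quad(f, 0.0, 1.0, weight='alg', wvar=(-0.5, 0.0))   # algebraic weight x^(-1/2)
print(abs(lhs - rhs))   # essentially zero
```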

Let $W^B(x) = W(x_1^2, \ldots, x_d^2)$ be a weight function defined on $B^d$. Associated with $W^B$, define a weight function $W^T$ on $T^d$ by
\[
W^T(y) = \frac{W(y_1, \ldots, y_d)}{\sqrt{y_1 \cdots y_d}}, \qquad y = (y_1, \ldots, y_d) \in T^d.
\]
Let $\mathcal{V}_{2n}^d(W^B; \mathbb{Z}_2^d)$ denote the subspace of orthogonal polynomials in $\mathcal{V}_{2n}^d(W^B)$ that are even in each of their variables (that is, invariant under $\mathbb{Z}_2^d$). If $P \in \mathcal{V}_{2n}^d(W^B; \mathbb{Z}_2^d)$, then there is a polynomial $R \in \Pi_n^d$ such that $P(x) = R(x_1^2, \ldots, x_d^2)$. The polynomial $R$ is in fact an element of $\mathcal{V}_n^d(W^T)$.

Theorem 5.1. The relation $P(x) = R(x_1^2, \ldots, x_d^2)$ defines a one-to-one correspondence between an orthonormal basis of $\mathcal{V}_{2n}^d(W^B; \mathbb{Z}_2^d)$ and an orthonormal basis of $\mathcal{V}_n^d(W^T)$.


Proof. Assume that $\{R_\alpha\}_{|\alpha| = n}$ is an orthonormal basis of $\mathcal{V}_n^d(W^T)$. If $\beta \in \mathbb{N}_0^d$ has one odd component, then the integral of $P_\alpha(x)\, x^\beta$ with respect to $W^B$ over $B^d$ is zero. If all components of $\beta$ are even and $|\beta| < 2n$, then $\beta$ can be written as $\beta = 2\gamma$ with $\gamma \in \mathbb{N}_0^d$ and $|\gamma| \le n - 1$. The basic integral can be used to convert the integral of $P_\alpha(x)\, x^{2\gamma}$ over $B^d$ into an integral over $T^d$, so that the orthogonality of $R_\alpha$ implies that $P_\alpha$ is orthogonal to $x^\beta$. $\square$

The classical weight function $W_\kappa^T$ on the simplex $T^d$ is defined by
\[
W_\kappa^T(x) = \prod_{i=1}^d |x_i|^{\kappa_i - 1/2}\, (1 - |x|)^{\kappa_{d+1} - 1/2}, \qquad x \in T^d, \quad \kappa_i > -\tfrac12.
\]
Sometimes we write $W_\kappa^T$ as $W_{\kappa,\mu}^T$ with $\mu = \kappa_{d+1}$, as the orthogonal polynomials with respect to $W_{\kappa,\mu}^T$ on $T^d$ are related to the orthogonal polynomials with respect to $W_{\kappa,\mu}^B$ on $B^d$. Below we give explicit formulae for these classical orthogonal polynomials on the simplex.

    Theorem 5.2. The orthogonal polynomials in $\mathcal{V}_n^d(W_\kappa^T)$ satisfy the partial differential equation
\[
\sum_{i=1}^d x_i (1 - x_i) \frac{\partial^2 P}{\partial x_i^2} - 2 \sum_{1 \le i < j \le d} x_i x_j \frac{\partial^2 P}{\partial x_i \partial x_j} + \sum_{i=1}^d \Big( \kappa_i + \tfrac12 - \big(|\kappa| + \tfrac{d+1}{2}\big) x_i \Big) \frac{\partial P}{\partial x_i} = -n \big( n + |\kappa| + \tfrac{d-1}{2} \big) P.
\]


    $\{P_{2\alpha} : \alpha \in \mathbb{N}_0^d, |\alpha| = n\}$ forms an orthonormal basis of $\mathcal{V}_{2n}^d(W_{\kappa,\mu}^B; \mathbb{Z}_2^d)$. Hence, using the fact that $C_{2n}^{(\lambda,\mu)}$ is given in terms of a Jacobi polynomial, we get

    Theorem 5.3. With respect to $W_\kappa^T$, the polynomials
\[
P_\alpha(x) = [h_\alpha^T]^{-1} \prod_{j=1}^d \Big( \frac{1 - |x^j|}{1 - |x^{j-1}|} \Big)^{|\alpha^{j+1}|} p_{\alpha_j}^{(a_j, b_j)}\Big( \frac{2 x_j}{1 - |x^{j-1}|} - 1 \Big),
\]
    where $x^j = (x_1, \dots, x_j)$, $x^0 = 0$, $a_j = 2|\alpha^{j+1}| + |\kappa^{j+1}| + \frac{d-j-1}{2}$ and $b_j = \kappa_j - \frac12$, are orthonormal, and the normalization constants $h_\alpha^T$ are given by
\[
[h_\alpha^T]^2 = \frac{\big(|\kappa| + \frac{d+1}{2}\big)_{2|\alpha|}}{\prod_{j=1}^d \big(2|\alpha^{j+1}| + |\kappa^j| + \frac{d-j+2}{2}\big)_{2\alpha_j}}.
\]

    On the simplex $T^d$ it is often convenient to define $x_{d+1} = 1 - |x|$ and work with the homogeneous coordinates $(x_1, \dots, x_{d+1})$. One can also derive a basis of orthogonal polynomials that are homogeneous in the homogeneous coordinates. In fact, the relation between orthogonal polynomials on $B^d$ and $S^d$ allows us to work with $h$-harmonics. Let us denote by $\mathcal{H}_{2n}^{d+1}(h_\kappa^2, \mathbb{Z}_2^{d+1})$ the subspace of $h$-harmonics in $\mathcal{H}_{2n}^{d+1}(h_\kappa^2)$ that are even in each of their variables. Let $\{S_\alpha^n(x_1^2, \dots, x_{d+1}^2) : |\alpha| = n, \alpha \in \mathbb{N}_0^d\}$ be an orthonormal basis of $\mathcal{H}_{2n}^{d+1}(h_\kappa^2, \mathbb{Z}_2^{d+1})$. Then $\{S_\alpha^n(x_1, \dots, x_{d+1}) : |\alpha| = n, \alpha \in \mathbb{N}_0^d\}$ forms an orthonormal homogeneous basis of $\mathcal{V}_n^d(W_\kappa^T)$.

    For $x \in T^d$ let $X = (x_1, \dots, x_d, x_{d+1})$ denote the homogeneous coordinates. Let $Y_\alpha$ be the monomial basis of $h$-harmonics in Section 3.2.3. Then the $Y_{2\alpha}$ are even in each of their variables, which gives monomial orthogonal polynomials in $\mathcal{V}_n^d(W_\kappa^T)$ in the homogeneous coordinates $X := (x_1, \dots, x_{d+1})$ with $x_{d+1} = 1 - |x|$,
\[
P_\alpha(x) = X^\alpha\, F_B\Big( -\alpha,\ -\alpha - \kappa + \tfrac12;\ -2|\alpha| - |\kappa| - \tfrac{d-3}{2};\ \frac{1}{x_1}, \dots, \frac{1}{x_{d+1}} \Big).
\]

    Furthermore, changing the summation index shows that the above Lauricella function of type $B$ can be written as a constant multiple of the Lauricella function of type $A$, defined by
\[
F_A(c, \alpha; \beta; x) = \sum_\gamma \frac{(c)_{|\gamma|}\, (\alpha)_\gamma}{(\beta)_\gamma\, \gamma!}\, x^\gamma, \qquad \alpha, \beta \in \mathbb{R}^{d+1}, \quad c \in \mathbb{R},
\]
    where the summation is taken over $\gamma \in \mathbb{N}_0^{d+1}$. This gives the following:

    Theorem 5.4. For each $\alpha \in \mathbb{N}_0^{d+1}$ with $|\alpha| = n$, the polynomials
\[
R_\alpha(x) = F_A\big(|\alpha| + |\kappa| + \tfrac{d-1}{2},\ -\alpha;\ \kappa + \tfrac12;\ X\big), \qquad x \in T^d,
\]
    are orthogonal polynomials in $\mathcal{V}_n^d(W_\kappa^T)$ and
\[
R_\alpha = (-1)^n\, \frac{\big(n + |\kappa| + \frac{d-1}{2}\big)_n}{(\kappa + \frac12)_\alpha}\, X^\alpha + p, \qquad p \in \Pi_{n-1}^d,
\]
    where $\kappa + \tfrac12 := \kappa + \tfrac12 \mathbf{1}$ with $\mathbf{1} = (1, \dots, 1) \in \mathbb{R}^{d+1}$.

    The polynomial $R_\alpha$ is a polynomial whose leading term is a constant multiple of $X^\alpha$ for $\alpha \in \mathbb{N}_0^{d+1}$. The set $\{R_\alpha : \alpha \in \mathbb{N}_0^{d+1}, |\alpha| = n\}$ clearly has more elements than is necessary for a basis of $\mathcal{V}_n^d(W_\kappa^T)$; its subset with $\alpha_{d+1} = 0$ forms a basis of $\mathcal{V}_n^d(W_\kappa^T)$. Let us write $V_\alpha^T(x) = R_{(\alpha,0)}(x)$, normalized so that its leading term is $x^\alpha$. Then $\{V_\alpha^T : \alpha \in \mathbb{N}_0^d, |\alpha| = n\}$ is the monomial basis of $\mathcal{V}_n^d(W_\kappa^T)$. The notation $V$ goes back to [2]; we write a superscript $T$ to distinguish it from the notation for the intertwining operator. Associated with $V_\alpha^T$ is its biorthogonal basis, usually denoted by $U_\alpha$.

    Theorem 5.5. For $\alpha \in \mathbb{N}_0^d$, $|\alpha| = n$, the polynomials $U_\alpha$ defined by
\[
U_\alpha(x) = x_1^{-\kappa_1 + 1/2} \cdots x_d^{-\kappa_d + 1/2}\, (1 - |x|)^{-\kappa_{d+1} + 1/2}\, \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} \Big\{ x_1^{\alpha_1 + \kappa_1 - 1/2} \cdots x_d^{\alpha_d + \kappa_d - 1/2}\, (1 - |x|)^{|\alpha| + \kappa_{d+1} - 1/2} \Big\}
\]
    are polynomials in $\mathcal{V}_n^d(W_\kappa^T)$, and they are biorthogonal to the polynomials $V_\beta^T$:
\[
w_T \int_{T^d} V_\beta^T(x)\, U_\alpha(x)\, W_\kappa^T(x)\, dx = (-1)^{|\alpha|}\, \frac{(\kappa + \frac12)_\alpha\, (\kappa_{d+1} + \frac12)_{|\alpha|}}{\big(|\kappa| + \frac{d+1}{2}\big)_{2|\alpha|}}\, \alpha!\, \delta_{\alpha,\beta},
\]
    where $w_T$ is the normalization constant of $W_\kappa^T$.

    Proof. It follows from the definition that $U_\alpha$ is a polynomial of degree $n$. Integration by parts leads to
\[
w_T \int_{T^d} V_\beta(x)\, U_\alpha(x)\, W_\kappa^T(x)\, dx = (-1)^{|\alpha|}\, w_T \int_{T^d} \frac{\partial^{|\alpha|} V_\beta(x)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}\, \prod_{i=1}^d x_i^{\alpha_i + \kappa_i - \frac12}\, (1 - |x|)^{|\alpha| + \kappa_{d+1} - \frac12}\, dx.
\]
    Since $V_\beta$ is an orthogonal polynomial with respect to $W_\kappa^T$, the left integral is zero for $|\beta| > |\alpha|$. For $|\beta| \le |\alpha|$, the fact that $V_\beta$ is a monic monomial orthogonal polynomial gives
\[
\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}\, V_\beta(x) = \alpha!\, \delta_{\alpha,\beta},
\]
    from which the stated formula follows. The fact that the $U_\alpha$ are biorthogonal to the $V_\beta$ also shows that they are orthogonal polynomials with respect to $W_\kappa^T$.
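    The biorthogonality constant can be sanity-checked symbolically in the simplest case $d = 1$, $\kappa_1 = \kappa_2 = \frac12$, where $W_\kappa^T \equiv 1$ on $[0, 1]$, $V_n$ is the monic shifted Legendre polynomial and $U_n(x) = (d/dx)^n [x^n (1-x)^n]$ is a Rodrigues formula. The sketch below (not from the notes; helper names are ours) verifies the predicted constant $(-1)^n\, n!\, (1)_n^2 / (2)_{2n}$ for $n = 1, 2$:

```python
# Symbolic check of the biorthogonality constant of Theorem 5.5 for d = 1 and
# kappa_1 = kappa_2 = 1/2, i.e. the weight 1 on [0, 1].
import sympy as sp

x = sp.symbols('x')

def U(n):
    # Rodrigues-type definition of the biorthogonal polynomial
    return sp.diff(x**n * (1 - x)**n, x, n)

def V(n):
    # monic orthogonal polynomial for the weight 1 on [0, 1], via Gram-Schmidt
    basis = [sp.Integer(1)]
    for k in range(1, n + 1):
        p = x**k
        for q in basis:
            p -= q * sp.integrate(x**k * q, (x, 0, 1)) / sp.integrate(q**2, (x, 0, 1))
        basis.append(sp.expand(p))
    return basis[n]

def predicted(n):
    # (-1)^n n! (kappa+1/2)_n (kappa_{d+1}+1/2)_n / (|kappa| + (d+1)/2)_{2n}
    return sp.Integer(-1)**n * sp.factorial(n) * sp.rf(1, n)**2 / sp.rf(2, 2 * n)

results = {n: (sp.integrate(V(n) * U(n), (x, 0, 1)), predicted(n)) for n in (1, 2)}
print(results)  # n = 1: (-1/6, -1/6);  n = 2: (1/15, 1/15)
```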

    The correspondence between orthogonal polynomials in Theorem 5.1 also gives an explicit formula for the reproducing kernel associated with $W_\kappa^T$.


    Theorem 5.6. Let $\lambda = |\kappa| + (d-1)/2$. Then
\[
P_n(W_{\kappa,\mu}^T; x, y) = \frac{2n + \lambda}{\lambda} \int_{[-1,1]^{d+1}} C_{2n}^\lambda\big( \sqrt{x_1 y_1}\, t_1 + \cdots + \sqrt{x_{d+1} y_{d+1}}\, t_{d+1} \big) \prod_{i=1}^{d+1} c_{\kappa_i} (1 + t_i)(1 - t_i^2)^{\kappa_i - 1}\, dt,
\]
    where $x_{d+1} = 1 - |x|$ and $y_{d+1} = 1 - |y|$.

    Proof. Recall that we write $W_{\kappa,\mu}^T$ for $W_\kappa^T$ with $\kappa_{d+1} = \mu$. Using the correspondence in Theorem 5.1, it can be verified that
\[
P_n(W_{\kappa,\mu}^T; x, y) = \frac{1}{2^d} \sum_{\varepsilon \in \mathbb{Z}_2^d} P_{2n}\big( W_{\kappa,\mu}^B;\ (\varepsilon_1 \sqrt{x_1}, \dots, \varepsilon_d \sqrt{x_d}),\ (\sqrt{y_1}, \dots, \sqrt{y_d}) \big).
\]
    Hence the stated formula follows from Theorem 4.5.

    6. Classical type product orthogonal polynomials

    As mentioned in the introduction, if $W(x)$ is a product weight function,
\[
W(x) = w_1(x_1) \cdots w_d(x_d), \qquad x \in \mathbb{R}^d,
\]
    then an orthonormal basis with respect to $W$ is given by the product orthogonal polynomials $P_\alpha(x) = p_{\alpha_1,1}(x_1) \cdots p_{\alpha_d,d}(x_d)$, where $p_{m,i}$ is the orthogonal polynomial of degree $m$ with respect to the weight function $w_i$. In this section we discuss the product classical polynomials and some of their extensions.

    6.1. Multiple Jacobi polynomials. Recall that the product Jacobi weight function is denoted by $W_{a,b}$ in Section 1.3. One orthonormal basis is given by
\[
P_\alpha(x) = p_{\alpha_1}^{(a_1,b_1)}(x_1) \cdots p_{\alpha_d}^{(a_d,b_d)}(x_d),
\]
    where $p_m^{(a,b)}$ is the $m$-th orthonormal Jacobi polynomial. Although this basis of multiple Jacobi polynomials is simple, there is no closed formula for the reproducing kernel $P_n(W_{a,b}; x, y)$ in general. There is, however, a generating function for $P_n(W_{a,b}; x, \mathbf{1})$, where $\mathbf{1} = (1, 1, \dots, 1) \in [-1,1]^d$. It is based on the generating function (Poisson formula) of the Jacobi polynomials,
\[
G^{(a,b)}(r; x) := \sum_{k=0}^\infty p_k^{(a,b)}(1)\, p_k^{(a,b)}(x)\, r^k = \frac{1 - r}{(1 + r)^{a+b+2}}\, {}_2F_1\Big( \frac{a+b+2}{2}, \frac{a+b+3}{2};\ b + 1;\ \frac{2r(1+x)}{(1+r)^2} \Big), \qquad 0 \le r < 1,
\]


    which gives a generating function for the multiple Jacobi polynomials that can be written in terms of the reproducing kernels $P_n(W_{a,b})$ as
\[
\sum_{n=0}^\infty P_n(W_{a,b}; x, \mathbf{1})\, r^n = \prod_{i=1}^d G^{(a_i,b_i)}(r; x_i) := G_d^{(a,b)}(r; x).
\]
    Moreover, in some special cases we can derive an explicit formula for $P_n(W; x, \mathbf{1})$. Let us consider the reproducing kernel in the case that $a = b$ and the $a_i$ are half-integers. This corresponds to the multiple Gegenbauer polynomials with respect to the weight function
\[
W_\lambda(x) = \prod_{i=1}^d (1 - x_i^2)^{\lambda_i - 1/2}, \qquad \lambda_i > -\tfrac12, \quad x \in [-1,1]^d.
\]
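    The Poisson formula for $G^{(a,b)}$ above can be tested numerically by comparing the truncated series with the closed ${}_2F_1$ form; the sketch below (illustrative, with helper names of our choosing) uses the orthonormalization with respect to the normalized weight:

```python
# Numerical check of the Poisson formula for the Jacobi polynomials:
# truncated series sum_k p_k(1) p_k(x) r^k versus the closed 2F1 expression.
import math
from scipy.special import eval_jacobi, hyp2f1

def poisson_series(a, b, r, x, terms=80):
    total = 0.0
    # h0: squared norm of the constant polynomial, so that h/h0 normalizes
    # the weight (1-x)^a (1+x)^b to a probability measure
    h0 = (2.0**(a + b + 1) * math.gamma(a + 1) * math.gamma(b + 1)
          / math.gamma(a + b + 2))
    for k in range(terms):
        # squared L2 norm of P_k^{(a,b)} for the weight (1-x)^a (1+x)^b
        h = (2.0**(a + b + 1) / (2 * k + a + b + 1)
             * math.gamma(k + a + 1) * math.gamma(k + b + 1)
             / (math.gamma(k + a + b + 1) * math.factorial(k)))
        total += eval_jacobi(k, a, b, 1.0) * eval_jacobi(k, a, b, x) / (h / h0) * r**k
    return total

def poisson_closed(a, b, r, x):
    return ((1 - r) / (1 + r)**(a + b + 2)
            * hyp2f1((a + b + 2) / 2, (a + b + 3) / 2, b + 1,
                     2 * r * (1 + x) / (1 + r)**2))

a, b, r, x = 1.0, 0.5, 0.4, 0.3
print(poisson_series(a, b, r, x), poisson_closed(a, b, r, x))  # agree
```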

    Recall that the generating function of the Gegenbauer polynomials is given by
\[
\frac{1 - r^2}{(1 - 2rx + r^2)^{\lambda+1}} = \sum_{n=0}^\infty \frac{\lambda + n}{\lambda}\, C_n^\lambda(x)\, r^n.
\]
    Since $\frac{\lambda + n}{\lambda}\, C_n^\lambda(x) = p_n^\lambda(1)\, p_n^\lambda(x)$ for the orthonormal Gegenbauer polynomials $p_n^\lambda$, it follows that $P_n(W_\lambda; x, \mathbf{1})$ satisfies the generating function relation
\[
\frac{(1 - r^2)^d}{\prod_{i=1}^d (1 - 2r x_i + r^2)^{\lambda_i + 1}} = \sum_{n=0}^\infty P_n(W_\lambda; x, \mathbf{1})\, r^n.
\]

    If $\lambda_i \in \mathbb{N}_0$, then the reproducing kernel can be given in terms of the divided difference, defined inductively by
\[
[x_0]f = f(x_0), \qquad [x_0, \dots, x_m]f = \frac{[x_1, \dots, x_m]f - [x_0, \dots, x_{m-1}]f}{x_m - x_0}.
\]
    The divided difference $[x_1, \dots, x_d]f$ is a symmetric function of $x_1, \dots, x_d$.

    Theorem 6.1. Let $\lambda \in \mathbb{N}_0^d$ and $n \ge 1$. Then
\[
P_n(W_\lambda; x, \mathbf{1}) = 2^{-|\lambda|}\, [\underbrace{x_1, \dots, x_1}_{\lambda_1 + 1}, \dots, \underbrace{x_d, \dots, x_d}_{\lambda_d + 1}]\, G_{n + |\lambda|},
\]
    with
\[
G_m(t) = (-1)^{\lfloor d/2 \rfloor}\, 2\, (1 - t^2)^{\lfloor d/2 \rfloor} \times \begin{cases} T_m(t) & \text{for } d \text{ odd}, \\ U_{m-1}(t) & \text{for } d \text{ even}. \end{cases}
\]

    Proof. In the case $\lambda_i = 0$, the left-hand side of the generating function for $P_n(W_0; x, \mathbf{1})$ can be expanded as a power series using the formula
\[
[x_1, \dots, x_d]\, \frac{1}{a - b\,(\cdot)} = \frac{b^{d-1}}{\prod_{i=1}^d (a - b x_i)},
\]
    which can be proved by induction on the number of variables; the result is
\[
\frac{(1 - r^2)^d}{\prod_{i=1}^d (1 - 2r x_i + r^2)} = \frac{(1 - r^2)^d}{(2r)^{d-1}}\, [x_1, \dots, x_d]\, \frac{1}{1 - 2r\,(\cdot) + r^2} = \frac{(1 - r^2)^d}{(2r)^{d-1}}\, [x_1, \dots, x_d] \sum_{n=d-1}^\infty U_n(\cdot)\, r^n,
\]
    using the generating function of the Chebyshev polynomials of the second kind. Using the binomial theorem to expand $(1 - r^2)^d$ and the facts that $U_{m-1}(t) = \sin m\theta / \sin \theta$ and $\sin m\theta = (e^{im\theta} - e^{-im\theta})/(2i)$ with $t = \cos \theta$, the last term can be shown to be equal to $\sum_{n=0}^\infty [x_1, \dots, x_d]\, G_n\, r^n$.

    For $\lambda_i > 0$, we use the fact that $\frac{d^k}{dx^k} C_n^\lambda(x) = 2^k (\lambda)_k\, C_{n-k}^{\lambda+k}(x)$ and $\lambda + n = (\lambda + k) + (n - k)$, which imply, in each variable,
\[
\frac{d^k}{dx^k}\, P_n(w_\lambda; x, 1) = 2^k (\lambda + 1)_k\, P_{n-k}(w_{\lambda+k}; x, 1),
\]
    so that the formula
\[
\frac{d}{dx_1}\, [x_1, x_2, \dots, x_d]\, g = [x_1, x_1, x_2, x_3, \dots, x_d]\, g
\]
    and the fact that the divided difference is a symmetric function of its knots can be used to finish the proof.
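    Theorem 6.1 can be checked numerically in the simplest case $d = 2$, $\lambda = 0$ (the product Chebyshev weight), where $G_n(t) = -2(1 - t^2) U_{n-1}(t)$. The sketch below (with hypothetical helper names) implements the inductive definition of the divided difference and compares the direct kernel sum with the divided-difference formula:

```python
# Check of Theorem 6.1 for d = 2, lambda = 0: direct kernel sum versus the
# divided difference [x1, x2] G_n with G_n(t) = -2 (1 - t^2) U_{n-1}(t).
import math

def divided_difference(knots, g):
    # the inductive definition [x_0, ..., x_m]f from the text (distinct knots)
    if len(knots) == 1:
        return g(knots[0])
    return ((divided_difference(knots[1:], g) - divided_difference(knots[:-1], g))
            / (knots[-1] - knots[0]))

T = lambda n, t: math.cos(n * math.acos(t))                  # Chebyshev T_n
U = lambda n, t: math.sin((n + 1) * math.acos(t)) / math.sin(math.acos(t))

def kernel_direct(n, x1, x2):
    # p_k(1) p_k(t) = 2 T_k(t) for k >= 1 and 1 for k = 0
    t = lambda k, s: 1.0 if k == 0 else 2.0 * T(k, s)
    return sum(t(i, x1) * t(n - i, x2) for i in range(n + 1))

def kernel_divdiff(n, x1, x2):
    G = lambda s: -2.0 * (1.0 - s * s) * U(n - 1, s)
    return divided_difference([x1, x2], G)

print(kernel_direct(3, 0.3, 0.7), kernel_divdiff(3, 0.3, 0.7))  # both -5.36
```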

    For multiple Jacobi polynomials, there is a relation between $P_n(W_{a,b}; x, y)$ and $P_n(W_{a,b}; x, \mathbf{1})$. This follows from the product formula of the Jacobi polynomials,
\[
\frac{P_n^{(\alpha,\beta)}(x_1)\, P_n^{(\alpha,\beta)}(x_2)}{P_n^{(\alpha,\beta)}(1)} = \int_0^1 \int_0^\pi P_n^{(\alpha,\beta)}\big( 2 A^2(x_1, x_2, r, \psi) - 1 \big)\, dm_{\alpha,\beta}(r, \psi),
\]
    where $\alpha > \beta > -1/2$,
\[
A(x_1, x_2, r, \psi) = \tfrac12 \Big\{ (1 + x_1)(1 + x_2) + (1 - x_1)(1 - x_2)\, r^2 + 2 \sqrt{1 - x_1^2}\, \sqrt{1 - x_2^2}\, r \cos \psi \Big\}^{1/2}
\]
    and
\[
dm_{\alpha,\beta}(r, \psi) = c_{\alpha,\beta}\, (1 - r^2)^{\alpha - \beta - 1}\, r^{2\beta + 1}\, (\sin \psi)^{2\beta}\, dr\, d\psi,
\]
    in which $c_{\alpha,\beta}$ is a constant such that the integral of $dm_{\alpha,\beta}$ over $[0,1] \times [0,\pi]$ is $1$.

    Sometimes the precise meaning of the formula is not essential, and the following theorem of [12] is useful.

    Theorem 6.2. Let $a, b > -1$. There is an integral representation of the form
\[
p_n^{(a,b)}(x)\, p_n^{(a,b)}(y) = p_n^{(a,b)}(1) \int_{-1}^1 p_n^{(a,b)}(t)\, d\mu_{x,y}^{(a,b)}(t), \qquad n \ge 0,
\]
    with real Borel measures $d\mu_{x,y}^{(a,b)}$ on $[-1, 1]$ satisfying
\[
\int_{-1}^1 |d\mu_{x,y}^{(a,b)}(t)| \le M, \qquad -1 < x, y < 1.
\]


    Theorem 6.4. The multiple Laguerre polynomials associated to $W_\kappa^L$ satisfy the partial differential equation
\[
\sum_{i=1}^d x_i \frac{\partial^2 P}{\partial x_i^2} + \sum_{i=1}^d (\kappa_i + 1 - x_i)\frac{\partial P}{\partial x_i} = -nP.
\]
    Proof. We make a change of variables $x \mapsto x/\mu$ in the equation satisfied by the orthogonal polynomials with respect to $W_{\kappa,\mu}^T$, then divide by $\mu$ and take the limit $\mu \to \infty$.

    The limit relation, however, does not give an explicit formula for the reproducing kernel. Just as in the case of multiple Jacobi polynomials, there is no explicit formula for the kernel $P_n(W_\kappa^L; x, y)$ in general. There is, however, a simple formula if one of the arguments is $0$.

    Theorem 6.5. The reproducing kernel for $\mathcal{V}_n^d(W_\kappa^L)$ satisfies
\[
P_n(W_\kappa^L; x, 0) = L_n^{|\kappa| + d - 1}(|x|), \qquad x \in \mathbb{R}_+^d.
\]
    Proof. The generating function of the Laguerre polynomials is
\[
(1 - r)^{-a-1} \exp\Big( -\frac{xr}{1 - r} \Big) = \sum_{n=0}^\infty L_n^a(x)\, r^n, \qquad |r| < 1.
\]
    Multiplying the generating functions of the $d$ variables and using $|x| = x_1 + \cdots + x_d$, the stated formula follows.
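    Theorem 6.5 is easy to test numerically; the sketch below (illustrative, with helper names of our choosing) checks $d = 2$ by summing the product orthonormal Laguerre basis against $y = 0$ and comparing with $L_n^{|\kappa| + d - 1}(|x|)$:

```python
# Numerical check of Theorem 6.5 for d = 2:
#   P_n(W^L_kappa; x, 0) = L_n^{|kappa| + d - 1}(|x|).
import math
from scipy.special import eval_genlaguerre

def p_hat(k, a, x):
    # Laguerre polynomial, orthonormal for the normalized weight
    # x^a e^{-x} / Gamma(a + 1) on [0, infinity)
    c = math.sqrt(math.factorial(k) * math.gamma(a + 1) / math.gamma(k + a + 1))
    return c * eval_genlaguerre(k, a, x)

def kernel_direct(n, kappa, x):
    k1, k2 = kappa
    return sum(p_hat(i, k1, x[0]) * p_hat(i, k1, 0.0)
               * p_hat(n - i, k2, x[1]) * p_hat(n - i, k2, 0.0)
               for i in range(n + 1))

kappa, x, n = (0.5, 1.2), (0.4, 1.1), 3
direct = kernel_direct(n, kappa, x)
# |kappa| + d - 1 = kappa_1 + kappa_2 + 1 for d = 2
closed = eval_genlaguerre(n, sum(kappa) + 1.0, sum(x))
print(direct, closed)  # agree
```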


    with respect to the weight functions

    d

    i=1x0i

    1i


    Theorem 6.7. For0< z


    7. Fourier orthogonal expansion

    The $n$-th partial sums of the Fourier orthogonal expansion do not converge pointwise or uniformly for all continuous functions. It is necessary to consider summability methods for the orthogonal expansions, such as certain means of the partial sums. One important method is that of the Cesàro $(C, \delta)$ means.

    Definition 7.1. Let $\{c_n\}_{n=0}^\infty$ be a given sequence. For $\delta > 0$, the Cesàro $(C, \delta)$ means are defined by
\[
s_n^\delta = \sum_{k=0}^n \frac{(-n)_k}{(-n - \delta)_k}\, c_k.
\]
    The sequence $\{c_n\}$ is $(C, \delta)$ summable by Cesàro's method of order $\delta$ to $s$ if $s_n^\delta$ converges to $s$ as $n \to \infty$. If $\delta = 0$, then $s_n^0$ is the $n$-th partial sum of $\{c_n\}$ and we write $s_n^0$ as $s_n$.
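    As a small illustration of the definition (not part of the original notes), the Pochhammer form can be implemented directly; for $c_k = (-1)^k$ the partial sums oscillate between $1$ and $0$, while the $(C, 1)$ means tend to $1/2$:

```python
# Cesaro (C, delta) means from the Pochhammer form
#   s_n^delta = sum_k ((-n)_k / (-n - delta)_k) c_k,
# computed with exact rational arithmetic (integer delta) to avoid overflow.
from fractions import Fraction

def rising(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)
    out = 1
    for j in range(k):
        out *= a + j
    return out

def cesaro_mean(c, delta):
    n = len(c) - 1
    return float(sum(Fraction(rising(-n, k), rising(-n - delta, k)) * c[k]
                     for k in range(n + 1)))

c = [(-1) ** k for k in range(201)]
print(cesaro_mean(c, 0))  # delta = 0: the plain partial sum s_200 = 1
print(cesaro_mean(c, 1))  # the (C, 1) mean, close to 1/2
```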

    7.1. $h$-harmonic expansions. We start with the $h$-harmonics. Using the reproducing kernel $P_n(h_\kappa^2; x, y)$, the $n$-th $(C, \delta)$ means of the $h$-harmonic expansion, $S_n^\delta(h_\kappa^2; f)$, can be written as an integral
\[
S_n^\delta(h_\kappa^2; f, x) = c_h \int_{S^d} f(y)\, K_n^\delta(h_\kappa^2; x, y)\, h_\kappa^2(y)\, d\omega(y),
\]
    where $c_h$ is the normalization constant of $h_\kappa^2$ and $K_n^\delta(h_\kappa^2; x, y)$ are the Cesàro $(C, \delta)$ means of the sequence $\{P_n(h_\kappa^2; x, y)\}$. If $\delta = 0$, then $K_n^0(h_\kappa^2)$ is the $n$-th partial sum of the $P_k(h_\kappa^2)$.

    Proposition 7.1. Let $f \in C(S^d)$. Then the $(C, \delta)$ means $S_n^\delta(h_\kappa^2; f, x)$ converge to $f(x)$ if
\[
\sup_n I_n^\delta(x) < \infty, \qquad I_n^\delta(x) := c_h \int_{S^d} |K_n^\delta(h_\kappa^2; x, y)|\, h_\kappa^2(y)\, d\omega(y);
\]
    the convergence is uniform if $I_n^\delta(x)$ is uniformly bounded.

    Proof. First we show that if $p$ is a polynomial then $S_n^\delta(h_\kappa^2; p)$ converges uniformly to $p$. Indeed, let $S_n(h_\kappa^2; f)$ denote the $n$-th partial sum of the $h$-harmonic expansion of $f$ (the case $\delta = 0$ of $S_n^\delta$). It follows that
\[
S_n^\delta(h_\kappa^2; f) = \frac{\delta}{\delta + n} \sum_{k=0}^n \frac{(-n)_k}{(1 - \delta - n)_k}\, S_k(h_\kappa^2; f).
\]
    Assume $p \in \Pi_m^d$. By definition, $S_n(h_\kappa^2; p) = p$ if $n \ge m$. Hence,
\[
S_n^\delta(h_\kappa^2; p, x) - p(x) = \frac{\delta}{\delta + n} \sum_{k=0}^{m-1} \frac{(-n)_k}{(1 - \delta - n)_k} \big[ S_k(h_\kappa^2; p, x) - p(x) \big],
\]
    which is of size $O(n^{-1})$ and converges to zero uniformly as $n \to \infty$. Now the definition of $I_n^\delta(x)$ shows that $|S_n^\delta(h_\kappa^2; f, x)| \le I_n^\delta(x)\, \|f\|_\infty$, where $\|f\|_\infty$ is the uniform norm of $f$ taken over $S^d$. The triangle inequality implies
\[
|S_n^\delta(h_\kappa^2; f, x) - f(x)| \le (1 + I_n^\delta(x))\, \|f - p\|_\infty + |S_n^\delta(h_\kappa^2; p, x) - p(x)|.
\]
    Since $f \in C(S^d)$, we can choose $p$ such that $\|f - p\|_\infty < \varepsilon$.

    Recall that the explicit formula of the reproducing kernel is given in terms of the intertwining operator. Let $p_n^\delta(w_\lambda; x, y)$ denote the $(C, \delta)$ means of the reproducing kernels of the Gegenbauer expansion with respect to $w_\lambda(x) = (1 - x^2)^{\lambda - 1/2}$. Then $p_n^\delta(w_\lambda; x, 1)$ are the $(C, \delta)$ means of the sequence $\frac{n + \lambda}{\lambda} C_n^\lambda(x)$. Let $\lambda = |\kappa| + (d-1)/2$; it follows from Theorem 3.7 that
\[
K_n^\delta(h_\kappa^2; x, y) = V\big[ p_n^\delta(w_\lambda; \langle x, \cdot \rangle, 1) \big](y).
\]

    Theorem 7.1. If $\delta \ge 2|\kappa| + d$, then the $(C, \delta)$ means of the $h$-harmonic expansion with respect to $h_\kappa^2$ define a positive linear operator.

    Proof. The $(C, \delta)$ kernel of the Gegenbauer expansion with respect to $w_\lambda$ is positive if $\delta \ge 2\lambda + 1$ (cf. [3, p. 71]), and $V$ is a positive operator.

    The positivity shows that $I_n^\delta(x) = 1$ for all $x$; hence it implies that $S_n^\delta(h_\kappa^2; f)$ converges uniformly to the continuous function $f$. For convergence, however, positivity is not necessary. First we state an integration formula for the intertwining operator.

    Theorem 7.2. Let $V$ be the intertwining operator. Then
\[
\int_{S^d} Vf(x)\, h_\kappa^2(x)\, d\omega(x) = A_\kappa \int_{B^{d+1}} f(x)\, (1 - \|x\|^2)^{|\kappa| - 1}\, dx
\]
    for $f \in L^2(h_\kappa^2; S^d)$ such that both integrals are finite. In particular, if $g : \mathbb{R} \to \mathbb{R}$ is a function such that all integrals below are defined, then
\[
\int_{S^d} V[g(\langle x, \cdot \rangle)](y)\, h_\kappa^2(y)\, d\omega(y) = B_\kappa \int_{-1}^1 g(t \|x\|)\, (1 - t^2)^{|\kappa| + \frac{d-2}{2}}\, dt,
\]
    where $A_\kappa$ and $B_\kappa$ are constants whose values can be determined by setting $f(x) = 1$ and $g(t) = 1$, respectively.

    To prove this theorem, we can work with the Fourier orthogonal expansion of $Vf$ in terms of the orthogonal polynomials with respect to the classical weight function $W_{|\kappa|-1/2}^B$ on the unit ball $B^{d+1}$. It turns out that if $P_n \in \mathcal{V}_n^{d+1}(W_{|\kappa|-1/2}^B)$, then the normalized integral of $V P_n$ with respect to $h_\kappa^2$ over $S^d$ is zero for $n > 0$, and $1$ for $n = 0$, so that the integral of $Vf$ over $S^d$ is equal to the constant term in the orthogonal expansion on $B^{d+1}$. Of particular interest to us is the second formula, which can also be derived from the Funk–Hecke formula in Theorem 3.8. It should be mentioned that this theorem holds for the intertwining operator with respect to every reflection group (with $|\kappa|$ replaced by $\gamma_\kappa$), even though an explicit formula for the intertwining operator is unknown in general. The formula plays an essential role in the proof of the following theorem.

    Theorem 7.3. Let $f \in C(S^d)$. Then the Cesàro $(C, \delta)$ means of the $h$-harmonic expansion of $f$ converge uniformly on $S^d$ provided $\delta > |\kappa| + (d-1)/2$.

    Proof. Using the fact that $V$ is positive and Theorem 7.2, we conclude that
\[
I_n^\delta(x) \le c_h \int_{S^d} V\big[ |p_n^\delta(w_\lambda; \langle x, \cdot \rangle, 1)| \big](y)\, h_\kappa^2(y)\, d\omega(y) = b_\lambda \int_{-1}^1 |p_n^\delta(w_\lambda; 1, t)|\, w_\lambda(t)\, dt,
\]
    where $b_\lambda$ is a constant (in fact, the normalization constant of $w_\lambda$). The fact that the $(C, \delta)$ means of the Gegenbauer expansion with respect to $w_\lambda$ converge if and only if $\delta > \lambda$ finishes the proof.

    The above theorem and its proof in fact hold for $h$-harmonics with respect to any reflection group. It reduces the convergence of the $h$-harmonic expansion to that of the Gegenbauer expansion. Furthermore, since $\sup_x I_n^\delta(x)$ is also the $L^1$ norm of the operator $S_n^\delta(h_\kappa^2)$, the Riesz interpolation theorem shows that $S_n^\delta(h_\kappa^2; f)$ converges to $f$ in $L^p(h_\kappa^2; S^d)$, $1 \le p < \infty$, in norm if $\delta > |\kappa| + (d-1)/2$.

    It is natural to ask whether the condition on $\delta$ is sharp. With the help of Theorem 7.2, the above proof is similar to the usual one for the ordinary harmonics in the sense that the convergence is reduced to the convergence at just one point. For the ordinary harmonics, the underlying group is the orthogonal group and $S^d$ is its homogeneous space, so reduction to one point is to be expected. For the weight function $h_\kappa(x) = \prod_{i=1}^{d+1} |x_i|^{\kappa_i}$, however, the underlying group $\mathbb{Z}_2^{d+1}$ is a subgroup of the orthogonal group, which no longer acts transitively on $S^d$. In fact, in this case the condition on $\delta$ is not sharp. The explicit formula of the reproducing kernel for the product weight function allows us to derive a precise estimate for the kernel $K_n^\delta(h_\kappa^2; x, y)$: for $x, y \in S^d$ and $\delta > (d-2)/2$,

\[
|K_n^\delta(h_\kappa^2; x, y)| \le c\, \frac{\prod_{j=1}^{d+1} (|x_j y_j| + n^{-1} |\bar{x} - \bar{y}| + n^{-2})^{-\kappa_j}\, n^{(d-2)/2}}{(|\bar{x} - \bar{y}| + n^{-1})^{\delta + \frac{d+1}{2}}} + c\, \frac{\prod_{j=1}^{d+1} (|x_j y_j| + |\bar{x} - \bar{y}|^2 + n^{-2})^{-\kappa_j}}{n^\delta\, (|\bar{x} - \bar{y}| + n^{-1})^{d+1}},
\]
    where $\bar{x} = (|x_1|, \dots, |x_{d+1}|)$ and $\bar{y} = (|y_1|, \dots, |y_{d+1}|)$. This estimate allows us to prove the sufficient part of the following theorem:

    Theorem 7.4. The $(C, \delta)$ means of the $h$-harmonic expansion of every continuous function for $h_\kappa(x) = \prod_{i=1}^{d+1} |x_i|^{\kappa_i}$ converge uniformly to $f$ if and only if
\[
\delta > \frac{d-1}{2} + |\kappa| - \min_{1 \le i \le d+1} \kappa_i.
\]


    The sufficient part of the proof follows from the estimate; the necessary part follows from that of the orthogonal expansion with respect to $W_{\kappa,\mu}^B$ in Theorem 7.7 below. For $\kappa = 0$, the order $(d-1)/2$ is the classical critical index for the ordinary harmonics. The proof of the theorem shows that the maximum of $I_n^\delta(x)$ is attained on the great circles defined by the intersection of $S^d$ with the coordinate planes. An estimate that takes the relative position of $x \in S^d$ into consideration proves the following result:

    Theorem 7.5. Let $f \in C(S^d)$. If $\delta > (d-1)/2$, then $S_n^\delta(h_\kappa^2; f, x)$ converges to $f(x)$ for every $x \in S_{\mathrm{int}}^d$ defined by
\[
S_{\mathrm{int}}^d = \{ x \in S^d : x_i \ne 0,\ 1 \le i \le d+1 \}.
\]

    This shows that the great circles $\{x \in S^d : x_i = 0\}$ act like a boundary on the sphere for summability with respect to $h_\kappa^2$; concerning convergence, the situation is the same as in the classical case, where we have the critical index $(d-1)/2$ at those points away from the boundary.

    The sequence of partial sums $S_n(h_\kappa^2; f)$ does not converge to $f$ if $f$ is merely continuous, since the sequence of operators is not uniformly bounded. The sequence may converge for smooth functions, and the necessary smoothness of the functions depends on the order of $\|S_n(h_\kappa^2)\|$ as $n \to \infty$.

    Theorem 7.6. Let $\lambda = |\kappa| + (d-1)/2$. Then, as $n \to \infty$,
\[
\|S_n(h_\kappa^2)\| := \sup_{\|f\|_\infty \le 1} \|S_n(h_\kappa^2; f)\|_\infty = O(n^\lambda).
\]
    In particular, if $f \in C^{[\lambda]+1}(S^d)$, then $S_n(h_\kappa^2; f)$ converges to $f$ uniformly.

    Proof. By the definition of $S_n(h_\kappa^2; f)$ and Theorem 7.2, we get
\[
|S_n(h_\kappa^2; f, x)| \le b_\lambda \int_{-1}^1 \Big| \sum_{k=0}^n \frac{k + \lambda}{\lambda}\, C_k^\lambda(t) \Big|\, w_\lambda(t)\, dt\, \|f\|_\infty.
\]
    The integral of the partial sum of the Gegenbauer polynomials is known to be bounded by $O(n^\lambda)$. The smoothness of $f$ shows that there is a polynomial $P$ of degree $n$ such that $\|f - P\|_\infty \le c\, n^{-[\lambda]-1}$, from which the convergence follows, using the fact that $S_n(h_\kappa^2; P) = P$ for $n \ge \deg P$.

    Again, this theorem holds for $h_\kappa$ associated with every reflection group. For the product weight function $h_\kappa(x) = \prod_{i=1}^{d+1} |x_i|^{\kappa_i}$, the statement in Theorem 7.6 suggests the following conjecture:
\[
\|S_n(h_\kappa^2)\| = O(n^\sigma) \qquad \text{with} \quad \sigma = \frac{d-1}{2} + |\kappa| - \min_{1 \le i \le d+1} \kappa_i;
\]
    furthermore, for $x \in S_{\mathrm{int}}^d$, $|S_n(h_\kappa^2; f, x)|$ is of order $O(n^{(d-1)/2})$. If the estimate of $K_n^\delta(h_\kappa^2; x, y)$ could be extended to $\delta = 0$, then the conjecture would be proved.


    However, in the proof given in [21], the restriction $\delta > (d-1)/2$ seems to be essential.

    7.2. Orthogonal expansions on $B^d$ and on $T^d$. For a weight function $W$ on $B^d$ or $T^d$, we denote the $(C, \delta)$ means of the orthogonal expansion with respect to $W$ by $S_n^\delta(W; f)$. It follows that
\[
S_n^\delta(W; f, x) = c \int_\Omega K_n^\delta(W; x, y)\, f(y)\, W(y)\, dy,
\]
    where $\Omega = B^d$ or $T^d$, $c$ is the constant defined by $c^{-1} = \int_\Omega W(y)\, dy$, and $K_n^\delta(W)$ is the $(C, \delta)$ means of the sequence $P_n(W)$. Using the correspondence in Theorem 3.1, most of the results for $h$-harmonics can be extended to the orthogonal expansions with respect to $W_{\kappa,\mu}^B$ on the ball $B^d$.

    Theorem 7.7. The $(C, \delta)$ means of the orthogonal expansion of every continuous function with respect to $W_{\kappa,\mu}^B$ converge uniformly to $f$ if and only if
\[
\delta > \frac{d-1}{2} + |\kappa| + \mu - \min\{\kappa_1, \dots, \kappa_d, \mu\}. \qquad (7.1)
\]
    Furthermore, for $f$ continuous on $B^d$, $S_n^\delta(W_{\kappa,\mu}^B; f, x)$ converges to $f(x)$ for every $x \in B_{\mathrm{int}}^d$, where
\[
B_{\mathrm{int}}^d = \{ x \in B^d : \|x\| < 1 \ \text{and} \ x_i \ne 0,\ 1 \le i \le d \},
\]
    provided $\delta > (d-1)/2$.

    Proof. The sufficient parts of the first and the second statements follow from Theorems 7.4 and 7.5, upon using the fact that
\[
K_n^\delta(W_{\kappa,\mu}^B; x, y) = \frac12 \Big[ K_n^\delta\big(h_\kappa^2; X, (y, \sqrt{1 - \|y\|^2})\big) + K_n^\delta\big(h_\kappa^2; X, (y, -\sqrt{1 - \|y\|^2})\big) \Big], \qquad X = (x, \sqrt{1 - \|x\|^2}),
\]
    and the elementary integration formula in the proof of Theorem 3.1. The necessary part of the first statement uses the fact that the expansion reduces to the generalized Gegenbauer expansion at certain points. Let $w_{\lambda,\mu}(t) = |t|^{2\mu} (1 - t^2)^{\lambda - 1/2}$. Denote by $K_n^\delta(w_{\lambda,\mu}; t, s)$ the $(C, \delta)$ means of the kernel for the generalized Gegenbauer expansion. Let $\lambda = |\kappa| + \mu + (d-1)/2$. From the explicit formula of the reproducing kernel, it follows that
    $K_n^\delta(W_{\kappa,\mu}^B; x, 0)$