
Page 1: Bayesian Kernel Models: theory and applications

Kernel models and penalized loss · Bayesian kernel model · Priors on measures · Estimation and inference · Results on data · Open problems

Bayesian Kernel Models: theory and applications

Sayan Mukherjee

Department of Statistical Science, Institute for Genome Sciences & Policy, Department of Computer Science, Duke University

May 10, 2007

Page 2: Shameless plug for tomorrow

Learning gradients: simultaneous regression and inverse regression. This talk concerns models on high-dimensional data that are simultaneously predictive (regression) and that also infer the underlying geometric structure of the data relevant to prediction (inverse regression). Theory as well as applications to gene expression analysis and digit recognition will be presented. The word Bayes will not appear in tomorrow's talk. However, there will be differential geometry, graphical models, and animal husbandry.

Page 3: Fun SAMSI program this spring

High dimensional inference and random matrices. Spring semester continuation: Geometry and random matrices.

Misha Belkin (CSE), Yoon Lee (Statistics).

Working group in computational statistics.

Page 4: Relevant papers

Characterizing the function space for Bayesian kernel models. Natesh Pillai, Qiang Wu, Feng Liang, Sayan Mukherjee, Robert L. Wolpert. Journal of Machine Learning Research, in press.

Understanding the use of unlabelled data in predictive modelling. Feng Liang, Sayan Mukherjee, and Mike West. Statistical Science, in press.

Non-parametric Bayesian kernel models. Feng Liang, Kai Mao, Ming Liao, Sayan Mukherjee and Mike West. Biometrika, submitted.

Page 5: Table of contents

1 Kernel models and penalized loss
2 Bayesian kernel model
   Direct prior elicitation
   Priors and integral operators
3 Priors on measures
   Lévy processes
   Gaussian processes
   Bayesian representer form
4 Estimation and inference
   Likelihood and prior specification
   Variable selection
   Semi-supervised learning
5 Results on data
6 Open problems

Page 6: Regression

data = {L_i = (x_i, y_i)}_{i=1}^n with L_i ~iid ρ(X, Y).

X ∈ 𝒳 ⊂ R^p, Y ⊂ R, and p ≫ n.

A natural idea: f(x) = E[Y | x].

Page 7: An excellent estimator

f(x) = argmin_{f ∈ big function space} [error on data + smoothness of function]

Page 8: An excellent estimator

f(x) = argmin_{f ∈ big function space} [error on data + smoothness of function]

error on data = L(f, data) = (f(x) − y)²

smoothness of function = ‖f‖²_K = ∫ |f′(x)|² dx

big function space = reproducing kernel Hilbert space = H_K

Page 9: An excellent estimator

f(x) = argmin_{f ∈ H_K} [ L(f, data) + λ‖f‖²_K ]

The kernel: K : 𝒳 × 𝒳 → R, e.g. K(u, v) = exp(−‖u − v‖²).

The RKHS:

H_K = { f | f(x) = Σ_{i=1}^ℓ α_i K(x, x_i), x_i ∈ 𝒳, α_i ∈ R, ℓ ∈ N }.

Page 10: Representer theorem

f(x) = argmin_{f ∈ H_K} [ L(f, data) + λ‖f‖²_K ]

The minimizer has the form

f(x) = Σ_{i=1}^n a_i K(x, x_i).

Great when p ≫ n.
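The representer theorem reduces the infinite-dimensional search over H_K to solving for n coefficients. A minimal numerical sketch for the squared loss (kernel ridge regression); the Gaussian kernel, `gamma`, and `lam` values are my assumptions, not the talk's:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel K(u, v) = exp(-gamma * ||u - v||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """By the representer theorem, the minimizer of
    sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 is f = sum_i a_i K(., x_i),
    with coefficients solving the linear system (K + lam * I) a = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def kernel_ridge_predict(X_train, a, X_new, gamma=1.0):
    """Evaluate f(x) = sum_i a_i K(x, x_i) at new points."""
    return rbf_kernel(X_new, X_train, gamma) @ a
```

Note the n×n system depends only on n, not on p, which is the sense in which this is great when p ≫ n.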

Page 11: Very popular and useful

1 Support vector machines:

f(x) = argmin_{f ∈ H_K} [ Σ_{i=1}^n |1 − y_i f(x_i)|_+ + λ‖f‖²_K ],

2 Regularized kernel regression:

f(x) = argmin_{f ∈ H_K} [ Σ_{i=1}^n |y_i − f(x_i)|² + λ‖f‖²_K ],

3 Regularized logistic regression:

f(x) = argmin_{f ∈ H_K} [ Σ_{i=1}^n ln(1 + e^{−y_i f(x_i)}) + λ‖f‖²_K ].
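The three estimators differ only in the per-example loss term; a small sketch (the function names are mine):

```python
import numpy as np

def hinge_loss(y, f):
    """SVM loss |1 - y*f|_+ (zero beyond the margin)."""
    return np.maximum(0.0, 1.0 - y * f)

def squared_loss(y, f):
    """Regularized kernel regression loss (y - f)^2."""
    return (y - f) ** 2

def logistic_loss(y, f):
    """Regularized logistic regression loss ln(1 + exp(-y*f));
    log1p is used for numerical stability."""
    return np.log1p(np.exp(-y * f))
```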

Page 12: Why go Bayes: uncertainty


Page 13: Priors via spectral expansion

H_K = { f | f(x) = Σ_{i=1}^∞ a_i φ_i(x) with Σ_{i=1}^∞ a_i²/λ_i < ∞ },

where φ_i(x) and λ_i ≥ 0 are the eigenfunctions and eigenvalues of K:

λ_i φ_i(x) = ∫_𝒳 K(x, u) φ_i(u) dγ(u).

Page 14: Priors via spectral expansion

Specify a prior on H_K via a prior on A:

A = { (a_k)_{k=1}^∞ : Σ_k a_k²/λ_k < ∞ }.
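With data in hand, the eigenpairs of the integral operator can be approximated from the Gram matrix; a Nyström-style sketch under the empirical measure (the scaling convention and `gamma` are my assumptions):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian kernel Gram matrix between point sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mercer_approx(X, gamma=1.0, n_keep=5):
    """Eigenvalues of K/n approximate the operator eigenvalues
    lambda_i, and sqrt(n)-scaled eigenvectors approximate the
    eigenfunctions phi_i evaluated at the data points."""
    n = len(X)
    lam, U = np.linalg.eigh(rbf(X, X, gamma) / n)  # ascending order
    order = np.argsort(lam)[::-1][:n_keep]
    return lam[order], np.sqrt(n) * U[:, order]
```

A prior on the coefficient sequence (a_k) then induces a prior on the truncated expansion Σ_k a_k φ_k; this sketch makes concrete why the construction hinges on the eigendecomposition.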

Page 15: Priors via spectral expansion

Hard to sample, and relies on computation of eigenvalues and eigenvectors.

Page 16: Priors via duality

The duality between Gaussian processes and RKHSs implies the following construction:

f(·) ∼ GP(µ_f, K),

where K is given by the kernel.

Page 17: Priors via duality

However, f(·) ∉ H_K almost surely.
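Sampling paths of the GP defined by K is straightforward on a grid; a minimal sketch (the grid, `gamma`, and jitter value are my assumptions):

```python
import numpy as np

def rbf(a, b, gamma=5.0):
    """Gaussian kernel Gram matrix for 1-D inputs."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def sample_gp_paths(x, n_paths=3, gamma=5.0, jitter=1e-6, seed=0):
    """Draw f ~ GP(0, K) at grid points x via a Cholesky factor of
    the Gram matrix; the jitter keeps the factorization stable."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(rbf(x, x, gamma) + jitter * np.eye(len(x)))
    return L @ rng.standard_normal((len(x), n_paths))
```

Each column is one path; despite being built from K, such paths lie outside H_K almost surely when H_K is infinite dimensional, as the Kallianpur theorem below states.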

Page 18: Integral operators

Integral operator L_K : Γ → G,

G = { f | f(x) := L_K[γ](x) = ∫_𝒳 K(x, u) dγ(u), γ ∈ Γ },

with Γ ⊆ B(𝒳).

Page 19: Integral operators

A prior on Γ implies a prior on G.

Page 20: Equivalence with RKHS

For what Γ is H_K = span(G)?

What is L_K^{-1}(H_K)? This is hard to characterize.

Page 21: Equivalence with RKHS

The candidates for Γ will be:

1 square integrable functions
2 integrable functions
3 discrete measures
4 the union of integrable functions and discrete measures.

Page 22: Square integrable functions are too small

Proposition

For every γ ∈ L²(𝒳), L_K[γ] ∈ H_K. Consequently, L²(𝒳) ⊂ L_K^{-1}(H_K).

Page 23: Square integrable functions are too small

Corollary

If Λ = {k : λ_k > 0} is a finite set, then L_K(L²(𝒳)) = H_K; otherwise L_K(L²(𝒳)) ⊊ H_K. The latter occurs when the kernel K is strictly positive definite, so the RKHS is infinite-dimensional.

Page 24: Signed measures are (almost) just right

Measures: the functions in L¹(𝒳) correspond to signed measures.

Proposition

For every γ ∈ L¹(𝒳), L_K[γ] ∈ H_K. Consequently, L¹(𝒳) ⊂ L_K^{-1}(H_K).

Page 25: Signed measures are (almost) just right

Discrete measures:

M_D = { µ = Σ_{i=1}^n c_i δ_{x_i} : Σ_{i=1}^n |c_i| < ∞, x_i ∈ 𝒳, n ∈ N }.

Proposition

Given the set of finite discrete measures, M_D ⊂ L_K^{-1}(H_K).

Page 26: Signed measures are (almost) just right

Nonsingular measures: M = L¹(𝒳) ∪ M_D.

Proposition

L_K(M) is dense in H_K with respect to the RKHS norm.

Page 27: Signed measures are (almost) just right

Proposition

B(𝒳) ⊊ L_K^{-1}(H_K(𝒳)).

Page 28: The implication

Take home message: we need priors on signed measures.

This gives a function theoretic foundation for random signed measures such as Gaussian, Dirichlet and Lévy process priors.

Page 29: Bayesian kernel model

y_i = f(x_i) + ε_i, ε_i ~iid No(0, σ²),

f(x) = ∫_𝒳 K(x, u) Z(du),

where Z(du) ∈ M(𝒳) is a signed measure on 𝒳.

Page 30: Bayesian kernel model

π(Z | data) ∝ L(data | Z) π(Z),

which implies a posterior on f.

Page 31: Lévy processes

A stochastic process Z := {Z_u ∈ R : u ∈ 𝒳} is called a Lévy process if it satisfies the following conditions:

1 Z_0 = 0 almost surely.

2 For any choice of m ≥ 1 and 0 ≤ u_0 < u_1 < ... < u_m, the random variables Z_{u_0}, Z_{u_1} − Z_{u_0}, ..., Z_{u_m} − Z_{u_{m−1}} are independent (independent increments property).

3 The distribution of Z_{s+u} − Z_s does not depend on s (temporal homogeneity or stationary increments property).

4 Z has càdlàg paths almost surely.

Page 32: Lévy processes

Theorem (Lévy-Khintchine)

If Z is a Lévy process, then the characteristic function of Z_u, u ≥ 0, has the following form:

E[e^{iλZ_u}] = exp( u [ iλa − (1/2)σ²λ² + ∫_{R\{0}} ( e^{iλw} − 1 − iλw 1_{{w:|w|<1}}(w) ) ν(dw) ] ),

where a ∈ R, σ² ≥ 0, and ν is a nonnegative measure on R with ∫_R (1 ∧ |w|²) ν(dw) < ∞.

Page 33: Lévy processes

drift term: a

variance of Brownian motion: σ²

jump process or Lévy measure: ν(dw)

The characteristic function factors into a Brownian part

exp{ u [ iλa − (1/2)σ²λ² ] }

and a jump part

exp{ u ∫_{R\{0}} [ e^{iλw} − 1 − iλw 1_{{w:|w|<1}}(w) ] ν(dw) }.
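The drift, Brownian, and jump pieces of the Lévy-Khintchine decomposition can be simulated directly in the finite-activity case; a sketch with assumed parameter values, not taken from the talk (the general form also allows infinite jump activity):

```python
import numpy as np

def sample_levy_path(T=1.0, n=1000, a=0.5, sigma=1.0,
                     jump_rate=3.0, jump_scale=1.0, seed=0):
    """Simulate a Levy path as drift + Brownian motion + compound
    Poisson jumps with Gaussian jump sizes (finite jump activity)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    # drift and diffusion increments on each time step
    dZ = a * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    # compound Poisson jump contribution on each time step
    n_jumps = rng.poisson(jump_rate * dt, size=n)
    jumps = np.array([rng.normal(0.0, jump_scale, k).sum()
                      for k in n_jumps])
    Z = np.concatenate([[0.0], np.cumsum(dZ + jumps)])
    return t, Z
```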

Page 34: Two approaches to Gaussian processes

Two modelling approaches:

1 put the prior directly on the space of functions by sampling from paths of the Gaussian process defined by K;

2 put a Gaussian process prior on Z(du), which implies a prior on function space via the integral operator.

Page 35: Prior on random measure

A Gaussian process prior on Z(du) yields a signed measure, so span(G) ⊂ H_K.

Page 36: Direct prior elicitation

Theorem (Kallianpur)

If {Z_u, u ∈ 𝒳} is a Gaussian process with covariance K and mean m ∈ H_K, and H_K is infinite dimensional, then

P(Z• ∈ H_K) = 0.

The sample paths are almost surely outside H_K.

Page 37: A bigger RKHS

Theorem (Lukic and Beder)

Given two kernel functions R and K, R dominates K (written R ≫ K) if H_K ⊆ H_R. Let R ≫ K. Then

‖g‖_R ≤ ‖g‖_K, ∀g ∈ H_K.

There exists a unique linear operator L : H_R → H_R whose range is contained in H_K such that

⟨f, g⟩_R = ⟨Lf, g⟩_K, ∀f ∈ H_R, ∀g ∈ H_K.

In particular, L R_u = K_u, ∀u ∈ 𝒳.

As an operator into H_R, L is bounded, symmetric, and positive. Conversely, let L : H_R → H_R be a positive, continuous, self-adjoint operator; then

K(s, t) = ⟨L R_s, R_t⟩_R, s, t ∈ 𝒳,

defines a reproducing kernel on 𝒳 such that R ≫ K.

Page 38: A bigger RKHS

If L is nuclear (an operator that is compact with finite trace, independent of basis choice) then we have nuclear dominance, written R ≫≫ K.

Page 39: A bigger RKHS

Theorem (Lukic and Beder)

Let K and R be two reproducing kernels, and assume that the RKHS H_R is separable. A necessary and sufficient condition for the existence of a Gaussian process with covariance K, mean m ∈ H_R, and trajectories in H_R with probability 1 is that R ≫≫ K.

Page 40: A bigger RKHS

Characterize H_R by L_K^{-1}(H_K).

Page 41: Dirichlet process prior

f(x) = ∫_𝒳 K(x, u) Z(du) = ∫_𝒳 K(x, u) w(u) F(du),

where F(du) is a distribution and w(u) a coefficient function.

Page 42: Dirichlet process prior

Model F using a Dirichlet process prior: DP(α, F_0).

Page 43: Bayesian representer form

Given X_n = (x_1, ..., x_n) ~iid F,

F | X_n ∼ DP(α + n, F_n), F_n = ( α F_0 + Σ_{i=1}^n δ_{x_i} ) / (α + n).

E[f | X_n] = a_n ∫ K(x, u) w(u) dF_0(u) + n^{-1} (1 − a_n) Σ_{i=1}^n w(x_i) K(x, x_i),

where a_n = α/(α + n).
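The posterior mean of f is a mix of an integral against the base measure F_0 and a kernel expansion over the observed points. A Monte Carlo sketch of this formula (the Gaussian kernel, 1-D inputs, and all parameter names are my assumptions):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian kernel Gram matrix for 1-D inputs."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def dp_posterior_mean_f(x_grid, X_obs, w, alpha, F0_samples, gamma=1.0):
    """E[f | X_n] = a_n * int K(x,u) w(u) dF0(u)
                  + (alpha + n)^{-1} * sum_i w(x_i) K(x, x_i),
    with a_n = alpha/(alpha + n); the F0 integral is approximated by
    a Monte Carlo average over draws F0_samples from F0."""
    n = len(X_obs)
    a_n = alpha / (alpha + n)
    base = rbf(x_grid, F0_samples, gamma) @ w(F0_samples) / len(F0_samples)
    data = rbf(x_grid, X_obs, gamma) @ w(X_obs) / (alpha + n)
    return a_n * base + data
```

Here n^{-1}(1 − a_n) = 1/(α + n), which is the weight used on the data term.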

Page 44: Bayesian representer form

Taking the limit α → 0 to represent a non-informative prior:

Proposition (Bayesian representer theorem)

f_n(x) = Σ_{i=1}^n w_i K(x, x_i), where w_i = w(x_i)/n.

Page 45: Likelihood

y_i = f(x_i) + ε_i = w_0 + Σ_{j=1}^n w_j K(x_i, x_j) + ε_i, i = 1, ..., n,

where ε_i ∼ No(0, σ²). In matrix form,

Y ∼ No(w_0 ι + K w, σ² I),

where ι = (1, ..., 1)′.

Page 46: Prior specification

Factor: K = F ∆ F′ with ∆ := diag(λ₁², ..., λ_n²) and w = F ∆^{-1} β.

Page 47: Prior specification

π(w_0, σ²) ∝ 1/σ²

τ_i^{-1} ∼ Ga(a_τ/2, b_τ/2), T := diag(τ_1, ..., τ_n)

β ∼ No(0, T)

w | K, T ∼ No(0, F ∆^{-1} T ∆^{-1} F′).

Page 48: Prior specification

Factor $K = F \Delta F'$ with $\Delta := \mathrm{diag}(\lambda_1^2, \dots, \lambda_n^2)$, and set $w = F \Delta^{-1} \beta$.

$$
\pi(w_0, \sigma^2) \propto 1/\sigma^2, \qquad
\tau_i^{-1} \sim \mathrm{Ga}(a_\tau/2,\, b_\tau/2), \qquad
T := \mathrm{diag}(\tau_1, \dots, \tau_n),
$$

$$
\beta \sim \mathrm{No}(0, T), \qquad
w \mid K, T \sim \mathrm{No}(0,\; F \Delta^{-1} T \Delta^{-1} F').
$$

A standard Gibbs sampler simulates $p(w, w_0, \sigma^2 \mid \text{data})$.
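The factorization is what makes the sampler simple: since $F$ is orthonormal, $K F \Delta^{-1} = F$, so the likelihood in $\beta$ is $y \sim \mathrm{No}(w_0 \iota + F\beta, \sigma^2 I)$ and every full conditional is standard. A minimal numpy sketch under that setup; the function name and hyperparameter defaults are illustrative, not from the talk:

```python
import numpy as np

def gibbs_kernel_regression(y, K, n_iter=500, a_tau=1.0, b_tau=1.0, seed=0):
    """Gibbs sampler sketch for the Bayesian kernel model.

    With K = F Delta F' (F orthonormal) and w = F Delta^{-1} beta, the
    likelihood reduces to y ~ No(w0*1 + F beta, sigma^2 I), so the design
    matrix for beta is just F and all full conditionals are standard.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    evals, F = np.linalg.eigh(K)                  # K = F diag(evals) F'
    iota = np.ones(n)

    beta, w0, sigma2, tau = np.zeros(n), y.mean(), y.var() + 1e-6, np.ones(n)
    draws = []
    for _ in range(n_iter):
        # beta | rest: posterior precision I/sigma2 + T^{-1} is diagonal (F'F = I).
        prec = 1.0 / sigma2 + 1.0 / tau
        mean = (F.T @ (y - w0 * iota)) / sigma2 / prec
        beta = mean + rng.standard_normal(n) / np.sqrt(prec)
        # w0 | rest: flat prior, Gaussian likelihood.
        w0 = rng.normal((y - F @ beta).mean(), np.sqrt(sigma2 / n))
        # sigma2 | rest: inverse-gamma induced by the 1/sigma2 prior.
        sse = np.sum((y - w0 * iota - F @ beta) ** 2)
        sigma2 = sse / 2.0 / rng.gamma(n / 2.0)
        # tau_i | rest: conjugate update of the Ga(a/2, b/2) prior on 1/tau_i.
        tau = 1.0 / rng.gamma((a_tau + 1.0) / 2.0, 2.0 / (b_tau + beta ** 2))
        draws.append((w0, sigma2))
    # Map back to the kernel weights w = F Delta^{-1} beta for prediction.
    w = F @ (beta / np.maximum(evals, 1e-10))
    return w, np.array(draws)
```

Note that without the reparameterization, sampling $w$ directly would require an $n \times n$ solve per iteration; here the $\beta$ update is elementwise.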



Kernel model extension

$$
K_\nu(x, u) = K\bigl(\sqrt{\nu} \otimes x,\; \sqrt{\nu} \otimes u\bigr)
$$

with $\nu = \{\nu_1, \dots, \nu_p\}$, where each $\nu_k \in [0, \infty)$ is a scale parameter for the $k$-th coordinate. For the linear, polynomial, and Gaussian kernels, respectively:

$$
k_\nu(x, u) = \sum_{k=1}^{p} \nu_k x_k u_k, \qquad
k_\nu(x, u) = \Bigl(1 + \sum_{k=1}^{p} \nu_k x_k u_k\Bigr)^{d}, \qquad
k_\nu(x, u) = \exp\Bigl(-\sum_{k=1}^{p} \nu_k (x_k - u_k)^2\Bigr).
$$
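The Gaussian case vectorizes in a few lines; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def scaled_gaussian_kernel(X, U, nu):
    """k_nu(x, u) = exp(-sum_k nu_k (x_k - u_k)^2) with per-coordinate scales nu.

    Setting nu_k = 0 removes coordinate k from the kernel entirely, which is
    what turns the scale parameters into a variable-selection device.
    """
    # (n, m, p) array of squared coordinate-wise differences.
    diff2 = (X[:, None, :] - U[None, :, :]) ** 2
    return np.exp(-diff2 @ nu)
```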



Prior specification

$$
\nu_k \sim (1 - \gamma)\,\delta_0 + \gamma\,\mathrm{Ga}(a_\nu,\, a_\nu s), \qquad k = 1, \dots, p,
$$

$$
s \sim \mathrm{Exp}(a_s), \qquad \gamma \sim \mathrm{Be}(a_\gamma,\, b_\gamma).
$$

A standard Gibbs sampler does not work here, so sampling uses Metropolis-Hastings. Could use help, Chris?
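Sampling from the prior itself is easy even though posterior sampling is not. A sketch, assuming the second Gamma argument is a rate (so the slab mean is $1/s$) and with an illustrative function name:

```python
import numpy as np

def draw_nu(p, a_nu=2.0, a_s=1.0, a_gamma=1.0, b_gamma=1.0, rng=None):
    """One draw of the scale parameters from the spike-and-slab prior.

    nu_k ~ (1 - gamma) delta_0 + gamma Ga(a_nu, a_nu * s), k = 1..p,
    s ~ Exp(a_s), gamma ~ Be(a_gamma, b_gamma).
    Assumes the Gamma's second argument is a rate, so E[nu_k | slab] = 1/s.
    """
    rng = rng or np.random.default_rng()
    s = rng.exponential(1.0 / a_s)          # Exp with rate a_s
    gamma = rng.beta(a_gamma, b_gamma)
    in_slab = rng.random(p) < gamma         # coordinates that escape the spike at 0
    nu = np.zeros(p)
    nu[in_slab] = rng.gamma(a_nu, 1.0 / (a_nu * s), size=int(in_slab.sum()))
    return nu
```

The point mass at zero is what drives variable selection: a coordinate drawn from the spike contributes nothing to the kernel.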



Problem setup

Labelled data: $(Y^p, X^p) = \{(y^p_i, x^p_i);\ i = 1, \dots, n_p\} \stackrel{\text{iid}}{\sim} \rho(Y, X \mid \phi, \theta)$.

Unlabelled data: $X^m = \{x^m_i;\ i = 1, \dots, n_m\} \stackrel{\text{iid}}{\sim} \rho(X \mid \theta)$.

How can the unlabelled data help our predictive model?

With $\text{data} = \{Y, X, X^m\}$,

$$
p(\phi, \theta \mid \text{data}) \propto \pi(\phi, \theta)\, p(Y \mid X, \phi)\, p(X \mid \theta)\, p(X^m \mid \theta).
$$

For the unlabelled data to matter, we need very strong dependence between $\theta$ and $\phi$.



Bayesian kernel model

Result of DP prior

$$
f_n(x) = \sum_{i=1}^{n_p + n_m} w_i\, K(x, x_i).
$$

Same as in Belkin and Niyogi, but without

$$
\min_{f \in \mathcal{H}_K} \Bigl[ L(f, \text{data}) + \lambda_1 \|f\|_K^2 + \lambda_2 \|f\|_I^2 \Bigr].
$$
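A sketch of the resulting predictor, with the labelled and unlabelled points pooled as kernel centres (the names are illustrative):

```python
import numpy as np

def f_n(x_new, X_labelled, X_unlabelled, w, kernel):
    """Evaluate f_n(x) = sum_{i=1}^{n_p + n_m} w_i K(x, x_i).

    The unlabelled points enter only as extra kernel centres: the DP prior on
    the marginal of X is what puts them into the expansion.
    """
    centres = np.vstack([X_labelled, X_unlabelled])   # n_p + n_m centres
    return np.array([sum(w_i * kernel(x, c) for w_i, c in zip(w, centres))
                     for x in x_new])
```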



1. $\theta = F(\cdot)$, so that $p(x \mid \theta)\,dx = dF(x)$: the parameter is the full distribution function itself;

2. $p(y \mid x, \phi)$ depends intimately on $\theta = F$; in fact, $\theta \subseteq \phi$ in this case, and the dependence between $\theta$ and $\phi$ is central to the model.



Simulated data – semi-supervised

[Figure: two panels of simulated two-dimensional data, both axes spanning −2 to 2.]



Cancer classification – semi-supervised

[Figure: average (hard) error rate versus percentage of labelled examples, with and without unlabelled data.]



MNIST digits – feature selection

[Figure: two panels over the digit pixel grid.]



MNIST digits – feature selection

[Figure: error-rate curves with and without variable selection.]



Discussion

Lots of work left:

Further refinement of integral operators and priors in terms of Sobolev spaces.

Semi-supervised setting: relation of the kernel model and priors with Laplace-Beltrami and graph Laplacian operators.

Semi-supervised setting: duality between diffusion processes on manifolds and Markov chains.

Bayesian variable selection: efficient sampling and search in high-dimensional space.

Numeric stability and statistical robustness.



Summary

It's extra work, but it pays to be Bayes :)
