TRANSCRIPT
Probabilistic Programming
Easy to think ⇒ easy to write, easy to run
1. Write a model description; generate the inference method.
2. Modular models.
3. Don't resort to C++/Java to write simple things.
4. Allow graphical models with changing graphs and data structures.
5. Lazy computation for MCMC.
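The idea in point 1 can be illustrated with a minimal Python sketch (not BAli-Phy's actual machinery): the user only writes a log-posterior, and a generic sampler works for any such model. The normal-mean model, the priors, and the step size are all illustrative assumptions.

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Model description: mu ~ normal(0, 10), data[i] ~ normal(mu, 1).
def log_post(mu, data):
    lp = normal_logpdf(mu, 0.0, 10.0)
    return lp + sum(normal_logpdf(x, mu, 1.0) for x in data)

# Generic inference: random-walk Metropolis works for ANY log_post,
# so the modeler never writes sampler code by hand.
def metropolis(log_post, data, n_iter=5000, step=0.5):
    random.seed(0)
    mu, lp = 0.0, log_post(0.0, data)
    samples = []
    for _ in range(n_iter):
        prop = mu + random.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        if math.log(random.random()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples

data = [2.1, 1.9, 2.3, 2.0]
samples = metropolis(log_post, data)
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

Swapping in a different `log_post` changes the model without touching the inference code.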
Extensions of Graphical Models
1. Control flow
   ▶ x ∼ normal (if i then y else z, σ²)
   ▶ x[i] = z[category[i]]
   ▶ x[i] ∼ normal(x[parent[i]], σ²)
2. Data structures (native)
   ▶ (x, y)
   ▶ [x, y, z]
   ▶ ReversibleMarkov α Q π Λ
3. Random numbers of random variables
   ▶ n ∼ geometric 0.5
   ▶ x ∼ iid n (normal 0 1)
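Point 3 — a random number of random variables — can be sketched in Python (an illustration, not BAli-Phy code): the dimension of `x` is itself drawn from a geometric distribution before the normals are sampled.

```python
import random

random.seed(1)

def geometric(p):
    """Number of trials up to and including the first success (support 1, 2, ...)."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

# n ~ geometric 0.5, then x ~ iid n (normal 0 1):
# the length of x is random, so the model's graph changes per sample.
n = geometric(0.5)
x = [random.gauss(0.0, 1.0) for _ in range(n)]
```

A fixed-size graphical model cannot express this directly, which is why it is listed as an extension.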
Models (and Distributions) are functions
Consider the M0(κ, ω) codon model for positive selection
▶ Really, HKY(κ) is a nucleotide submodel → M0(HKY(κ), ω)
We really want models to be parameterized by other models
▶ ... and distributions parameterized by distributions!
▶ logNormal µ σ = expTransform (normal µ σ)
▶ dirichlet_process n α (normal 0 1)
Sometimes we want models parameterized by functions on models.
▶ M8 = mixture (beta a b) (\w -> m0(k,w))
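These compositions can be sketched in Python by treating a distribution as a zero-argument sampler. The `exp_transform`, `log_normal`, and `mixture` names below are illustrative stand-ins for the slide's notation, and the component model in the M8-style example is simplified to a normal around the drawn parameter.

```python
import math
import random

random.seed(0)

# Represent a distribution as a zero-argument sampler (a thunk).
def normal(mu, sigma):
    return lambda: random.gauss(mu, sigma)

def beta(a, b):
    return lambda: random.betavariate(a, b)

# A distribution parameterized by a distribution:
# expTransform turns normal(mu, sigma) into logNormal(mu, sigma).
def exp_transform(dist):
    return lambda: math.exp(dist())

def log_normal(mu, sigma):
    return exp_transform(normal(mu, sigma))

# A model parameterized by a FUNCTION on models, as in
# M8 = mixture (beta a b) (\w -> m0(k, w)):
# draw w from the prior, build a model around w, then sample it.
def mixture(prior, model_fn):
    return lambda: model_fn(prior())()

draw = log_normal(0.0, 1.0)()  # always positive
m8_like = mixture(beta(2.0, 2.0), lambda w: normal(w, 0.1))
component_draw = m8_like()
```

Because models are ordinary functions, submodels like HKY(κ) slot into M0 the same way `normal` slots into `exp_transform`.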
Future Work
1. Dynamic instantiation of random variables:
   ▶ xs = repeat (normal 0 1)
   ▶ n = geometric 0.5
   ▶ y = f (take n xs)
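The `repeat`/`take` idiom above relies on laziness: the infinite stream `xs` only instantiates as many random variables as are demanded. A Python generator gives a rough analogue (an illustration, with `sum` standing in for the unspecified `f`):

```python
import itertools
import random

random.seed(2)

def repeat_normal(mu=0.0, sigma=1.0):
    # Lazy infinite stream of normal draws; a value is only
    # instantiated when the consumer demands it.
    while True:
        yield random.gauss(mu, sigma)

def geometric(p):
    n = 1
    while random.random() >= p:
        n += 1
    return n

xs = repeat_normal()                   # xs = repeat (normal 0 1)
n = geometric(0.5)                     # n  = geometric 0.5
y = sum(itertools.islice(xs, n))       # y  = f (take n xs), with f = sum
```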
Source
https://github.com/bredelings/BAli-Phy

Other software for Bayesian inference:
▶ RevBayes
▶ BEAST 1
▶ BEAST 2
▶ Church
▶ Venture