2 Bayesian statistics 331-Week 2

2.1 Summary

The ingredients of Bayesian inference

The likelihood: $f(x \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta)$

is viewed as a function of θ, with the data x held fixed.
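As an illustrative sketch (not from the notes), assuming i.i.d. Bernoulli(θ) observations, the likelihood is the product of the individual densities:

```python
# Hypothetical example: n i.i.d. Bernoulli(theta) observations, so
# f(x | theta) = prod_i f(x_i | theta) = theta^k (1 - theta)^(n - k),
# where k is the number of successes.
def bernoulli_likelihood(x, theta):
    """Product of the individual densities f(x_i | theta)."""
    like = 1.0
    for xi in x:
        like *= theta**xi * (1.0 - theta) ** (1 - xi)
    return like

x = [1, 0, 1, 1, 0]  # k = 3 successes out of n = 5
# Agrees with the closed form theta^3 (1 - theta)^2 at theta = 0.6
assert abs(bernoulli_likelihood(x, 0.6) - 0.6**3 * 0.4**2) < 1e-12
```

For fixed data x, evaluating this over a range of θ values traces out the likelihood as a function of θ.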

The prior: π(θ)

is the probability distribution assigned to θ prior to data collection.

The joint distribution: h(x;θ)

of θ and x, factorized as $h(x; \theta) = f(x \mid \theta)\,\pi(\theta)$.

The posterior distribution: $h(\theta \mid x)$

is the probability distribution of the unknown θ after the data x have been observed, given by Bayes' theorem: $h(\theta \mid x) = f(x \mid \theta)\,\pi(\theta) / m(x)$.
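As a concrete sketch (my example, not the notes'), assuming a Beta(a, b) prior for a Bernoulli parameter, the posterior is available in closed form because the Beta prior is conjugate:

```python
# Hypothetical conjugate example: Beta(a, b) prior for a Bernoulli parameter.
# The posterior h(theta | x) is again Beta, with updated parameters
#   a_post = a + sum(x),  b_post = b + n - sum(x).
def beta_bernoulli_posterior(x, a, b):
    k, n = sum(x), len(x)
    return a + k, b + n - k

# Uniform Beta(1, 1) prior + 3 successes, 2 failures -> Beta(4, 3) posterior
a_post, b_post = beta_bernoulli_posterior([1, 0, 1, 1, 0], a=1, b=1)
assert (a_post, b_post) == (4, 3)
```

Conjugacy sidesteps computing $m(x)$ explicitly: the posterior's normalizing constant is known from the Beta family.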

The marginal likelihood: m(x)

or evidence, can be obtained by integrating θ out of the joint distribution: $m(x) = \int_{\Theta} \pi(\theta)\,f(x \mid \theta)\,d\theta$.
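The integral can be approximated numerically when no closed form is handy. A minimal sketch, assuming a Uniform(0, 1) prior (so π(θ) = 1) and Bernoulli data, using a midpoint rule on a grid:

```python
import math

# Sketch: m(x) = ∫ π(θ) f(x | θ) dθ approximated on a grid, assuming a
# Uniform(0, 1) prior and Bernoulli data with k = 3 successes in n = 5 trials.
k, n = 3, 5
N = 10_000
grid = [(i + 0.5) / N for i in range(N)]  # midpoint rule on (0, 1)
m_x = sum(t**k * (1 - t) ** (n - k) for t in grid) / N

# Closed form for this case: Beta function B(k+1, n-k+1) = k!(n-k)!/(n+1)!
exact = math.factorial(k) * math.factorial(n - k) / math.factorial(n + 1)
assert abs(m_x - exact) < 1e-6
```

The grid approximation matches the exact Beta-function value, which is why m(x) can serve as the normalizing constant in Bayes' theorem.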

The prior predictive distribution: $f(\tilde{x}) = \int_{\Theta} f(\tilde{x} \mid \theta)\,\pi(\theta)\,d\theta$

is the probability of a future observation, $\tilde{x}$, before the data are observed.

The posterior predictive distribution: $f(\tilde{x} \mid x) = \int_{\Theta} f(\tilde{x} \mid \theta)\,h(\theta \mid x)\,d\theta$

is the probability of a future observation, $\tilde{x}$, given the data in hand, x.
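Continuing the hypothetical Beta-Bernoulli setting used above, the posterior predictive probability of a success has a closed form: averaging $f(\tilde{x} \mid \theta)$ over the Beta posterior gives the posterior mean of θ.

```python
# Sketch of the posterior predictive for a Beta-Bernoulli model:
# f(x~ = 1 | x) = ∫ θ h(θ | x) dθ, the mean of the Beta(a + k, b + n - k)
# posterior, i.e. (a + k) / (a + b + n).
def posterior_predictive_success(x, a=1.0, b=1.0):
    """P(next observation = 1 | data x) under a Beta(a, b) prior."""
    k, n = sum(x), len(x)
    return (a + k) / (a + b + n)

# Beta(1, 1) prior with 3 successes in 5 trials -> 4/7
p = posterior_predictive_success([1, 0, 1, 1, 0])
assert abs(p - 4 / 7) < 1e-12
```

With no data at all, the same formula returns the prior predictive a / (a + b), illustrating how the two predictive distributions differ only in which distribution over θ is averaged.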