5 Week 5 Bayesian statistics: Decisions

5.1 Ingredients of a decision problem

Ingredients of a decision

  1. A set 𝒟 of decisions which are available to the decision maker;

  2. A loss function L on Θ×𝒟, where L(θ,d) is the loss associated with decision d∈𝒟 when the state of nature is θ∈Θ;

  3. A parameter space Θ, θ∈Θ, and the distribution of the states of nature, π(θ);

  4. An observation space 𝒴, y∈𝒴, and the distribution of the observations f(y|θ);

  5. A reward space, ℛ. We assume there is an ordering on these rewards. Our utility or loss function can be written as a function of the reward and our belief in what the reward is: L(r(θ),d), where r(θ)∈ℛ is the reward associated with the state of nature θ.
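As a concrete illustration of these ingredients, a tiny decision problem can be set up and solved by choosing the decision with the smallest expected loss under π(θ). The decisions, states, prior and loss values in the sketch below are made up for illustration and are not from the notes.

    # Minimal sketch of a decision problem: a set of decisions, states of
    # nature with a prior pi(theta), and a loss table L(theta, d).
    # All numbers are illustrative.
    import numpy as np

    decisions = ["carry umbrella", "leave umbrella"]   # the set of decisions
    states = ["rain", "no rain"]                       # the states of nature
    prior = np.array([0.3, 0.7])                       # pi(theta) over the states

    # loss[i, j] = L(theta_i, d_j)
    loss = np.array([[0.0, 5.0],
                     [1.0, 0.0]])

    expected_loss = prior @ loss                       # E_pi[L(theta, d)] for each d
    best = decisions[int(np.argmin(expected_loss))]
    print(dict(zip(decisions, expected_loss)), "->", best)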

The loss function

  • No decision can be taken without potential losses to the client.

  • The loss is a negative utility.

    L(θ,d)=-U(θ,d)
  • The loss function is denoted by L(θ,d) and represents the penalty paid by the decision maker (statistician) if he takes the decision d∈𝒟 and the real state of nature is θ∈Θ.

  • The loss function satisfies the following properties: L(d,d)=0, and L(d,θ) is a non-decreasing function of |d-θ|.
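Because L(θ,d) = -U(θ,d), maximising expected utility and minimising expected loss select the same decision. A minimal numerical check of this equivalence, with made-up payoffs, is sketched below.

    # Sketch: turning a utility table into a loss table by negation and
    # checking that the best decision is unchanged.  Numbers are illustrative.
    import numpy as np

    utility = np.array([[10.0, -5.0],   # U(theta_i, d_j)
                        [-2.0,  3.0]])
    loss = -utility                     # L(theta, d) = -U(theta, d)

    prior = np.array([0.4, 0.6])        # pi(theta)
    best_by_utility = int(np.argmax(prior @ utility))
    best_by_loss = int(np.argmin(prior @ loss))
    print(best_by_utility, best_by_loss)  # same decision index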

Loss functions when making point estimates

Often the decision maker must make an estimate. It can be a forecast or it can be a bid. The decision maker can be penalised for discrepancies through the use of a loss function. We now look at some common loss functions and calculate the best decision and the expected posterior loss having made the best decision.

  • Squared error loss (SEL)

    L(d,θ)=(d-θ)²
  • Absolute error loss

    L(d,θ)=|d-θ|
  • Asymmetric linear loss

    L(d,θ) = K₁(θ-d) if d < θ
             K₂(d-θ) if d > θ
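Standard results, stated here for reference, are that the expected posterior loss is minimised by the posterior mean under squared error loss, by the posterior median under absolute error loss, and by the K₁/(K₁+K₂) posterior quantile under asymmetric linear loss. The sketch below approximates these best decisions from posterior draws; the Gamma sample simply stands in for some posterior π(θ|y) and is not from the notes.

    # Sketch: Bayes point estimates under the three losses above, computed
    # from draws of a stand-in posterior.  The posterior and the penalties
    # K1, K2 are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    theta = rng.gamma(shape=3.0, scale=2.0, size=100_000)  # stand-in posterior draws

    K1, K2 = 4.0, 1.0   # penalties for under- and over-estimation

    d_sel = theta.mean()                         # minimises E[(d - theta)^2 | y]
    d_abs = np.median(theta)                     # minimises E[|d - theta| | y]
    d_asym = np.quantile(theta, K1 / (K1 + K2))  # K1/(K1+K2) posterior quantile

    print(f"SEL: {d_sel:.3f}  absolute: {d_abs:.3f}  asymmetric: {d_asym:.3f}")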

Loss functions when making point estimates

  • Hit or miss loss

    L(d,θ) = 0 if |d-θ| < ϵ
             1 if |d-θ| ≥ ϵ
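Under hit-or-miss loss the expected posterior loss of a decision d is 1 - P(|d-θ| < ϵ | y), so the best decision maximises the posterior probability of a small interval around d; as ϵ → 0 this tends to the posterior mode. A rough numerical sketch, using the same kind of stand-in posterior as above, is:

    # Sketch: grid search for the decision that maximises P(|theta - d| < eps),
    # which approximates the posterior mode for small eps.  The Gamma sample
    # is an illustrative stand-in for the posterior.
    import numpy as np

    rng = np.random.default_rng(2)
    theta = rng.gamma(shape=3.0, scale=2.0, size=100_000)  # stand-in posterior draws
    eps = 0.05

    grid = np.linspace(theta.min(), theta.max(), 1_000)
    coverage = [(np.abs(theta - d) < eps).mean() for d in grid]  # P(|theta - d| < eps)
    d_hit = grid[int(np.argmax(coverage))]

    print(f"hit-or-miss estimate (approx. posterior mode): {d_hit:.3f}")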