
7.2 Random Vectors and Matrices

A random vector 𝑿=(X1,…,Xn) has n elements which are random variables with some joint distribution. For instance, if n=3, then 𝑿=(X1,X2,X3) with joint distribution FX1,X2,X3. Matrix notation provides a succinct way of handling parts of this information.

Definition.

When the elements of the vector 𝑿 are the random variables Xi for i=1,2,…,n, the expectation, 𝖤[𝑿], is the n×1 vector with elements 𝖤[Xi] and the variance, 𝖵𝖺𝗋[𝑿], is the n×n matrix with elements 𝖢𝗈𝗏[Xi,Xj].
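As a sketch of this definition, the expectation vector and variance matrix can be estimated from simulated data with NumPy (the distribution below is an illustrative choice, not one from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated draws of a 3-element random vector X (illustrative choice:
# independent normals with means 0, -1, 2 and unit variances).
samples = rng.normal(loc=[0.0, -1.0, 2.0], scale=1.0, size=(100_000, 3))

# E[X]: the vector of elementwise expectations, estimated by the sample mean.
mean_vec = samples.mean(axis=0)

# Var[X]: the n x n matrix with (i, j) entry Cov[Xi, Xj]; np.cov estimates it.
var_mat = np.cov(samples, rowvar=False)

print(mean_vec)  # close to (0, -1, 2)
print(var_mat)   # close to the 3 x 3 identity here (independent, unit variance)
```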

Example 7.2.1.

The expectations of X1, X2, X3, are 0, -1, 2, respectively. Write down 𝖤[𝑿].

Solution. 

\[
\mathsf{E}[\boldsymbol{X}] = \begin{pmatrix} 0 \\ -1 \\ 2 \end{pmatrix}.
\]
Example 7.2.2.

The variance matrix of 𝑿 is

\[
\mathsf{Var}[\boldsymbol{X}] = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 3 & 2 \\ 1 & 2 & 5 \end{pmatrix}.
\]
  (a) What is the dimension of the vector 𝑿?

  (b) What is the variance of X3?

  (c) What is the covariance of X2 and X3?

  (d) What is the correlation between X2 and X3?

  (e) Which variables are uncorrelated?

  (f) Which variables are independent?

Solution. 

  (a) dim(𝑿)=3 as 𝖵𝖺𝗋[𝑿] is a 3×3 matrix.

  (b) 𝖵𝖺𝗋[X3]=5.

  (c) 𝖢𝗈𝗏[X2,X3]=2.

  (d) 𝖢𝗈𝗋𝗋[X2,X3]=2/√(3×5)=2/√15.

  (e) Uncorrelated: X1,X2, since 𝖢𝗈𝗏[X1,X2]=0.

  (f) We cannot answer this question from the information provided: zero covariance does not imply independence, and the variance matrix does not determine the joint distribution.
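The arithmetic in parts (b)–(d) can be checked with a short NumPy sketch, reading the required entries straight out of the variance matrix given in the example:

```python
import numpy as np

# The variance matrix from Example 7.2.2.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 3.0, 2.0],
              [1.0, 2.0, 5.0]])

var_X3 = V[2, 2]       # Var[X3]: third diagonal entry
cov_X2_X3 = V[1, 2]    # Cov[X2, X3]: off-diagonal (2, 3) entry

# Corr[X2, X3] = Cov[X2, X3] / sqrt(Var[X2] * Var[X3]) = 2 / sqrt(15)
corr_X2_X3 = cov_X2_X3 / np.sqrt(V[1, 1] * V[2, 2])

print(var_X3, cov_X2_X3, corr_X2_X3)
```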

In summary, the variance matrix is defined by

\[
\mathsf{Var}[\boldsymbol{X}] = \begin{pmatrix}
\mathsf{Var}[X_1] & \mathsf{Cov}[X_1,X_2] & \cdots & \mathsf{Cov}[X_1,X_n] \\
\mathsf{Cov}[X_2,X_1] & \mathsf{Var}[X_2] & \cdots & \mathsf{Cov}[X_2,X_n] \\
\vdots & \vdots & \ddots & \vdots \\
\mathsf{Cov}[X_n,X_1] & \mathsf{Cov}[X_n,X_2] & \cdots & \mathsf{Var}[X_n]
\end{pmatrix}.
\]
Example 7.2.3.

Random variables X and Y have variances σX2 and σY2 respectively and correlation ρXY. Write down the variance matrix of the vector (X,Y) and simplify in the case σX=σY=σ.

Solution. 

\[
\mathsf{Var}\!\left[\begin{pmatrix} X \\ Y \end{pmatrix}\right]
= \begin{pmatrix} \sigma_X^2 & \rho_{XY}\sigma_X\sigma_Y \\ \rho_{XY}\sigma_X\sigma_Y & \sigma_Y^2 \end{pmatrix}
= \sigma^2 \begin{pmatrix} 1 & \rho_{XY} \\ \rho_{XY} & 1 \end{pmatrix}
\quad \text{when } \sigma_X = \sigma_Y = \sigma.
\]
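The construction in this example can be sketched in NumPy; the helper name below is illustrative, not from the notes:

```python
import numpy as np

def variance_matrix_2d(sigma_x, sigma_y, rho):
    # Hypothetical helper: the variance matrix of (X, Y) built from the
    # standard deviations and the correlation, Cov[X, Y] = rho * sx * sy.
    cov = rho * sigma_x * sigma_y
    return np.array([[sigma_x ** 2, cov],
                     [cov, sigma_y ** 2]])

# General case with unequal standard deviations.
V = variance_matrix_2d(1.5, 2.0, 0.3)

# Equal standard deviations: the matrix factors as sigma^2 [[1, rho], [rho, 1]].
sigma, rho = 2.0, 0.5
V_equal = variance_matrix_2d(sigma, sigma, rho)
print(V_equal)
```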

Notes

Variance matrices are sometimes called variance-covariance matrices.

  1. The expectation vector is simply the vector of expectations; the variance matrix has variances down the diagonal and covariances off the diagonal.

  2. The variance matrix is always symmetric, because 𝖢𝗈𝗏[Xi,Xj]=𝖢𝗈𝗏[Xj,Xi], and positive semi-definite (see Section 7.4, Example 7.4.1, for the reason).
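Both properties can be checked numerically for the variance matrix of Example 7.2.2 (for symmetric matrices, every eigenvalue being non-negative is equivalent to positive semi-definiteness):

```python
import numpy as np

# The variance matrix from Example 7.2.2.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 3.0, 2.0],
              [1.0, 2.0, 5.0]])

# Symmetry: Cov[Xi, Xj] = Cov[Xj, Xi], so V equals its transpose.
assert np.allclose(V, V.T)

# Positive semi-definiteness: all eigenvalues are non-negative
# (eigvalsh is the eigenvalue routine for symmetric matrices).
eigenvalues = np.linalg.eigvalsh(V)
print(eigenvalues)
```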

Independent identically distributed (iid) random variables

We will often be interested in random variables X1,X2,…,Xn that are independent and identically distributed, or iid for short. This condition requires that

  (a) each Xi has the same distribution, and

  (b) they are independent.

This course does not cover independence of multiple random variables in depth, but the definition is similar to the bivariate case:

\[
F_{X_1,X_2,\ldots,X_n}(x_1,x_2,\ldots,x_n) = F_{X_1}(x_1)\,F_{X_2}(x_2)\cdots F_{X_n}(x_n).
\]

This implies pairwise independence: every pair of random variables Xi, Xj for i≠j is independent. However, full independence is a strictly stronger condition: there are examples of pairwise independent random variables that do not satisfy the full independence definition.
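A classic counterexample (an illustrative choice, not from the notes) is two fair coin flips together with their XOR: each pair is independent, but the three variables are not jointly independent. A short sketch verifies this by enumerating the outcomes:

```python
import itertools

# X1, X2 are fair coin flips and X3 = X1 XOR X2.
# The four equally likely outcomes (x1, x2, x3):
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]

def prob(event):
    # Probability of an event under the uniform distribution on the outcomes.
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# Pairwise independence: P(Xi = 1, Xj = 1) = P(Xi = 1) * P(Xj = 1) = 1/4.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    joint = prob(lambda o: o[i] == 1 and o[j] == 1)
    assert joint == prob(lambda o: o[i] == 1) * prob(lambda o: o[j] == 1)

# But full (joint) independence fails: P(X1 = X2 = X3 = 1) = 0, not 1/8.
triple = prob(lambda o: o[0] == 1 and o[1] == 1 and o[2] == 1)
print(triple)
```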

For iid random variables with 𝖵𝖺𝗋[Xi]=σ2 for all i, independence gives 𝖢𝗈𝗏[Xi,Xj]=0 whenever i≠j, so

\[
\mathsf{Var}[\boldsymbol{X}] = \begin{pmatrix}
\sigma^2 & 0 & \cdots & 0 \\
0 & \sigma^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma^2
\end{pmatrix} = \sigma^2 \boldsymbol{I}.
\]
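This diagonal structure can be seen empirically: the estimated variance matrix of simulated iid draws is close to σ²𝑰 (the normal distribution below is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# n = 3 iid random variables, each with variance sigma^2 = 4
# (illustrative choice: Normal(0, 2^2) draws).
sigma = 2.0
samples = rng.normal(0.0, sigma, size=(200_000, 3))

# The estimated variance matrix should be close to sigma^2 * I:
# variances near 4 on the diagonal, covariances near 0 elsewhere.
V_hat = np.cov(samples, rowvar=False)
print(np.round(V_hat, 2))
```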

Random matrices: a random matrix W is a matrix with elements Wi,j, i=1,…,m, j=1,…,n, each of which is a random variable.

Definition.

Let W be an m×n random matrix. The expectation, 𝖤[W], is the m×n matrix with elements 𝖤[W]i,j=𝖤[Wi,j], for i=1,…,m, j=1,…,n.
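As with random vectors, this elementwise definition can be estimated by averaging simulated draws of the matrix (the 2×3 shape and the means below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# A 2 x 3 matrix of means (illustrative choice: the mean of W[i, j] is i + j).
means = np.add.outer(np.arange(2.0), np.arange(3.0))

# 50,000 simulated draws of the 2 x 3 random matrix W = means + noise.
draws = means + rng.normal(size=(50_000, 2, 3))

# E[W] is the 2 x 3 matrix of elementwise expectations E[W[i, j]],
# estimated by averaging over the draws.
E_W = draws.mean(axis=0)
print(np.round(E_W, 2))
```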

We now derive formulae for the expectations and variances of linear transformations.