The model in Equation 7.1 can be written in matrix form as:

$$Y = X\beta + S + \varepsilon$$

where $X$ is the design matrix of covariate measurements, $\beta$ are the covariate effects, $\sigma^2$ is the variance of the spatial process $S$ at the locations $s_1, \ldots, s_n$, $I$ is the $(n \times n)$ identity matrix and $\tau^2$ is the variance of the spatially uncorrelated random effects $\varepsilon \sim N(0, \tau^2 I)$. We can write $S \sim N(0, \sigma^2 R)$, where the $(i,j)$-element of the matrix $R$ is formed by evaluating $\rho(\|s_i - s_j\|; \phi)$ and $\|\cdot\|$ denotes Euclidean distance. Put $\nu^2 = \tau^2 / \sigma^2$ and let $V = R + \nu^2 I$; we re-write the model above as

$$Y \sim N(X\beta, \sigma^2 V) \qquad (7.3)$$
The log-likelihood for a set of data $y = (y_1, \ldots, y_n)$ is then:

$$\ell(\beta, \sigma^2, \phi, \nu^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V| - \frac{1}{2\sigma^2}(y - X\beta)^T V^{-1} (y - X\beta) \qquad (7.4)$$
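As a numerical sanity check, the log-likelihood in Equation 7.4 can be evaluated directly and compared against a library multivariate-normal density. A minimal sketch in Python; the exponential correlation function $\rho(d; \phi) = \exp(-d/\phi)$ and the simulated data are illustrative assumptions, not part of the text:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n = 50

# Hypothetical locations, design matrix and parameter values for illustration
s = rng.uniform(0, 1, size=(n, 2))                     # spatial locations s_1, ..., s_n
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # design matrix
beta = np.array([1.0, 0.5])
sigma2, phi, nu2 = 2.0, 0.3, 0.25                      # sigma^2, range phi, nu^2 = tau^2/sigma^2

# R_ij = rho(||s_i - s_j||; phi); exponential correlation assumed here
D = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
R = np.exp(-D / phi)
V = R + nu2 * np.eye(n)                                # V = R + nu^2 I

y = rng.multivariate_normal(X @ beta, sigma2 * V)

def loglik(y, X, beta, sigma2, V):
    """Equation 7.4: Gaussian log-likelihood with covariance sigma^2 V."""
    n = len(y)
    r = y - X @ beta
    _, logdetV = np.linalg.slogdet(V)
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * n * np.log(sigma2)
            - 0.5 * logdetV - 0.5 * (r @ np.linalg.solve(V, r)) / sigma2)

ll = loglik(y, X, beta, sigma2, V)
ll_ref = multivariate_normal(mean=X @ beta, cov=sigma2 * V).logpdf(y)
print(np.isclose(ll, ll_ref))  # the two evaluations agree
```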
For given $\phi$ and $\nu^2$, we can maximise the expression above exactly using estimated generalised least squares (EGLS); this yields estimates of $\beta$ and $\sigma^2$, denoted respectively $\hat{\beta}$ and $\hat{\sigma}^2$. Now, given $\hat{\beta}$ and $\hat{\sigma}^2$, we can numerically maximise the profile log-likelihood to get estimates for $\phi$ and $\nu^2$, denoted $\hat{\phi}$ and $\hat{\nu}^2$. Next, given $\hat{\phi}$ and $\hat{\nu}^2$, we can maximise exactly to obtain $\hat{\beta}$ and $\hat{\sigma}^2$ and again numerically maximise the resulting profile likelihood to obtain $\hat{\phi}$ and $\hat{\nu}^2$.
Iterating this procedure, we find the MLEs of $\beta$, $\sigma^2$, $\phi$ and $\nu^2$ and the estimated Hessian, from which we can derive confidence intervals. There are two issues worth noting.
As with all complex maximum likelihood routines, ideally one should attempt the maximisation process from several starting values. Note that the variogram (see below) can be used to generate initial values.
Since the above algorithm requires the inversion of an $(n \times n)$ matrix, it can be prohibitively slow for large datasets.
Regardless of which method of estimation is chosen, it is always a good idea to plot the variogram (see below) to check if the MLEs of $\sigma^2$, $\tau^2$ and $\phi$ look sensible.
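The scheme above can be sketched end-to-end: for fixed $(\phi, \nu^2)$ the estimates $\hat{\beta}$ and $\hat{\sigma}^2$ are available in closed form, and the resulting profile log-likelihood is maximised numerically. The correlation function, simulated data and optimiser choice below are illustrative assumptions, not a definitive implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 60

# Simulated data under the model of Equation 7.3; exponential correlation
# rho(d; phi) = exp(-d/phi) is assumed for illustration.
s = rng.uniform(0, 1, size=(n, 2))
X = np.column_stack([np.ones(n), rng.normal(size=n)])
D = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
true_beta, true_sigma2, true_phi, true_nu2 = np.array([1.0, -0.5]), 1.5, 0.2, 0.3
V_true = np.exp(-D / true_phi) + true_nu2 * np.eye(n)
y = rng.multivariate_normal(X @ true_beta, true_sigma2 * V_true)

def profiled(phi, nu2):
    """For fixed (phi, nu2): EGLS beta-hat, sigma2-hat and the profile log-likelihood."""
    V = np.exp(-D / phi) + nu2 * np.eye(n)
    Vinv_X = np.linalg.solve(V, X)
    Vinv_y = np.linalg.solve(V, y)
    beta = np.linalg.solve(X.T @ Vinv_X, X.T @ Vinv_y)   # Equation 7.5
    r = y - X @ beta
    sigma2 = (r @ np.linalg.solve(V, r)) / n             # sigma2-hat (see EGLS (II))
    _, logdetV = np.linalg.slogdet(V)
    # Equation 7.4 at the profiled values; the quadratic term reduces to n/2
    ll = (-0.5 * n * np.log(2 * np.pi) - 0.5 * n * np.log(sigma2)
          - 0.5 * logdetV - 0.5 * n)
    return beta, sigma2, ll

# Numerically maximise the profile log-likelihood over (log phi, log nu2);
# the log scale keeps both parameters positive.
def neg_profile(theta):
    return -profiled(np.exp(theta[0]), np.exp(theta[1]))[2]

opt = minimize(neg_profile, x0=np.log([0.5, 0.5]), method="Nelder-Mead")
phi_hat, nu2_hat = np.exp(opt.x)
beta_hat, sigma2_hat, _ = profiled(phi_hat, nu2_hat)
print(beta_hat, sigma2_hat, phi_hat, nu2_hat)
```

In practice one would try several starting values `x0`, as noted above, since the profile surface can be multimodal.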
Estimated Generalised Least Squares (I) Here we show how to derive the MLE for $\beta$ in a model with correlated residuals. Start with $Y = X\beta + e$, where $e \sim N(0, \Sigma)$ and $\Sigma$ is a covariance matrix. Since $\Sigma$ is symmetric and positive definite, we can use the eigendecomposition to write it as $\Sigma = U \Lambda U^T$, which means $\Sigma^{-1} = U \Lambda^{-1} U^T$. We write $\Sigma^{-1/2} = U \Lambda^{-1/2} U^T$. Pre-multiplying the original equation by $\Sigma^{-1/2}$ gives $\Sigma^{-1/2} Y = \Sigma^{-1/2} X \beta + \Sigma^{-1/2} e$, so $\mathrm{Var}(\Sigma^{-1/2} e) = \Sigma^{-1/2} \Sigma \Sigma^{-1/2} = I$, where $I$ is the identity matrix. Therefore, in pre-multiplying by $\Sigma^{-1/2}$, we reduce our original model to one that can be solved by ordinary least squares and we can derive the EGLS estimate of $\beta$ by solving the matrix equation $X^T \Sigma^{-1} X \beta = X^T \Sigma^{-1} Y$ for $\beta$. Pre-multiplying by $(X^T \Sigma^{-1} X)^{-1}$ gives

$$\hat{\beta} = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} Y \qquad (7.5)$$
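The whitening argument can be checked numerically: ordinary least squares applied to the pre-multiplied model recovers exactly the closed form of Equation 7.5. The covariance matrix below is an arbitrary positive-definite example chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 3

# A hypothetical symmetric positive-definite covariance matrix Sigma and data
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Sigma^{-1/2} = U Lambda^{-1/2} U^T from the eigendecomposition Sigma = U Lambda U^T
lam, U = np.linalg.eigh(Sigma)
Sig_inv_half = U @ np.diag(lam ** -0.5) @ U.T

# OLS on the pre-multiplied (whitened) model ...
Xw, yw = Sig_inv_half @ X, Sig_inv_half @ y
beta_ols, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

# ... coincides with the closed form of Equation 7.5
Sig_inv = U @ np.diag(1 / lam) @ U.T
beta_egls = np.linalg.solve(X.T @ Sig_inv @ X, X.T @ Sig_inv @ y)
print(np.allclose(beta_ols, beta_egls))  # True
```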
Estimated Generalised Least Squares (II) Here we show how to derive the MLE for $\beta$ and $\sigma^2$ in Equation 7.3. Applying EGLS (i.e. Equation 7.5) to this model yields $\hat{\beta} = (X^T (\sigma^2 V)^{-1} X)^{-1} X^T (\sigma^2 V)^{-1} Y$; the $\sigma^2$s cancel, hence

$$\hat{\beta} = (X^T V^{-1} X)^{-1} X^T V^{-1} Y$$
Now suppose that we have obtained the MLE for $\beta$ so that, conditional on this parameter,

$$Y - X\hat{\beta} \sim N(0, \sigma^2 V)$$

First observe that $\mathrm{Var}(V^{-1/2}(Y - X\hat{\beta})) = \sigma^2 I$ and so the variance of the elements of the column vector $V^{-1/2}(Y - X\hat{\beta})$ is an estimate of $\sigma^2$. Since the mean of this vector is zero, we compute this as $\frac{1}{n} (V^{-1/2}(Y - X\hat{\beta}))^T (V^{-1/2}(Y - X\hat{\beta}))$, that is,

$$\hat{\sigma}^2 = \frac{1}{n} (Y - X\hat{\beta})^T V^{-1} (Y - X\hat{\beta})$$
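The equivalence between the whitened-residual second moment and the quadratic-form expression for $\hat{\sigma}^2$ can be verified directly. The matrix $V$ below (exponential correlation on a hypothetical one-dimensional transect, plus a nugget term) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200

# Hypothetical V = R + nu^2 I on a 1-d transect, for illustration only
s = np.sort(rng.uniform(0, 10, n))
V = np.exp(-np.abs(s[:, None] - s[None, :]) / 1.5) + 0.2 * np.eye(n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.multivariate_normal(X @ np.array([2.0, 1.0]), 0.8 * V)

# beta-hat from Equation 7.5, then sigma2-hat from the quadratic form
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
r = y - X @ beta_hat
sigma2_hat = (r @ Vinv @ r) / n

# Equivalent computation: sample second moment of the whitened residuals V^{-1/2} r
lam, U = np.linalg.eigh(V)
z = U @ np.diag(lam ** -0.5) @ U.T @ r
print(np.isclose(sigma2_hat, (z @ z) / n))  # True
```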