Title: Inference for GLMMs with approximate likelihoods: when is a Laplace approximation good enough?


Abstract:
In generalized linear mixed models (GLMMs), and in many other latent variable models, the likelihood is an integral over the latent variables and is often very difficult to compute. Several approaches to inference for these models rely on a Laplace approximation to the likelihood, using this approximate likelihood as a proxy for the exact likelihood when doing inference on the model parameters. This approach often seems to work well in practice, but there are examples of models where inference based on a Laplace approximation to the likelihood is far from the inference obtained with the exact likelihood. When should we be worried about the quality of inference obtained with a Laplace approximation? I will discuss recent work on this question, and consider what conditions the structure of the model must satisfy to ensure that inference using the Laplace approximation "tends towards" inference with the exact likelihood as the amount of information available about the parameters grows. I will also discuss the more practical problem of how to determine whether inference using a Laplace approximation might be unreliable in any given case, and describe some alternative approximations which may be useful in these situations.
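For reference, the construction the abstract alludes to can be written as follows; the notation (y for the response, u for the latent variables, theta for the model parameters) is chosen for this sketch and is not taken from the talk itself. The marginal likelihood and its Laplace approximation are

\[
L(\theta) = \int f(y \mid u, \theta)\, f(u \mid \theta)\, du,
\qquad
g_\theta(u) = \log f(y \mid u, \theta) + \log f(u \mid \theta),
\]
\[
\hat{u}(\theta) = \operatorname*{arg\,max}_u g_\theta(u),
\qquad
L_{\mathrm{LA}}(\theta) = (2\pi)^{q/2}\,
\bigl|-\nabla^2_u g_\theta(\hat{u}(\theta))\bigr|^{-1/2}
\exp\bigl\{ g_\theta(\hat{u}(\theta)) \bigr\},
\]

where q is the dimension of u. Inference then proceeds by maximizing \(\log L_{\mathrm{LA}}(\theta)\) over \(\theta\) in place of the exact log-likelihood.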