
My Research

PhD project: Exchangeable Particle Filter for Reaction Networks

Supervisors: Prof. Chris Sherlock and Dr. Lloyd Chapman

This project aims to improve the efficiency of particle-MCMC inference on reaction-network models. In particular, it aims to mitigate the particle-degeneracy issue that makes inference infeasible on reaction networks where the state space is high-dimensional and the transition mass functions are (essentially, or truly) intractable, for example an SEIR model in which the population is stratified by location and/or age.
A reaction network is a continuous-time Markov chain on a large or countably infinite state space. Common examples include the popular SEIR epidemic model, the Lotka-Volterra predator-prey model and models for the interaction of species within a genome. Reaction networks are particularly important because they can capture the exact dynamics of the interactions between components in the network. Given data, such as noisy observations of the number of individuals that have recovered since the last observation time, interest typically lies in estimating the unknown rates and the current numbers in each compartment.
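To make the setting concrete, the sketch below simulates a simple SIR reaction network exactly using Gillespie's direct method. It is only an illustration: the SIR structure, rate constants and initial counts are assumptions for the example, not models or values from the project.

```python
import numpy as np

def gillespie_sir(beta, gamma, S0, I0, R0, t_max, rng=None):
    """Exact simulation of an SIR reaction network via Gillespie's direct method."""
    rng = np.random.default_rng() if rng is None else rng
    S, I, R = S0, I0, R0
    N = S0 + I0 + R0
    t = 0.0
    path = [(t, S, I, R)]
    while t < t_max and I > 0:
        # Reaction hazards: infection (S + I -> 2I) and recovery (I -> R)
        h_inf = beta * S * I / N
        h_rec = gamma * I
        h_tot = h_inf + h_rec
        # Time to the next reaction is exponential with rate h_tot
        t += rng.exponential(1.0 / h_tot)
        if t > t_max:
            break
        # Choose which reaction fires, with probability proportional to its hazard
        if rng.random() < h_inf / h_tot:
            S -= 1; I += 1
        else:
            I -= 1; R += 1
        path.append((t, S, I, R))
    return path

# Example: simulate 50 days of an outbreak in a population of 1000
path = gillespie_sir(beta=0.3, gamma=0.1, S0=990, I0=10, R0=0, t_max=50.0)
print(path[-1])
```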
Due to the intractability of the likelihood, even for moderate-sized reaction networks, MCMC is the overarching method of choice for inference. Andrieu et al. (2010) presented two new, generic methods for inference on hidden Markov models (of which reaction networks are a special case) via MCMC using a particle filter. The particle filter, essentially, simulates multiple realisations of the underlying latent state of the reaction network; that is, the number of each reacting species from the initiation of the system until the most recent observation. This project focuses on the Particle Gibbs sampler, an iteration of which starts with a current path from the posterior distribution of paths, simulates M−1 new paths and then samples a new ‘current’ path from the M possible paths.
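The sketch below shows one such Particle Gibbs iteration, written as a conditional bootstrap particle filter for a generic discretely observed hidden Markov model. The callables `init`, `propagate` and `log_lik` are hypothetical placeholders for the model-specific pieces, and refinements such as ancestor sampling or adaptive resampling are deliberately omitted.

```python
import numpy as np

def particle_gibbs_step(y, ref_path, M, init, propagate, log_lik, rng=None):
    """One Particle Gibbs iteration via a conditional bootstrap particle filter.

    The retained path `ref_path` (an array of shape (T, d)) is pinned as
    particle 0 at every observation time; M-1 fresh paths are simulated, and a
    new retained path is drawn from the M candidates by tracing back ancestry.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    X = np.empty((T, M) + ref_path.shape[1:])
    anc = np.zeros((T, M), dtype=int)

    # Initialise: particle 0 is the reference path, the rest are fresh draws
    X[0, 0] = ref_path[0]
    for m in range(1, M):
        X[0, m] = init(rng)
    logw = np.array([log_lik(y[0], X[0, m]) for m in range(M)])

    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        # Resample ancestors for particles 1..M-1; particle 0 keeps the reference
        anc[t, 0] = 0
        anc[t, 1:] = rng.choice(M, size=M - 1, p=w)
        X[t, 0] = ref_path[t]
        for m in range(1, M):
            X[t, m] = propagate(X[t - 1, anc[t, m]], rng)
        logw = np.array([log_lik(y[t], X[t, m]) for m in range(M)])

    # Draw the index of the new retained path and follow its ancestry backwards
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(M, p=w)
    new_path = np.empty_like(ref_path)
    for t in range(T - 1, -1, -1):
        new_path[t] = X[t, k]
        k = anc[t, k]
    return new_path
```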
However, the particle filter and the Particle Gibbs methods suffer from an issue known as particle degeneracy, which means the distribution of the hidden states at early observation time points is not fully explored, resulting in poor mixing. A recent STOR-i PhD (Malory, 2021) created the Exchangeable Particle Gibbs Sampler (xPGibbs), which overcomes the degeneracy issue when the hidden Markov model is a diffusion.
Despite these advancements, xPGibbs has so far been developed only for continuous-state-space models (diffusions) driven by Gaussian noise. In contrast, this PhD project will adapt the core idea of xPGibbs for use within reaction networks, where the state space is discrete and the key drivers of the randomness are Poisson processes rather than the Brownian motion that drives continuous-state-space diffusions. It will also look at further extending the methodology to the discrete-time chain-binomial epidemic model.

MSc project: Deep Pricing in the CEV model

Supervisor: Dr. John Armstrong

The application of machine learning in finance has received increasing attention. In this project, we price American put options under the CEV model with a finite-difference method. We then use a neural network to approximate the pricing map from model parameters to option prices, enabling rapid computation of option prices. We also calibrate the CEV model to market data downloaded from Bloomberg. In addition, we estimate the value at risk of a portfolio containing American put options and train another neural network that maps model parameters to value at risk.
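As a rough illustration of the pricing step, here is a minimal explicit finite-difference pricer for an American put under risk-neutral CEV dynamics dS_t = r S_t dt + sigma S_t^gamma dW_t. It is only a sketch under assumed parameters: the dissertation's scheme, grid choices and calibrated values may differ, and early exercise is handled simply by projecting onto the payoff at each time step.

```python
import numpy as np

def cev_american_put_fd(S0, K, r, sigma, gamma, T, S_max=None, NS=200, NT=2000):
    """Explicit finite-difference pricer for an American put under the CEV model
    dS_t = r S_t dt + sigma S_t^gamma dW_t (risk-neutral dynamics)."""
    S_max = 4 * K if S_max is None else S_max
    dS = S_max / NS
    dt = T / NT
    S = np.linspace(0.0, S_max, NS + 1)
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()                            # option value at maturity

    local_var = sigma ** 2 * S ** (2 * gamma)    # CEV local variance sigma^2 S^(2*gamma)
    for _ in range(NT):
        V_new = V.copy()
        # Central differences in S on the interior grid points
        V_ss = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS ** 2
        V_s = (V[2:] - V[:-2]) / (2 * dS)
        V_new[1:-1] = V[1:-1] + dt * (
            0.5 * local_var[1:-1] * V_ss + r * S[1:-1] * V_s - r * V[1:-1]
        )
        # Boundary conditions: exercise immediately at S = 0, worthless far out of the money
        V_new[0] = K
        V_new[-1] = 0.0
        # American constraint: value cannot fall below immediate exercise
        V = np.maximum(V_new, payoff)
    return np.interp(S0, S, V)

# Illustrative parameters only (the explicit scheme needs a small enough dt for stability)
print(cev_american_put_fd(S0=100.0, K=100.0, r=0.03, sigma=0.6, gamma=0.75, T=1.0))
```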

MSc project: Predictable Forward Performance Process in Binomial Tree Model with Robo-Advising Application

Supervisor: Dr. Liang Gechun

We derive the discrete-time predictable m-forward performance processes for logarithmic and exponential utility functions. Next, we compare the solutions of the single-period investment problem obtained by two different approaches: classical expected utility maximization and the forward performance process. We also discuss the application of predictable m-forward performance processes to robo-advising.
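To give a flavour of the classical benchmark in that comparison, the sketch below solves the single-period binomial investment problem under logarithmic utility in closed form; the parameter values are illustrative assumptions, and the forward performance construction itself is not shown.

```python
import numpy as np

def log_optimal_fraction(u, d, R, p):
    """Closed-form log-optimal fraction of wealth in the risky asset for a
    single-period binomial model: the gross stock return is u with probability p
    and d otherwise, and R = 1 + r is the gross risk-free return (d < R < u).
    Maximises E[log(x * (R + pi * (Z - R)))] over pi."""
    return R * (p * (u - R) - (1 - p) * (R - d)) / ((u - R) * (R - d))

def expected_log_utility(pi, u, d, R, p, x=1.0):
    """Expected log utility of terminal wealth for a given fraction pi in the stock."""
    return p * np.log(x * (R + pi * (u - R))) + (1 - p) * np.log(x * (R + pi * (d - R)))

# Illustrative parameters (not taken from the dissertation)
u, d, R, p = 1.2, 0.9, 1.02, 0.6
pi_star = log_optimal_fraction(u, d, R, p)
print(pi_star, expected_log_utility(pi_star, u, d, R, p))

# Sanity check: a grid search recovers (approximately) the same optimal fraction
grid = np.linspace(-1, 3, 2001)
print(grid[np.argmax(expected_log_utility(grid, u, d, R, p))])
```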