PhD:

Bringing a new treatment to market is a long and expensive process that often ends in failure. Platform trials are a class of clinical trials that aim to increase efficiency compared to traditional trial designs by allowing new treatments to be added to ongoing trials. A statistical methodology for platform trials that allows new experimental treatments to be tested as efficiently as possible, while satisfying regulatory standards, is essential. STOR-i and Roche have therefore partnered to create a project focused on using a frequentist framework to answer the following three questions:

  • When, why and how should new arms be added to the study?
  • How can a sequence of adaptive trials with a changing population and/or a changing control arm with various endpoints be designed?
  • How can a trial be best designed with interim analyses where the trial focuses on all-pairwise comparisons?

Using likelihood theory, the initial aim of the project is to extend the MAMS (multi-arm multi-stage) approach to allow new experimental treatments to be added at each interim analysis, while controlling the trial's error rates at pre-specified levels. The two main operating characteristics to be controlled are the FWER (family-wise error rate) and the power under the LCF (least favourable configuration). One of the first steps is to derive a new method for calculating the FWER for a MAMS trial in which additional arms are added at later time points.
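As a rough illustration of the quantity involved, the Python sketch below uses Monte Carlo simulation to estimate the FWER of a two-stage trial in which a third experimental arm joins at the interim analysis. This is not the project's actual methodology: the sample sizes, the known-variance z-tests and the unadjusted critical value z_crit are all illustrative assumptions.

```python
# Minimal Monte Carlo sketch (illustrative assumptions throughout) of the FWER
# for a two-stage trial where a new arm is added at the interim analysis.
import numpy as np

rng = np.random.default_rng(1)

n_per_stage = 100      # patients per arm per stage (assumed)
z_crit = 1.96          # unadjusted per-comparison critical value (assumed)
n_sims = 20_000

def one_trial():
    # Stage 1: control plus two experimental arms, all under the global null.
    control_s1 = rng.normal(0.0, 1.0, n_per_stage)
    arms_s1 = [rng.normal(0.0, 1.0, n_per_stage) for _ in range(2)]

    # Stage 2: a third arm joins at the interim; all arms recruit again.
    control_s2 = rng.normal(0.0, 1.0, n_per_stage)
    arms_s2 = [rng.normal(0.0, 1.0, n_per_stage) for _ in range(3)]

    rejections = 0
    control_all = np.concatenate([control_s1, control_s2])

    # Original arms: compared against control data from both stages.
    for k in range(2):
        treat = np.concatenate([arms_s1[k], arms_s2[k]])
        z = (treat.mean() - control_all.mean()) / np.sqrt(1 / len(treat) + 1 / len(control_all))
        rejections += z > z_crit

    # Added arm: compared only against concurrent (stage 2) control data.
    z_new = (arms_s2[2].mean() - control_s2.mean()) / np.sqrt(2 / n_per_stage)
    rejections += z_new > z_crit

    return rejections > 0  # at least one false rejection under the global null

fwer = np.mean([one_trial() for _ in range(n_sims)])
print(f"Estimated FWER with unadjusted z_crit={z_crit}: {fwer:.3f}")
```

In an actual design the critical value would instead be chosen, for example by a numerical search over a calculation of this kind, so that the FWER equals the pre-specified level.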

This is beneficial because, during the course of a confirmatory clinical trial (which can take years to run and requires considerable resources), evidence for a promising new treatment may emerge. It may then be advantageous to include this treatment in the ongoing trial, as this could benefit patients, funders and regulatory bodies by shortening the time taken to compare and select experimental treatments, allowing optimal therapies to be identified faster while reducing costs and patient numbers.

STOR-i Internship:

I completed the STOR-i internship at the end of the second year of my undergraduate degree. The internship involved undertaking a project with a first-year PhD student, exploring models for data points that occur randomly in space and time. The aim with this type of data is to model the locations of the events, along with any information or marks associated with each occurrence. This can be achieved through point process models, the simplest example of which is the homogeneous Poisson process. In a homogeneous Poisson process model, events occur independently at random with a uniform (constant) intensity.
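As a small illustration (not taken from the internship code), the following Python sketch simulates a homogeneous Poisson process on the unit square; the intensity lam is an assumed value.

```python
# Sketch: simulate a homogeneous Poisson process on the unit square.
import numpy as np

rng = np.random.default_rng(0)
lam = 50.0  # assumed intensity: expected events per unit area

# The number of events in the window is Poisson with mean lam * area;
# given that count, the event locations are independent and uniform.
n_events = rng.poisson(lam * 1.0)
xs = rng.uniform(0.0, 1.0, n_events)
ys = rng.uniform(0.0, 1.0, n_events)
print(n_events, "events simulated")
```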

The first aim of my project was to look at methods for assessing whether the assumptions of the homogeneous Poisson process hold for a given data set, to decide whether fitting that model is suitable, and then to apply the model to data where these assumptions were satisfied. The next aim was to study more complex data sets where the assumptions may no longer hold, fit alternative models with fewer or weaker assumptions, and assess any improvement in model fit.
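One simple check of the constant-intensity assumption, sketched below on synthetic data with assumed values for the observation window and intensity, is to split the window into equal bins and test whether the event counts are consistent with a common mean using a chi-square test. This is only one of several possible diagnostics, not necessarily the exact method used in the project.

```python
# Sketch: chi-square check of constant intensity for a temporal point pattern.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(2)
T, lam, n_bins = 100.0, 2.0, 10  # illustrative window length, intensity, bin count

# Synthetic data from a homogeneous Poisson process on [0, T].
event_times = np.sort(rng.uniform(0, T, rng.poisson(lam * T)))

# Under homogeneity, counts in equal-width bins share a common expected value.
counts, _ = np.histogram(event_times, bins=n_bins, range=(0, T))
stat, p_value = chisquare(counts)  # null hypothesis: equal expected counts per bin
print(f"chi-square p-value = {p_value:.3f} (a small p suggests non-constant intensity)")
```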

During the project I chose to focus on a data set of earthquakes above magnitude 1.5 in the Netherlands, where the events are induced by gas extraction from the reservoir below the region. My PhD mentor was Zak Varty, whose PhD focuses on point process models. For more information on Zak's work, check out his website: https://www.lancaster.ac.uk/~varty/