MARS PhD Projects
MARS offers an exciting range of PhD projects, with expert supervision, for our Applied Mathematics and Mathematical AI programmes.
We are currently accepting applications from UK fee status students.
Apply to MARS
Lead supervisor: Dr Murad Banaji (m.banaji@lancaster.ac.uk)
Life exists because complex biochemical networks robustly perform certain functions, and many diseases result when they fail to do so. A huge challenge in modern biology and medicine is to understand how these networks perform various tasks, and how this functioning is disrupted in disease. Powerful mathematical and computational techniques exist to analyse the dynamics of differential equation systems arising from small networks; but real-world networks tend to be large, and as network size increases, mapping out their dynamics becomes increasingly difficult.
However, there are a variety of recently developed mathematical tools to study the dynamics of larger biochemical networks. In particular, there are theorems which tell us how large networks "inherit" behaviours from their smaller subnetworks. These results have started to shed light on how interacting subnetworks can give rise to new and surprising dynamical behaviours in larger networks. This raises the challenge of efficiently searching large networks for the presence of known subnetworks. Recent machine learning approaches to subgraph matching have opened a promising direction for tackling this search problem, and adapting them to the specific structure of biochemical reaction networks is a natural extension.
The aim of this PhD project is to take advantage of these recent theoretical advances to develop efficient and reliable computational tools for the analysis of real-world scale biochemical networks. These tools should allow a non-expert user to provide a network as input, and get back information on its most important subnetworks, and its allowed dynamical behaviours, along with descriptions of the regions of parameter-space where they occur. For such automated analysis, a combination of traditional symbolic and numerical computation, and knowledge-guided machine learning will be needed. The development of new machine-learning tools, informed by theory, will be central for subgraph matching problems where we ask if a given network contains subnetworks that admit some particular behaviour; and in approximating geometrical objects, such as parameter sets, in high dimensional spaces. Meanwhile, more traditional symbolic and numerical approaches to systems of differential equations will guide us to the right questions, and help to synthesise training and validation sets for machine learning models.
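To give a flavour of the subgraph-matching step, here is a minimal sketch using networkx's VF2-based matcher. The graphs and node labels are invented for illustration; real reaction networks would be directed, labelled, and far larger.

```python
import networkx as nx

# toy "large network" and a motif (here a triangle) to search for
G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])
motif = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])

# VF2-based search for an induced subgraph of G isomorphic to the motif
gm = nx.algorithms.isomorphism.GraphMatcher(G, motif)
found = gm.subgraph_is_isomorphic()              # nodes 1, 2, 3 form a triangle
matches = list(gm.subgraph_isomorphisms_iter())  # all matching node mappings
```

Exhaustive VF2 search scales poorly with network size, which is precisely where learned subgraph-matching heuristics become attractive.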
Prospective students are expected to become familiar with some areas in dynamical systems and bifurcation theory, graph theory and graph neural networks.
Lead supervisor: Dr Ryan Doran (r.doran@lancaster.ac.uk)
When a gas is cooled to extremely low temperatures, quantum mechanics takes over. Instead of acting like a cloud of individual particles, the gas behaves as a single coherent matter wave. Such a system is known as a quantum gas, and it exhibits several remarkable properties, including the ability to flow without any resistance — a phenomenon known as 'superfluidity'.
Superfluids provide a unique and highly controllable setting for studying fundamental problems in fluid dynamics. At the same time, they have incredible potential for applications in quantum technology. For example, superfluids can be used for ultra-precise rotation or acceleration sensors, or as building blocks for “atomtronic” circuits (circuits where currents of ultra-cold atoms play the role of electrons in conventional circuits).
In many of these applications, superfluids are confined to flow in ring-shaped trapping geometries. The aim of this project is to design and analyse the next generation of trapping geometries, optimised for rotation sensing, accelerometry, or coherent quantum transport. A key goal will be to identify designs that significantly enhance device performance and robustness.
To achieve this, the project will combine theoretical modelling, numerical simulations of quantum fluids, and emerging machine learning and data-driven techniques. This will give the student the opportunity to develop skills at the interface of mathematical modelling, AI and quantum technology.
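As a flavour of the numerical side, here is a minimal split-step Fourier sketch of the 1D Gross–Pitaevskii equation on a ring, in dimensionless units. The grid size, interaction strength, and initial state are illustrative choices, not tied to any particular trap design.

```python
import numpy as np

# 1D GPE on a ring, split-step Fourier method (hbar = m = 1)
N, L = 256, 2 * np.pi                       # grid points, ring circumference
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # wavenumbers on the ring
g, dt, steps = 1.0, 1e-3, 1000              # interaction strength, time step

psi = np.exp(1j * x) / np.sqrt(L)           # plane wave with one unit of circulation

for _ in range(steps):
    psi *= np.exp(-0.5j * dt * g * np.abs(psi) ** 2)            # half nonlinear step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5j * dt * g * np.abs(psi) ** 2)            # half nonlinear step

norm = np.sum(np.abs(psi) ** 2) * dx        # conserved by the unitary evolution
```

Each sub-step is unitary, so the norm is conserved to machine precision — a standard sanity check for this kind of simulation.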
The ideal candidate will have a strong mathematical or physics background. Experience in numerical methods will be highly beneficial. Some knowledge of fluid mechanics and/or machine learning tools is desirable but not essential.
Further reading: For a general introduction to quantum fluids, see ‘A Primer on Quantum Fluids’ by Nick Parker and Carlo Barenghi. For a more technical review on quantum technology see ‘Roadmap on Atomtronics: state of the art and perspective’ by L. Amico et al.
Lead supervisor: Dr Catherine Drysdale (c.drysdale@lancaster.ac.uk)
Depression is a heterogeneous disorder that affects nine percent of people worldwide. Its impact on individuals, their families, and wider society cannot be overstated. Attempts have been made to categorise depression into subtypes such as melancholic and atypical depression. Whilst these subtypes exhibit distinct hormonal and neural signatures, a mechanistic understanding could potentially replace categorisation via symptoms and lead to more tailored treatment approaches.
This project will aim to create models of a wider brain network and hormonal system to model depression's underlying mechanisms and treatment responses. The data we have is cutting-edge and includes time series hormonal data collected with the U-Rhythm device with corresponding fMRI measurements, as well as measurements from the CA2 region of the mouse hippocampus. The mathematics employed will combine graph theory, eigenvalue analysis and neural networks.
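To illustrate the eigenvalue-analysis ingredient, here is a toy linear-stability check for a coupled network. The coupling matrix below is invented for illustration and carries no physiological meaning.

```python
import numpy as np

# hypothetical linearised network dynamics dx/dt = A x near a steady state
A = np.array([[-1.0,  0.5,  0.0],
              [ 0.3, -0.8,  0.2],
              [ 0.0,  0.4, -0.6]])

eigvals = np.linalg.eigvals(A)
# the steady state is stable iff every eigenvalue has negative real part
stable = bool(np.all(eigvals.real < 0))
```

The same calculation, applied to fitted brain-hormone coupling matrices, indicates whether a modelled resting state is robust or poised to destabilise.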
Lead supervisor: Professor Chris Nemeth (c.nemeth@lancaster.ac.uk)
MARS and Prob_AI invite applications for a PhD project at the interface of large language models (LLMs) and computational statistics. The project will develop new statistical methods to make LLM generation more efficient, reliable, and better suited for integration into complex decision‑making systems.
LLM generation can be viewed as a sequential probabilistic process: tokens are produced through repeated evaluation of conditional probabilities. This perspective closely connects LLMs with well‑studied ideas in sequential probabilistic inference. The project will build on advances in computational statistics to design improved algorithms for generation and inference in modern language models.
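The sequential view can be made concrete with a toy sampler. The `next_token_probs` stand-in below plays the role of an LLM forward pass; the vocabulary and distribution are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<eos>", "the", "cat", "sat"]

def next_token_probs(context):
    # stand-in for an LLM forward pass: arbitrary logits -> softmax
    logits = rng.normal(size=len(vocab))
    p = np.exp(logits - logits.max())
    return p / p.sum()

tokens = []
for _ in range(10):                        # cap generation length
    p = next_token_probs(tokens)
    t = rng.choice(len(vocab), p=p)        # sample from the conditional distribution
    if vocab[t] == "<eos>":
        break
    tokens.append(vocab[t])
```

Every question one can ask about this loop — how to sample more efficiently, how to quantify uncertainty in the output, how to steer the process towards a constraint — has a well-studied analogue in sequential probabilistic inference.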
Possible research directions include:
The project will combine ideas from Bayesian statistics, probabilistic modelling and AI, will involve both methodological development and applied experimentation, and is motivated by on-going research with Microsoft Research.
Lead supervisor: Dr Jixiang Qing (j.qing@lancaster.ac.uk)
Many real-world systems encountered across science and engineering (e.g., chemical reactions, robotic motion planning, and autonomous control) are fundamentally continuous in time and governed by differential equations. In such systems, the governing dynamics are often unknown, yet the system must simultaneously be controlled toward a desired objective. How to efficiently achieve this task is a challenge of crucial importance.
Continuous-time model-based reinforcement learning (MBRL), a recently emerging framework, addresses this challenge by modelling unknown dynamics with data-driven differential equations and learning to control the system through principled exploration under uncertainty. Compared to standard model-based RL approaches, it naturally handles irregularly sampled data, enables adaptive decisions about when to measure and act, and avoids discretisation errors that degrade performance. Yet, there remain many exciting opportunities to advance this still-new framework by drawing on ideas like probabilistic modelling, scalable inference, and optimal control theory.
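A minimal sketch of the "data-driven differential equation" idea: recovering the rate constant of a hypothetical linear system dx/dt = a·x from an irregularly sampled trajectory. The true value a = −0.5 is chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 5, 40))         # irregular sample times
x = np.exp(-0.5 * t)                       # trajectory of dx/dt = -0.5 x

dxdt = np.gradient(x, t)                   # finite-difference derivative estimates
a_hat = np.sum(dxdt * x) / np.sum(x * x)   # least-squares fit of dx/dt = a x
```

Note that `np.gradient` handles the non-uniform time grid directly — no resampling to a fixed step is needed, which is the point of working in continuous time.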
This PhD project aims to advance continuous-time MBRL and investigate how cutting-edge algorithms can be developed and adapted to solve emerging real-world problems. The project will specifically focus on:
Candidate profile
We welcome applicants with a background in mathematics, statistics, computer science, engineering, or related fields who are familiar with probability theory, differential equations, and optimisation. Experience with reinforcement learning, Gaussian processes, or optimal control is highly beneficial, but not strictly essential.
Lead supervisor: Dr Matthias Sachs (m.sachs@lancaster.ac.uk)
Generative models such as diffusion models have recently transformed AI-driven image and text generation. Applying these ideas to 3D molecular and materials structures is a comparatively underdeveloped area of research, despite its enormous potential impact in drug design, materials discovery, and protein simulation.
The key challenge is that molecules are not just data points: they have 3D structure, physical symmetries, and a mix of discrete and continuous properties that standard generative models are not built to handle. Developing models that respect this structure is an open and active area of research.
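One of the symmetries in question, rotation invariance, can be checked directly: a feature built from sorted pairwise distances is unchanged when the coordinates are rotated. The 5-atom "molecule" below is random data, used only to demonstrate the property.

```python
import numpy as np

rng = np.random.default_rng(2)
pos = rng.normal(size=(5, 3))              # toy 3D coordinates of a 5-atom "molecule"

def distance_feature(pos):
    # sorted pairwise distances: invariant under rotation/translation of pos
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(pos), k=1)])

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
f_orig = distance_feature(pos)
f_rot = distance_feature(pos @ Q.T)            # same molecule, rotated
```

Generative models for molecules must build this kind of invariance (or the corresponding equivariance) into their architecture rather than hoping to learn it from data.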
In this PhD project, you will develop new generative modelling methods for 3D molecular systems, with a focus on:
The project will be highly interdisciplinary and include collaboration with international partners in computational chemistry and engineering.
The successful candidate will gain expertise at the intersection of modern machine learning, applied mathematics, and scientific computing: an excellent foundation for a career in academia or industry.
Lead supervisor: Dr Mher Safaryan (m.safaryan@lancaster.ac.uk)
Mathematical optimisation is one of the main engines behind modern machine learning. Its goal is to adjust a model’s parameters so that it performs well on training data, typically by minimising prediction errors. As models (such as deep neural networks) and datasets have grown dramatically in size, the computational and energy costs of training and deploying them have also exploded. This creates new challenges and opportunities for optimisation research.
One direction focuses on model compression, which reduces the size of large models while preserving accuracy. Because large models often contain redundancies, they can frequently be compressed with little or no loss in performance. Two key techniques are sparsification (setting some parameters to zero) and quantisation (using fewer bits to represent each parameter). The main challenge is to design optimisation algorithms that remain effective under such compression constraints, whether applied during training (compression-aware training) or after training (post-training compression).

This also motivates revisiting the design of optimisation algorithms themselves. While Adam (a variant of stochastic gradient descent) remains the default choice in deep learning, newer optimisers such as Shampoo, SOAP, and Muon offer promising alternatives. This direction aims to make training and inference more energy-efficient, enabling AI models to run not only on large cloud-based servers but also on smaller, resource-constrained devices such as smartphones.
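The two compression techniques can be sketched in a few lines. The 90% sparsity level and 4-bit width below are arbitrary illustrative choices, and the weight matrix is random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 64))                  # stand-in for a trained weight matrix

# sparsification: zero out the 90% smallest-magnitude weights
thresh = np.quantile(np.abs(W), 0.9)
W_sparse = np.where(np.abs(W) >= thresh, W, 0.0)

# quantisation: round surviving weights to a uniform 4-bit grid
levels = 2 ** 4
scale = np.abs(W_sparse).max() / (levels / 2 - 1)
W_quant = np.round(W_sparse / scale) * scale
max_err = np.abs(W_quant - W_sparse).max()     # bounded by scale / 2
```

The research question is not these operations themselves, which are trivial, but how to optimise a model so that accuracy survives them.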
A second direction addresses the challenge of scaling data through federated learning. Here, many clients (e.g., mobile devices, hospitals, IoT systems) collaborate to train a shared model without directly sharing their local data, thus preserving privacy. Compared to centralised training, this introduces new optimisation challenges: how to reduce communication between clients, how to handle asynchronous updates efficiently, and how to ensure robustness against adversarial participants.
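A minimal federated-averaging (FedAvg-style) sketch with three simulated clients fitting a shared one-parameter model y = w·x. All data here is synthetic; real deployments add communication compression, privacy mechanisms, and robustness checks.

```python
import numpy as np

rng = np.random.default_rng(4)

# each client holds private data for y = 2x (+ noise); the server never sees it
client_data = []
for _ in range(3):
    x = rng.normal(size=50)
    y = 2.0 * x + 0.01 * rng.normal(size=50)
    client_data.append((x, y))

def local_update(w, x, y, lr=0.1, steps=20):
    # local gradient steps on the squared loss for the model y = w * x
    for _ in range(steps):
        w -= lr * np.mean((w * x - y) * x)
    return w

w_global = 0.0
for _ in range(10):                            # communication rounds
    local_ws = [local_update(w_global, x, y) for x, y in client_data]
    w_global = float(np.mean(local_ws))        # server averages client models
```

Each round communicates only one scalar per client rather than any raw data, which is the privacy appeal; the optimisation challenges above concern what happens when clients are heterogeneous, slow, or adversarial.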
Together, these two directions explore how to make machine learning optimisation more scalable, efficient, and trustworthy, developing both theoretical convergence guarantees and practical methods that enable powerful models to be trained and deployed with lower cost and higher reliability.