Natural Sciences Mathematics of Artificial Intelligence Pathway
This is a double-weighted pathway.
Mathematics is an incredibly powerful subject that sits at the foundation of all science and technology.
Our world is technologically advancing at a rapid pace thanks to the application of mathematics in areas such as artificial intelligence, cyber security, health, environmental science, and engineering. Through the Mathematics of Artificial Intelligence pathway, you will explore a wide range of topics, from modelling and programming to calculus, probability and statistics.
Year One
A mathematical model is a representation of a real-world event, such as a building vibrating during an earthquake or the spread of a disease within a population. In this module, you will investigate mathematical models that lead to ordinary differential equations and will study a variety of core analytical methods for solving them, such as integrating factors and separation of variables.
You will learn to develop models by extracting important data from real-world scenarios, which can then be analysed and refined. Many mathematical models, including those used in artificial intelligence, are analytically intractable, and so you will establish and practise fundamental programming skills and concepts that will be used in future modules.
Interested in how mathematicians build theories from basic concepts to complex ideas, like eigenvalues and integration? Journey from polynomial operations to matrices and calculus through this module.
Starting with polynomials and mathematical induction, you will learn fundamental proof techniques. You will explore matrices, arrays of numbers encoding simultaneous linear equations, and their geometric transformations, which are essential in linear algebra. Eigenvalues and eigenvectors, which characterise these transformations, will be introduced, highlighting their role in applications including population growth and Google's page rankings.
Next, we will reintroduce you to calculus, from its invention by Newton and Leibniz, to its formalisation by Cauchy and Weierstrass. You will explore sequence convergence, techniques for evaluating limits, and key continuity tools like the intermediate value theorem. Differentiation techniques develop a geometric understanding of function graphs, leading to mastering integration methods for solving differential equations and calculating areas under curves. We conclude with a first look at vector calculus.
An introduction to the mathematical and computational toolsets for modelling the randomness of the world. You will learn about probability, the language used to describe random fluctuations, and statistical techniques. This will include exploring how computing tools can be used to solve challenges in scientific research, artificial intelligence, machine learning and data science.
You will develop the axiomatic theory of probability and discover the theory and uses of random variables, and how the theory matches intuitions about the real world. You will then dive into statistical inference, learning to select appropriate probability models to describe discrete and continuous data sets.
You will gain the ability to implement statistical techniques to draw clear, informative conclusions. Throughout, you will learn the basics of R or Python, and their use within probability and statistics. This will equip you with the skills to deploy statistical methods on real scientific and economic data.
Year Two
Machine learning is at the heart of modern AI systems, and it is a fundamentally mathematical subject. You will learn this mathematics by discovering how techniques are deployed in several AI systems, including the neural networks that have revolutionised the field.
You’ll start by building connections with previously encountered approaches through the unifying concept of a loss function of a parameter vector. For example, with a neural network model the parameter vector is the set of weights, and the loss function might be the prediction error on a dataset.
The goal is to find a parameter vector that produces a small loss; in the above example, this is known as training the neural net. You will learn and deploy some of the key mathematical ideas and numerical techniques, such as backpropagation and stochastic gradient descent, that enable the automated iterative learning of a good parameter vector.
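The idea of minimising a loss over a parameter vector can be sketched in a few lines. The example below is a deliberately simple stand-in for neural-network training: the "model" is a straight line, the loss is a sum of squared prediction errors, and plain gradient descent finds the parameters. The data, learning rate and step count are invented for illustration.

```python
# Gradient descent on a least-squares loss L(w) = sum_i (w0*x_i + w1 - y_i)^2.
# The same loop structure underlies neural-network training, where the
# parameter vector holds the network's weights instead of a slope and intercept.

def loss(w, data):
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data)

def grad(w, data):
    g0 = sum(2 * (w[0] * x + w[1] - y) * x for x, y in data)
    g1 = sum(2 * (w[0] * x + w[1] - y) for x, y in data)
    return [g0, g1]

def train(data, lr=0.01, steps=2000):
    w = [0.0, 0.0]                         # start from an arbitrary guess
    for _ in range(steps):
        g = grad(w, data)
        w = [w[0] - lr * g[0], w[1] - lr * g[1]]   # step against the gradient
    return w

data = [(0, 1), (1, 3), (2, 5), (3, 7)]    # invented data lying on y = 2x + 1
w = train(data)
```

Stochastic gradient descent replaces the full-data gradient with the gradient on a random subset at each step, which is what makes training feasible on large datasets.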
Statistics allows us to estimate trends and patterns in data and gives a principled way to quantify uncertainty in these estimates. The findings can lead to new insights and support decision-making in fields as diverse as cyber security, human behaviour, finance and economics, medicine, epidemiology, environmental sustainability and many more.
Dive into the behaviour of multivariate random variables and asymptotic probability theory, both of which are central to statistical inference. You will then be equipped to explore one of the most fundamental statistical models, the linear regression model, and learn how to apply general statistical inference techniques to multi-parameter statistical models. Statistical computing is embedded in the module, allowing you to investigate multivariate probability distributions, simulate random data, and implement statistical methods.
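As a small taste of the linear regression model mentioned above, the simple (one-covariate) case has closed-form least-squares estimates built from sample means, variances and covariances. The data below are invented for illustration; a real analysis would use R or Python's statistical libraries.

```python
# Ordinary least squares for simple linear regression y = a + b*x:
# the slope is the sample covariance of x and y over the variance of x.

def ols(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope estimate
    a = my - b * mx        # intercept estimate
    return a, b

xs = [1, 2, 3, 4, 5]                    # invented covariate values
ys = [2.1, 3.9, 6.2, 8.0, 9.9]          # invented responses, roughly y = 2x
a, b = ols(xs, ys)
```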
Throughout your degree you gain a unique skill set based on your understanding of the interdisciplinary nature of the sciences. In this module we develop your self-awareness of these skills and show you how to make the most of graduate-level employment opportunities.
We introduce you to the University’s employability resources, including job search techniques and search engine use. We develop your skills in writing CVs and cover letters, and we draw on the expertise of employers and alumni. Your ability to use these resources effectively will enhance your employability and communication skills and help you to develop a short-term career plan.
Never has the collection of data been more widespread than it is now. The extraction of information from massive, often complex and messy, datasets brings many challenges to fields such as statistics, mathematics and computing.
Develop the skills and understanding to apply modern statistical and data-science tools to gain insight from contemporary data sets. By addressing challenges from a variety of applications, such as social science, public health, industry and environmental science, you will learn how to perform and present an exploratory data analysis, deploy statistical approaches to analyse data and draw conclusions, as well as developing judgement to critically evaluate the appropriateness of chosen methods for real-world challenges.
Building on your knowledge of vectors and matrices, this module explores the elegant framework of linear algebra, a powerful mathematical toolkit with remarkably diverse applications across statistical analysis, advanced algebra, graph theory, and machine learning.
You'll develop a comprehensive understanding of fundamental concepts, including vector spaces and subspaces, linear maps, linear independence, orthogonality, and the spectral decomposition theorem.
Through individual exploration, small-group collaboration, and computational exercises, you'll gain both theoretical insight and practical skills. The module emphasises how these abstract concepts translate into powerful problem-solving techniques across multiple disciplines, preparing you for advanced studies while developing your analytical reasoning abilities.
Year Three
Researching, writing and presenting are key skills for all Natural Sciences students. This module develops these skills and gives you the opportunity to produce an individual project on a chosen mathematical or statistical theme. You will receive support from an appropriate supervisor to design, research and deliver the project. You will learn how to format and structure professional scientific reports and papers, understand how to research them, discover how to typeset them using the specialist typesetting system LaTeX and find out how to correctly include citations and references. You will deploy established techniques of mathematical or statistical analysis to your chosen project, applying knowledge and skills from previous years, and you will communicate your findings to others by producing a report as well as a presentation.
Core for those students undertaking a project in the Single Maths pathway.
In this module we continue to develop your employability skills. We focus on your ability to communicate your scientific learning to reflect the interdisciplinary nature of your degree and empower you when it comes to job applications and interviews. This includes practice for assessment centres and associated tasks, such as psychometric and skills testing, and one-to-one or panel-based selection interviews.
Models of dynamical systems are fundamental to our understanding of the physical and natural world.
Explore a new class of model for the time evolution of a dynamical system and investigate Markov jump process models for real-world systems, such as the evolution of species populations in the wild and the spread of infectious diseases. You will learn how to simulate these processes and study methods for understanding their properties and behaviours. Unlike deterministic differential equation models, Markov jump processes are random, allowing for different behaviour every time they are simulated. You will discover how it is often possible to associate a jump process with a related differential equation approximation, and that this can provide important insights into the behaviour of the jump process and the original real-world system.
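A Markov jump process can be simulated exactly with a short loop: draw an exponential waiting time from the current total jump rate, then pick which jump happens. The sketch below does this for a simple birth-death population model; the rates and starting population are invented for illustration.

```python
import random

# Exact simulation of a birth-death Markov jump process: with n individuals,
# births occur at rate b*n and deaths at rate d*n.  Waiting times between
# jumps are exponential, which is what makes the process Markovian.

def simulate(n0, b, d, t_max, rng):
    t, n = 0.0, n0
    path = [(t, n)]
    while t < t_max and n > 0:             # n = 0 is absorbing (extinction)
        rate = (b + d) * n                 # total jump rate in state n
        t += rng.expovariate(rate)         # exponential waiting time
        if t >= t_max:
            break
        n += 1 if rng.random() < b / (b + d) else -1   # birth or death
        path.append((t, n))
    return path

rng = random.Random(1)                     # fixed seed so the run is repeatable
path = simulate(n0=10, b=1.0, d=1.0, t_max=5.0, rng=rng)
```

Rerunning with a different seed gives a different trajectory, which is exactly the randomness the module description contrasts with deterministic differential equation models.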
An introduction to a variety of methods that are useful for analysing environmental data, such as air temperatures, rainfall or wildfire locations. Spatial dependence is a key feature of many environmental datasets, and the Gaussian process will be introduced as a model for continuous spatial processes. You will learn about the properties of the Gaussian process and implement this model for spatial data analysis, before investigating methods for point-reference data, such as earthquake or wildfire locations.
You will also dip into natural hazard risk management, which seeks to mitigate the effects of events, such as flooding or storms, in a manner that is proportionate to the risk. You will learn basic concepts from extreme value theory, including the appropriate distributions for extremes, and how to use these as statistical models for estimating the probability of events more extreme than those in the dataset.
The study of graphs (mathematical objects used to model networks and pairwise relations between objects) is a cornerstone of discrete mathematics. Graphs can represent important real-world situations, and the study of algorithms for graph-theoretical problems has strong practical significance.
You will learn about structural and topological properties of graphs, including graph minors, planarity and colouring. We will introduce several theoretical tools, including matrices relating to graphs and the Tutte polynomial. We will also study fundamental algorithms for network exploration, routing and flows, with applications to the theory of connectivity and trees, considering implementation, proofs of correctness and efficiency of algorithms.
You will gain experience in following and constructing mathematical proofs, correctly and coherently using mathematical notation, and choosing and carrying out appropriate algorithms to solve problems. The module will enable you to develop an appreciation for a range of discrete mathematical techniques.
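As one small example of the network-exploration algorithms described above, breadth-first search visits a graph level by level and, as a by-product, computes shortest path lengths from a source in an unweighted graph. The graph below is invented for illustration.

```python
from collections import deque

# Breadth-first search: explore a graph outwards from a source vertex,
# recording the shortest distance (in edges) to every reachable vertex.

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:              # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {                                    # a small undirected example graph
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
dist = bfs_distances(adj, "a")
```

Proving that the first visit really does give the shortest distance is a typical proof-of-correctness exercise of the kind the module describes.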
Consider the key issues in the teaching and learning of mathematics. Develop an excellent foundation for a PGCE by engaging with educational literature and gain experience in writing academically.
Having studied mathematics for many years, you will be well-placed to reflect upon that experience and attempt to make sense of it in the light of theoretical frameworks developed by researchers in the field. Throughout this module, you will prepare to become a mathematics graduate who can contribute knowledge to future debates about the ways in which maths is treated within the education system.
AI is suddenly everywhere, and the methods for training and using AI tools are fundamentally mathematical and fascinating in their own right. By understanding what goes on ‘under the hood’, you will open up a plethora of exciting opportunities for both further study and employment.
We will introduce you to the deep learning architectures used in modern AI. You will investigate how different architectures work with different data types and tasks, and what the computationally specified architectures actually mean in a modelling sense.
However, deep neural nets need more than just an appropriate architecture; they need to be both trained and deployed. You’ll study the interesting maths at each of these stages: the most recent approaches to loss function minimisation, and the techniques to sequentially learn and adapt to new data and observations, a critical component of modern AI methods.
Statistical methods play a crucial role in health research. This module introduces you to the key study designs used in health investigations, such as randomised controlled trials and various types of observational study.
Issues of study design will be covered from both a practical and theoretical perspective, aiming to identify the most efficient design which adheres to ethical principles and can be carried out in a feasible amount of time, or using a feasible number of patients. Various approaches to controlling for confounding will be discussed, including both design and analysis-based methods. You will also explore different types of response data, including introducing time-to-event data and the resulting challenges presented by censoring.
Real-world studies and published articles will be used to illustrate the concepts, and reference will be made to the ICH guidelines for pharmaceutical research and STROBE guidelines for epidemiological studies.
Stochastic processes are fundamental to probability theory and statistics and appear in many places in both theory and practice. For example, they are used in finance to model stock prices and interest rates, in biology to model population dynamics and the spread of disease, and in physics to describe the motion of particles.
During this module, you will focus on the most basic stochastic processes and how they can be analysed, starting with the simple random walk. This process models how a gambler's fortune changes over time, and it raises the question of whether there are betting strategies that can guarantee a win. We will then turn to Markov processes, which are natural generalisations of the simple random walk and the most important class of stochastic processes. You will discover how to analyse Markov processes and how they are used to model queues and populations.
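The gambler's-fortune picture is easy to explore by simulation. For a fair game, the classical ruin probability when starting with k pounds and playing until broke or reaching a target N is 1 - k/N, and a Monte Carlo estimate should land close to it. The stakes and parameters below are invented for illustration.

```python
import random

# Simple random walk as a gambler's fortune: bet 1 pound on a fair coin,
# stopping at ruin (0) or at a target N.  Estimate the ruin probability
# by simulation and compare it with the classical answer 1 - k/N.

def ruined(k, N, rng):
    while 0 < k < N:
        k += 1 if rng.random() < 0.5 else -1
    return k == 0

rng = random.Random(0)                     # fixed seed for repeatability
trials = 5000
k, N = 3, 10
estimate = sum(ruined(k, N, rng) for _ in range(trials)) / trials
theory = 1 - k / N                         # = 0.7 for these parameters
```

No betting strategy can improve on this for a fair game, which is one of the facts the martingale theory behind the module makes precise.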
Statistics and machine learning share the goal of extracting patterns or trends from very large and complex datasets. These patterns are used to forecast or predict future behaviour or interpolate missing information. Learn about the similarities and differences between statistical inference and machine learning algorithms for supervised learning.
You will explore the class of generalised linear models, which is one of the most frequently used classes of supervised learning model. You will learn how to implement these models, how to interpret their output and how to check whether the model is an accurate representation of your dataset. Lastly, you will have the opportunity to see how these models can be extended to the case of the ‘large p, small n’ question. This phrase refers to the situation in which there are many more variables than there are samples, something which is now commonplace.
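Logistic regression, one member of the class of generalised linear models, can be sketched from first principles: gradient ascent on the log-likelihood of binary responses. This is purely illustrative, with an invented toy dataset; a real analysis would use a statistical package such as R's glm().

```python
import math

# Logistic regression (a generalised linear model for binary responses),
# fitted by gradient ascent on the log-likelihood.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def fit_logistic(data, lr=0.1, steps=3000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        # Gradient of the Bernoulli log-likelihood in (w, b).
        gw = sum((y - sigmoid(w * x + b)) * x for x, y in data)
        gb = sum(y - sigmoid(w * x + b) for x, y in data)
        w += lr * gw
        b += lr * gb
    return w, b

# Invented data: small x labelled 0, large x labelled 1.
data = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1)]
w, b = fit_logistic(data)
```

The fitted model should classify x = 1 as class 0 and x = 4 as class 1, with the decision boundary sitting between the two groups.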
Do you want to entertain and inspire children and the public in STEM? With an introduction to teaching as well as wider engagement opportunities, learn how to understand your audience and how to engage and enliven them. You will also learn how to balance this with educating them and presenting science in a way that’s appropriate to your audience. We include an introduction to pedagogy, how to inspire school pupils and how to use traditional and new media for science communication.
You will deliver an activity of your choosing to an audience. This could be a lesson at school, engaging with children at a large outreach event or delivering a public lecture. You will also reflect on your activity to discuss what you’ve learnt and what changes you would make, delivering this reflection as a video, podcast or article.
Year Four (MSci Only)
A highlight of your degree will be a significant individual project undertaken with the guidance of a supervisor. Prior to starting your final year, you will be given a list of potential projects according to the current research interests of our academic staff, and, based on your preferences, you will be allocated a supervisor.
Once the year commences, you will begin to explore an area of mathematics or statistics that is of particular interest to you, while receiving support through regular meetings with your supervisor. You will find relevant resources and steer the direction of your project, which will run throughout the academic year. By the end you will be able to showcase your findings, both in written and oral form.
Your dissertation represents the culmination of years of mathematical study and may even provide an entry point for a PhD, if you are interested in further study.
Modern data collection is almost always on a large scale, which allows us to study complex dependencies and interactions. Nonparametric statistical methods can be helpful in modelling and understanding such data as they allow for models that adapt more flexibly to the data. For example, rather than assuming a parametric (e.g. Gaussian) distribution to model a population, we can utilise kernel-based methods to approximate the density function based on the observed data, allowing for the use of weaker assumptions.
This module will introduce you to nonparametric methods used in both density estimation and regression settings, the former via kernel methods and the latter by extending the generalised linear modelling paradigm through the use of spline functions, enabling us to investigate non-linear relationships between variables. Overall, this module equips you with a range of powerful models that can be used (and are demonstrated) in a variety of real-world applications.
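A kernel density estimate is easy to write down: each observation contributes a small Gaussian bump, and the bumps average to a smooth density estimate. The data and bandwidth below are invented for illustration; in practice the bandwidth would be chosen by a principled rule rather than by hand.

```python
import math

# Kernel density estimation with a Gaussian kernel: the estimate at x is the
# average of normal densities centred at the observations, with bandwidth h.

def kde(x, data, h):
    n = len(data)
    return sum(
        math.exp(-((x - xi) / h) ** 2 / 2) / (h * math.sqrt(2 * math.pi))
        for xi in data
    ) / n

data = [1.2, 1.9, 2.1, 2.4, 3.0, 5.8, 6.1, 6.4]   # invented observations
density_at_2 = kde(2.0, data, h=0.5)               # near a cluster: high
density_at_9 = kde(9.0, data, h=0.5)               # far from the data: low
```

Because each bump integrates to one, the estimate itself integrates to one, so it is a genuine density without any parametric (e.g. Gaussian) assumption on the population.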
Clinical trials are planned experiments on human beings designed to assess the relative benefits of one or more forms of treatment. For instance, to study whether aspirin reduces the incidence of pregnancy-induced hypertension or assess whether a new immunosuppressive drug improves the survival rate of transplant recipients. This module combines the study of technical methodology with discussion of wider research issues.
You will learn about the definition and estimation of treatment effects, before progressing to cross-over trials, sample size determination, and equivalence trials. You will explore flexible trial designs that allow modifications to key aspects of the study based on interim data during an ongoing trial. You will also touch on topics such as meta-analysis and accommodating confounding in the design stage.
Throughout, you will develop the ability to recognise and use principles of good study design and improve your skills in the analysis and interpretation of study results.
Contemporary statisticians work with large and complex datasets, which call for large and complex models. Effective implementation of such models is only possible with modern computing hardware, good programming skills, and computational algorithms.
Learn a range of techniques and associated algorithms relevant to statistics and AI, and enhance your R and Python programming abilities through the implementation of these algorithms. Develop your competence in constructing computer programs by combining classical statistical programming with appropriate use of large language AI models. You will implement numerical optimisation techniques, such as gradient descent, and learn to select the most appropriate for a given statistical inference problem. You’ll also delve into the toolkit of algorithms used for advanced statistical inference, such as the bootstrap and Markov chain Monte Carlo, while gaining proficiency in their implementation and use.
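The bootstrap mentioned above is a good example of a computational inference algorithm: resample the data with replacement many times, recompute the statistic on each resample, and read a confidence interval off the spread of the results. The dataset below is invented for illustration.

```python
import random

# Percentile bootstrap confidence interval for the mean: the spread of the
# resampled means quantifies the uncertainty in the original sample mean.

def bootstrap_ci(data, n_boot, alpha, rng):
    means = sorted(
        sum(rng.choices(data, k=len(data))) / len(data) for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]            # lower percentile
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]  # upper percentile
    return lo, hi

rng = random.Random(42)                            # fixed seed for repeatability
data = [4.1, 5.3, 3.8, 6.0, 5.1, 4.7, 5.5, 4.9, 5.0, 4.4]
lo, hi = bootstrap_ci(data, n_boot=2000, alpha=0.05, rng=rng)
```

The appeal is that the same recipe works for statistics whose sampling distribution has no convenient closed form, which is where purely analytical approaches run out.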
Since the introduction of ChatGPT, large language models are everywhere. Image classification is dominated by artificial intelligence, with widespread applications in medicine and other fields. AI even dominates board games and computer games such as Go, Dota 2 and StarCraft. More impactful applications, such as self-driving cars, are just around the corner.
You will learn how such models are constructed, how they work for both prediction and classification tasks, and how to train these models efficiently from data. You’ll be introduced to the fundamental mathematical concepts for neural network models, including network architectures, activation functions, loss functions, and training approaches such as stochastic gradient descent. You will practise implementing such network models from scratch in simple settings, before using powerful packages to train more sophisticated models. Following this experience, you will then investigate modern network architectures and training approaches for tasks such as image classification and natural language processing.
This module will introduce both non-communicable disease epidemiology and infectious disease epidemiology, starting with the fundamental concepts of measures of disease occurrence and risk and likelihood inference for epidemiological parameters, extending to mathematical modelling of infectious diseases. Along the way, you will cover epidemiological study design and analysis, causal inference, disease screening, and infectious disease models and how they can be fitted to data, including estimation of reproduction numbers of infectious diseases.
Statistical models, based on probability distributions or stochastic processes, represent a simplified version of the real world. Once trained on suitable data, a statistical model can be used to identify patterns or make predictions.
From the fundamentals of statistical inference, such as estimates, estimators and uncertainty, you will explore frequentist and Bayesian estimation. You will construct likelihood functions and learn how to use these to find parameter estimators. Using the asymptotic properties of these estimators, you will quantify the uncertainty in their estimates and conduct principled model selection to obtain a deeper insight into their data. Bayesian estimation starts from the fundamentals of prior and posterior distributions, before tackling more complex concepts such as Bayesian point estimates and predictive distributions. You will conclude the module with an introduction to Monte Carlo estimation.
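Likelihood-based estimation can be illustrated in a few lines. For exponentially distributed data the log-likelihood has a known maximiser, the reciprocal of the sample mean, and a naive grid search over the log-likelihood recovers the same answer. The data below are invented for illustration.

```python
import math

# Maximum likelihood for exponential data: the log-likelihood is
# l(lam) = n*log(lam) - lam*sum(x), maximised at lam_hat = n / sum(x).

def log_likelihood(lam, data):
    return len(data) * math.log(lam) - lam * sum(data)

data = [0.8, 1.1, 0.3, 2.2, 0.6, 1.4, 0.9, 0.7]   # invented observations
closed_form = len(data) / sum(data)                # analytical maximiser

# Crude numerical check: search a grid of candidate rates.
grid = [0.01 * k for k in range(1, 1000)]
numeric = max(grid, key=lambda lam: log_likelihood(lam, data))
```

For models without such a closed form, the same maximisation is done numerically, and the asymptotic theory covered in the module turns the curvature of this log-likelihood into uncertainty statements about the estimate.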
Many of the more specialist statistical models and procedures are built around a latent, or hidden, stochastic process. This additional layer of structure allows the model to provide a more flexible and realistic representation of the real world, leading to more accurate inference. Examples of stochastic mechanisms used include Gaussian processes, which are used in spatial statistics, system emulation and modern experimental design, and time-series models, such as the dynamic linear model.
You will gain an understanding of a selection of hidden-process models and the situations in which they are applied, such as in environmental statistics, engineering and health. You will gain an appreciation for why new methods are often required for inference on these models and delve into these new techniques. Models will be developed around example datasets and applications, which you will explore using the new methods.
We introduce an array of techniques often referred to as ‘machine learning’ methods. You will study these methods in significant detail, learning to apply them in practice and gaining an understanding of their different motivations, objectives, and implementation (via optimisation).
This module is vital if you are an aspiring data scientist, as it will give you a variety of baseline methods which you can deploy on a range of supervised (i.e. classification/prediction) or unsupervised (i.e. clustering/exploration) tasks. By studying the mathematical foundations of these techniques alongside their algorithmic implementation, you will be well placed to generate insights from these methods in practice. Importantly, you will gain an awareness of their limitations, be able to critically reflect on their performance, and suggest appropriate alternatives or extensions for specialist applications.
Stochastic calculus is a theory that enables the calculation of integrals with respect to stochastic processes. You will begin by studying discrete-time stochastic processes, defining key concepts such as martingales and stopping times. This will then lead to the exploration of continuous-time processes, in particular, Brownian motion.
You’ll learn how to derive basic properties of Brownian motion and explore integration with respect to it, while examining the derivation of Itô's formula and how to apply this to Brownian motion.
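In its simplest form, for a twice continuously differentiable function f applied to Brownian motion W_t, Itô's formula reads:

```latex
\[
  f(W_t) = f(W_0) + \int_0^t f'(W_s)\, \mathrm{d}W_s
         + \tfrac{1}{2} \int_0^t f''(W_s)\, \mathrm{d}s .
\]
```

The second integral is the correction term absent from ordinary calculus: it arises because Brownian motion accumulates quadratic variation, and it is what makes stochastic calculus genuinely different from the classical chain rule.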
Over the course of the module, you will be able to justify and critique the use of stochastic models for real-world applications and learn how to use the stochastic calculus framework to formulate and solve problems involving uncertainty – a skill that underpins financial mathematics.
Survival analysis involves time-to-event data, for example the time to recovery when given a new treatment. Such data is often accompanied by censoring, a form of missing data that occurs when the study ends before all subjects experience the event. Longitudinal analysis involves repeated measurements of a response variable made on the same set of individuals across multiple measurement times. These studies are used in public health to identify risk factors for common health conditions.
In this module, you will learn about risk and failure-time models for survival data. You will also gain an understanding of generalised linear mixed-effects models (GLMMs) and be able to apply these to common types of longitudinal or hierarchical data. All analyses will be performed using the software R.