We offer a range of PhDs funded by different sources, such as research councils, industries or charities.
To apply for a funded PhD please read the advertised project information carefully as requirements will vary between funders. The project information will include details of funding eligibility, application deadline dates and links to application forms. Only applicants who have a relevant background and meet the funding criteria can be considered.
1. Current PhD Opportunities
Data-Driven Hybrid Motion–Force Control for Robust Human–Manipulator Interaction
Lancaster University, in collaboration with the United Kingdom National Nuclear Laboratory (UKNNL)
We invite applications for a fully funded PhD studentship at Lancaster University’s School of Engineering, in partnership with the United Kingdom National Nuclear Laboratory (UKNNL). This exciting project will develop novel data-driven, robust, and adaptive control methods for human–robot interaction and teleoperation, with direct applications in nuclear robotics, hazardous-environment manipulation, and beyond.
Project Overview
Teleoperation is a critical enabler for safe and efficient operation in hazardous environments such as nuclear decommissioning. However, current industrial solutions suffer from limitations under uncertainty, time delays, and noisy sensing.
This PhD project will design and experimentally validate a hybrid motion–force control framework that ensures precise end-effector positioning while maintaining robust and adaptive force regulation under real-world conditions (a minimal control-law sketch follows the list below). Research will include:
Development of nonlinear robust adaptive controllers and disturbance observers.
Design of bilateral teleoperation schemes that enhance transparency and stability under communication delays.
Integration of data-driven approaches for force estimation and safety.
Experimental validation on industrial robotic platforms at the UKNNL Hot Robotics Facility.
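To give a flavour of the kind of control law the project builds on, the sketch below implements a classical selection-matrix hybrid position/force controller in Python. The gains, selection matrix, and Jacobian are illustrative placeholders, not the project's actual design, which will involve nonlinear robust adaptive elements and disturbance observers.

```python
import numpy as np

# Illustrative selection-matrix hybrid position/force control.
# Axes with S[i, i] = 1 are motion-controlled; axes with 0 are
# force-controlled. All values are made-up placeholders.
S = np.diag([1.0, 1.0, 0.0])   # regulate force along the z-axis only
Kp = 50.0 * np.eye(3)          # position-error gain
Kf = 0.5 * np.eye(3)           # force-error gain

def hybrid_control(x, x_des, f, f_des, jacobian):
    """Joint torques combining motion and force regulation.

    x, x_des : measured / desired end-effector position, shape (3,)
    f, f_des : measured / desired contact force, shape (3,)
    jacobian : task Jacobian of the manipulator, shape (3, n)
    """
    motion_cmd = S @ Kp @ (np.asarray(x_des) - np.asarray(x))
    force_cmd = (np.eye(3) - S) @ Kf @ (np.asarray(f_des) - np.asarray(f))
    return jacobian.T @ (motion_cmd + force_cmd)   # map task wrench to torques

# Example call with a made-up 3-DOF Jacobian: drive x/y position to
# target while regulating the z-axis contact force towards 5 N.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
tau = hybrid_control(x=[0.38, 0.1, 0.30], x_des=[0.40, 0.1, 0.30],
                     f=[0.0, 0.0, 4.0], f_des=[0.0, 0.0, 5.0], jacobian=J)
print(tau)
```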
The project provides the opportunity to work on cutting-edge robotics challenges with significant industrial impact, supported by state-of-the-art facilities at both Lancaster University and UKNNL.
Supervisory Team
Dr Allahyar Montazeri (Lead Supervisor, School of Engineering, Lancaster University; Data Science Institute Member)
Professor Plamen Angelov (Co-Supervisor, School of Computing and Communications, Lancaster University; Data Science Institute Member)
Training and Development
The successful candidate will receive a tailored training programme including:
Hands-on training with ROS2, MATLAB/Simulink, and CoppeliaSim.
Access to world-class robotics laboratories and facilities.
Opportunities to engage with national and international conferences, workshops, and training events.
Insight into the nuclear sector through industrial collaboration with UKNNL.
Funding
Duration: 4 years (3.5 years EPSRC Doctoral Landscape Award + 0.5 years UKNNL extension)
Coverage: UKRI minimum stipend, tuition fees for Home students, and a research training support grant.
Additional support for consumables, maintenance, and travel.
Eligibility
Open to UK Home students only, due to clearance requirements for UKNNL facilities.
Applicants should have (or expect to obtain) a First or Upper Second-Class degree (or equivalent) in Engineering, Control, Robotics, Computer Science, or a related discipline.
Strong mathematical and programming skills (MATLAB, Python, or C++) are highly desirable.
Application Process
Applicants should submit:
A full CV.
A one-page cover letter outlining their motivation and suitability for the project.
Reference letters from two academics commenting on the candidate's abilities.
Applications will be considered on a rolling basis until the position is filled, with an expected start date of January 2025.
Closing Date – 1st December
For informal enquiries, please contact Dr Allahyar Montazeri
Details
Start Date: As soon as possible
Deadline for application: Open (it is recommended you apply as soon as possible)
Interview: Rolling
Description
If you’re interested in protecting AI from rapidly emerging cyber threats and securing a technology that will define the coming decades, this PhD studentship is for you.
We are seeking candidates to join our AI security group at Lancaster University and become part of this rapidly growing research field.
The adoption of Artificial Intelligence (AI) and prominent technologies such as Generative AI, LLMs, and Agentic AI systems is rapidly accelerating across both research and industry.
While there is considerable research activity on the application of AI for security, there has been less attention paid to the security of AI itself. AI security focuses on protecting AI systems against a wide range of cyber attacks, spanning prompt injection, data leakage, jailbreaking, guardrail bypasses, model backdoors, and more. The emergence of such AI risks has drawn the attention of every nation and major business; however, existing cyber security tools and methods are ineffective for AI systems due to the intrinsically stochastic, complex, and opaque nature of neural networks. To date, how to secure today’s and tomorrow’s AI models and systems remains an unsolved problem.
This project will provide you with the skills and training necessary to become a researcher specializing in AI security – an area that is increasingly sought after in academia and industry.
Research Areas
Topics of interest you could pursue include:
Discover new types of cyber attacks / security vulnerabilities in AI and GenAI
Create defence systems and countermeasures against AI cyber attacks
Design run-time detection systems for prompt injection and jailbreaking (see the sketch after this list)
Explore different cyber attack modalities (e.g. malicious instructions in images or audio)
Build and develop cutting-edge LLM guardrails and firewalls
Investigate hidden security characteristics within neural networks
Identify ShadowAI – malicious AI systems hidden within an organization
Uncover backdoor attacks and model hijacking within ML artefacts
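To make the run-time guardrail idea concrete, here is a deliberately simple input screen in Python. Production guardrails use learned classifiers over many signals; the regular-expression patterns and blocking policy below are hypothetical examples only, not a vetted rule set.

```python
import re

# Minimal illustrative guardrail: screen user input for known
# prompt-injection phrasings before it reaches the model.
# These patterns are made-up examples for demonstration.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now\b",
    r"reveal (your|the) (system|hidden) prompt",
]

def screen_input(user_text: str) -> bool:
    """Return True if the input should be blocked."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

if screen_input("Please ignore previous instructions and ..."):
    print("Blocked: possible prompt injection")  # e.g. refuse or escalate
```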
What We Offer
A 3.5-year fully funded PhD studentship (including both tuition and stipend).
Access to a large-scale GPU data centre entirely dedicated to our research lab.
Comprehensive training in cutting-edge AI technology and cyber security techniques.
Employment opportunities at Mindgard (https://mindgard.ai/), an award-winning AI security company founded at our lab, and now based in the heart of London.
Collaboration opportunities with Nvidia, Mindgard, GCHQ’s National Cyber Security Centre, and NetSPI, amongst others.
Opportunity to travel to conferences internationally to present your research.
Our Research Lab
We are among the few labs globally specializing in AI security. You will be part of a new cohort of PhD students joining an established team of scientists and engineers. Founded in 2016, the research lab led by Professor Peter Garraghan is internationally renowned in AI systems and security: it has published over 70 research papers, secured over £14M in external grant funding, spun out Mindgard, and seen all of its research students to date secure positions in academia or industry R&D labs upon graduation.
About You
We highly value people who are kind, curious and believe in making a difference.
A good background in Computer Science, ideally a BSc in Computer Science (or equivalent) with a 2:1 classification or above.
Interest in Artificial Intelligence, Cyber Security, Distributed Systems, or a combination of the above.
Highly motivated, and capable of working both independently and as part of a team.
Good communication, technical knowledge, and writing skills.
Get in Touch
These positions are available now, so candidates are strongly recommended to apply as early as possible.
For informal enquiries about these positions, please contact and share your CV with Professor Peter Garraghan. To apply, please visit our school PhD opportunities page, which includes guidance on submission, and a link to the submission system.
Details
Academic Requirements: First-class or 2.1 (Hons) degree, or master’s degree (or equivalent) in an appropriate subject
Recently, we have seen a transformative change in the use of artificial intelligence (AI) technology in many aspects of our lives. In our personal lives, we have access to services and tools that make use of AI in creative and useful ways and – similarly – in a professional setting, AI is being used to enable major changes to the way business is conducted. Some propose that we are at the beginning of a journey in which AI will fundamentally change the way our societies and businesses function.
The concept of AI has been around for several decades and can take many forms. A recent US National Institute for Standards and Technology (NIST) document (NIST AI 100-2e2023), which examines AI attacks, defines two main classes: (i) predictive; and (ii) generative AI. The former is concerned with predicting classes of data (e.g. for anomaly detection); whereas the latter is used to generate content, often using large language models (LLMs). In general, this is not a new technology. However, the recent rapid acceleration of the use of AI has emerged because of new generative models and abundant access to task-specific compute capabilities.
Inspired by this trend, the nuclear sector is exploring the use of AI and its capabilities to support a variety of functions. For example, it can be used to enable efficiencies in business process execution, supporting staff with a variety of decision-making tasks using AI-enabled assistants. Moreover, AI can be used to support other functions in a nuclear setting, such as those related to physical security, materials inspection, and automated and autonomous robotics and control. A comprehensive review of the uses of AI in the nuclear sector has been produced by the International Atomic Energy Agency (IAEA)[1].
An emerging area of application of AI is to support efficient, safe and secure use of operational technology (OT). This can take many forms, including using machine learning models to optimize control strategies without the need to develop mathematical models of a target system, supporting predictive maintenance to ensure maintenance activities are realized in a cost-effective and safe manner, enabling autonomous operations, and using various forms of machine learning to predict and classify anomalous system behaviour. OT systems typically support business and – in some cases – safety critical functions; therefore, the correct operation of OT that incorporates AI is of the utmost importance.
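As a concrete (and heavily simplified) example of the anomaly-classification use case above, the sketch below fits an off-the-shelf isolation forest to synthetic OT sensor readings and flags drifted readings as anomalous. The sensor values, feature choice, and contamination rate are invented for the demonstration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic OT sensor data: columns are, say, temperature and flow rate.
# Both the "normal" and "faulty" distributions are made up for the demo.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(500, 2))
faulty = rng.normal(loc=[70.0, 0.6], scale=[2.0, 0.05], size=(5, 2))

# Train on normal operation only; predict() returns -1 for anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(faulty))   # expected: mostly -1 (flagged as anomalous)
```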
Nuclear is the most heavily regulated sector in the world. This is because of the uniquely severe consequences of the failure of functions that bear on nuclear safety and security. Failures can result in major environmental disasters and loss of life. In this setting, the use of AI should be approached in a consequence- and risk-informed manner. An important way to manage risks that stem from errant AI behaviour is to realize so-called guardrails. Guardrails take many forms and can be described in this context as socio-technical measures that protect the function of systems from the errant behaviour of artificial intelligence. Example guardrails include policies that mandate that humans remain integral to AI-supported decision making, and physical controls (safety interlocks, etc.) that prevent an AI-supported system from causing an accident. It is worth noting that guardrails will likely play an important role in gaining regulatory approval for the use of AI to support safety-relevant functions in nuclear.
Whilst chosen guardrails may be suitable at the genesis of a system, there are potential longitudinal socio-technical effects that might degrade their performance. These effects emerge because of different forms of “drift” associated with a system and its use. Example types of drift include organizational change (e.g. changes in policy), shifts in the criticality of functions and associated systems, changes in regulatory assurance requirements, and generational shifts in staff experience and knowledge, e.g. caused by AI-supported autonomy. These changes may be slow and occur over extended periods, making them difficult to detect. The result is a failure or sub-optimal use of guardrails to effectively mitigate errant AI behaviour.
The aim of this PhD proposal is to investigate a framework that supports risk-informed decisions about the choice of guardrails for ensuring the safe and secure operation of nuclear functions that include systems with an AI component. Specifically, the project will focus on case studies that incorporate AI for improving the security and efficiency of OT in the nuclear sector. This framework should consider the characteristics of the guardrails (e.g. their cost, flexibility, scrutability, and effectiveness) along with how they are affected by longitudinal drift. The intention is to take a systems view, in line with work by Leveson et al.[2], who argue that traditional models of failure causality (the fault, error, failure chain) are inadequate for understanding the causes of failures; rather, a richer view of the system in its context, which includes changes in the way systems are operated over time, is better suited to this task.
Supervisor: Professor Paul Smith, School of Computing and Communications, Lancaster University
This is a 42-month funded project, including fees and an enhanced stipend.
Entry Requirements
Applicants must have a Master’s degree and/or a minimum of a 2:1 in their bachelor’s degree in computer science or a related field.
Applicants must be resident in the UK during the period of study; they may need to travel to collect data during their studies and will need to obtain security clearance. It is expected the primary fieldwork site will be in Cumbria.
How to Apply
Applications should be submitted via Phillip Satchell, the postgraduate coordinator in the School of Computing and Communications.
You must provide an up-to-date CV and two references. We also request a written statement of purpose, explaining why you want to undertake this project and why you have the requisite skills. We also recommend submitting a further piece of research or assignment work, a dissertation section, or a publication.
Applicants can contact Professor Paul Smith to discuss their applications.
Optimising recovery and return to combat readiness is key to ensuring military personnel are prepared for deployment in the UK and overseas. 3D motion capture is the “gold standard” for assessing the gait of military personnel, but it is not logistically feasible, given the number of personnel requiring assessment; the time taken to capture, digitise, and analyse the data; the cost; and the expertise needed to interpret the results.
AI-driven mobile phone applications that track gait have the potential to revolutionise how we assess gait and diagnose gait disorders, not only in military personnel but in all clinical populations. Their ease of use, speed of feedback, and low cost make them an ideal tool for clinical practice, particularly in a rehabilitation setting.
We have developed a 3D human pose estimation model capable of capturing and measuring gait, but it has not been validated in a clinical population or compared to the “gold standard” of 3D motion capture. Our aim, in collaboration with the Ministry of Defence, is to adapt and refine our existing 3D human pose estimation model so that it can automatically detect and diagnose gait disorders in military personnel with overuse injuries.
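By way of illustration, one of the simplest gait measures such a pose model can yield is a joint angle computed from three estimated 3D keypoints. The sketch below computes a knee-flexion proxy in Python; the keypoint coordinates are made-up values, and the real pipeline would operate on per-frame output from the pose model.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Hypothetical 3D keypoints (metres) for one frame of model output.
hip, knee, ankle = [0.0, 1.0, 0.0], [0.0, 0.55, 0.05], [0.0, 0.1, 0.0]
flexion = 180.0 - joint_angle(hip, knee, ankle)   # 0 deg = fully straight leg
print(f"knee flexion proxy: {flexion:.1f} deg")
```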
We have four key objectives:
Objective 1: To adapt our existing model to capture and analyse biomechanical data from military personnel with and without overuse injuries
Objective 2: To determine the validity of the biomechanical data obtained from our model compared to the “gold standard” 3D motion capture in military personnel with and without overuse injuries
Objective 3: To identify, test, and validate possible solutions for the model to diagnose gait disorders in military personnel with overuse injuries
Objective 4: To test real-time execution of digitisation, application, and data extraction when using the model to diagnose gait disorders in military personnel with overuse injuries
We are looking for an enthusiastic, proactive and highly motivated PhD candidate.
Experience in 3D motion capture, machine learning, AI, or data analysis is strongly desirable. This project is in collaboration with the Ministry of Defence, and some travel will be expected between Lancaster University (the host institution) and the Defence Medical Rehabilitation Centre Stanford Hall for meetings, recruitment of personnel, and data collection.
Essential:
2:1 or 1st class undergraduate degree (or equivalent) in sport science, biomechanics, computer vision or computer science related disciplines
Strongly desirable:
A Merit or Distinction postgraduate degree (or equivalent experience) in sport science, biomechanics, computer vision or computer science related disciplines
Experience in collecting data from participants in research studies
Demonstrable expertise in quantitative research methods
Experience in machine learning and/or AI
Experience in 3D motion capture and/or biomechanical assessment of gait
Experience of presenting at international conferences and/or publishing in peer-reviewed journals
Funding
The successful applicant will receive a stipend towards living expenses at the UKRI rate (currently £20,780 per year) and £1,000 per year to support training and development needs (e.g. attending courses or conferences).
Supervisory team
Dr Hannah Jarvis (Lancaster University) has expertise in gait assessment and has led previous research projects with the Ministry of Defence. Co-supervisors are Professor Jun Liu (Lancaster University), with expertise in computer vision and machine learning, and Professor Neil Reeves (Lancaster University), with expertise in digital health technologies and cyber security. You will also be supervised by colleagues from DMRC Stanford Hall.
In your application, please include:
A cover letter detailing why you are the most appropriate person for the position
2. How to Apply
You will need to submit an application through the University's online application system. Please follow the University's guidance regarding the required documentation.
Please make sure to include a CV (mandatory, maximum of two pages) including your previous degrees and graduation grades, as well as any relevant skills. Where it applies, also include awards of excellence, publications, and links to code releases, such as through GitHub.
Please follow all of the requirements. Not adhering to them may delay the processing of your application or result in immediate rejection. The preferred format for all supporting documents is PDF.
2.1. Your Research Proposal
Please note that even if you are applying for a funded PhD position, you will need to develop a proposal.
At the top of the first page of the Research Proposal, please include the following information:
Mandatory
A clear indication of the SCC research group(s) you want to work with.
A list of two or three works that are similar to your proposal. This list is in addition to any other references you may wish to include.
Optional
The names of the SCC academic(s) you want to work with. Please also indicate if you would like us to consider your application if your preferred supervision team is not possible.
2.2. Your Personal Statement
A personal statement is mandatory and should be a maximum of one page. The document should explain your motivation to work on your chosen project and a little about your background.
Other methods of applying for a PhD
Studying for a research degree is a highly rewarding and challenging process. You'll work to become a leading expert in your topic area, with regular contact and close individual supervision from your supervisor.
If you have your own research idea, we can help you to develop it. To begin this process you will need to find a PhD Supervisor from one of our research groups, whose research interests align with your own.
You can also apply for a PhD from one of the Doctoral Training Centres and Partnerships that work with the School of Computing and Communications. Details of each of the Training Centres are provided here.