Details
Academic Requirements: First-class or 2:1 (Hons) degree, or a master’s degree (or equivalent) in an appropriate subject
Recently, we have seen a transformative change in the use of artificial intelligence (AI) technology in many aspects of our lives. In our personal lives, we have access to services and tools that use AI in creative and useful ways; similarly, in professional settings, AI is being used to enable major changes to the way business is conducted. Some propose that we are at the beginning of a journey in which AI will fundamentally change the way our societies and businesses function.
The concept of AI has been around for several decades and can take many forms. A recent US National Institute of Standards and Technology (NIST) document on attacks against AI (NIST AI 100-2e2023) defines two main classes: (i) predictive AI, which is concerned with predicting classes of data (e.g. for anomaly detection); and (ii) generative AI, which is used to generate content, often using large language models (LLMs). In general, AI is not a new technology; its recent rapid acceleration has emerged because of new generative models and abundant access to task-specific compute capabilities.
Inspired by this trend, the nuclear sector is exploring the use of AI to support a variety of functions. For example, it can be used to enable efficiencies in business process execution, supporting staff with a variety of decision-making tasks using AI-enabled assistants. Moreover, AI can support other functions in a nuclear setting, such as those related to physical security, materials inspection, and automated and autonomous robotics and control. A comprehensive review of the uses of AI in the nuclear sector has been produced by the International Atomic Energy Agency (IAEA)[1].
An emerging area of application of AI is to support the efficient, safe and secure use of operational technology (OT). This can take many forms, including using machine learning models to optimize control strategies without the need to develop mathematical models of a target system, supporting predictive maintenance so that maintenance activities are carried out in a cost-effective and safe manner, enabling autonomous operations, and using various forms of machine learning to predict and classify anomalous system behaviour. OT systems typically support business and, in some cases, safety-critical functions; therefore, the correct operation of OT that incorporates AI is of the utmost importance.
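To make the anomaly-detection use case concrete, the following is a minimal sketch in Python using synthetic sensor readings and scikit-learn’s IsolationForest. The data, feature choices and thresholds are illustrative assumptions only, not part of the project specification; a real deployment would use plant historian data and domain-informed features.

```python
# Illustrative only: flag anomalous OT sensor readings with an Isolation Forest.
# All values here are synthetic assumptions made for this sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated normal operation: temperature (deg C) and pressure (bar) readings.
normal = rng.normal(loc=[300.0, 70.0], scale=[5.0, 1.5], size=(1000, 2))

# Fit the detector on historical "known good" operation.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal)

# Score new readings: 1 marks normal behaviour, -1 a suspected anomaly.
new_readings = np.array([[302.0, 70.5],    # plausible reading
                         [340.0, 85.0]])   # out of family
print(detector.predict(new_readings))      # e.g. [ 1 -1]
```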
The nuclear sector is among the most heavily regulated in the world, because failures of its safety and security functions can have uniquely severe consequences, including major environmental disasters and loss of life. In this setting, the use of AI should be approached in a consequence- and risk-informed manner. An important way to manage risks that stem from errant AI behaviour is to implement so-called guardrails. Guardrails take many forms and, in this context, can be described as socio-technical measures that protect the function of systems from the errant behaviour of artificial intelligence. Example guardrails include policies mandating that humans remain integral to AI-supported decision making, and physical controls (e.g. safety interlocks) that prevent an AI-supported system from causing an accident. It is worth noting that guardrails will likely play an important role in gaining regulatory approval for the use of AI to support safety-relevant functions in nuclear.
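As a hypothetical illustration of what such guardrails can look like when expressed in software, the sketch below combines a hard bounds check (standing in for an engineered interlock) with a human-approval policy around an AI-suggested control setpoint. All names and limits are invented for this example; real interlocks would be independent, engineered systems rather than application code.

```python
# Illustrative only: a simple guardrail wrapping an AI-suggested control action.
# SAFE_MIN, SAFE_MAX and the approval flow are hypothetical assumptions.
SAFE_MIN, SAFE_MAX = 0.0, 100.0   # engineered valve-position limits (%)

def apply_guardrails(ai_setpoint: float, approved_by_operator: bool) -> float:
    """Return a setpoint that respects hard limits and human oversight."""
    # Interlock-style guardrail: never pass a value outside engineered bounds.
    clamped = min(max(ai_setpoint, SAFE_MIN), SAFE_MAX)

    # Policy-style guardrail: out-of-bounds AI suggestions need a human in the loop.
    if clamped != ai_setpoint and not approved_by_operator:
        raise PermissionError("Out-of-bounds AI setpoint requires operator review")
    return clamped

print(apply_guardrails(42.0, approved_by_operator=False))  # 42.0
```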
Whilst chosen guardrails may be suitable at the genesis of a system, there are potential longitudinal socio-technical effects that might degrade their performance. These effects emerge because of different forms of “drift” associated with a system and its use. Example types of drift include organizational change (e.g. changes in policy), shifts in the criticality of functions and associated systems, changes in regulatory assurance requirements, and generational shifts in staff experience and knowledge, e.g. caused by AI-supported autonomy. These changes may be slow and occur over extended periods, making them difficult to detect. The result is that guardrails fail, or perform sub-optimally, in mitigating errant AI behaviour.
The aim of this PhD project is to investigate a framework that supports risk-informed decision making about the choice of guardrails for ensuring the safe and secure operation of nuclear functions that include systems with an AI component. Specifically, the project will focus on case studies that incorporate AI for improving the security and efficiency of OT in the nuclear sector. The framework should consider the characteristics of the guardrails (e.g. their cost, flexibility, scrutability, and effectiveness) along with how they are affected by longitudinal drift. The intention is to take a systems view, in line with the work of Leveson et al.[2], who argue that traditional models of failure causality (the fault-error-failure chain) are inadequate for understanding the causes of failures; rather, a richer view of the system in its context, which includes changes in the way systems are operated over time, is better suited to this task.
Supervisor: Professor Paul Smith, School of Computing and Communications, Lancaster University
This is a 42-month funded project, including fees and an enhanced stipend.
Entry Requirements
Applicants must have a Master’s degree and/or a minimum of a 2:1 in their bachelor’s degree in computer science or a related field.
Applicants must be resident in the UK during the period of study; they may need to travel to collect data during their studies and will need to obtain security clearance. It is expected that the primary fieldwork site will be in Cumbria.
How to Apply
Applications should be submitted via Phillip Satchell, the postgraduate coordinator in the School of Computing and Communications.
You must provide an up-to-date CV and two references. We also request a written statement of purpose, explaining why you want to undertake this project and why you have the requisite skills. In addition, we recommend submitting a further piece of research or assignment work, a dissertation section, or a publication.
Applicants can contact Professor Paul Smith to discuss their applications.
[1] IAEA, Artificial Intelligence for Accelerating Nuclear Applications, Science and Technology, https://www.iaea.org/publications/15198/artificial-intelligence-for-accelerating-nuclear-applications-science-and-technology
[2] N. G. Leveson, http://sunnyday.mit.edu/