The hub, National Edge AI Hub for Real Data: Edge Intelligence for Cyber-disturbances and Data Quality, focuses on cyber threats and on how Edge AI (the application of AI techniques at the data source, rather than on a cloud or central server) can be utilised and improved to make data more secure. In particular, Lancaster lends its strengths in the field of AI security and will contribute research addressing cyber risks to AI, Machine Learning, and Generative AI at the edge.
Lancaster’s Principal Investigator is Professor Peter Garraghan, who is joined by Co-Investigator Professor Neeraj Suri, both from the School of Computing and Communications. Professor Garraghan is a Fellow of the EPSRC and CEO of Mindgard – a University spinout company delivering AI security testing for corporations, providing them with the tools necessary to safeguard their AI assets.
On the bid, Professor Garraghan said: “The importance of this funding cannot be overstated. AI, Large Language Models, and Generative AI are all game changers across both the scientific and business worlds. As a result, attacks against these systems are growing, representing the next major cyber threat that the world will have to contend with in the coming years – and this is particularly the case for AI at the edge. Therefore, the field of AI security will only grow in importance globally across research, industry, government, and the general population.”
Lancaster is at the forefront of cyber security research in the UK, particularly at the intersection of cyber security, computing systems, AI, and Machine Learning. Only recently, Lancaster was announced as the lead on a new £9 million AI research hub, ProbAI, in the Department of Mathematics and Statistics, and established MARS: Mathematics for AI in Real World Systems with the help of a £12 million investment from Research England.