ELSA

European Lighthouse on Secure and Safe AI

About the ELSA project

ELSA is a virtual centre of excellence that spearheads foundational research on safe and secure artificial intelligence (AI) methodology. It is a large and growing network of top European experts in AI and machine learning who aim to promote the development and deployment of cutting-edge AI solutions and to make Europe the world's lighthouse of AI.

ELSA brings together researchers from 26 top research institutions and companies in Europe to pool their expertise in the field of AI and machine learning. ELSA is coordinated by the CISPA Helmholtz Center for Information Security in Saarbrücken.

The ELSA project focuses on the following research programmes:

  • Technical robustness and security;
  • Privacy protection techniques and infrastructures; and
  • Human agency and oversight.

For more information regarding the ELSA project, please visit the ELSA project website.

ELSA is funded by Horizon Europe, one of the largest research and innovation funding programmes in the world.

ELSA is built on the ELLIS Society (European Laboratory for Learning and Intelligent Systems) and works in close collaboration with ELISE (European Network of AI Excellence Centers).

Our Aims

AI systems are now widely used to inform and automate decisions and actions with significant consequences for individuals, including in safety-critical and human-rights-critical contexts: medical diagnostic tools, autonomous vehicles, and biometric identification and verification systems used to inform decisions that allow or deny access to critical resources and opportunities. Addressing the risks raised by such systems requires methods that can be integrated into interpretable and accountable legal and ethical governance architectures, enabling lay users to regard these systems as trustworthy.

Our aim is to investigate the adequacy of existing technical methods and governance mechanisms, and to develop new techniques, mechanisms and analytical approaches that can provide the foundations for demonstrable, evidence-based assurance mechanisms capable of safeguarding the multiple dimensions of safety and security that otherwise remain under threat, including epistemic security and the safety and security of property, persons and human identity.

Team Members

Plamen Angelov

Dr Dmitry Kangin

Senior Research Associate - ELSA: European Lighthouse on Secure and Safe AI

Nikki Flook

ELSA Project Manager

C51, C Floor, InfoLab21

Research programme: Human agency and oversight

Lancaster University, represented by the School of Computing and Communications (SCC) and the Lancaster Intelligent, Robotic and Autonomous Systems Centre (LIRA), is leading the research programme ‘Human agency and oversight’ (work package 3, WP3), focusing on explainable and interpretable deep learning in collaboration with teams from the Alan Turing Institute, the University of Birmingham, CISPA and CINI.

Transparency, explainability and interpretability of technical AI solutions (ideally achieved “by design” rather than post hoc) play a critically important role in understanding and verifying the reasoning and decision-making processes that such AI systems follow. The grand challenge is to devise adequate, integrated governance frameworks, methods and mechanisms that enable meaningful human oversight aligned with European values, and thereby to enable widespread take-up and deployment of AI applications that could significantly enhance individual and collective wellbeing. Cross-disciplinary research involving technical experts as well as legal and governance experts is vital for integrating and embedding technical methods within ethical and governance regimes designed to provide meaningful, evidence-based assurance (‘AI Assurance’).

The work of Lancaster University in this project focuses on the aspects of human agency related to interpretability, explainability and transparency of AI models. LIRA members have proposed a number of methods centred around the notion of prototype-based interpretable machine learning, which combines the flexibility of deep-learning architectures with the transparency of symbolic AI. We build upon these solutions to address a number of application use cases.
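
As a rough illustration of the general idea (not of the specific LIRA or ELSA methods), the sketch below shows a prototype-based classifier in PyTorch: class predictions are computed from similarities between a deep feature vector and a small set of learnable prototypes, so each decision can be traced back to the prototypes it most resembles. The dimensions, prototype counts and names are illustrative assumptions.

    # Minimal prototype-based classifier: predictions are driven by similarity
    # to learnable prototype vectors, which can be inspected directly.
    import torch
    import torch.nn as nn


    class PrototypeClassifier(nn.Module):
        def __init__(self, feature_dim: int, num_prototypes: int, num_classes: int):
            super().__init__()
            # Learnable prototypes living in the same space as the backbone features.
            self.prototypes = nn.Parameter(torch.randn(num_prototypes, feature_dim))
            # Linear map from prototype similarities to class logits.
            self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

        def forward(self, features: torch.Tensor):
            # Squared Euclidean distance from each feature vector to each prototype.
            dists = torch.cdist(features, self.prototypes) ** 2
            # Bounded similarity scores in (0, 1]; larger means closer to the prototype.
            similarities = torch.exp(-dists)
            logits = self.classifier(similarities)
            # Returning the similarities makes the decision inspectable:
            # "classified as X because it resembles prototypes 3 and 7".
            return logits, similarities


    # Toy usage: random features standing in for the output of a deep backbone.
    model = PrototypeClassifier(feature_dim=64, num_prototypes=10, num_classes=3)
    logits, similarities = model(torch.randn(5, 64))
    print(logits.shape, similarities.shape)  # torch.Size([5, 3]) torch.Size([5, 10])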

Use cases

ELSA will develop six ambitious use cases covering a wide spectrum of sectors where real-life impact of safe and secure AI is expected.

Grand challenges

All of these application areas are the focus of the network. To achieve its goals, the network is addressing three major challenges:

  • The development of robustness guarantees and certificates;
  • Data-secure and robust collaborative learning (see the sketch after this list); and
  • The development of human control mechanisms for the ethical and secure use of AI.
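
The project description does not specify which algorithms sit behind these challenges. Purely as a hedged illustration of the collaborative-learning idea, the sketch below implements plain federated averaging (FedAvg) for a toy linear model: each client trains only on its own private data and shares parameters with the server, so raw data never leaves the client. The linear-regression task, client counts and function names are illustrative assumptions, and the sketch omits the secure aggregation and robustness to malicious updates that a real deployment would need.

    # Federated averaging (FedAvg) on a toy linear-regression task:
    # clients train locally and share only model parameters, never raw data.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training: a few gradient steps on its private data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def fedavg_round(global_weights, client_data):
        """One communication round: clients train locally, the server averages."""
        local_weights = [local_update(global_weights, X, y) for X, y in client_data]
        sizes = [len(y) for _, y in client_data]
        # Average the client models, weighted by each client's dataset size.
        return np.average(local_weights, axis=0, weights=sizes)

    # Toy usage: three clients with private samples from the same underlying model.
    true_w = np.array([1.5, -2.0])
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.1 * rng.normal(size=50)
        clients.append((X, y))

    w = np.zeros(2)
    for _ in range(20):
        w = fedavg_round(w, clients)
    print("estimated weights:", w)  # close to [1.5, -2.0]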

Lancaster University, represented by SCC and LIRA, is leading the Grand Challenge ‘Human-in-the-loop decision-making’ in collaboration with teams from the Alan Turing Institute and the University of Birmingham.

Human-in-the-loop decision making

Integrating governance and ensuring meaningful human oversight of AI systems, demonstrably in accordance with core European values, remains a key challenge for widespread take-up and deployment across Europe. Achieving safe and secure AI is particularly challenging and demanding in relation to machine learning systems for several reasons:

Impact on humans

AI systems both rely upon humans (in human-in-the-loop systems) and directly affect them. As individuals worthy of dignity and respect, the people affected must be able to understand and evaluate the proposed outputs of AI systems, including whether those outputs are normatively justified on the basis of reasons rather than determined stochastically.

Safety and security

The complexity, sophistication and opacity of the underlying AI systems can preclude establishing the safety and security of the system and its impacts.

Complexity

The interaction between the AI system and its surrounding socio-technical context, including interaction with and between humans, is complex, dynamic and inherently difficult to predict.
