Integrity

The Integrity theme examines how trust, transparency, justice, fairness, accountability, and resilience can be built and sustained across data-driven and AI-enabled systems.

As intelligent technologies are deployed at speed, often ahead of the ethical processes, governance structures, and safeguarding mechanisms designed to oversee them, we address the widening gap between technological capability and responsible oversight. Integrity provides a lens for understanding how trust is constructed, negotiated, eroded, and even weaponised within digital ecosystems, and how robust evidence and interdisciplinary insight can support safer, fairer outcomes.

This is a deliberately cross-cutting theme. Integrity is a foundational condition of responsible innovation. Questions of bias, explainability, safety, privacy, institutional responsibility, communicative reliability, and sociotechnical resilience intersect with all data science and AI research. By bringing together expertise from the social sciences, humanities, computing, security, policy, and beyond, the Integrity theme strengthens research culture across the Institute, ensuring that technological advancement and ethical accountability evolve together rather than in tension.

Theme Lead

Professor Claire Hardaker

Professor of Forensic Linguistics

DisTex - Discourse and Text Research Group, ESRC Centre for Corpus Approaches to Social Science, Forensic Linguistics Research Group, Security Lancaster, Security Lancaster (Behavioural Science)

Case Studies

Bot or Not running on a tablet

Bot or Not? Real-world detection of AI-generated content

AI is increasingly capable of generating convincing content, and this raises significant challenges for journalism, law enforcement, online safety, and public trust. In response, Professor Claire Hardaker launched the Bot or Not? project in 2023, and it has since been extended by Dr Georgina Brown and other researchers. This suite of perception tests investigates whether ordinary people can reliably distinguish between human and AI-generated language, speech, and creative content, how they arrive at their conclusions, and how confident they are in their judgements.

This large-scale project combines forensic linguistics and data science, and its findings offer insights into the limits of human detection and the linguistic features that influence trust and credibility. These insights help identify where AI systems currently succeed in mimicking human communication and where they still leave detectable traces.
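To make this kind of analysis concrete, the sketch below shows one conventional way responses from a "human or AI?" perception test can be scored, combining raw accuracy with a signal-detection sensitivity measure (d'). It is illustrative only: the trial data, response format, and scoring choices are hypothetical and are not drawn from the Bot or Not? studies themselves.

```python
# Illustrative sketch: scoring hypothetical "human or AI?" judgements.
from statistics import NormalDist

# Each trial: (ground_truth, judgement, confidence on a 1-5 scale).
trials = [
    ("ai", "ai", 4), ("ai", "human", 2), ("human", "human", 5),
    ("human", "ai", 3), ("ai", "ai", 5), ("human", "human", 4),
]

hits = sum(1 for t, j, _ in trials if t == "ai" and j == "ai")
misses = sum(1 for t, j, _ in trials if t == "ai" and j == "human")
false_alarms = sum(1 for t, j, _ in trials if t == "human" and j == "ai")
correct_rej = sum(1 for t, j, _ in trials if t == "human" and j == "human")

accuracy = (hits + correct_rej) / len(trials)

# Sensitivity (d'), treating "AI-generated" as the signal. A log-linear
# correction keeps the z-scores finite when a rate hits 0 or 1.
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)

mean_confidence = sum(c for *_, c in trials) / len(trials)
print(f"accuracy={accuracy:.2f}, d'={d_prime:.2f}, confidence={mean_confidence:.1f}")
```

Separating sensitivity from raw accuracy in this way distinguishes genuine detection ability from a simple bias toward answering "AI", which matters when judging how reliable human detection really is.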

The findings have important implications across an enormous array of contexts, from national security and interpersonal crime to the integrity of the evidential process and beyond. By improving understanding of how AI-generated content is (mis)perceived, Bot or Not? contributes to the development of more reliable detection tools, better policy responses, and stronger safeguards for digital integrity.

Someone using the Fort Vox system

Fort Vox: exploring voice biometrics and AI security through play

In collaboration with Dr Lorraine Underwood and the technology community element14, Professor Claire Hardaker and Dr Georgina Brown have developed Fort Vox, an interactive project that helps young people explore how voice biometrics, artificial intelligence, and security systems work.

Voice recognition systems are increasingly used to verify identity and protect access to sensitive information, from banking services to healthcare accounts. These systems analyse the sound of a person’s voice as a biometric identifier. However, advances in AI-generated speech and voice cloning raise important questions about how secure such technologies really are.

Fort Vox makes these complex issues tangible through a playful challenge: a locked box that can only be opened by successfully imitating a stored voice password. By experimenting with different vocal strategies to “hack” the system, participants gain hands-on insight into how automated voice recognition systems analyse speech and make decisions.
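As a rough illustration of the decision logic such a challenge plays with, the sketch below compares a crude spectral "voiceprint" of an attempt against an enrolled recording and accepts or rejects it against a fixed similarity threshold. It assumes the librosa and numpy libraries; the file names and threshold are hypothetical, this is not the Fort Vox implementation, and real verification systems use far more robust speaker embeddings.

```python
# Toy voice-password check: NOT a production biometric system.
import librosa
import numpy as np

def voiceprint(path: str) -> np.ndarray:
    """Average MFCC vector as a crude fixed-length 'voiceprint'."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def matches(enrolled: np.ndarray, attempt: np.ndarray,
            threshold: float = 0.9) -> bool:
    """Accept if cosine similarity exceeds a fixed threshold."""
    cos = np.dot(enrolled, attempt) / (
        np.linalg.norm(enrolled) * np.linalg.norm(attempt))
    return cos >= threshold

stored = voiceprint("enrolled_password.wav")   # hypothetical file
attempt = voiceprint("attempt.wav")            # hypothetical file
print("unlocked" if matches(stored, attempt) else "denied")
```

Even this toy version makes the attack surface visible: anything that pushes the similarity score over the threshold, whether a skilled impersonation or a cloned voice, opens the box.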

The prototype turns abstract concepts in AI and cybersecurity into an engaging physical experience. Through trial and experimentation, players begin to understand how automated systems process human voice signals and how those systems can sometimes be manipulated or fooled.

Although designed to be accessible and engaging for young audiences, the project addresses serious issues surrounding AI security, privacy, and digital trust. By introducing these ideas early, Fort Vox encourages critical thinking about emerging technologies while helping to inspire the next generation of cybersecurity and AI professionals. Projects like this contribute to building future digital skills and strengthening the UK’s cybersecurity ecosystem.

Northgate in the Data Immersion Suite

Operation Northgate: training forensic analysts for high-stakes, real-time decision making

Operation Northgate is an immersive simulation developed at Lancaster University that trains participants to analyse speech and language evidence under the pressures of a live investigation.

In real-world investigations, analysts rarely have the luxury of unlimited time or perfectly curated data. Instead, they must interpret evolving information, collaborate with colleagues, and make careful judgments while events are still unfolding. Preparing analysts for these conditions is essential if linguistic and speech evidence is to be used responsibly and effectively.

Run in Lancaster University’s £5m Data Immersion Suite, Operation Northgate recreates this environment through a simulated investigation in which participants must identify speakers in covert surveillance recordings while persons of interest are travelling toward a potential crime scene. Working in teams, participants analyse speech and linguistic evidence while new information is introduced throughout the scenario, requiring them to continually reassess their conclusions.

The Data Immersion Suite is an advanced teaching and research facility designed to support complex, data-rich scenarios. The suite allows multiple streams of information, including audio, text, and intelligence, to flow dynamically across large visualisation walls, creating an environment that mirrors the information flow of real operational settings.

Within this immersive decision space, participants must balance analytical rigour with time-sensitive judgement. Operation Northgate is delivered both for and with external stakeholders and provides a safe environment in which students, researchers, and practitioners can develop practical skills in forensic analysis, collaborative decision making, and responsible interpretation of complex data. By combining immersive technology with applied linguistic and speech science expertise, the exercise helps prepare analysts for the challenges of real-world high-stakes contexts.