The Integrity theme examines how trust, transparency, justice, fairness, accountability, and resilience can be built and sustained across data-driven and AI-enabled systems.
Intelligent technologies are often deployed faster than the ethical processes, governance structures, and safeguarding mechanisms designed to oversee them; we address this widening gap between technological capability and responsible oversight. Integrity provides a lens for understanding how trust is constructed, negotiated, eroded, and even weaponised within digital ecosystems, and how robust evidence and interdisciplinary insight can support safer, fairer outcomes.
This is a deliberately cross-cutting theme, because integrity is a foundational condition of responsible innovation: questions of bias, explainability, safety, privacy, institutional responsibility, communicative reliability, and sociotechnical resilience intersect with all data science and AI research. By bringing together expertise from the social sciences, humanities, computing, security, policy, and beyond, the Integrity theme strengthens research culture across the Institute, ensuring that technological advancement and ethical accountability evolve together rather than in tension.