DSI Distinguished Speaker - Dr Brent Mittelstadt, Oxford Internet Institute

Tuesday 17 May 2022, 3:00pm to 4:30pm

Venue

Online Microsoft Teams

Open to

All Lancaster University (non-partner) students, Postgraduates, Prospective International Students, Prospective Postgraduate Students, Prospective Undergraduate Students, Public, Staff, Undergraduates

Registration

Free to attend - registration required

Registration Info

Please register on Eventbrite

Event Details

“Talk on bias, fairness, and non-discrimination law in artificial intelligence”

Title: Bias preservation in fair machine learning

Abstract: Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality and take for granted the scope, significance, and ethical acceptability of existing inequalities. In this talk I will introduce the concept of “bias preservation” as a means to assess the compatibility of fairness metrics used in machine learning against the notions of formal and substantive equality. The fundamental aim of EU non-discrimination law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to ‘level the playing field’ and achieve substantive rather than merely formal equality. Based on this, I will introduce a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of substantive equality. Specifically, I will distinguish between ‘bias preserving’ and ‘bias transforming’ fairness metrics. This classification scheme is intended to bridge the gap between notions of equality, non-discrimination law, and decisions around how to measure fairness and bias in machine learning. Bias transforming metrics are essential to achieve substantive equality in practice.
I will conclude by introducing a bias transforming metric, ‘Conditional Demographic Disparity’, which aims to reframe the debate around AI fairness, shifting it away from which is the right fairness metric to choose, and towards identifying ethically, legally, socially, or politically preferable conditioning variables according to the requirements of specific use cases.
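As a rough illustration of the idea behind a conditioned disparity metric, the sketch below computes a demographic disparity within each stratum of a chosen conditioning variable and averages the results, weighted by stratum size. This is a simplified sketch for intuition only; the function names, binary group/outcome encoding, and weighting scheme are assumptions of this example, not the speaker's formal definition.

```python
def demographic_disparity(disadvantaged, rejected):
    """Share of the disadvantaged group among rejected decisions
    minus its share among accepted ones (illustrative definition).
    Both arguments are parallel lists of 0/1 indicators."""
    n_rej = sum(rejected)
    n_acc = len(rejected) - n_rej
    if n_rej == 0 or n_acc == 0:
        return 0.0  # disparity is undefined with no accepts or no rejects
    p_rej = sum(1 for d, r in zip(disadvantaged, rejected) if d and r) / n_rej
    p_acc = sum(1 for d, r in zip(disadvantaged, rejected) if d and not r) / n_acc
    return p_rej - p_acc

def conditional_demographic_disparity(disadvantaged, rejected, strata):
    """Size-weighted average of per-stratum demographic disparity,
    conditioning on a chosen 'legitimate' variable (the strata)."""
    n = len(strata)
    cdd = 0.0
    for s in set(strata):
        idx = [i for i, v in enumerate(strata) if v == s]
        dd = demographic_disparity([disadvantaged[i] for i in idx],
                                   [rejected[i] for i in idx])
        cdd += len(idx) / n * dd
    return cdd
```

In this toy form, the choice of `strata` carries the normative weight: disparity that disappears once decisions are conditioned on an agreed-upon legitimate variable is treated differently from disparity that persists within every stratum.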

The talk will be followed by a Q&A session hosted by Dr Bran Knowles, Senior Lecturer, School of Computing and Communications (SCC).


Contact Details

Name Julia Carradus
Email

j.carradus1@lancaster.ac.uk