DSI Weds Lunchtime Talks - Marisela Gutierrez Lopez
Wednesday 28 April 2021, 12:30pm to 1:30pm
Venue
Microsoft Teams
Open to
Alumni, Postgraduates, Prospective International Students, Prospective Postgraduate Students, Public, Staff, Undergraduates
Registration
Registration not required - just turn up
Event Details
Speaker: Marisela Gutierrez Lopez, Bristol Digital Futures Institute, University of Bristol
Explainable AI for Insurance: a case study at an insurance company
Abstract: AI decision making is now widely recognised as a social and political challenge - centring in on concerns about bias and fairness - to which an increasingly popular solution is ‘explainable AI’ (xAI). In this formulation, the drive for explanation is understood both as a technical challenge (how to open the black box) and a communication challenge (how to explain complex technical processes in ways that can be easily understood). In contrast, our case study at an insurance company suggests that xAI goes beyond technical aspects of models and data, as explanations were generated by a variety of actors within and beyond the technical teams, and different actors held different knowledge and expectations of what needed explaining and why. From this perspective, xAI is better understood as a situated practice, organized, mediated and (in turn) shaped by ongoing social interactions. We argue for the need to widen the horizon of xAI from technocratic to participatory approaches where explanations encompass AI in practice and are integrated into the lived experiences of people subject to automatic decision making. An expanded definition of xAI can facilitate richer discussions on what makes AI explainable and guide the co-creation of explanations with different data publics.
Contact Details
Name: Julia Carradus