A perspective on fairness in machine learning from DeepMind
Silvia Chiappa, Research Scientist and William Isaac, Research Scientist, DeepMind
As the world moves towards applying machine learning techniques in high-stakes societal contexts – from the criminal justice system to education to healthcare – ensuring the fairness of these systems becomes an ever more important and urgent issue. In this talk, DeepMind Research Scientists Silvia Chiappa and William Isaac will explain how Causal Bayesian Networks (CBNs) can be used as a tool for reasoning about and addressing fairness issues.
In the first part of the talk we will show that CBNs provide a simple and intuitive visual tool for describing the different possible unfairness scenarios underlying a dataset. We will use this viewpoint to revisit the recent debate surrounding the COMPAS pretrial risk assessment tool and, more generally, to argue that evaluating the fairness of a model requires careful consideration of the patterns of unfairness underlying its training data.
In the second part of the talk we will explain how CBNs provide a powerful quantitative tool for measuring unfairness in a dataset, and for helping researchers develop techniques to address complex fairness issues.
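To give a flavour of the quantitative side, the sketch below simulates a toy CBN in which a sensitive attribute A influences an outcome Y both directly and through a mediator M (e.g. a department or career choice). Comparing the total effect of A on Y with the effect along the direct path alone is one simple way to quantify unfairness transmitted along specific causal paths. All variable names, functional forms, and coefficients are invented for illustration; they are not taken from the papers the talk is based on.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Exogenous noise terms, shared across interventions so that
# counterfactual comparisons are made on the same "individuals".
u_m = rng.normal(0, 1, n)
u_y = rng.normal(0, 1, n)

def mediator(a, u_m):
    # M depends on A plus independent noise (invented functional form).
    return (0.8 * a + u_m > 0.5).astype(float)

def outcome(a, m, u_y):
    # Y has a direct A -> Y path (coefficient 0.4) and an
    # indirect path A -> M -> Y (coefficient 0.5 on M).
    return 0.4 * a + 0.5 * m + u_y

# Total effect of setting A=1 vs A=0 (both paths active).
te = np.mean(outcome(1, mediator(1, u_m), u_y)
             - outcome(0, mediator(0, u_m), u_y))

# Path-specific effect along the direct path only:
# A is set to 1 on the edge A -> Y, while M behaves as if A=0.
pse_direct = np.mean(outcome(1, mediator(0, u_m), u_y)
                     - outcome(0, mediator(0, u_m), u_y))

print(f"total effect: {te:.3f}, direct-path effect: {pse_direct:.3f}")
```

If only the indirect path were deemed unfair, a researcher could aim to remove the gap between these two quantities; this is the kind of path-specific reasoning the talk will develop more rigorously.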
This talk is based on two recent papers: "A Causal Bayesian Networks Viewpoint on Fairness" and "Path-Specific Counterfactual Fairness".
This event is co-hosted by the Governance of Artificial Intelligence (GovAI) programme at the Future of Humanity Institute and the Rhodes Artificial Intelligence Lab.
About the speakers
Silvia Chiappa is a Research Scientist in Machine Learning at DeepMind. She received a Diploma di Laurea in Mathematics from the University of Bologna and a PhD in Machine Learning from the École Polytechnique Fédérale de Lausanne. Before joining DeepMind, Silvia worked in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems (Prof. Dr. Bernhard Schölkopf), in the Machine Intelligence and Perception Group at Microsoft Research Cambridge (Prof. Christopher Bishop), and at the Statistical Laboratory, University of Cambridge (Prof. Philip Dawid). Her research interests centre on Bayesian and causal reasoning, graphical models, variational inference, time-series models, and ML fairness and bias.
William Isaac is a Research Scientist with DeepMind's Ethics and Society Team. Prior to DeepMind, William served as an Open Society Foundations Fellow and Research Advisor for the Human Rights Data Analysis Group, focusing on algorithmic bias and fairness. William's prior research, centred on deployments of automated decision systems in the US criminal justice system, has been featured in publications such as Science, the New York Times, and the Wall Street Journal. William received his Doctorate in Political Science from Michigan State University and a Master's in Public Policy from George Mason University.
Thu, 17 October 2019, 16:00 – 17:30 BST
Tony Hoare Room, Department of Computer Science, Robert Hooke Building, Parks Road, Oxford