Bias and fairness in algorithmic decision making

Project overview

This research project addresses the question: to what extent, and how, can we use biased data to create systems (e.g., by training or moderating models) that make less biased associations (decisions, predictions)?

Machine learning often involves creating systems that model or imitate associations within training data. However, training data can be flawed in many ways. For example, it can:

  • be unrepresentative of the population we want to model (or use the model on)
  • embody decisions we regard nowadays as flawed, e.g., unfair, prejudiced, discriminatory
  • misrepresent the values (costs or benefits) of different decisions or predictions.

The last two flaws explicitly relate to human views about what is right and what we value; these considerations place this research at the intersection of humanity and technology.

What if we want to build systems that make ‘better’ decisions than we have made previously? We may want to do this in situations where:

  • human values affect our view of the ‘right’ decisions
  • ‘ground truth’ is subjective, so that different people may arrive at different conclusions (e.g., in recruiting, sentencing, or diagnosis)
  • the outcome of a decision (or its counterfactual) is difficult to quantify (e.g., diagnosis followed by intervention) [4].

This leads to further research questions:

  • How can we objectively assess aspects of bias and fairness of data? (A minimal metric sketch follows this list.)
    • How can we usefully characterise the distributions of training and ‘production’ data, including protected or sensitive attributes for which non-discrimination should be established?
    • How do definitions of fairness and bias (e.g., [1], [2]) behave ‘in the wild’, i.e., on real-world data in which the distributions of training and ‘production’ data may differ?
    • How could organisations achieve transparency and accountability for their algorithms in line with the ACM’s principles [3]?
  • How can we use biased data to make less biased decision-making systems?
    • Resampling to change the apparent distribution of training data (see e.g., [5]; a simple resampling sketch follows this list).
    • Generating synthetic data to change the apparent distribution of training data.
    • Applying different learning strategies to modify the system’s response to the training data (e.g., boosting, generative adversarial networks).
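
As a minimal illustration of the first question, the sketch below computes two of the group-fairness measures catalogued in [1] (statistical parity difference and disparate impact) over a toy table of historical decisions. The table, the column names (sex, hired) and the choice of privileged group are illustrative assumptions only; they are not drawn from any project data.

    # A minimal sketch, on assumed toy data, of two group-fairness measures from [1].
    import pandas as pd

    def statistical_parity_difference(df, protected, outcome, privileged, favourable=1):
        """P(favourable | unprivileged) - P(favourable | privileged)."""
        priv = df[df[protected] == privileged]
        unpriv = df[df[protected] != privileged]
        return (unpriv[outcome] == favourable).mean() - (priv[outcome] == favourable).mean()

    def disparate_impact(df, protected, outcome, privileged, favourable=1):
        """P(favourable | unprivileged) / P(favourable | privileged)."""
        priv = df[df[protected] == privileged]
        unpriv = df[df[protected] != privileged]
        return (unpriv[outcome] == favourable).mean() / (priv[outcome] == favourable).mean()

    # Toy historical hiring decisions; 'sex' is the protected attribute.
    decisions = pd.DataFrame({
        "sex":   ["M", "M", "M", "F", "F", "F", "M", "F"],
        "hired": [ 1,   1,   0,   0,   1,   0,   1,   0 ],
    })

    print(statistical_parity_difference(decisions, "sex", "hired", privileged="M"))  # -0.50
    print(disparate_impact(decisions, "sex", "hired", privileged="M"))               # ~0.33

A statistical parity difference of 0 (or a disparate impact of 1) would indicate parity between groups on these particular measures; as [1] and [2] discuss, other definitions exist and can conflict, which is exactly why their behaviour ‘in the wild’ needs study.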
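
The second question can be illustrated just as simply. The sketch below oversamples the same toy data so that every (protected group, outcome) cell is equally represented, changing the apparent distribution seen by a learner. It is only a naive illustration of the general idea of dataset resampling, not the learned reweighting scheme of REPAIR [5], and the data and column names remain assumptions.

    # A naive resampling sketch: oversample each (protected group, outcome) cell
    # to the size of the largest cell, so that a model trained on the result no
    # longer sees the historical disparity. Illustrative only; not the method of [5].
    import pandas as pd

    def balance_by_resampling(df, protected, outcome, seed=0):
        groups = df.groupby([protected, outcome])
        target = groups.size().max()
        resampled = [g.sample(n=target, replace=True, random_state=seed)
                     for _, g in groups]
        return pd.concat(resampled, ignore_index=True)

    decisions = pd.DataFrame({
        "sex":   ["M", "M", "M", "F", "F", "F", "M", "F"],
        "hired": [ 1,   1,   0,   0,   1,   0,   1,   0 ],
    })

    balanced = balance_by_resampling(decisions, "sex", "hired")
    print(balanced.groupby("sex")["hired"].mean())  # 0.5 for both groups

Naive oversampling duplicates records and can encourage overfitting; generating synthetic data or adapting the learning strategy itself (as in the items above) are alternatives with different trade-offs.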

Outcomes

  • Uptake of, and contributions to, open-source resources for bias and fairness in AI, e.g., [6].
  • Engagement with others (e.g., IBM) in the translation and extension of fairness methods in industrial settings.
  • Publications: academic, industry (white papers), and popular (e.g., The Conversation).
  • Development of educational resources to enhance capability and awareness.

Project team

Bibliography

[1] S. Verma and J. Rubin, ‘Fairness Definitions Explained’, in Proceedings of the International Workshop on Software Fairness, New York, NY, USA, 2018, pp. 1–7.
[2] A. Narayanan, ‘21 fairness definitions and their politics’, 23-Feb-2018. [Online]. Available: https://www.youtube.com/embed/jIXIuYdnyyk. [Accessed: 20-Oct-2019].
[3] ACM U.S. Public Policy Council, ‘Statement on Algorithmic Transparency and Accountability’. 2017.
[4] C. Russell, M. J. Kusner, J. Loftus, and R. Silva, ‘When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness’, in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 6414–6423.
[5] Y. Li and N. Vasconcelos, ‘REPAIR: Removing Representation Bias by Dataset Resampling’, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9572–9581.
[6] R. K. E. Bellamy et al., ‘AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias’, arXiv:1810.01943 [cs], Oct. 2018.