A Survey on Bias and Fairness in Machine Learning

Published on Aug 23, 2019 in arXiv: Learning
Ninareh Mehrabi (Estimated H-index: 5), Fred Morstatter (Estimated H-index: 22), + 2 authors, Aram Galstyan (Estimated H-index: 41)
Abstract
With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.
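For concreteness, here is a minimal sketch of two group-fairness definitions of the kind covered by such a taxonomy (demographic parity and equalized odds), computed on binary predictions with NumPy; this is an illustration under assumed binary labels and a binary protected attribute, not code from the survey:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in false-positive (label 0) and true-positive (label 1) rates."""
    gaps = []
    for label in (0, 1):
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy usage with synthetic binary predictions and a binary protected attribute.
rng = np.random.default_rng(0)
y_true, y_pred, group = (rng.integers(0, 2, 1000) for _ in range(3))
print(demographic_parity_gap(y_pred, group), equalized_odds_gap(y_true, y_pred, group))
```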
References (130)
#1 Songül Tolan, H-Index: 6
#2 Marius Miron, H-Index: 11
Last. Carlos Castillo (UPF: Pompeu Fabra University), H-Index: 68
view all 4 authors...
In this paper we study the limitations of Machine Learning (ML) algorithms for predicting juvenile recidivism. Particularly, we are interested in analyzing the trade-off between predictive performance and fairness. To that extent, we evaluate fairness of ML models in conjunction with SAVRY, a structured professional risk assessment framework, on a novel dataset originated in Catalonia. In terms of accuracy on the prediction of recidivism, the ML models slightly outperform SAVRY; the results impr...
Jun 1, 2019 in NAACL (North American Chapter of the Association for Computational Linguistics)
#1 Shikha Bordia (NYU: New York University), H-Index: 7
#2 Samuel R. Bowman (NYU: New York University), H-Index: 46
Many text corpora exhibit socially problematic biases, which can be propagated or amplified in the models trained on such data. For example, doctor co-occurs more frequently with male pronouns than female pronouns. In this study we (i) propose a metric to measure gender bias; (ii) measure bias in a text corpus and the text generated from a recurrent neural network language model trained on the text corpus; (iii) propose a regularization loss term for the language model that minimizes the projecti...
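A rough sketch of the kind of co-occurrence statistic described above, scoring how much more often a target word appears near male than female pronouns; the window size, pronoun lists, and log-ratio form are illustrative assumptions, not the paper's exact metric:

```python
import math
from collections import Counter

MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}

def cooccurrence_bias(tokens, target, window=5, smoothing=1.0):
    """Log ratio of male vs. female pronoun counts near `target` (> 0 means male-skewed)."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            context = tokens[max(0, i - window): i + window + 1]
            counts["male"] += sum(w in MALE for w in context)
            counts["female"] += sum(w in FEMALE for w in context)
    return math.log((counts["male"] + smoothing) / (counts["female"] + smoothing))

tokens = "he said his doctor was busy while she waited for her doctor".split()
print(cooccurrence_bias(tokens, "doctor"))
```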
#1 Lee Cohen, H-Index: 3
#2 Zachary C. Lipton (CMU: Carnegie Mellon University), H-Index: 42
Last. Yishay Mansour, H-Index: 80
view all 3 authors...
When recruiting job candidates, employers rarely observe their underlying skill level directly. Instead, they must administer a series of interviews and/or collate other noisy signals in order to estimate the worker's skill. Traditional economics papers address screening models where employers access worker skill via a single noisy signal. In this paper, we extend this theoretical analysis to a multi-test setting, considering both Bernoulli and Gaussian models. We analyze the optimal employer po...
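As a generic illustration of combining several noisy test signals into a skill estimate, the sketch below applies a standard Gaussian conjugate update; the prior and noise variances are made-up numbers, and this is not the paper's optimal screening policy:

```python
import numpy as np

def posterior_skill(signals, noise_var, prior_mean=0.0, prior_var=1.0):
    """Posterior mean and variance of a candidate's skill under a Gaussian prior,
    given independent Gaussian test/interview signals with known noise variance."""
    signals = np.asarray(signals, dtype=float)
    post_var = 1.0 / (1.0 / prior_var + len(signals) / noise_var)
    post_mean = post_var * (prior_mean / prior_var + signals.sum() / noise_var)
    return post_mean, post_var

# Three noisy interview scores for one candidate (illustrative numbers).
print(posterior_skill([0.6, 0.9, 0.4], noise_var=0.25))
```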
May 24, 2019 in ICML (International Conference on Machine Learning)
#1 Xingyu Chen (Duke University), H-Index: 2
#2 Brandon Fain (Duke University), H-Index: 10
Last. Kamesh Munagala (Duke University), H-Index: 45
view all 4 authors...
May 24, 2019 in ICML (International Conference on Machine Learning)
#1 Marc-Etienne Brunet (U of T: University of Toronto), H-Index: 5
#2 Colleen Alkalay-Houlihan (U of T: University of Toronto), H-Index: 2
Last. Richard S. Zemel, H-Index: 71
view all 4 authors...
The power of machine learning systems not only promises great technical progress, but risks societal harm. As a recent example, researchers have shown that popular word embedding algorithms exhibit stereotypical biases, such as gender bias. The widespread use of these algorithms in machine learning systems, from automated translation services to curriculum vitae scanners, can amplify stereotypes in important contexts. Although methods have been developed to measure these biases and alter word em...
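A minimal sketch of how such embedding bias is typically quantified, via the difference in cosine similarity of a target word to male versus female terms; the embeddings below are random stand-ins and the word lists are illustrative, not the paper's measurement procedure:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(emb, word, male=("he", "man"), female=("she", "woman")):
    """Mean cosine similarity of `word` to male terms minus female terms (> 0 => male-leaning)."""
    return (np.mean([cosine(emb[word], emb[w]) for w in male])
            - np.mean([cosine(emb[word], emb[w]) for w in female]))

# Random stand-in vectors; a real audit would load word2vec/GloVe embeddings here.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in ("he", "man", "she", "woman", "doctor", "nurse")}
print(gender_association(emb, "doctor"), gender_association(emb, "nurse"))
```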
May 24, 2019 in ICML (International Conference on Machine Learning)
#1 Elliot Creager (U of T: University of Toronto), H-Index: 10
#2 David Madras (U of T: University of Toronto), H-Index: 8
Last. Richard S. Zemel, H-Index: 71
view all 7 authors...
We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature, we propose an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also \emph{flexibly fair}, meaning they can be easily modified at test time to achieve subgroup demographic parity with respect to multiple sensitive a...
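A small sketch of measuring subgroup demographic parity over combinations of multiple binary sensitive attributes, assuming NumPy arrays of hard predictions; it illustrates the evaluation target, not the paper's representation-learning algorithm:

```python
import numpy as np
from itertools import product

def subgroup_parity_gap(y_pred, sensitive):
    """Largest gap between any subgroup's positive-prediction rate and the overall rate.
    Subgroups are all observed combinations of the binary sensitive attributes."""
    sensitive = np.asarray(sensitive)          # shape: (n_samples, n_attributes)
    overall = y_pred.mean()
    gaps = []
    for combo in product((0, 1), repeat=sensitive.shape[1]):
        mask = np.all(sensitive == combo, axis=1)
        if mask.any():
            gaps.append(abs(y_pred[mask].mean() - overall))
    return max(gaps)

rng = np.random.default_rng(2)
y_pred = rng.integers(0, 2, 1000)
attrs = rng.integers(0, 2, size=(1000, 2))     # e.g. two binary sensitive attributes
print(subgroup_parity_gap(y_pred, attrs))
```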
May 24, 2019 in ICML (International Conference on Machine Learning)
#1 Berk Ustun (Harvard University), H-Index: 15
#2 Yang Liu (NTU: Nanyang Technological University), H-Index: 89
Last. David C. Parkes (Harvard University), H-Index: 62
view all 3 authors...
May 24, 2019 in ICML (International Conference on Machine Learning)
#1 Lingxiao Huang (EPFL: École Polytechnique Fédérale de Lausanne), H-Index: 10
#2 Nisheeth K. Vishnoi (Yale University), H-Index: 25
Fair classification has been a topic of intense study in machine learning, and several algorithms have been proposed towards this important task. However, in a recent study, Friedler et al. observed that fair classification algorithms may not be stable with respect to variations in the training dataset -- a crucial consideration in several real-world applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an ext...
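A minimal sketch of the stability concern raised above: train the same model on bootstrap resamples of the training data and report how much a fairness metric varies. scikit-learn's LogisticRegression stands in for an arbitrary fair classifier; the procedure is illustrative rather than the authors' method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def parity_gap(y_pred, group):
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def fairness_stability(X, y, group, n_resamples=20, seed=0):
    """Standard deviation of the demographic parity gap across bootstrap training sets."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_resamples):
        idx = rng.integers(0, len(y), len(y))              # bootstrap resample
        clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        gaps.append(parity_gap(clf.predict(X), group))
    return float(np.std(gaps))

rng = np.random.default_rng(3)
X, y, g = rng.normal(size=(500, 5)), rng.integers(0, 2, 500), rng.integers(0, 2, 500)
print(fairness_stability(X, y, g))
```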
#1 Ivan Minchev (Leibniz Institute for Astrophysics Potsdam), H-Index: 68
#2 Gal Matijevic (Leibniz Institute for Astrophysics Potsdam), H-Index: 23
Last. C. Scannapieco (Facultad de Ciencias Exactas y Naturales), H-Index: 1
view all 10 authors...
Simpson's paradox, or Yule-Simpson effect, arises when a trend appears in different subsets of data but disappears or reverses when these subsets are combined. We describe here seven cases of this phenomenon for chemo-kinematical relations believed to constrain the Milky Way disk formation and evolution. We show that interpreting trends in relations, such as the radial and vertical chemical abundance gradients, the age-metallicity relation, and the metallicity-rotational velocity relation (MVR),...
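A small self-contained illustration of Simpson's paradox with synthetic counts: method A has the higher success rate within each subset, yet B looks better once the subsets are pooled.

```python
# Synthetic (successes, trials) counts: method A beats B within each subset,
# yet B has the higher success rate once the subsets are pooled.
data = {
    "easy cases": {"A": (90, 100),   "B": (850, 1000)},   # A 90% vs. B 85%
    "hard cases": {"A": (300, 1000), "B": (20, 100)},     # A 30% vs. B 20%
}

def rate(successes, trials):
    return successes / trials

for subset, methods in data.items():
    print(subset, {m: round(rate(*c), 2) for m, c in methods.items()})

pooled = {m: [sum(x) for x in zip(*(data[s][m] for s in data))] for m in ("A", "B")}
print("pooled    ", {m: round(rate(*c), 2) for m, c in pooled.items()})
```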
#1 Ninareh Mehrabi (SC: University of Southern California), H-Index: 5
#2 Fred Morstatter (ISI: Information Sciences Institute), H-Index: 22
Last. Aram Galstyan (ISI: Information Sciences Institute), H-Index: 41
view all 4 authors...
Community detection is an important task in social network analysis, allowing us to identify and understand the communities within the social structures. However, many community detection approaches either fail to assign low degree (or lowly-connected) users to communities, or assign them to trivially small communities that prevent them from being included in analysis. In this work, we investigate how excluding these users can bias analysis results. We then introduce an approach that is more inc...
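A rough sketch of the kind of audit described above, assuming networkx: run an off-the-shelf community detection method and check how often low-degree nodes land in trivially small communities. The cutoffs and the modularity-based detector are illustrative choices, not the authors' approach:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def small_community_rate(G, degree_cutoff=2, min_size=3):
    """Fraction of low-degree nodes assigned to communities smaller than `min_size`."""
    size_of = {node: len(c) for c in greedy_modularity_communities(G) for node in c}
    low_degree = [n for n, d in G.degree() if d <= degree_cutoff]
    if not low_degree:
        return 0.0
    return sum(size_of[n] < min_size for n in low_degree) / len(low_degree)

G = nx.karate_club_graph()   # small benchmark graph with several low-degree nodes
print(small_community_rate(G))
```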
Cited By (303)
#2 Fariborz Haghighat (Concordia University), H-Index: 9
Data-driven models have drawn extensive attention in the building domain in recent years, and their predictive accuracy depends on features or data distribution. Accuracy variation among users or periods creates a certain unfairness to some users. This paper addresses a new research problem called fairness-aware prediction of data-driven building and indoor environment models. First, three types of fairness definitions are introduced in building engineering. Next, Type I and T...
Computer vision applications like automated face detection are used for a variety of purposes ranging from unlocking smart devices to tracking potential persons of interest for surveillance. Audits of these applications have revealed that they tend to be biased against minority groups which result in unfair and concerning societal and political outcomes. Despite multiple studies over time, these biases have not been mitigated completely and have in fact increased for certain tasks like age predi...
#1 Wiebke Toussaint (TU Delft: Delft University of Technology), H-Index: 3
#2 Akhil Mathur (Bell Labs), H-Index: 17
Last. Fahim Kawsar (Bell Labs), H-Index: 26
view all 4 authors...
When deploying machine learning (ML) models on embedded and IoT devices, performance encompasses more than an accuracy metric: inference latency, energy consumption, and model fairness are necessary to ensure reliable performance under heterogeneous and resource-constrained operating conditions. To this end, prior research has studied model-centric approaches, such as tuning the hyperparameters of the model during training and later applying model compression techniques to tailor the model to th...
#1 Bhanu Jain (UTA: University of Texas at Arlington), H-Index: 2
#2 Manfred Huber (UTA: University of Texas at Arlington), H-Index: 18
Last. Ramez Elmasri (UTA: University of Texas at Arlington), H-Index: 36
view all 3 authors...
Increasing utilization of machine learning based decision support systems emphasizes the need for resulting predictions to be both accurate and fair to all stakeholders. In this work we present a novel approach to increase a Neural Network model's fairness during training. We introduce a family of fairness enhancing regularization components that we use in conjunction with the traditional binary-cross-entropy based accuracy loss. These loss functions are based on Bias Parity Score (BPS), a score...
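A minimal PyTorch sketch of pairing binary cross-entropy with a fairness regularizer; the squared group-mean gap used here is a generic stand-in, not the paper's Bias Parity Score:

```python
import torch

def group_gap_penalty(scores, group):
    """Squared gap in mean predicted score between two groups
    (a generic stand-in for a fairness regularizer, not the paper's BPS)."""
    return (scores[group == 0].mean() - scores[group == 1].mean()) ** 2

def fair_bce_loss(logits, targets, group, lam=1.0):
    """Binary cross-entropy plus a group-parity regularization term."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
    return bce + lam * group_gap_penalty(torch.sigmoid(logits), group)

logits = torch.randn(64, requires_grad=True)           # model outputs for a batch
targets = torch.randint(0, 2, (64,)).float()
group = torch.randint(0, 2, (64,))                     # binary protected attribute
fair_bce_loss(logits, targets, group).backward()
```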
#2 Dietmar Jannach, H-Index: 56
view all 14 authors...
Nov 1, 2021 in EMNLP (Empirical Methods in Natural Language Processing)
Last. Anders Søgaard (UCPH: University of Copenhagen), H-Index: 39
view all 3 authors...
Sentiment analysis systems have been shown to exhibit sensitivity to protected attributes. Round-trip translation, on the other hand, has been shown to normalize text. We explore the impact of round-trip translation on the demographic parity of sentiment classifiers and show how round-trip translation consistently improves classification fairness at test time (reducing up to 47% of between-group gaps). We also explore the idea of retraining sentiment classifiers on round-trip-translated data.
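A small sketch of the evaluation described above, comparing the between-group positive-prediction gap on original versus round-trip-translated text; `classify` and `round_trip` are hypothetical placeholder callables (a sentiment model and an MT pipeline), not APIs from the paper:

```python
def positive_rate_gap(preds, groups):
    """Gap in positive-prediction rate between demographic groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

def evaluate_round_trip(texts, groups, classify, round_trip):
    """Compare the between-group gap on original vs. round-trip-translated text.
    `classify` returns a 0/1 sentiment label; `round_trip` translates a sentence
    to a pivot language and back (both are placeholders here)."""
    before = positive_rate_gap([classify(t) for t in texts], groups)
    after = positive_rate_gap([classify(round_trip(t)) for t in texts], groups)
    return before, after
```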
Nov 1, 2021 in EMNLP (Empirical Methods in Natural Language Processing)
#1 Somnath Basu Roy Chowdhury (IIT-KGP: Indian Institute of Technology Kharagpur), H-Index: 4
#2 Sayan Ghosh (UNC: University of North Carolina at Chapel Hill), H-Index: 4
Last. Snigdha Chaturvedi (UNC: University of North Carolina at Chapel Hill), H-Index: 15
view all 6 authors...
Contextual representations learned by language models can often encode undesirable attributes, like demographic associations of the users, while being trained for an unrelated target task. We aim to scrub such undesirable attributes and learn fair representations while maintaining performance on the target task. In this paper, we present an adversarial learning framework "Adversarial Scrubber" (AdS), to debias contextual representations. We perform theoretical analysis to show that our framework...
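A minimal PyTorch sketch of adversarial debiasing with a gradient-reversal layer, in the spirit of scrubbing protected attributes from representations; the architecture and dimensions are illustrative assumptions, not the AdS framework itself:

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradients in the backward pass so the
    encoder learns to hide whatever the adversary can predict (the protected attribute)."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(768, 128), nn.ReLU())   # stand-in for a text encoder
task_head = nn.Linear(128, 2)                              # target task, e.g. sentiment
adversary = nn.Linear(128, 2)                              # tries to recover the attribute

x = torch.randn(32, 768)                                   # stand-in contextual representations
y_task = torch.randint(0, 2, (32,))
y_attr = torch.randint(0, 2, (32,))

z = encoder(x)
loss = (nn.functional.cross_entropy(task_head(z), y_task)
        + nn.functional.cross_entropy(adversary(GradReverse.apply(z)), y_attr))
loss.backward()
```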
Nov 1, 2021 in EMNLP (Empirical Methods in Natural Language Processing)
#1 Ahmed Abbasi (ND: University of Notre Dame), H-Index: 29
#2 David G. Dobolyi (UVA: University of Virginia), H-Index: 10
Last. Yi Yang
view all 6 authors...
Psychometric measures of ability, attitudes, perceptions, and beliefs are crucial for understanding user behavior in various contexts including health, security, e-commerce, and finance. Traditionally, psychometric dimensions have been measured and collected using survey-based methods. Inferring such constructs from user-generated text could allow timely, unobtrusive collection and analysis. In this paper we describe our efforts to construct a corpus for psychometric natural language processing ...
#1 Sanjiv Ranjan Das (Santa Clara University), H-Index: 41
#2 Michele Donini, H-Index: 11
Last. Muhammad Bilal Zafar (Amazon.com), H-Index: 20
view all 10 authors...
#1 Aida Mostafazadeh Davani (SC: University of Southern California), H-Index: 6
#2 Mohammad Atari, H-Index: 16
Last. Morteza Dehghani, H-Index: 21
view all 4 authors...
Social stereotypes negatively impact individuals' judgements about different groups and may have a critical role in how people understand language directed toward minority social groups. Here, we assess the role of social stereotypes in the automated detection of hateful language by examining the relation between individual annotator biases and erroneous classification of texts by hate speech classifiers. Specifically, in Study 1 we investigate the impact of novice annotators' stereotypes on the...