Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err

Published on Jul 6, 2014 in Social Science Research Network
DOI: 10.2139/SSRN.2466040
Berkeley J. Dietvorst (UPenn: University of Pennsylvania), Estimated H-index: 7
Joseph P. Simmons (UPenn: University of Pennsylvania), Estimated H-index: 31
Cade Massey (UPenn: University of Pennsylvania), Estimated H-index: 15
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
Figures & Tables
Cited By 11
#1 Zhan Zhang (Pace University), H-Index: 7
#2 Yegin Genc (Pace University), H-Index: 1
Last: Xiangmin Fan (CAS: Chinese Academy of Sciences), H-Index: 6
view all 5 authors...
Ongoing research efforts have been examining how to use artificial intelligence technology to help healthcare consumers make sense of their clinical data, such as diagnostic radiology reports. How to promote the acceptance of such novel technology is an active research topic. Recent studies highlight the importance of providing local explanations of AI predictions and model performance to help users determine whether to trust the AI's predictions. Despite some efforts, limited empirical researc...
#1 Romain Cadario (EUR: Erasmus University Rotterdam), H-Index: 7
#2 Chiara Longoni (BU: Boston University), H-Index: 3
Last: Carey K. Morewedge (BU: Boston University), H-Index: 27
Medical artificial intelligence is cost-effective, scalable, and often outperforms human providers. One important barrier to its adoption is the perception that algorithms are a “black box”—people do not subjectively understand how algorithms make medical decisions, and we find this impairs their utilization. We argue a second barrier is that people also overestimate their objective understanding of medical decisions made by human healthcare providers. In five pre-registered experiments with con...
#1 Chiara Longoni (BU: Boston University), H-Index: 3
#2 Andrey Fradkin (BU: Boston University), H-Index: 9
Last: Gordon Pennycook (University of Regina), H-Index: 40
view all 4 authors...
Artificial Intelligence (AI) algorithms can now produce text virtually indistinguishable from text written by humans across a variety of domains. A key question, then, is whether people believe content from AI as much as content from humans. Trust in the (human-generated) news media has been decreasing over time, and AI is viewed as lacking human desires and emotions, suggesting that AI news may be viewed as more accurate. Contrary to this, two preregistered experiments conducted on repr...
#1 Rahild Neuburger, H-Index: 1
#2 Marina Fiedler, H-Index: 14
Autonomous information systems (AIS), which learn, draw conclusions, and make decisions, and thereby independently develop programs for action, represent an additional element in the work context. Depending on the application, they shift the division of labor between humans and technology further. Between the two extremes (tasks handled solely by the AIS or solely by humans) a broad spectrum of tasks opens up which, in a novel form of work...
#1 Marah Blaurock (University of Hohenheim), H-Index: 2
This paper evaluates the explanatory relevance of service encounter 1.0 theories in the service encounter 2.0 environment. To this end, the focal changes from service encounter 1.0 to 2.0 are first outlined and the most relevant service encounter 1.0 theories identified. Second, an evaluation scheme consisting of contextual and individual bounding factors of theoretical assumptions is developed. Third, the evaluation scheme is deployed by way of example to evaluate role theory.
Inspired by the recent development of autonomous artificial intelligence (AI) systems in military and medical applications, I envision the use of one such system, an AI-empowered exoskeleton smart-suit called the Praetor Suit, to examine the important ethical issues stemming from its use. The Praetor Suit would have the ability to monitor the service member's physiological and psychological state, report that state to medical experts surveilling its operation through teleoperation and autonomous...
#1 Andrea Martinesco (UVSQ: Versailles Saint-Quentin-en-Yvelines University), H-Index: 3
#2 Mariana Netto (IFSTTAR), H-Index: 17
Last: Victor H. Etgens (École normale supérieure de Cachan), H-Index: 1
view all 4 authors...
Abstract: Following the increase in automation levels in personal and public vehicles over the last decades, this note discusses the interdisciplinary investigation required to address criminal liability in the case of an accident involving autonomous vehicles. Lawyers need definitions of automation levels from technicians. Each automation level places different demands on the driver, generating the need for related psychological and ergonomics studies. Finally, in the case of an accident, an Event Data Reco...
2 Citations · Source
This work compares user collaboration with conversational personal assistants vs. teams of expert chatbots. Two studies were performed to investigate whether each approach affects task accomplishment and collaboration costs. Participants interacted with two equivalent financial-advice chatbot systems, one composed of a single conversational adviser and the other based on a team of four expert chatbots. Results indicated that users had different forms of experiences but were equally able to ...
3 Citations
#1 Campbell R. Harvey (Duke University), H-Index: 111
#2 Sandy Rattray, H-Index: 5
Last: Otto Van Hemert, H-Index: 9
view all 4 authors...
In this article, the authors analyze and contrast the performance of discretionary and systematic hedge funds. Systematic funds use rules-based strategies, with little or no daily intervention by humans. In the authors’ experience, some large allocators shy away from systematic hedge funds altogether. One possible explanation is what the psychology literature calls “algorithm aversion.” However, the authors find no empirical basis for such an aversion. For the period 1996–2014, systematic and di...
3 Citations · Source
#1 Berkeley J. Dietvorst (U of C: University of Chicago), H-Index: 7
People often choose to use human forecasts instead of algorithmic forecasts that perform better on average; however, it is unclear what decision process leads people to rely on (inferior) human predictions instead of (superior) algorithmic predictions. In this paper, I propose that people choose between forecasting methods by (1) using their status quo forecasting method by default and (2) deciding whether or not to use the alternative forecasting method by comparing its performance to a counter...
2 Citations · Source