Resistance to Medical Artificial Intelligence

Published on Dec 1, 2019 in Journal of Consumer Research
DOI: 10.1093/JCR/UCZ013
Chiara Longoni (BU: Boston University), Estimated H-index: 4
Andrea Bonezzi (NYU: New York University), Estimated H-index: 8
Carey K. Morewedge (BU: Boston University), Estimated H-index: 27
Abstract
Artificial intelligence (AI) is revolutionizing healthcare, but little is known about consumer receptivity to AI in medicine. Consumers are reluctant to utilize healthcare provided by AI in real and hypothetical choices, separate and joint evaluations. Consumers are less likely to utilize healthcare (study 1), exhibit lower reservation prices for healthcare (study 2), are less sensitive to differences in provider performance (studies 3A–3C), and derive negative utility if a provider is automated rather than human (study 4). Uniqueness neglect, a concern that AI providers are less able than human providers to account for consumers’ unique characteristics and circumstances, drives consumer resistance to medical AI. Indeed, resistance to medical AI is stronger for consumers who perceive themselves to be more unique (study 5). Uniqueness neglect mediates resistance to medical AI (study 6), and is eliminated when AI provides care (a) that is framed as personalized (study 7), (b) to consumers other than the self (study 8), or (c) that only supports, rather than replaces, a decision made by a human healthcare provider (study 9). These findings make contributions to the psychology of automation and medical decision making, and suggest interventions to increase consumer acceptance of AI in medicine.
References (67)
#1 Kun-Hsing Yu (Harvard University), H-index: 17
#2 Andrew L. Beam (Harvard University), H-index: 19
Last. Isaac S. Kohane (Harvard University), H-index: 109
Artificial intelligence (AI) is gradually changing medical practice. With recent progress in digitized data acquisition, machine learning and computing infrastructure, AI applications are expanding into areas that were previously thought to be only the province of human experts. In this Review Article, we outline recent breakthroughs in AI technologies and their biomedical applications, identify the challenges for further progress in medical AI systems, and summarize the economic, legal and soci...
300 Citations
#2 Philip T. Lavin, H-index: 1
Last. James C. Folk (UI: University of Iowa), H-index: 24
Artificial Intelligence (AI) has long promised to increase healthcare affordability, quality and accessibility but FDA, until recently, had never authorized an autonomous AI diagnostic system. This pivotal trial of an AI system to detect diabetic retinopathy (DR) in people with diabetes enrolled 900 subjects, with no history of DR at primary care clinics, by comparing to Wisconsin Fundus Photograph Reading Center (FPRC) widefield stereoscopic photography and macular Optical Coherence Tomography ...
161 Citations
#1 Holger A. Haenssle (Heidelberg University), H-index: 13
#2 Christian Fink (Heidelberg University), H-index: 65
Last. Iris Zalaudek, H-index: 74
Background Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN's diagnostic performance to larger groups of dermatologists are lacking. Methods Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome mea...
408 Citations
#1 Berkeley J. Dietvorst (U of C: University of Chicago), H-index: 6
#2 Joseph P. Simmons (UPenn: University of Pennsylvania), H-index: 32
Last. Cade Massey (UPenn: University of Pennsylvania), H-index: 15
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm...
102 Citations
#1 J. Jeffrey Inman, H-index: 47
Last. Linda L. Price, H-index: 44
11 Citations
By examining the state of operations management (OM) research from 1980 to 2015 and by considering three new industry trends, we propose new OM research directions in socially and environmentally responsible value chains that fundamentally expand existing OM research in three dimensions: (a) contexts (emerging and developing economies); (b) objectives (economic, environmental, and social responsibility); and (c) stakeholders (producers, consumers, shareholders, for-profit/nonprofit/social enterp...
114 Citations
#1 Jon Kleinberg (Cornell University), H-index: 116
#2 Himabindu Lakkaraju (Stanford University), H-index: 18
Last. Sendhil Mullainathan, H-index: 89
Presented on October 24, 2016 at 10:00 a.m. in the Klaus Advanced Computing Building, room 1116
203 Citations
#1 Andre Esteva (Stanford University), H-index: 11
#2 Brett Kuprel (Stanford University), H-index: 4
Last. Sebastian Thrun (Stanford University), H-index: 152
An artificial intelligence trained to classify images of skin lesions as benign lesions or malignant skin cancers achieves the accuracy of board-certified dermatologists.
3,982 Citations
#1 Sancy A. Leachman (OHSU: Oregon Health & Science University), H-index: 56
Last. Glenn Merlino (OHSU: Oregon Health & Science University), H-index: 1
A computer, trained to classify skin cancers using image analysis alone, can now identify certain cancers as successfully as can skin-cancer doctors. What are the implications for the future of medical diagnosis? See Letter p.115
15 Citations
Cited By (70)
#1 Yochanan E. Bigman (Yale University), H-index: 10
Last. Kurt Gray, H-index: 31
Artificial intelligence (AI) algorithms hold promise to reduce inequalities across race and socioeconomic status. One of the most important domains of racial and economic inequalities is medical outcomes; Black and low-income people are more likely to die from many diseases. Algorithms can help reduce these inequalities because they are less likely than human doctors to make biased decisions. Unfortunately, people are generally averse to algorithms making important moral decisions—inclu...
#1 Taenyun Kim (SKKU: Sungkyunkwan University), H-index: 1
#2 Hayeon Song (SKKU: Sungkyunkwan University), H-index: 22
Trust is essential in individuals’ perception, behavior, and evaluation of intelligent agents. Because it is the primary motive for people to accept new technology, it is crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust and how the effectiveness of the apology differs when the agent is human-like compared to machine-like, based on two seemingly competing frameworks of the Computers-Are-Social-Actors paradigm and ...
#1 Markus Langer (Saarland University), H-index: 8
#2 Daniel Oster (Saarland University), H-index: 2
Last. Andreas Sesing (Saarland University)
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satis...
1 Citation
#1 Ryosuke Yokoi (Dodai: Doshisha University), H-index: 2
#2 Yoko Eguchi (Keio: Keio University), H-index: 8
Last. Kazuya Nakayachi (Dodai: Doshisha University), H-index: 10
Artificial intelligence (AI) can provide many benefits in healthcare, including rapid and effective treatment options. However, previous research on human–computer interactions has demonstrated tha...
#1 Erik Hermann (Leibniz Institute for Neurobiology)
Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications (will) provide in marketing is ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical p...
#1 Chunqu Xiao (NU: Nanjing University), H-index: 1
#2 Hong Zhu (NU: Nanjing University)
Last. Liang Wu
May 8, 2021, in CHI (Human Factors in Computing Systems)
#1 Jin Chen (PSU: Pennsylvania State University)
#2 Cheng Chen (PSU: Pennsylvania State University)
Last. S. Shyam Sundar (PSU: Pennsylvania State University), H-index: 65
You may feel special and believe that you are getting personalized care when your doctor remembers your name and your unique medical history. But, what if it is an AI doctor and not human? Since AI systems are driven by personalization algorithms, it is possible to design AI doctors that can individuate patients with great precision. Is this appreciated or perceived as eerie and intrusive, thereby negatively affecting doctor-patient interaction? We decided to find out by designing a healthcare c...
#1 Jungkeun Kim (AUT: Auckland University of Technology), H-index: 15
#2 Marilyn Giroux (AUT: Auckland University of Technology), H-index: 5
Last. Jacob C. Lee (Dongguk University), H-index: 6
May 6, 2021, in CHI (Human Factors in Computing Systems)
#1 Min Kyung Lee (University of Texas at Austin), H-index: 26
#2 Katherine Rich (University of Texas at Austin)
Emerging research suggests that people trust algorithmic decisions less than human decisions. However, different populations, particularly in marginalized communities, may have different levels of trust in human decision-makers. Do people who mistrust human decision-makers perceive human decisions to be more trustworthy and fairer than algorithmic decisions? Or do they trust algorithmic decisions as much as or more than human decisions? We examine the role of mistrust in human systems in people’...
#1 Xinge Li (KU: Korea University)
#2 Yongjun Sung (KU: Korea University), H-index: 27
In the current era, interacting with Artificial Intelligence (AI) has become an everyday activity. Understanding the interaction between humans and AI is of potential value because, in future, such interactions are expected to become more pervasive. Two studies—one survey and one experiment—were conducted to demonstrate positive effects of anthropomorphism on interactions with smart-speaker-based AI assistants and to examine the mediating role of psychological distance in this relations...