Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Volume: 1, Issue: 5, Pages: 206–215
Published: May 13, 2019
Abstract
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can...