Understanding the Origins of Bias in Word Embeddings

Pages: 803 - 811
Published: May 24, 2019
Abstract
The power of machine learning systems not only promises great technical progress, but risks societal harm. As a recent example, researchers have shown that popular word embedding algorithms exhibit stereotypical biases, such as gender bias. The widespread use of these algorithms in machine learning systems, from automated translation services to curriculum vitae scanners, can amplify stereotypes in important contexts. Although methods have been...
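The kind of stereotypical association the abstract describes is commonly quantified by projecting word vectors onto a gender direction (e.g., the difference between "he" and "she" vectors) and comparing cosine similarities. The sketch below illustrates that idea with tiny hand-made vectors; the embeddings, words, and scoring function are illustrative assumptions, not the paper's method or data.

```python
import numpy as np

# Toy 3-d "embeddings" -- purely illustrative values, not from any trained model.
emb = {
    "he":     np.array([ 1.0, 0.2, 0.1]),
    "she":    np.array([-1.0, 0.2, 0.1]),
    "nurse":  np.array([-0.6, 0.5, 0.3]),
    "doctor": np.array([ 0.6, 0.5, 0.3]),
}

def cos(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A simple bias score: cosine of a word with the he-minus-she direction.
# Positive values lean toward "he", negative toward "she".
gender_direction = emb["he"] - emb["she"]

def bias(word):
    return cos(emb[word], gender_direction)

print(bias("nurse"))   # negative: closer to "she"
print(bias("doctor"))  # positive: closer to "he"
```

In a real study the same projection would be applied to vectors from a trained model (e.g., word2vec or GloVe), where occupation words with skewed training-corpus co-occurrences tend to show exactly this kind of asymmetry.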