Alt-robot: Human prejudice spreading to AI, new study finds
Robots inherit any racial or gender bias displayed in the language used by the humans with whom they interact, claims a new study. Billions of words used in machine learning tools have been found to contain some questionable associations.
The better AI becomes at interpreting human language, the more likely it is to adopt human bias, according to new research by scientists at Princeton University published in the journal Science.
Researchers fed words into GloVe (Global Vectors for Word Representation), an open-source learning algorithm that processes a collection of 840 billion words and pairs them according to how they cluster together.
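In practice, tools like GloVe represent each word as a list of numbers, and words that appear in similar contexts end up with similar numbers. The following is a minimal sketch of how such pretrained vectors might be loaded and compared; the file name is an assumption standing in for a local copy of the published GloVe vectors, and the comparison uses plain cosine similarity rather than anything specific to the study.

```python
# Minimal sketch, assuming a local plain-text GloVe file in the usual
# "word v1 v2 ... vN" one-word-per-line format (file name is an assumption).
import numpy as np

def load_glove(path):
    """Parse 'word v1 v2 ... vN' lines into a {word: vector} dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    """Cosine similarity: values near 1 mean the words appear in similar contexts."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = load_glove("glove.840B.300d.txt")  # assumed local copy of the vectors
print(cosine(vectors["flower"], vectors["pleasant"]))
print(cosine(vectors["insect"], vectors["pleasant"]))
```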
A word like “flower” was found to be linked to words associated with pleasantness, while “insect” was found to be associated with unpleasantness.
Bias became visible when the words “female” and “woman” returned associations with arts and humanities occupations as well as the home. “Male” and “man” were associated with maths and engineering roles.
European-American names were also found to be associated with pleasant terms, while African-American names returned unpleasant terms.
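One way to picture these findings is as an association score: how much closer a target word sits to one set of attribute words than to another. The sketch below is an illustration of that idea, not the study's exact test; the word lists are placeholders of my own, and it reuses the cosine helper and the GloVe dictionary loaded in the previous sketch.

```python
# Illustrative association score: positive values mean the target word is,
# on average, closer to attribute set A than to attribute set B.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b, vectors):
    """Difference in mean cosine similarity between 'word' and the two attribute sets."""
    sim_a = np.mean([cosine(vectors[word], vectors[a]) for a in attr_a])
    sim_b = np.mean([cosine(vectors[word], vectors[b]) for b in attr_b])
    return sim_a - sim_b

pleasant = ["love", "peace", "wonderful"]       # illustrative attribute set, not the paper's
unpleasant = ["hatred", "failure", "terrible"]  # illustrative attribute set, not the paper's

# 'vectors' is the {word: vector} dictionary built in the previous sketch.
for target in ["flower", "insect"]:
    print(target, association(target, pleasant, unpleasant, vectors))
```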
READ MORE: Robot kill switches & legal status: MEPs endorse AI proposal
“A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it,” co-author of the study Joanna Bryson told The Guardian.
Bryson warned that addressing the issue could be problematic as AI cannot consciously counteract learned bias.
“A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.
For its word vectors, GloVe draws on the Common Crawl corpus, a repository of data collected online over eight years that offers insights into politics, art and popular culture.
“The world is biased, the historical data is biased, hence it is not surprising that we receive biased results,” said Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford.
Wachter warned that eliminating bias by reducing AI’s powers of interpretation could be problematic, but that it is a “responsibility that we as society should not shy away from.”