This article is from the source 'rtcom' and was first published or seen on . It last changed over 40 days ago and won't be checked again for changes.

You can find the current article at its original source at https://www.rt.com/news/512212-face-features-political-views/

The article has changed 3 times. There is an RSS feed of changes available.

Stanford scientist’s study says AI can tell by examining your FACE what your politics are
Facial recognition algorithms can be trained to recognize people’s political views, Stanford-affiliated researcher Michal Kosinski claims, stating that his most recent study achieved 72 percent accuracy in distinguishing liberals from conservatives.
Properly trained facial recognition algorithms can correctly guess a person’s political orientation nearly three-quarters of the time, Kosinski said in a paper published on Monday in Scientific Reports. Using over a million profiles from Facebook and dating sites across the US, UK, and Canada, the algorithm he tested was able to accurately pick out conservatives from liberals in 72 percent of face pairs.
The figure may not seem high, but keep in mind that a random pick would give 50 percent accuracy, while a human trying to determine political affiliation from a person’s appearance achieves only about 55 percent accuracy. And even when obvious features that correlate with political views, such as age and race, were adjusted for, the facial recognition software remained around 70 percent accurate, according to the study.
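To make those percentages concrete: in a pairwise test of this kind, the model scores two faces, one belonging to a liberal and one to a conservative, and is counted correct when the conservative face gets the higher ‘conservative’ score, so a coin flip lands at 50 percent. Below is a minimal Python sketch of that bookkeeping, with randomly generated stand-in scores rather than anything from the actual study.

import numpy as np

def pairwise_accuracy(cons_scores, lib_scores):
    # Fraction of (conservative, liberal) face pairs in which the
    # conservative face receives the higher "conservative" score.
    # A coin flip scores 0.5; the study reports about 0.72.
    cons_scores = np.asarray(cons_scores, dtype=float)
    lib_scores = np.asarray(lib_scores, dtype=float)
    correct = (cons_scores > lib_scores).sum()
    ties = (cons_scores == lib_scores).sum()  # count ties as half-correct
    return (correct + 0.5 * ties) / len(cons_scores)

# Hypothetical scores for 10,000 pairs -- illustration only, not study data.
rng = np.random.default_rng(seed=0)
cons = rng.normal(loc=0.6, scale=0.25, size=10_000)
libs = rng.normal(loc=0.4, scale=0.25, size=10_000)
print(f"pairwise accuracy: {pairwise_accuracy(cons, libs):.2f}")  # ~0.71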
As is typical with AI, there is no telling exactly which features the algorithm picked to make its predictions. The authors made an educated guess that head orientation and emotional expression were among the more telling cues: liberals, for example, were more likely to look directly at the camera and more likely to look surprised than disgusted. Beards and spectacles, on the other hand, barely affected the accuracy of predictions.
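For readers curious what the underlying pipeline might look like, here is a generic, hypothetical sketch in Python using scikit-learn: face descriptors go into a plain logistic-regression classifier, and cross-validation estimates the accuracy. None of this is the study’s actual code or data; the inputs below are random noise, which is why the printed accuracy sits at the 50 percent chance baseline mentioned above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder inputs: in a study like this, each row of X would be a face
# descriptor from a pretrained face-recognition network, and columns for
# age, gender, and ethnicity could be added (or regressed out) to check
# how much of the signal those demographics account for. Here everything
# is random noise, so the classifier should land near the 0.5 chance level.
rng = np.random.default_rng(seed=42)
X = rng.normal(size=(2_000, 128))        # hypothetical 128-dim descriptors
y = rng.integers(0, 2, size=2_000)       # 0 = liberal, 1 = conservative

clf = LogisticRegression(max_iter=1_000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.50 on noise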
The conclusions of the study go much further, diving deep into the realms of facial-recognition dystopia. Inhumanly accurate algorithms, paired with publicly available images of millions of people, may be used to screen people without their consent, or even knowledge, based on criteria that humans consider part of their private lives. Kosinski’s earlier study used this approach to predict sexual orientation, and he believes the same technology may bring with it a truly nightmarish future.
One does not need to go far for an example. Faception, a ‘Minority Report’-esque Israeli program, purports to predict not only an individual’s place on the political spectrum, but also that person’s likelihood of being a terrorist, paedophile, or other major criminal. Kosinski’s work has won him a degree of infamy in the past: Faception’s developer counts him among the people it approached for consultations on the product, though he says he merely told them he had qualms about the ethics of it.
Kosinski’s work remains controversial. The 2017 ‘algorithmic gaydar’ study, for example, was attacked by LGBT advocacy groups across the US unhappy with its ramifications. The science behind it was criticized by other researchers in AI and psychology, who said he had conflated facial features with cultural cues, but they did not dispute his point about the dangers of mass surveillance.
Others see such studies as nothing but quackery, given their striking resemblance to the notorious pseudoscience of physiognomy. Its adherents claimed they could assess an individual’s character, personality, even criminal propensities from the shape of their face, but in practice their predictions revealed more about their own biases. Kosinski himself denounced the discipline as “based on unscientific studies, superstition, anecdotal evidence, and racist pseudo-theories,” but he insists the AI-powered approach works.
Kosinski’s name is sometimes mentioned in connection with Cambridge Analytica, the now-defunct company that mass-harvested data from Facebook and claimed it could use that data to conduct highly targeted political campaigns. The connection never actually existed and seems to stem from “inaccurate press reports” from the time the scandal over the firm’s dubious business model first erupted in 2018.
In October, Kosinski and a colleague published a paper debunking a previous claim that married couples grow to resemble one another facially as they spend their lives together.

But he is far from the only scientist trying to bring physiognomy up to date for the 21st century. The University of Harrisburg claimed in May that it had developed an AI algorithm capable of determining with 80 percent accuracy whether someone was a criminal, just by looking at their face. A similar study was conducted in China in 2016.

Editorial note: This story has been changed by RT since its first publication to better reflect the stated goals of Michal Kosinski’s research into facial recognition applications, and to correctly state that he was not connected to Cambridge Analytica.