
Outlandish Stanford facial recognition study claims there are links between facial features and political orientation

A paper published today in the journal Scientific Reports by controversial Stanford-affiliated researcher Michal Kosinski claims to show that facial recognition algorithms can expose people’s political views from their social media profiles. Using a dataset of over 1 million profiles from Facebook and dating sites, drawn from users in Canada, the U.S., and the U.K., Kosinski and coauthors say they trained an algorithm to correctly classify political orientation in 72% of “liberal-conservative” face pairs.
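The 72% figure refers to accuracy over face pairs, each consisting of one liberal and one conservative, rather than over individuals. Since the authors’ code was not released, here is a minimal sketch, not their implementation, of how such pairwise accuracy is conventionally computed, assuming a hypothetical classifier that outputs a per-face “conservatism” score (with ties counted as half, this convention is equivalent to AUC):

```python
import numpy as np

def pairwise_accuracy(scores_liberal, scores_conservative):
    """Fraction of liberal-conservative pairs in which the classifier
    assigns the higher 'conservatism' score to the conservative face.
    Ties count as half; computed this way, the figure equals AUC."""
    correct, total = 0.0, 0
    for s_lib in scores_liberal:
        for s_con in scores_conservative:
            if s_con > s_lib:
                correct += 1.0
            elif s_con == s_lib:
                correct += 0.5
            total += 1
    return correct / total

# Hypothetical scores for illustration only; not the paper's data.
rng = np.random.default_rng(0)
liberal_scores = rng.normal(0.0, 1.0, 500)       # self-identified liberals
conservative_scores = rng.normal(0.8, 1.0, 500)  # self-identified conservatives
print(f"pairwise accuracy: {pairwise_accuracy(liberal_scores, conservative_scores):.2f}")
```

With the score distributions above, the pairwise accuracy comes out near 0.71, which illustrates what a headline number like 72% amounts to: a modest average separation between two heavily overlapping distributions.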

The work, taken as a whole, embraces the pseudoscientific concept of physiognomy, or the idea that a person’s character or personality can be assessed from their appearance. In 1911, Italian anthropologist Cesare Lombroso published a taxonomy declaring that “nearly all criminals” have “jug ears, thick hair, thin beards, pronounced sinuses, protruding chins, and broad cheekbones.” Thieves were notable for their “small wandering eyes,” he said, and rapists for their “swollen lips and eyelids,” while murderers had a nose that was “often hawklike and always large.”

Phrenology, a related field, involves the measurement of bumps on the skull to predict mental traits. Authors representing the Institute of Electrical and Electronics Engineers (IEEE) have said this sort of facial recognition is “necessarily doomed to fail” and that strong claims are a result of poor experimental design.

Princeton professor Alexander Todorov, a critic of Kosinski’s work, also argues that methods like those employed in the facial recognition paper are technically flawed. He says the patterns picked up by an algorithm comparing millions of photos might have little to do with facial characteristics. For example, self-posted photos on dating websites project a number of non-facial cues.

Moreover, current psychology research shows that by adulthood, personality is mostly influenced by the environment. “While it is potentially possible to predict personality from a photo, this is at best slightly better than chance in the case of humans,” Daniel Preotiuc-Pietro, a postdoctoral researcher at the University of Pennsylvania who’s worked on predicting personality from profile images, told Business Insider in a recent interview.

Defending pseudoscience

Kosinski and coauthors, preemptively responding to criticism, take pains to distance their research from phrenology and physiognomy. But they don’t dismiss them altogether. “Physiognomy was based on unscientific studies, superstition, anecdotal evidence, and racist pseudo-theories. The fact that its claims were unsupported, however, does not automatically mean that they are all wrong,” they wrote in notes published alongside the paper. “Some of physiognomists’ claims may have been correct, perhaps by a mere accident.”

According to the coauthors, a number of cues in facial images — though not all — reveal political affiliation, including head orientation, emotional expression, age, gender, and ethnicity. While facial hair and eyewear predict political affiliation with only “minimal accuracy,” liberals tend to face the camera more directly and are more likely to express surprise (and less likely to express disgust), they say.

“While we tend to think of facial features as relatively fixed, there are many factors that influence them in both the short and long term,” the researchers wrote. “Liberals, for example, tend to smile more intensely and genuinely, which leads to the emergence of different expressional wrinkle patterns. Conservatives tend to be healthier, consume less alcohol and tobacco, and have a different diet — which, over time, translates into differences in skin health and the distribution and amount of facial fat.”

The researchers posit that facial appearance predicts life outcomes like the length of a prison sentence, occupational success, educational attainment, chances of winning an election, and income, and that these outcomes in turn likely influence political orientation. They also conjecture that genes, hormones, and prenatal exposure to substances tie facial appearance to political orientation.

“Negative first impressions could over a person’s lifetime reduce their earning potential and status and thus increase their support for wealth redistribution and sensitivity to social injustice, shifting them toward the liberal end of the political spectrum,” the researchers wrote. “Prenatal and postnatal testosterone levels affect facial shape and correlate with political orientation. Furthermore, prenatal exposure to nicotine and alcohol affects facial morphology and cognitive development (which has been linked to political orientation).”

Kosinski and coauthors declined to make the project’s source code or dataset available, citing privacy implications. But withholding them also makes it impossible to audit the work for bias and experimental flaws. Science in general has a reproducibility problem — a 2016 poll of 1,500 scientists reported that 70% of them had tried but failed to reproduce at least one other scientist’s experiment — but it’s particularly acute in the AI field. One recent report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers.
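As a rough illustration of that kind of leakage check, a crude audit, assuming exact substring matching (published analyses use fuzzier matching), might look like this:

```python
def answer_overlap_rate(test_answers, training_corpus):
    """Fraction of benchmark answers that appear verbatim in the
    training text: a crude proxy for memorization."""
    corpus = training_corpus.lower()
    hits = sum(1 for answer in test_answers if answer.lower() in corpus)
    return hits / len(test_answers)

# Toy, made-up data for illustration.
training_text = "the capital of france is paris. water boils at 100 degrees."
benchmark_answers = ["Paris", "100 degrees", "Mount Everest"]
print(answer_overlap_rate(benchmark_answers, training_text))  # 2 of 3 answers leak
```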

Numerous studies — including the landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Deborah Raji — and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to various biases. One frequent confounder is technology and techniques that favor lighter skin, which include everything from sepia-tinged film to low-contrast digital cameras. These prejudices can be encoded in algorithms such that their performance on darker-skinned people falls short of that on those with lighter skin.
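The core of audits like Gender Shades is disaggregated evaluation: reporting accuracy separately for each demographic group instead of a single aggregate number that can hide subgroup failures. A minimal sketch, using hypothetical audit records, might look like this:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns per-group accuracy, so subgroup failures can't hide
    behind a strong aggregate number."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical records: (skin-type group, predicted label, true label).
records = [
    ("lighter", "F", "F"), ("lighter", "M", "M"), ("lighter", "F", "F"),
    ("darker", "M", "F"), ("darker", "F", "F"), ("darker", "M", "M"),
]
print(accuracy_by_group(records))  # -> lighter 1.00, darker ~0.67
```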

Bias is pervasive in machine learning algorithms beyond those powering facial recognition systems. A ProPublica investigation found that software used to predict criminality tends to exhibit prejudice against black people. Another study found that women are shown fewer online ads for high-paying jobs. An AI beauty contest was biased in favor of white people. And an algorithm Twitter used to decide how photos are cropped in people’s timelines automatically elected to display the faces of white people over people with darker skin pigmentation.

Ethically questionable

Kosinski, whose work analyzing the connection between personality traits and Facebook activity inspired the creation of political consultancy Cambridge Analytica, is no stranger to controversy. In a paper published in 2017, he and Stanford computer scientist Yilun Wang reported that an off-the-shelf AI system was able to distinguish between photos of gay and straight people with a high degree of accuracy. Advocacy groups like the Gay & Lesbian Alliance Against Defamation (GLAAD) and the Human Rights Campaign said the study “threatens the safety and privacy of LGBTQ and non-LGBTQ people alike,” noting that it drew on the disputed prenatal hormone theory of sexual orientation, which predicts that early hormone exposure creates links between facial appearance and sexual orientation.

Todorov believes Kosinski’s research is “incredibly ethically questionable” as it could lend credibility to governments and companies that might want to use such technologies. He and academics like cognitive science researcher Abeba Birhane argue that those who create AI models must take into consideration social, political, and historical contexts. In her paper “Algorithmic Injustices: Towards a Relational Ethics,” for which she won the Best Paper Award at NeurIPS 2019, Birhane wrote that “concerns surrounding algorithmic decision making and algorithmic injustice require fundamental rethinking above and beyond technical solutions.”

In an interview with Vox in 2018, Kosinski asserted that his overarching goal was to try to understand people, social processes, and behavior through the lens of “digital footprints.” Industries and governments are already using facial recognition algorithms similar to those he’s developed, he said, underlining the need to warn stakeholders about the extinction of privacy.

“Widespread use of facial recognition technology poses dramatic risks to privacy and civil liberties,” Kosinski and coauthors wrote of this latest study. “While many other digital footprints are revealing of political orientation and other intimate traits, facial recognition can be used without subjects’ consent or knowledge. Facial images can be easily (and covertly) taken by law enforcement or obtained from digital or traditional archives, including social networks, dating platforms, photo-sharing websites, and government databases. They are often easily accessible; Facebook and LinkedIn profile pictures, for instance, can be accessed by anyone without a person’s consent or knowledge. Thus, the privacy threats posed by facial recognition technology are, in many ways, unprecedented.”

Indeed, companies like Faception claim to be able to spot terrorists, pedophiles, and more using facial recognition. And the Chinese government has deployed facial recognition to identify hundreds of suspected criminals from photographs, ostensibly with over 90% accuracy.

Os Keyes, a Ph.D. candidate and AI researcher at the University of Washington, agrees that it’s important to draw attention to the misuses of and flaws in facial recognition. But Keyes argues that studies such as Kosinski’s advance what’s fundamentally junk science. “They draw on a lot of (frankly, creepy) evolutionary biology and sexology studies that treat queerness [for example] as originating in ‘too much’ or ‘not enough’ testosterone in the womb,” they told VentureBeat in an email. “Depending on them and endorsing them in a study … is absolutely bewildering.”


Source: https://venturebeat.com/2021/01/11/outlandish-stanford-facial-recognition-study-claims-there-are-links-between-facial-features-and-political-orientation/
