Letter decrying predictive criminality AI research paper passes 1,000 signatures

The Coalition for Critical Technology (CCT) penned an open letter opposing the publication of a research paper titled “A Deep Neural Network Model to Predict Criminality Using Image Processing.” At the time of publication, the letter has more than 1,000 signatures from researchers, practitioners, academics, and others. According to a press release from Harrisburg University, the paper is slated for publication in a book series from Springer Publishing, and the letter urges readers to demand that Springer pull the paper and condemn the use of criminal justice statistics to predict criminality.

The use of algorithms in predictive policing is a fraught subject. As the CCT letter elaborates, criminal justice data is notoriously flawed. “Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data,” the letter reads. One of the primary researchers on the Harrisburg University paper, Jonathan Korn, is a former NYPD officer.
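
To see in the abstract why distorted data matters, consider a toy simulation (a minimal sketch with entirely hypothetical numbers, not drawn from the paper or the letter): two groups offend at the same underlying rate, but offenses by one group are recorded three times as often. A model trained on those records learns the enforcement disparity, not the behavior.

```python
# Hypothetical illustration of the letter's point about distorted data:
# if enforcement records offenses for one group more often, a model trained
# on those records learns the recording disparity, not actual behavior.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical underlying offense rates (5 percent).
group = rng.integers(0, 2, n)        # group label: 0 or 1
offended = rng.random(n) < 0.05     # true behavior, same distribution for both

# Biased observation: offenses by group 1 are recorded 3x as often.
record_prob = np.where(group == 1, 0.9, 0.3)
arrested = offended & (rng.random(n) < record_prob)  # the label a model sees

# A model estimating P(arrest | group) -- the best any classifier can do when
# group membership is its only informative feature -- reproduces the bias.
for g in (0, 1):
    print(f"group {g}: true offense rate = {offended[group == g].mean():.3f}, "
          f"learned 'criminality' score = {arrested[group == g].mean():.3f}")
```

In this sketch the true rates match (about 0.05 for each group), but the learned scores differ roughly threefold, mirroring the recording disparity rather than any difference in behavior.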

According to Harrisburg University’s press release, the paper promises, “With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.”

The notion of determining criminality from a person’s face is fundamentally racist and has deep roots in historical pseudoscience; facial recognition technology is merely a modern means of attempting to answer the same flawed question. The CCT’s letter begins by condemning the very premise of the researchers’ question: “Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.”

This is not the first time the paper in question has appeared. In early May, Harrisburg University pushed out the same press release, only to take it down after harsh criticism. Motherboard kept an archived version of the release and reported that one of the researchers, Nathaniel J.S. Ashby, said in an email, “The post/tweet was taken down until we have time to draft a release with details about the research which will address the concerns raised.”

But it’s unclear what has changed since May. The “new” press release is identical to the first one, and Ashby declined to respond to VentureBeat’s questions about how the researchers have changed the paper. VentureBeat asked Springer whether it would accede to the CCT’s demands, but the publisher had not responded at press time. (Update, June 23, 12:09pm PT: Although Springer has not responded to VentureBeat, the company confirmed on Twitter that it will not publish the article. The company did not expound on its plans for any of the group’s other demands.)

An overarching issue is the inability of technology to determine things that are fundamentally socially constructed and defined. “Machine learning does not have a built-in mechanism for investigating or discussing the social and political merits of its outputs,” the letter asserts. This reflects Dr. Ruha Benjamin’s statement from a talk earlier this year in which she explained that “computational depth without historic or sociological depth is superficial learning.” Researcher Abeba Birhane further unpacks this notion in her award-winning paper “Algorithmic Injustices: Towards a Relational Ethics.”

The CCT’s letter is a rich resource for prior research on the topics of criminality, predictive policing, facial recognition, and related issues.

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/vGTbl1pJ2fg/
