Google gets woke on gender in Vision API, Amazon happy to sell its facial recognition code to foreigners, and more

Elon Musk roasts OpenAI, says it should be more open

Roundup Hello readers. If you’re struggling to keep up with all the AI-related news spewed out and have already read what we’ve covered this week, then here’s more.

Me, sexist? No! What’s gender anyway?: Google’s Vision API, a service that offers pre-trained computer vision models for image recognition, will no longer identify gender in photos.

If an image of a person is fed into the API, Google will now label them as a ‘person’ rather than ‘male’ or ‘man,’ or ‘female’ or ‘woman’. The move to scrap “gendered labels” was to reduce the chances of unfair biases, apparently.

“Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias,” a spokesperson told Business Insider.

The classification of male and female doesn’t apply to everyone. Training machine learning models on these two labels means they can fail when given pictures of transgender or non-binary people. To avoid such mistakes, Google’s Vision API will now just label someone as a person.

The change only affects Google’s Vision API, and doesn’t apply to its AutoML Vision service. AutoML Vision is more flexible, and users can train models on their own custom labels, so they can include gendered labels if they want.
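For illustration, here is a minimal sketch of a label-detection call against the Vision API using the official google-cloud-vision Python client; the image path is a placeholder, and credentials are assumed to be set up via the GOOGLE_APPLICATION_CREDENTIALS environment variable. Under the new policy, the returned labels would include something like “Person” but no longer “Man” or “Woman”.

# Sketch: label detection with the google-cloud-vision client.
# Assumes `pip install google-cloud-vision` and credentials in the
# GOOGLE_APPLICATION_CREDENTIALS environment variable.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "photo.jpg" is a placeholder path to an image containing a person.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Gendered labels such as "Man" or "Woman" no longer appear here;
# a generic label such as "Person" is returned instead.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")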

Deepfakes in India’s politics: Fake videos of politician Manoj Tiwari, who was campaigning in the recent Delhi Legislative Assembly elections in India, began surfacing this week.

In one of the clips, Tiwari criticises, in English, his opponent Arvind Kejriwal of the Aam Aadmi Party for not sticking to his promises to open more schools and install more CCTV cameras.

[YouTube video]

In another clip, he’s positioned against the same background, wearing the same clothes, and making another speech. But this time, he’s speaking in Haryanvi, a dialect of Hindi.

[YouTube video]

If that’s not suspicious enough, here’s a third video that’s very similar to the first two – except now Tiwari is speaking in a completely different language.

[YouTube video]

When viewed together, it certainly looks like the clips may have been altered using machine learning algorithms. This kind of fake content, known as deepfakes, lets people paste one person’s face onto another person’s body. It’s possible that Tiwari’s appearance from the shoulders up was mapped onto other people’s bodies, and that those people were the ones actually delivering his message in English and Haryanvi.

These deepfake videos were then spread across 5,800 WhatsApp groups, reaching up to 15 million people, as first reported by Vice.

The majority of deepfakes – about 96 per cent – are pornographic. Internet perverts have a penchant for swapping the faces of their favourite female celebrities onto the bodies of adult actresses.

But the creation of deepfakes for political reasons seems to be rising. Suspected fake videos of politicians from other countries, like Malaysia and Gabon, have cropped up too.
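For the technically curious, face-swap deepfakes of this kind are typically built from an autoencoder with a shared encoder and one decoder per identity: train on aligned face crops of both people, then push person B’s frames through person A’s decoder to get A’s face with B’s pose and expression. The PyTorch sketch below illustrates the idea only; the actual tooling behind the Tiwari clips is unknown.

# Conceptual sketch of the shared-encoder / per-identity-decoder design
# commonly used for face-swap deepfakes. Illustrative, not production code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses 64x64 aligned face crops into a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder per identity: rebuilds that person's face from the code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # identity A and identity B

# Training sketch: each identity is reconstructed through its own decoder.
# Real pipelines use thousands of aligned crops; random tensors stand in here.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
loss = (nn.functional.l1_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.l1_loss(decoder_b(encoder(faces_b)), faces_b))

# The swap: encode frames of person B, decode with A's decoder, yielding
# A's face with B's pose and expression, ready to paste back into the video.
with torch.no_grad():
    swapped = decoder_a(encoder(faces_b))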

Hell yeah, we sell our facial recognition to police departments. And we’d probably sell it to foreign governments too: The head of Amazon’s AWS cloud service, Andy Jassy, said he was happy to offer its facial recognition technology to law enforcement, and would sell it to foreign governments too.

Facial recognition is one of the most controversial applications of modern AI. Numerous studies have shown that many models identify women and people with darker skin less accurately than they do white men. The technology, therefore, is likely to carry racial and gender biases, possibly leading to harms like false arrests from incorrect matches.
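For a sense of how such a service is queried, here is a hypothetical sketch of a face-comparison call against Rekognition via boto3; the file names, region, and the 90 per cent threshold are illustrative, and a lower threshold returns more candidate matches, along with more of the false positives described above.

# Hypothetical sketch of an Amazon Rekognition face comparison using boto3.
# Assumes AWS credentials are configured; file names are placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("watchlist_photo.jpg", "rb") as src, open("cctv_frame.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        # Matches scoring below this similarity are dropped; lowering it
        # returns more candidates and more potential false positives.
        SimilarityThreshold=90,
    )

for match in response["FaceMatches"]:
    print(f"Candidate match, similarity {match['Similarity']:.1f}%")
print(f"Faces with no match: {len(response['UnmatchedFaces'])}")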

Despite these issues, however, Amazon continues to sell the technology to law enforcement agencies across the US. In a documentary, Amazon Empire: The Rise and Reign of Jeff Bezos, produced by Frontline, the investigative journalism arm of America’s Public Broadcasting Service, Jassy states that he would sell Amazon’s Rekognition technology to foreign governments.

“There’s a number of governments that are against the law for U.S. companies to do business with,” he said. “We would not sell it to those people or those governments.”

When pressed with the fact that some countries the US can trade freely with are known for oppressive regimes and human rights abuses, Jassy said: “Yeah, again, if we have documented cases where customers of any sort are using the technology in a way that’s against the law or that we think is impinging people’s civil liberties, then we won’t allow them to use the platform” – meaning all of AWS, not just Rekognition.

So, erm, that’s all okay then.

Algorithms inspecting visa applications: An architect’s US visa was revoked after a computer algorithm flagged him in connection with a security threat.

Eyal Weizman, director of Forensic Architecture, a London-based research group that analyses and investigates videos of violent conflicts and human rights abuses around the world, was told he could no longer enter the US for a trip planned this month. Weizman had had no previous problems crossing the border, and had flown to America as recently as December.

But this time, his visa was revoked. When he went to the US embassy in London to apply for it again, he was told that his name had been flagged by an algorithm. The computers had “identified a security threat that was related to him,” according to The New York Times. The embassy told him that the algorithm may have singled him out for interacting with certain people or staying in certain hotels.

He was asked to provide travel details over the last 15 years, including whether he had visited Syria, Iran, Iraq, Yemen, or Somalia. Weizman has passports from the United Kingdom and Israel.

Not much is known about how the algorithm works. A spokesperson for US Customs and Border Protection declined to discuss the case further, saying that visa records are confidential under US law.

OpenAI not so open, after all: Here’s this week’s long read. OpenAI, the San Francisco-based research lab known for its very public quest to develop artificial general intelligence, has changed over the years.

Its reputation as a friendlier, more transparent outfit than the bigger Silicon Valley tech corps has slowly eroded. OpenAI now appears to operate much like any other startup, with a strong incentive to develop technology for profit, a culture of corporate secrecy, and an aggressive PR strategy.

Much of this seems to stem from OpenAI’s transformation from a nonprofit into a company accepting cash from investors.

MIT Tech Review’s Karen Hao discovered this when she was given limited access to interview some of the company’s most prominent employees. On the surface they appeared open, talking about their grand visions of AGI, but behind closed doors employees were told to notify the internal communications team whenever Hao contacted them, and not to speak to her without explicit permission. It’s a common tactic employed by companies to prevent staff from leaking to the press.

Read her story to find out more about the internal politics of what goes on inside OpenAI.

After the article was published, Elon Musk, who left OpenAI’s board last year, criticized the company for its lack of transparency and said he had little confidence in its safety strategy. Ouch. ®

Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/24/ai_roundup_210220/
