
Twitter wants help with deepfakes, Microsoft Azure will rent out new AI chips to its cloud users, and more


Roundup Here’s this week’s collection of AI-related news that we found interesting. Read on to find out more about a new chip coming to Microsoft Azure and how Twitter hopes to deal with deepfakes.

Graphcore ML chips coming to Microsoft Azure: Graphcore, a British AI hardware startup, is teaming up with Microsoft to bring its Intelligence Processing Unit chip to cloud users.

“The Graphcore IPU is unique in keeping the entire machine learning knowledge model inside the processor,” it said this week. “With 16 IPU processors, all connected with IPU-Link technology in a server, an IPU system will have over 100,000 completely independent programs, all working in parallel on the machine intelligence knowledge model.”

It’s not available to rent via Microsoft’s cloud just yet, however. If you’re interested you’ll have to sign up online, and access will be prioritized for those working in natural language processing – a strong point for the IPU.

Despite all the flashy claims about the IPU’s performance when it comes to training and inference for the large language model BERT, it’s not really clear how good the hardware is. Graphcore hasn’t revealed the full specs, and it hasn’t submitted any results to MLPerf, an industry effort to benchmark AI hardware. There are also few comparisons with other chips on more commonly used models, such as ResNet-50 in computer vision.
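For a sense of what that kind of benchmarking involves, here’s a minimal sketch of measuring BERT inference throughput on whatever hardware PyTorch can see. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, and is purely illustrative – it is not how Graphcore or MLPerf measure their numbers.

```python
# Rough BERT inference throughput measurement (illustrative only).
# Assumes: pip install torch transformers
import time
import torch
from transformers import BertModel, BertTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").to(device).eval()

batch = ["Benchmarking sentence number %d." % i for i in range(32)]
inputs = tokenizer(batch, padding=True, return_tensors="pt").to(device)

with torch.no_grad():
    # Warm-up passes so one-off setup costs don't skew the timing.
    for _ in range(5):
        model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    n_iters = 50
    for _ in range(n_iters):
        model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.time() - start

print("Sequences/sec: %.1f" % (len(batch) * n_iters / elapsed))
```

Real benchmarks such as MLPerf also pin batch sizes, sequence lengths, accuracy targets, and power budgets, which is exactly the sort of detail missing from the IPU claims so far.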

You can read more about the announcement here.

Here’s what it’s like to work for Amazon Mechanical Turk: Unfortunately, current machine learning systems can’t learn from the messy real world yet, and require large training datasets that have been carefully preprocessed and labeled by teams of humans.

Services like Amazon Mechanical Turk, an online marketplace where workers – or turkers – bid to perform simple computational tasks for a small wage, provide a way for companies or researchers to outsource the laborious work of labeling data.
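As a rough illustration of the requester side, here’s a sketch of how a company might post a labeling task (a HIT) through the MTurk API using boto3. The question form, reward, and other parameters below are placeholders, not taken from any real job, and the sandbox endpoint is used so no real workers or money are involved.

```python
# Sketch: posting an image-labeling HIT to Mechanical Turk via boto3.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Minimal HTMLQuestion; a real labeling job would embed the image to be
# labeled and an answer form that posts results back to MTurk.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body><p>Does this image contain a cat? (placeholder form)</p></body></html>
  ]]></HTMLContent>
  <FrameHeight>300</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Label an image (cat / no cat)",
    Description="Answer one yes/no question about an image.",
    Keywords="image, labeling, classification",
    Reward="0.02",                    # payment per assignment, in USD
    MaxAssignments=3,                 # ask three workers for redundancy
    LifetimeInSeconds=24 * 3600,      # how long the task stays listed
    AssignmentDurationInSeconds=300,  # time each worker gets to finish
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

Note the reward field: tasks priced at a couple of cents each are exactly why hourly earnings for turkers end up so low.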

How much these turkers really make is murky. The vast majority make under minimum wage. A hack from the New York Times tried it out for himself and only managed to scrape together a measly 97 cents per hour. Sometimes turkers are paid less than they were initially promised when they took on a particular job, and they lack the power to dispute payments.

AMT may give people the power to choose where and when they work, but that type of freedom comes at a cost. You can see what a typical job for a turker is like here.

How should Twitter deal with deepfakes?: The rise of fake content manipulated using AI algorithms has set off alarm bells among social media platforms and politicians, who are concerned about how it could fuel misinformation.

The term ‘deepfakes’ has been used to describe the images, videos, audio clips, and sometimes even text generated by machine learning models. As they spread across the internet, Twitter has begun thinking about how it should update its rules to deal with the false content.

At the moment, it’s drafted a few proposals that include notifying users when a tweet contains synthetic media, so that they’re aware before they decide to share something fake. It’s also considering adding a link – perhaps to a news article or Twitter Moment – to give people more information about why it believes an image or video is a deepfake.

These ideas aren’t set in stone, however, and the social media giant has asked the public for feedback. Users can tell Twitter their opinions by filling out an online survey.

“When you come to Twitter to see what’s happening in the world, we want you to have context about the content you’re seeing and engaging with,” it announced this week. “Deliberate attempts to mislead or confuse people through manipulated media undermine the integrity of the conversation.”

“That’s why we recently announced our plan to seek public input on a new rule to address synthetic and manipulated media. We’ve called for public feedback previously because we want to ensure that — as an open service — our rules reflect the voice of the people who use Twitter. We think it’s critical to consider global perspectives, as well as make our content moderation decisions easier to understand.”

You can tell Twitter what you think here or help them come up with novel ways of detecting deepfakes here.
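If you’re curious what detection can look like in practice, here’s a minimal sketch of a common baseline approach: fine-tuning an off-the-shelf image classifier to label individual video frames as real or fake. The frames/ directory layout is a placeholder, and this is nowhere near what a production detector at Twitter’s scale would involve.

```python
# Baseline deepfake detector sketch: fine-tune ResNet-18 to classify
# individual video frames as "real" or "fake".
# Assumes frames have been extracted (e.g. with ffmpeg) into
# frames/real/ and frames/fake/ (placeholder paths).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("frames/", transform=transform)  # real/ and fake/ subfolders
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap the final layer for two classes.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

Frame-by-frame classifiers like this ignore temporal cues such as unnatural blinking or jittery mouth movements, which is partly why fresh ideas for spotting fakes are in demand.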

Scarily good deepfake alert!: Speaking of deepfakes, here’s a creepy video of a roundtable discussion featuring top Hollywood actors and directors discussing the future of cinema with streaming services.

The conversation features Robert Downey Jr, George Lucas, Tom Cruise, Ewan McGregor, and Jeff Goldblum. But there’s a catch: the whole thing – apart from the moderator, Mark Ellis – has been faked. Other actors and comedians were cast to play those Hollywood stars, and their faces were then manipulated so they appeared just like the people they were imitating.

The mouth movements match up to the speech, their facial expressions look pretty realistic, and it’s frankly terrifying. The only thing that gives it away is that sometimes the actors just look a little off. For example, Tom Cruise’s face is just a bit too blurry and Robert Downey Jr’s stare is unnatural.

You can see it for yourself below. ®

[Embedded YouTube video]


Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/18/ai_roundup_151119/
