Deepfakes are getting easier to make and the internet’s just not ready

With the proliferation of deepfake apps and features, AI-powered media manipulation technology is becoming more mainstream.
Image: Elyse Samuels / The Washington Post via Getty Images

One of the coolest videos I’ve seen in the past year is a YouTube clip from Late Show with David Letterman featuring actor and comedian Bill Hader.

Or… was that actually Tom Cruise? It’s hard to tell sometimes because they keep seamlessly switching back and forth.

So, what exactly are you watching here? Well, someone took an unedited clip of Letterman interviewing Hader and then swapped in Cruise’s face using artificial intelligence.

The video is what is known as a deepfake, or manipulated media created through the power of AI. 

Deepfakes can be as straightforward as face-swapping one actor onto another in a clip from your favorite movie. Or, an impersonator can supply audio that is synced to AI-generated mouth movements, creating an entirely new moment for the targeted individual. This Obama deepfake, voiced by Jordan Peele, is a perfect example of that usage.

While the manipulated media is ultimately generated by AI, the human behind it still needs time and patience to craft a good quality deepfake. In the case of that altered Letterman clip, the creator of the video had to take the original clip and feed it to a powerful cloud computer alongside a slew of varying still images of Tom Cruise’s face. 

During this time, the computer is, in essence, studying the image and video. It’s “learning” how best to swap Hader’s and Cruise’s faces and output a flawless piece of manipulated video. Sometimes, the AI takes weeks to perfect the deepfake. It can be expensive, too: to pull off high-quality deepfake creation, you’ll need a computer with some pretty powerful specs or you’ll have to rent a virtual machine in the cloud.
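
The original article doesn’t say which software was used for the Letterman clip, but open-source face-swap tools like DeepFaceLab and faceswap generally share the same core idea: train one shared encoder together with a separate decoder for each person. Below is a minimal PyTorch sketch of that architecture; the layer sizes, variable names, and the 64×64 crop size are illustrative assumptions, not details from the video’s creator.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder idea behind
# many open-source face-swap tools. Layer sizes and training details are
# illustrative assumptions, not the settings used for the Letterman clip.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per person: during training, decoder_a learns to
# reconstruct Hader's face and decoder_b learns to reconstruct Cruise's. At swap
# time, a Hader frame is encoded and pushed through decoder_b, which renders
# Cruise's face with Hader's pose and expression.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
hader_frame = torch.rand(1, 3, 64, 64)      # placeholder face crop
swapped = decoder_b(encoder(hader_frame))   # "Cruise" rendered in Hader's pose
```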

But, that’s quickly changing. Big tech companies are jumping on the trend and developing their own software so that users can create deepfake content. And now, deepfakes are becoming easier to create.

Earlier this week, the face-swapping mobile app Doublicat launched. Created by artificial intelligence company RefaceAI, Doublicat is perhaps the simplest media manipulation tool yet. Users just need to download the app, snap a selfie, and choose from one of hundreds of GIFs portraying popular scenes from movies, TV shows, and the internet. Within seconds, your short, looping deepfake GIF is ready to share.


The GIFs are fairly simple and likely chosen based on which image would be easiest for the app to spit out an accurate face swap. It’s far from perfect, but it’s extremely fast. And what it can do with even low-quality selfies is impressive. In time, the technology is only going to get even better.

Doublicat told Mashable that “updates will be coming to allow users to upload their own GIFs, search for GIFs in-app, and use pictures from their phone’s camera roll.” 

Doublicat may be the simplest media manipulation tool in the U.S., but similar apps exist in international markets.

“Zao, Snap’s new Cameos, Doublicat — face swapping is becoming a commodity thanks to creative entrepreneurs from China and Ukraine,” said Jean-Claude Goldenstein, founder and CEO of CREOpoint, a firm that helps businesses handle disinformation. Goldenstein points out that Snapchat recently acquired AI Factory, the company behind its Cameos feature, for $166 million.

TikTok, the massively popular video app owned by the China-based ByteDance, has reportedly developed its own yet-to-launch deepfake app as well.

But, it’s not all fun and games.

“A deepfake can ruin a reputation in literally seconds, so if public figures don’t start prepping for these threats before they hit, they’re going to be in for a rude awakening if they ever have the misfortune of being featured in one of these videos,” Marathon Strategies CEO Phil Singer told Mashable. Singer’s PR firm recently launched a service specifically to deal with disinformation via deepfakes.

To understand the concern behind this seemingly harmless tech that’s been used to create funny videos, one needs to understand how deepfakes first rose to prominence.

In late 2017, the term “deepfake” was coined on Reddit to refer to AI-manipulated media. The best examples at the time were some funny Nicolas Cage-related videos. But, then, the fake sex videos took over. Using deepfake technology, users started taking their favorite Hollywood actresses and face-swapping them into adult films. Reddit moved to ban pornographic deepfakes in 2018 and expanded its deepfake policy just last week.

In an age of fake news and disinformation easily spread via the internet, it doesn’t take long to see how fake pornographic videos can ruin one’s life. Factor in that we’re now in a presidential election year, the first since coordinated disinformation campaigns ran amok in 2016, and you’ll understand why people are worried about malicious uses of this growing technology.

“We’ve gone from worrying about sharing our personal data to now having to worry about sharing our personal images,” says Singer. “People need to be extra judicious about sharing images of themselves because one never knows how they will be used.”

“It is only a matter of time before they become as ubiquitous as any of the social media tools people currently use,” he continued.

Most alarming is that some of the world’s biggest tech companies are still struggling to figure out how to combat nefarious deepfakes.

Just this month, Facebook announced its deepfake ban. One problem, though: How do you spot a deepfake? It’s an issue the largest social networking platform on the planet still hasn’t been able to properly solve. 

Facebook launched its Deepfake Detection Challenge to work with researchers and academics on solving this problem, but we’re still not there, and we may never get there 100 percent.
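
At its core, the challenge frames detection as a supervised learning problem: given videos labeled real or fake, train a model that flags the fakes. As a rough illustration (this is not Facebook’s method; the backbone choice and training step below are my own assumptions), many entries fine-tune a pretrained image classifier on face crops pulled from video frames:

```python
# Illustrative sketch of a frame-level deepfake detector: fine-tune a pretrained
# image backbone to label face crops as real or fake. Model choice, learning rate,
# and the single training step are assumptions for the sake of the example.
import torch
import torch.nn as nn
from torchvision import models

def build_detector():
    # ResNet-18 backbone with a single logit head: a positive logit means "fake".
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    return model

detector = build_detector()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

# One hypothetical training step on a batch of 224x224 face crops.
frames = torch.rand(8, 3, 224, 224)           # placeholder face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
optimizer.zero_grad()
loss = loss_fn(detector(frames), labels)
loss.backward()
optimizer.step()
```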

According to Facebook’s Deepfake Detection website: “The AI technologies that power tampered media are rapidly evolving, making deepfakes so hard to detect that, at times, even human evaluators can’t reliably tell the difference.”

“That’s a serious problem since AI can’t reliably detect fake news or fact check fast enough,” explains CREOpoint’s Goldenstein.

During our exchange, Goldenstein sent me the following quote: “A lie is heard halfway around the world before the truth has a chance to put its pants on.”

Interestingly, while looking up the quote’s origin, I discovered that different versions of it have often been misattributed to Winston Churchill over the years.

If one really wanted to double down on the belief that Churchill did say this, it seems like it wouldn’t be all that difficult to create a deepfake that “proves” he did.

Source: https://mashable.com/article/deepfake-impersonation-tech-easy-to-make/
