Deepfakes and dodgy dapps: GenAI is here

The public debut of ChatGPT, built on GPT-3.5, in November 2022 forced every enterprise to think about generative artificial intelligence, with financial services at the vanguard.

But eighteen months on, it seems financial institutions and fintechs are still struggling to wrestle generative AI into production, given its tendency to make things up. If there’s one use case that’s proven, however, it’s not what the industry would like.

It’s the arrival of deepfakes.

The more firms and processes digitalize, the more vulnerable systems become to hackers. Bankers are visibly nervous about the impact of generative AI on turbo-charging fraud and deceptions. 

“GenAI is a superpower for bad actors,” said Daniel Drew, chief of staff for Hong Kong at HSBC, speaking at a recent conference.

Scary stuff

The ease and availability of deepfakes – counterfeit video and audio of a person's likeness, used to spread lies and misinformation – are startling.

“Two years ago, identity fraud was about the printed image,” said Tony Petrov, chief legal officer at Sumsub, a software company that provides verification services. “Now identity fraud is about a few apps on your phone, and it only costs a few euros a month to do.”

He added that the company’s systems logged a tenfold rise in digital fraud among its global clientele over the past year. Many of these attempts are clumsy and fail, but it only takes a handful of successes to cause great damage.

For example, in February, a Hong Kong-based finance executive at multinational engineering firm Arup was fooled into sending $25 million to criminals. On a video call he was confronted with deepfaked images of nine colleagues, including the chief financial officer, who pressured him into making the payments.

“If scammers want to target one person, they will succeed,” Petrov said.

Regulation and response

Regulators in the US and China, among other jurisdictions, are passing more laws. In the US this has taken the form of White House executive orders directing federal agencies.

In China the rules are more specific and thought-through. Most usefully, China mandates watermarking. Just as countries build intricate holograms and other security markers into cash, they should be creating similar laws and procedures for digital content.
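The core idea behind mandated watermarking can be sketched in a few lines. This is a toy illustration only – real provenance schemes (and whatever China's rules actually require) are far more elaborate – but it shows the principle: the issuer attaches a keyed tag to content, and any downstream verifier with the key can detect tampering. The key name and byte strings here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content issuer (e.g. a broadcaster).
SECRET_KEY = b"issuer-signing-key"

def watermark(content: bytes) -> str:
    """Produce a keyed-hash tag that vouches for who issued the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check that the content still matches its watermark tag."""
    return hmac.compare_digest(watermark(content), tag)

clip = b"official-video-bytes"
tag = watermark(clip)
assert verify(clip, tag)                    # untouched content verifies
assert not verify(b"deepfaked-bytes", tag)  # altered content fails the check
```

The design choice worth noting is `hmac.compare_digest`, which compares tags in constant time so an attacker can't probe the verifier byte by byte.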

Although the threat can feel overwhelming, institutions and fintechs are not helpless. Generative AI depends on large language models and vast amounts of computing power. That’s expensive and hard to obtain. Most deepfakes are not sophisticated, and can be snared in biometric identification processes.



But building defenses requires a lot of data, too. Petrov says this means data sharing is important, and that protocols need to develop to better enable sharing across borders. “Fragmentation of data compliance is a dangerous trend,” he said.

Certain techniques, such as federated AI or, in the crypto world, zero-knowledge proofs, can’t do the job either. These are tools for confirming what a piece of data contains without revealing it, but cybersecurity for know-your-customer processes needs the data itself.

Crypto cons

Indeed, the crypto world – already trying to figure out some basics in KYC – is also coming under attack by deepfakes. On June 3, a customer of crypto exchange OKX revealed he lost $2 million to a deepfake scam. Criminals bought his personal details and then used genAI deepfake videos to change his account information and bypass the exchange’s two-factor authentication.

This sort of thing is common, even if it doesn’t get talked about in the media.

“We call these ‘wallet drainers’,” said Shahar Madar, vice president at self-custody digital-asset firm Fireblocks. “An impersonator checks out your assets and asks you for your permissions, to steal ETH, USDC, NFTs – there’s all different kinds of scams.”

Although the OKX case involved a customer of a centralized crypto exchange, deepfakes are also being used to target users in DeFi and traders relying on smart contracts. These interactions involve wallets and decentralized apps (‘dapps’), which creates a new vulnerability.

“Instead of transactions being peer-to-peer, users are interacting with dapps [decentralized applications], giving it information so it can interact with a smart contract,” Madar said.

Smart contracts are usually neither smart nor contracts: they aren’t legally binding. That means trading desks need to read these apps before signing a transaction. Big firms can review them – both the code and the reputation of the contract’s author – and can also test them before use.

Fireblocks is now rolling out a feature to let smaller teams do a basic version of reviewing and testing smart contracts.

Testing is important because blockchain is all code: there’s nothing unpredictable about what will happen, provided users understand the apps they are using, be it for mining, staking, or trading. So running a copy of the code, like a virtual twin, is a way to protect against dodgy software.
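In practice, the "virtual twin" idea often means dry-running a transaction against a read-only simulation before anything is signed. The sketch below shows the pattern with a stubbed-in provider object rather than any real node or web3 client – the provider, transaction fields, and result strings are all hypothetical stand-ins:

```python
class DryRunRejected(Exception):
    """Raised when a simulated transaction doesn't behave as expected."""

def dry_run_then_sign(provider, tx, expected, sign_fn):
    """Simulate tx read-only first; only sign if the simulation matches expectations."""
    result = provider.call(tx)  # read-only simulation: no state change, no funds move
    if result != expected:
        raise DryRunRejected(f"simulation returned {result!r}, expected {expected!r}")
    return sign_fn(tx)

class StubProvider:
    """Toy stand-in for a forked node or simulation endpoint."""
    def call(self, tx):
        # A dodgy dapp might route funds elsewhere; the dry run exposes that.
        return "recipient=attacker" if tx["to"] == "0xBad" else "recipient=ok"

provider = StubProvider()
safe_tx = {"to": "0xGood", "value": 1}
bad_tx = {"to": "0xBad", "value": 1}

assert dry_run_then_sign(provider, safe_tx, "recipient=ok", lambda t: "signed") == "signed"
try:
    dry_run_then_sign(provider, bad_tx, "recipient=ok", lambda t: "signed")
except DryRunRejected:
    pass  # the dodgy transaction never gets signed
```

The key property is that the signature function is only ever reached after the simulation confirms the transaction does what the trader expects; a malicious contract is rejected before funds are at risk.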

This is the world

Hackers are turning to deepfakes to get around such defenses.

Madar relates a recent story of an open-source compression library that was compromised by a malicious developer (probably a North Korean agent) who had been lurking in the community for several years. The criminal executed what Madar called a psyop by using avatars to get other developers to reveal sensitive information.

Phishing expeditions are often easy to spot because hackers aren’t known for their attention to grammar, or voice calls sound off. But deepfakes don’t make those mistakes. “It’s translated in real time to sound like someone from New Jersey,” Madar said.

For now, these attacks are still rare, because they require computing power and sophistication. Petrov notes that most deepfake attacks involve pornography, so that’s an obvious red flag. But cybersecurity players, regulators, bankers and police know that deepfakery is only going to increase and become more clever. It’s the one genAI application that everyone can count on being deployed.
