Preparing for the Next Cybersecurity Epidemic: Deepfakes

With deepfake technology advancing rapidly and post-COVID work arrangements increasing reliance on virtual collaboration tools, organizations need to be prepared for malicious actors becoming far more sophisticated in their impersonation attempts. What was a cleverly written phishing email from a C-level email account in 2021 could become a well-crafted video or voice recording soliciting the same sensitive information and resources in 2022 and beyond.

Deepfakes are images and videos created using computers and machine learning software to make them seem real, even though they are not. Over the past few years, there have been several high-profile deepfake attacks resulting in stolen funds. Recently, a Hong Kong-based bank was duped by an artificial intelligence-powered “deep voice” attack that cloned a trusted director’s voice to request a $35 million transfer. Unfortunately, these incidents are becoming increasingly common. With cybercriminals constantly targeting enterprise organizations, it is now more important than ever to verify all content, no matter how legitimate it may seem on the surface.

Deepfakes’ Dangers in the New Hybrid Workforce
Cybercriminals have been spoofing company email for decades through daily phishing attacks. Now they have taken it a step further with voice and video deception. A classic deepfake example is a person of influence asking for some kind of monetary donation. An executive may appear to send an employee a voicemail over email asking for a donation to her charity, and only after the money has gone to an offshore account does anyone discover the recording was a fake. It’s easy for the employee to react immediately instead of verifying that the request is genuine.

If your boss asks you to do something, you usually feel pressured to comply. And in the era of hybrid work, these interactions are harder to confirm because many organizations rely on less face-to-face communication. The digital communications attack surface is expanding rapidly, and deception technology is keeping pace, a dangerous combination that leaves organizations ripe for deepfake attacks.

How AI Can Combat Deepfake Threats
Fortunately, AI has made significant advances in analyzing video and audio. For example, YouTube automatically generates captions from audio and uses AI text processing to scan for keywords and categorize content. The same type of technology can be used to decipher phishing videos. If a CEO asks employees to donate to his charity, an AI cybersecurity system can transcribe and analyze the message, recognize that it is soliciting money, and flag that the payment is to be sent via Zelle rather than through the charity’s website.
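
As a rough illustration, the sketch below uses plain Python and a hypothetical transcript (standing in for the output of a speech-to-text system) to flag messages that both solicit money and steer payment toward a person-to-person channel such as Zelle; a production system would rely on a trained intent classifier rather than keyword lists.

```python
import re

# Hypothetical transcript; in practice this would come from a
# speech-to-text system run over the suspicious voicemail or video.
transcript = (
    "Hi, it's your CEO. Please support my charity today by sending "
    "$500 via Zelle to this number before end of day."
)

# Simple indicator lists; these stand in for a trained intent classifier.
MONEY_TERMS = [r"\$\d+", r"\bdonat\w*", r"\btransfer\b", r"\bwire\b", r"\bpayment\b"]
RISKY_CHANNELS = [r"\bzelle\b", r"\bgift card\b", r"\bcrypto\w*", r"\bwestern union\b"]
URGENCY_TERMS = [r"\burgent\b", r"\bimmediately\b", r"\bbefore end of day\b"]

def score(text, patterns):
    """Count how many indicator patterns appear in the text."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in patterns)

def flag_message(text):
    """Flag for human review if the message asks for money over a risky channel."""
    return score(text, MONEY_TERMS) > 0 and score(text, RISKY_CHANNELS) > 0

if flag_message(transcript):
    print("Flag for verification: payment request over a person-to-person channel")
    print("Urgency cues found:", score(transcript, URGENCY_TERMS))
```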

An AI system could also extract this data and feed it into a legal compliance framework for payment approval, which would trigger a human to approve the payment or intervene. AI can serve as a first defense layer and can flag irregularities more quickly than the human eye. For example, an AI system can quickly compare the audio or video message against known original footage to make sure the message has not been generated from a number of clips manipulated and spliced together, eliminating a time-consuming task for a human. Detection of this kind continues to advance, and several AI systems can already process video and audio for context, either passing the information along to a human or flagging suspicious content automatically.
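
A minimal sketch of that comparison step, assuming an embedding function that summarizes a recording (the crude frame statistics below are only a stand-in for a real speaker-verification or deepfake-detection model), computes the similarity between the incoming message and known genuine footage and escalates anything below a tuned threshold:

```python
import numpy as np

def voice_embedding(audio):
    """Placeholder for a real voice-embedding model.

    Summarizes the signal with crude per-frame statistics so the example
    runs without an ML dependency; a production system would use a trained
    speaker-verification or deepfake-detection model here.
    """
    frames = audio[: len(audio) // 100 * 100].reshape(-1, 100)
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Synthetic signals standing in for decoded audio: known genuine footage
# of the executive vs. the incoming, possibly spliced, message.
rng = np.random.default_rng(0)
known_original = rng.normal(size=16_000)
incoming_message = known_original + rng.normal(scale=0.05, size=16_000)

similarity = cosine_similarity(
    voice_embedding(known_original), voice_embedding(incoming_message)
)
print(f"similarity={similarity:.3f}")
if similarity < 0.8:  # threshold would be tuned on labeled data
    print("Possible manipulation: escalate to a human reviewer")
```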

Authenticity Fights Deepfakes
It is increasingly important for organizations to establish a certificate-of-authenticity designation, manually or automatically, for video and collaborative content. Authentication methods, including blockchain, could be a key factor in fighting deepfake attacks. Blockchain is already used to authenticate identity in a variety of applications, from legal documents to voting, and it can help here in a couple of ways. It can prompt users to provide proof of their identity before they disseminate content under their name, and it can verify whether the content in a given file has been forged or manipulated from its original version.
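
The forgery check can be as simple as comparing a cryptographic fingerprint of the received file with the fingerprint recorded when the original was published. The sketch below uses Python’s hashlib, with an in-memory dictionary as a placeholder for whatever blockchain or tamper-evident registry an organization actually uses:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """SHA-256 digest of the raw file contents."""
    return hashlib.sha256(content).hexdigest()

# Placeholder for a blockchain or other tamper-evident registry entry
# written when the original video was published. In a real deployment
# this lookup would query the ledger, not an in-memory dict.
published_registry = {
    "q3-update.mp4": fingerprint(b"original video bytes"),
}

def is_authentic(name: str, received_bytes: bytes) -> bool:
    recorded = published_registry.get(name)
    return recorded is not None and recorded == fingerprint(received_bytes)

print(is_authentic("q3-update.mp4", b"original video bytes"))     # True: matches record
print(is_authentic("q3-update.mp4", b"manipulated video bytes"))  # False: content altered
```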

Whether it’s based on blockchain or other methods, an authentication scheme can be implemented to determine the legitimacy of an audio or video file. Decentralizing authentication is critical so that no single entity has full authority to validate content. In addition to blockchain, multifactor authentication and digital signatures are viable ways to increase security around authentication.
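
For the signature route, a minimal sketch using the third-party cryptography package (an assumption; any comparable library would work) shows the idea: the sender signs the file with a private key tied to their verified identity, and the recipient verifies it against the published public key before trusting the content.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Key generation would happen once, during identity enrollment;
# the public key is published (for example, in a company directory).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"raw bytes of the video or audio file"
signature = private_key.sign(message)

def verify(content: bytes, sig: bytes) -> bool:
    """Accept content only if the signature checks out against the sender's key."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(verify(message, signature))            # True: file is untouched
print(verify(b"tampered bytes", signature))  # False: reject and escalate
```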

While organizations can use blockchain and AI to fight deepfakes, one of the most important defenses may be cybersecurity awareness training. More organizations are training their employees to protect themselves and the business against a variety of cyberattacks, and that training should teach people to verify that communications are authentic before acting on them. As new deepfake attacks emerge, awareness training is an important near-term step in combating them.

Source: https://www.darkreading.com/operations/preparing-for-the-next-cybersecurity-epidemic-deepfakes
