The double-edged sword of AI in financial regulatory compliance

Global money laundering is estimated to account for a staggering two to five per cent of the world’s GDP, equivalent to as much as $2 trillion a year. In response, regulatory bodies are tightening the noose on financial crime with increasingly stringent regulations and laws.

As they do so, many financial institutions are turning to AI in a bid to spot and stop fraud as it happens. With this technology, compliance teams can detect complex, anomalous behavioural patterns in real time, surfacing insights that traditional rules-based systems cannot.
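To make that contrast concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of a static rule next to an unsupervised anomaly detector. The transaction features, thresholds, and data are illustrative assumptions, not a description of any particular vendor’s system.

```python
# Minimal sketch: a static rule versus an unsupervised anomaly detector.
# Feature names, thresholds, and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, hour_of_day, txns_in_last_24h]
normal = rng.normal(loc=[50.0, 14.0, 3.0], scale=[20.0, 4.0, 1.0], size=(1000, 3))
structured = np.array([[45.0, 3.0, 40.0],   # small amounts, odd hours,
                       [60.0, 2.0, 35.0]])  # unusually high velocity
X = np.vstack([normal, structured])

# A rules-based system only catches what its authors anticipated:
rule_flags = X[:, 0] > 10_000          # e.g. "flag transactions over $10,000"
print("Rule-based flags:", int(rule_flags.sum()))   # 0 -- misses the pattern

# An anomaly detector scores each transaction against learned behaviour:
detector = IsolationForest(contamination=0.002, random_state=0).fit(X)
anomaly_flags = detector.predict(X) == -1           # -1 marks outliers
print("Anomalies flagged:", int(anomaly_flags.sum()))
```

The point is not the specific model but the shift in approach: rather than enumerating rules, the system learns what normal behaviour looks like and flags departures from it.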

In fact, 58 per cent of banks are now relying heavily on AI for fraud detection, according to the Economist Intelligence Unit. It’s become truly embedded as a tool in the fight against crime. Therefore, it’s perhaps unsurprising that criminals have taken note and are using it to push back.

This has been accelerated by the arrival of generative AI models, like ChatGPT and Google’s Bard. Unlike the forms of AI that preceded it, generative AI can learn from patterns and structures in data to create new content. As many will have seen in the news, it can generate text, images, music, and even voices or videos that didn’t exist before.

As a result, AI is now being used by rule keepers and rule breakers alike as they try to outsmart each other. It’s the ultimate double-edged sword, which raises the question: how are fraudsters using it?

Examining the risks

Generative AI and large language models (LLMs) are being developed worldwide, with new versions and expanded capabilities emerging at breathtaking speed. The technology is largely unregulated and an absolute gift to those who wish to commit an offence.

For example, Tencent Cloud, a prominent Chinese technology company, recently launched a Deepfakes-as-a-Service (DFaaS) solution priced at $145 per video. The implications of DFaaS technology for financial crime are highly concerning, especially when it comes to Know Your Customer (KYC) protocols and customer due diligence (CDD) checks.

The software can generate fake identity documents, images, and videos that appear authentic, enabling the creation of synthetic identities that can bypass KYC/CDD checks. Generative AI can also rapidly sift through user accounts, automate scams and phishing attacks, and gain access to sensitive data.

As fraudsters obtain more personal data and create more believable fake IDs, the accuracy of their AI models improves, leading to more successful scams. The ease with which believable identities can be created lets fraudsters scale identity-related fraud with high success rates.

Another key area where generative AI models can be employed by criminals is during the various stages of the money laundering process, making detection and prevention more challenging. For instance, fake companies can be created to facilitate fund blending, while AI can simplify the generation of fake invoices and transaction records, making them more convincing.

Furthermore, by bypassing KYC/CDD checks, it’s possible to create offshore accounts that hide the beneficial owners behind money laundering schemes. Generating false financial statements becomes effortless, and AI can identify loopholes in legislation to facilitate cross-jurisdictional money movements.

The possibilities are endless. If a human can imagine a way of committing fraud, generative AI can probably find a way to achieve it. With the ability to digest colossal volumes of data and then create believable content, these models can exploit any imaginable opportunity to steal.

Harnessing AI’s potential

As mentioned above, the good news is that AI cuts both ways. While fraudsters explore its malevolent potential, the future of fraud detection and prevention lies in the positive contributions AI can make.

AI’s ability to detect, deter, and halt crime stands as the most compelling application of the technology within the world of finance, and it has long been recognised as such. The US Financial Crimes Enforcement Network (FinCEN) emphasised the importance of AI and machine learning back in 2018, stating that these technologies can “better manage money laundering and terrorist financing risks while reducing the cost of compliance.”

The Financial Action Task Force (FATF) echoed this sentiment, stating that AI-based solutions can help identify risks, respond to suspicious activity, and enhance monitoring capabilities with greater speed, accuracy, and efficiency.

Navigating the path ahead

With the battle lines drawn between the two opposing sides, the only way forward is for more financial institutions to fight fire with fire by adopting AI-enabled AML and fraud prevention solutions. Only by doing so can they effectively address regulatory requirements and maintain compliance.

To navigate this path, it’s vital that compliance departments build multidisciplinary teams, adding data scientists, data analysts, forensic accountants, and professionals with programming backgrounds. Without these skills, it will be nearly impossible to take full advantage of the opportunities the new technology affords.

If financial institutions can build the right skills and work with the best-equipped vendors, the next step is the transition from rules-based technology to AI. This usually requires a move to the cloud, because the computing power AI and ML demand can be enormous; trying to run these workloads on older on-premises infrastructure is likely to cause problems.

However, it is imperative to ensure transparency in AI decision-making, enabling the audits and reporting that regulatory compliance demands. With a sophisticated approach to AI’s role in meeting financial regulations, compliance can be future-proofed in the face of evolving challenges.
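As one illustration of what such transparency can look like in practice, here is a brief, hypothetical sketch that records per-feature contributions for each automated decision using the open-source shap library. The model, features, and data are illustrative assumptions rather than a prescribed approach.

```python
# Sketch: making each model decision auditable by recording which input
# features drove it. Model, features, and data are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # e.g. amount, velocity, geo-risk
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)  # synthetic "suspicious" label

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to individual input features, producing
# a per-decision record that compliance teams can audit and report on.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
print("Feature contributions for the first case:", contributions)
```

Whatever the exact tooling, the design goal is the same: every automated decision leaves behind an explanation that a human auditor or regulator can inspect.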

As the financial sector navigates these uncharted waters, embracing the potential of AI while mitigating risks, the fight against financial crime can be strengthened, ensuring a safer and more secure future for all.
