Driving innovation in finance through trustworthy AI

Artificial intelligence (AI) is fast becoming an essential tool for the financial services industry.

Responsible governance will play an important role in the successful deployment of AI

According to Insider Intelligence’s AI in Banking report, most banks (80%) are highly aware of the potential benefits presented by AI.

The opportunities in this space are legion, but AI solutions require careful governance, with the right checks and balances in place to ensure that they are robust and fair.

Applications of AI tools in financial services

The scope of possible uses for AI and machine learning in finance stretches across business functions and sectors. In the financial services space alone, AI tools are already used to refine customer service, client segmentation, fraud prevention and loan assessment, to name just a few.

Customer service: When it comes to customer service, many banks are now looking to AI chatbots to enable 24/7 customer service interactions. These bots use AI and machine learning to answer basic customer questions via an instant messenger interface, responding quickly and saving user inputs for operatives to review if necessary.

As a result, they are able to provide fast and relevant information and support to each user and drive tailored interactions. As these tools become increasingly sophisticated, this kind of support can result in higher satisfaction for both customers – who receive quicker support – and employees, who can be more efficient with their time.
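To illustrate the basic mechanics, the sketch below shows a toy retrieval step for an FAQ-style bot: it matches a customer's free-text question to the closest known intent, logs the exchange for human review, and falls back to a human when confidence is low. The intents, answers and similarity threshold are illustrative assumptions, not a description of any bank's production system.

```python
# A toy sketch of the retrieval step behind an FAQ chatbot: match a customer's
# free-text question to the closest known intent and return a canned answer,
# saving the exchange for human review. Intents, answers and the 0.2 threshold
# are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

FAQ = {
    "How do I reset my online banking password?": "You can reset it from the login page via 'Forgot password'.",
    "What is my current account balance?": "Your balance is shown on the accounts overview screen.",
    "How do I report a lost card?": "Freeze the card in the app and contact support to order a replacement.",
}

vectoriser = TfidfVectorizer()
question_vectors = vectoriser.fit_transform(FAQ.keys())

def answer(user_question: str, log: list) -> str:
    scores = cosine_similarity(vectoriser.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    log.append((user_question, int(best), float(scores[best])))  # keep inputs for operatives to review
    if scores[best] < 0.2:
        return "I'm not sure - let me connect you with a colleague."
    return list(FAQ.values())[best]

history = []
print(answer("I forgot my password, help", history))
```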

For instance, investment management giant Fidelity’s technology lab in Ireland has built an AI-driven multilingual virtual assistant that parses and answers text-based questions in natural language for the company’s more than 30 million investors.

Client segmentation: AI can also be applied to client segmentation, which is the process of dividing customers based on common characteristics, such as demographics or behaviours. Here, AI can seek out patterns within client data quickly and on a huge scale – creating outputs that would otherwise be unachievable through manual means.

Creating user segments allows financial marketers to focus their outreach – targeting the right customers with the right products and services. Personalising the experience across users’ preferred channels and devices – a further step that can be worked into this segmentation process – can also significantly increase brand engagement and customer satisfaction.
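As a rough illustration of the underlying technique, the following sketch clusters customers into behavioural segments using k-means. The feature names, data and number of segments are illustrative assumptions only, not a description of any bank's segmentation model.

```python
# Minimal illustration of AI-driven client segmentation using k-means clustering.
# Feature names, data and the number of segments are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features: [age, avg_monthly_balance, txns_per_month, products_held]
customers = np.array([
    [23, 1_200.0, 45, 1],
    [54, 18_500.0, 12, 4],
    [31, 3_400.0, 60, 2],
    [67, 42_000.0, 5, 3],
    [29, 900.0, 80, 1],
])

# Standardise features so no single scale (e.g. balance) dominates the distance metric.
scaled = StandardScaler().fit_transform(customers)

# Group customers into k behavioural segments.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
segments = kmeans.fit_predict(scaled)

for customer, segment in zip(customers, segments):
    print(f"customer {customer.tolist()} -> segment {segment}")
```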

Deutsche Bank, for example, leverages AI-driven client segmentation to improve and tailor its offering for clients. Over the past few years, the bank has started deploying AI solutions across its Securities Services franchise, helping it identify client clusters that are suited to certain types of service based on their behaviour patterns. Like Fidelity, the bank also has an AI chatbot, Debbie, which responds in real time to customer requests such as settlement status queries.

Loan assessment and fraud prevention: This same pattern-recognition capability means AI can also analyse and single out irregular transactions that would otherwise go unnoticed by humans but may indicate the presence of fraud. This makes it a great tool for banks to assess loan risks, detect and prevent payments fraud and improve processes for anti-money laundering.
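One common way to surface such irregular transactions is unsupervised anomaly detection. The sketch below uses an Isolation Forest to flag outlying payments for review; the features, data and contamination rate are illustrative assumptions, not any provider's actual fraud model.

```python
# A minimal sketch of flagging irregular transactions with an unsupervised
# anomaly detector (Isolation Forest). Features, data and the contamination
# rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount, hour_of_day, merchant_risk_score]
transactions = np.array([
    [25.0, 12, 0.1],
    [40.0, 18, 0.2],
    [32.0, 9, 0.1],
    [5_000.0, 3, 0.9],   # unusually large, late-night, risky merchant
    [28.0, 14, 0.2],
])

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = flagged as anomalous, 1 = normal

for txn, label in zip(transactions, labels):
    status = "review" if label == -1 else "ok"
    print(f"{txn.tolist()} -> {status}")
```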

Mastercard’s Identity Check solution developed in Dublin, for instance, uses machine learning capabilities to verify more than 150 variables as part of the transaction process to help reduce fraud, making it easier for merchants to accept online payments.

These are just a few examples of how AI can already be applied in financial services. As the technology continues to advance, the number of use cases is set to keep growing.

Taking responsible steps

In step with this increase in the use of AI, it is important that controls on how AI is set up and applied are put in place to ensure systems are robust, fair and safe.

Financial services providers are responsible for ensuring that their data is high-quality and reliable, and they need to understand the implications and impact of their technology. Given the complexity and scale of the tasks AI typically handles, there is a real danger that models will go wrong. Without the necessary guidance and proper training, AI can produce outputs that lead to unknowingly biased decisions, with potentially damaging consequences.

This is not just a risk management exercise – when done properly and clearly communicated, having good governance in place can also drive business and loyalty. A recent Capgemini study found that 62% of consumers placed more trust in a company whose AI was understood to be ethical, while 61% were more likely to refer that company to friends and family and 59% showed more loyalty to that company.

So how do companies go about ensuring their AI solutions are tightly governed? It begins with having a full understanding of the model and ensuring that controls are proportionate to the importance of the outcomes.

  1. Explainable model

Ensuring this robust governance starts with understanding the model. Whenever an AI algorithm outputs a result, it is critical that the company is able to explain – whether to customers, senior management or just themselves – what that result means and how it was arrived at.

For many application areas in financial services, humans will remain in the loop for critical decision-making scenarios, and the onus will be on the algorithm to build trust and confidence. Researchers are hard at work developing explainable AI algorithms that can provide justification for their results and transparency around their limitations, while working as effectively as more complex and less transparent “black box” solutions such as deep learning.
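As a simple illustration of what an explainable decision can look like, the sketch below fits an inherently interpretable model (a logistic regression) to hypothetical loan data and reads each feature's contribution to an individual decision straight from the coefficients. The features and data are illustrative assumptions, not a real credit model.

```python
# A minimal sketch of an inherently explainable model: a logistic regression whose
# per-feature contributions can be read directly from its coefficients. Features
# and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_to_debt_ratio", "years_of_credit_history", "missed_payments"]

# Hypothetical historical applications and outcomes (1 = repaid, 0 = defaulted).
X = np.array([
    [3.0, 10, 0],
    [0.5, 1, 4],
    [2.2, 6, 1],
    [0.8, 2, 3],
    [4.1, 15, 0],
    [1.0, 3, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Explain a single decision: each term shows how much a feature pushed the
# score towards approval (positive) or rejection (negative).
applicant = np.array([1.5, 4, 2])
contributions = model.coef_[0] * applicant
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print("approval probability:", model.predict_proba([applicant])[0, 1])
```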

As financial services companies move AI models from the lab to the outside world, it is also important to have a team of human operatives in the loop of the system, monitoring and analysing the inputs and outcomes and making sure the tool is processing, learning and performing correctly and as expected.

New software engineering practices such as MLOps (machine learning model operationalisation management) focus on streamlining the delivery of machine learning solutions from the lab to production. Proper monitoring and maintenance of AI models over time are important to catch any degradation of results or unforeseen issues early.
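In practice, that monitoring can start with something as simple as comparing live performance against the accuracy measured at validation time and raising an alert when it slips. The sketch below shows one such check; the threshold and data are illustrative assumptions, and production MLOps stacks typically add drift statistics, dashboards and retraining triggers.

```python
# A minimal sketch of post-deployment model monitoring: compare a live window of
# predictions against known outcomes and flag degradation. The tolerance and
# example data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MonitoringReport:
    live_accuracy: float
    baseline_accuracy: float
    degraded: bool

def check_model_health(predictions, outcomes, baseline_accuracy, tolerance=0.05):
    """Flag the model if live accuracy falls more than `tolerance` below baseline."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    live_accuracy = correct / len(predictions)
    degraded = live_accuracy < baseline_accuracy - tolerance
    return MonitoringReport(live_accuracy, baseline_accuracy, degraded)

# Example: the model scored 0.91 accuracy in validation, but live performance has slipped.
report = check_model_health(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    outcomes=[1, 0, 0, 1, 0, 0, 0, 1, 1, 0],
    baseline_accuracy=0.91,
)
if report.degraded:
    print(f"ALERT: live accuracy {report.live_accuracy:.2f} vs baseline {report.baseline_accuracy:.2f}")
```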

For example, researchers at Stanford University and the University of Chicago undertook a study of real-world mortgage data and found that differences in mortgage approvals between minority and majority groups were not only down to bias, but also to the fact that minority and low-income groups have less credit history data, making it more difficult for the algorithm to predict the risk of loan default. This study demonstrates the challenges that companies face when ensuring the trustworthiness of their AI model internally, to the public and to the regulator.

  2. Materiality

The level of scrutiny an AI model is subjected to should also be adapted in line with the consequences of its outputs. This is based on a concept known as “materiality” – that is, the severity of any negative consequences associated with an erroneous AI output.

For instance, materiality is higher when an AI tool is determining people’s access to life-changing facilities, such as loans or credit cards, than it is when it is simply sorting customers into different segments to help with marketing or sales targeting. The simple principle here is that, as the materiality of an AI output increases, so must the strictness of controls in place to ensure that the outcome is correct.

The most fundamental control in this regard comes back to the first point, which is the need to have an explainable model. Humans need to be able to explain how the AI model works and derives its results. And as the materiality of an output rises, the burden of explainability pertaining to that model rises in step.

This gradation of risk has been captured in the draft EU regulations for AI which classify AI systems into three categories – unacceptable risk (e.g., social scoring systems); high risk (e.g., credit scoring); and limited risk (e.g., customer segmentation). It is likely that most AI applications in financial services will come under the high-risk category, and companies are advised to be regulation ready well in advance of the new regulation’s enforcement.
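One pragmatic way to operationalise this proportionality is to map each use case to a risk tier and the minimum controls it must clear before deployment, as in the sketch below. The tiers echo the draft EU categories, while the specific control lists are illustrative assumptions rather than regulatory requirements.

```python
# A minimal sketch of proportionate, materiality-based governance: map each AI use
# case to a risk tier and the minimum controls required before deployment. The
# control lists are illustrative assumptions, not regulatory requirements.
REQUIRED_CONTROLS = {
    "limited":      ["model documentation", "periodic performance review"],
    "high":         ["model documentation", "periodic performance review",
                     "explainability report", "bias testing", "human-in-the-loop sign-off"],
    "unacceptable": [],  # use case must not be deployed at all
}

USE_CASE_RISK = {
    "customer_segmentation": "limited",
    "credit_scoring": "high",
    "social_scoring": "unacceptable",
}

def controls_for(use_case):
    """Return the minimum controls a use case must satisfy before going live."""
    tier = USE_CASE_RISK[use_case]
    if tier == "unacceptable":
        raise ValueError(f"{use_case} falls in the unacceptable-risk tier and cannot be deployed")
    return REQUIRED_CONTROLS[tier]

print(controls_for("credit_scoring"))
```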

A bright future

As AI applications become more powerful and widespread, good governance and effective controls will play an increasingly important role. Bringing AI applications to the table without compromising these responsibilities will mean staying informed of how these models work, what they contribute to the decision-making process, what risks are involved and how great the impact of any errors in the system could be.

A natural first step for companies embarking on their AI journey is to focus on low-materiality solutions, ensuring governance is suitably strong and “road tested” before taking on the higher-level risks of more ambitious solutions.

We can look forward to a bright future for AI in financial services, and responsible governance of solutions stands to play an important role in its successful deployment. By keeping models tight to their tasks and free of bias and error, companies can ensure the best results for all.
