
Taming AI algorithms: unbiased finance


When the computer says ‘no’, is it doing so for the right reasons?

Human biases all too readily creep into AI technology

That is the question increasingly being asked by financial regulators, concerned about possible bias in automated decision making.

With the Bank of England and Financial Conduct Authority (FCA) both highlighting that new technologies could negatively affect lending decisions and the Competition and Markets Authority (CMA) scrutinising the impact of algorithms on competition, this is a topic that’s set to have extensive governance implications. So much so that the European Banking Authority (EBA) is questioning whether the use of artificial intelligence (AI) in financial services is “socially beneficial”.

However, as consumers increasingly expect loan and mortgage approvals at the click of a button, and with some estimates suggesting that AI applications could save firms over $400 billion, there is plenty of incentive for banks, for instance, to adopt this technology with alacrity.

But, if bias in financial decisions is “the biggest risk arising from the use of data-driven technology”, as the findings of The Centre for Data Ethics and Innovation’s AI Barometer report suggest, then what is the answer?

Algorithmovigilance.

In other words, financial services firms can systematically monitor the algorithms computers use to evaluate customer behaviours, credit referencing, anti-money laundering and fraud detection, as well as decisions about loans and mortgages, to ensure their response is correct and appropriate.
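As a rough illustration of what that monitoring might look like in practice, the sketch below compares a model's loan-approval rates across two groups and flags a potential disparate impact. The group labels, field names and the four-fifths threshold are assumptions made for the example, not any firm's or regulator's actual standard.

```python
# Minimal sketch of an algorithmovigilance check: compare a model's loan-approval
# rates across two groups and flag a potential disparate impact.
# The field names, group labels and 0.8 threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Decision:
    group: str       # a characteristic recorded for monitoring purposes only
    approved: bool   # the automated system's decision


def approval_rate(decisions: list[Decision], group: str) -> float:
    members = [d for d in decisions if d.group == group]
    if not members:
        return float("nan")
    return sum(d.approved for d in members) / len(members)


def disparate_impact_ratio(decisions: list[Decision],
                           protected: str, reference: str) -> float:
    return approval_rate(decisions, protected) / approval_rate(decisions, reference)


if __name__ == "__main__":
    sample = (
        [Decision("group_a", approved=True)] * 70 + [Decision("group_a", approved=False)] * 30 +
        [Decision("group_b", approved=True)] * 45 + [Decision("group_b", approved=False)] * 55
    )
    ratio = disparate_impact_ratio(sample, protected="group_b", reference="group_a")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # illustrative threshold
        print("Flag for review: approval rates differ markedly between groups.")
```

A check like this says nothing about why the rates differ; its value is simply that the question gets asked routinely rather than after a complaint.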

Algorithmovigilance is necessary because human biases all too readily creep into AI technology, and this makes it vulnerable to often unrecognised social, economic and systemic tendencies that lead to discrimination – both explicit and implicit.

The problem is that the datasets companies compile and supply to AI and machine learning (ML) systems are often not only incomplete, out of date and incorrect, but also skewed – unintentionally (though perhaps sometimes not) – by the inherent prejudices and presumptions of those who develop them.

This means that a system’s analysis and conclusion can be anything but objective. The old computer adage of ‘garbage in, garbage out’ still applies.
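By way of a hedged example of the basic hygiene that can catch such problems before training begins, the sketch below runs three simple checks on a loan dataset for missing values, stale records and an imbalance between groups. The column names ("recorded_on", "region") and the thresholds are invented for illustration.

```python
# Sketch of simple pre-training checks on a loan dataset: completeness, freshness
# and balance between groups. Column names and thresholds are illustrative only.

from datetime import date, timedelta


def dataset_warnings(rows, group_field="region", max_age_days=365):
    warnings = []

    # Incomplete: records with any missing value.
    incomplete = sum(1 for r in rows if any(v is None for v in r.values()))
    if incomplete:
        warnings.append(f"{incomplete} record(s) have missing values")

    # Out of date: records older than the allowed age.
    cutoff = date.today() - timedelta(days=max_age_days)
    stale = sum(1 for r in rows if r.get("recorded_on") and r["recorded_on"] < cutoff)
    if stale:
        warnings.append(f"{stale} record(s) are more than {max_age_days} days old")

    # Skewed: one group dominating the sample.
    counts = {}
    for r in rows:
        counts[r.get(group_field)] = counts.get(r.get(group_field), 0) + 1
    share_of_largest = max(counts.values()) / len(rows)
    if share_of_largest > 0.8:  # illustrative imbalance threshold
        warnings.append(f"one '{group_field}' value covers {share_of_largest:.0%} of the data")

    return warnings


if __name__ == "__main__":
    rows = [
        {"income": 42_000, "region": "north", "recorded_on": date(2015, 3, 1)},
        {"income": None, "region": "north", "recorded_on": date.today()},
        {"income": 55_000, "region": "north", "recorded_on": date.today()},
        {"income": 61_000, "region": "north", "recorded_on": date.today()},
        {"income": 48_000, "region": "north", "recorded_on": date.today()},
        {"income": 39_000, "region": "south", "recorded_on": date.today()},
    ]
    for warning in dataset_warnings(rows):
        print("Warning:", warning)
```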

And when it comes to training an ML algorithm, just as with a child, bad habits left unchecked are repeated and become embedded.

So, as long as humans are at least partly involved in making loan decisions, there is potential for discrimination.

Ethical AI should be a priority

Designing AI and ML systems that work in line with all legal, social and ethical standards is clearly the right thing to do. And going forward, financial services firms will come under pressure to make sure they are fully transparent and compliant.

Those who fall behind, or fail to make it a priority, may find themselves faced with not inconsiderable legal claims, fines and long-term reputational damage.

Trust has become the currency of our age, an immensely valuable asset that’s hard to gain and easily lost if an organisation’s ethical behaviour (doing the right thing) and competence (delivering on promises) are called into question.

If people feel they are on the wrong end of inexplicable decisions that they cannot challenge, because ‘black box AI’ means the bank cannot explain them and regulators often lack the technical expertise to understand them, there is a problem.
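One hedged illustration of the alternative: where the underlying scorecard is simple enough to be transparent, each decline can be accompanied by plain-language reasons. The feature names, weights and cut-off below are invented for the sketch and do not represent any bank's real model or a recognised explainability standard.

```python
# Sketch of turning a transparent (linear) credit score into per-decision reasons,
# so a declined applicant can be told which factors weighed against them.
# Feature names, weights and the cut-off are illustrative assumptions only.

WEIGHTS = {
    "years_at_address": 0.6,
    "income_to_debt_ratio": 1.4,
    "missed_payments_last_year": -2.0,
    "credit_history_length_years": 0.8,
}
APPROVAL_CUTOFF = 5.0  # assumed score threshold


def score(applicant: dict) -> float:
    return sum(WEIGHTS[name] * value for name, value in applicant.items())


def decline_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    # Rank features by their contribution and report the weakest ones.
    contributions = {name: WEIGHTS[name] * value for name, value in applicant.items()}
    weakest = sorted(contributions, key=contributions.get)[:top_n]
    return [f"'{name}' contributed {contributions[name]:+.1f} to the score" for name in weakest]


if __name__ == "__main__":
    applicant = {
        "years_at_address": 1,
        "income_to_debt_ratio": 1.2,
        "missed_payments_last_year": 2,
        "credit_history_length_years": 3,
    }
    s = score(applicant)
    print(f"Score {s:.1f} vs cut-off {APPROVAL_CUTOFF}")
    if s < APPROVAL_CUTOFF:
        print("Declined. Main reasons:")
        for reason in decline_reasons(applicant):
            print(" -", reason)
```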

How big is that problem?

No one is quite sure. However, a first-of-its-kind pilot study by the National Health Service (NHS) in England into algorithmic impact assessments in health and care services may give us some idea.

With a third of businesses already using AI to some extent, according to IBM’s 2021 Global AI Adoption Index, more senior executives are going to have to think long and hard about how to protect their customers against bias, discrimination and a culture of assumption.

Creating transparent systems

If we are to move into a world where AI and ML systems actually work as they were intended, senior leaders must commit to algorithmovigilance: embedding it seamlessly into existing corporate and governance processes, and supporting it with ongoing monitoring and evaluation, with immediate remedial action taken where necessary.

So, organisations must ensure that staff working with data or building machine learning models are focused on developing models devoid of implicit and explicit biases. And since there is always potential for a drift towards discrimination, training of systems needs to be seen as continuous, including monitoring and addressing how particular algorithms respond as market conditions change.
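As a hedged sketch of what such continuous monitoring might involve, the snippet below tracks the approval rate over a rolling window of recent decisions and raises an alert when it drifts away from a historical baseline. The window size, baseline and tolerance are illustrative assumptions, and real monitoring would track far more than a single statistic.

```python
# Minimal drift check: compare the approval rate over a recent window of decisions
# against a historical baseline and alert when the gap exceeds a tolerance.
# Window size, baseline and tolerance are illustrative assumptions only.

from collections import deque


class ApprovalRateMonitor:
    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 0.05):
        self.baseline_rate = baseline_rate
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, approved: bool) -> None:
        self.recent.append(approved)

    def drifted(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent decisions to judge
        recent_rate = sum(self.recent) / len(self.recent)
        return abs(recent_rate - self.baseline_rate) > self.tolerance


if __name__ == "__main__":
    monitor = ApprovalRateMonitor(baseline_rate=0.62, window=200, tolerance=0.05)
    # Simulate a market shift: approvals drop well below the historical 62%.
    for i in range(200):
        monitor.record(approved=(i % 2 == 0))  # roughly 50% approval rate
    if monitor.drifted():
        print("Alert: recent approval rate has drifted from the baseline; review the model.")
```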

For those who have yet to fully embrace what needs to be done, how might they move forward?

Establishing an internal AI centre of excellence may be a good starting point. Having subject matter experts in one place provides focus and enables a more centralised approach, allowing momentum to be built by concentrating on solving high-value, low-complexity problems to quickly deliver demonstrable returns.

Certainly, banks and financial institutions must learn from any best practice examples shared by regulators to educate themselves about biases in their systems that they may be unaware of.

And so, we come to the crux of the matter: our fundamental relationship with technology. While artificial intelligence and machine learning systems have transformational potential, we should not forget that they are there to serve a purpose and not be an end in themselves.

Removing unintended biases from the equation will be a multi-layered challenge for financial institutions, but one that must be tackled if they are to put ‘rogue algorithms’ back in their box.
