The EU AI Act – What Are The Implications For Banking and Fintech?

The European Parliament’s final vote yesterday on the AI Act, set to take effect this May, heralds the world’s most comprehensive AI legislation. Like the GDPR, it will have global implications beyond the EU.

The AI Act provides a comprehensive framework for the development of trustworthy AI and the responsible use of AI tools. It addresses transparency, bias, privacy violations, security risks, and the potential for disseminating misinformation, and it requires human oversight in the development of AI technologies.

The Act draws on seven non-binding ethical principles for AI, intended to help ensure that AI is trustworthy and ethically sound. These principles are:

–       human agency and oversight;

–       technical robustness and safety;

–       privacy and data governance;

–       transparency;

–       diversity, non-discrimination and fairness;

–       societal and environmental well-being;

–       accountability.

With a tiered risk-based approach, high-risk AI systems in sectors like banking and healthcare will face stringent legal obligations and sizable penalties for non-compliance. The Act categorizes AI into four risk tiers, from minimal to unacceptable, each
with escalating obligations.
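
To make the tiered structure concrete, below is a minimal sketch of how a firm might model the four tiers when triaging its AI systems. The tier names follow the Act, but the obligation summaries and the triage helper are simplified, illustrative assumptions rather than legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from lowest to highest."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Simplified, illustrative summaries of the escalating obligations.
OBLIGATIONS = {
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct",
    RiskTier.LIMITED: "Transparency duties, e.g. disclosing that users interact with AI",
    RiskTier.HIGH: "Risk management, data governance, documentation, human oversight",
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market",
}

def triage(system_name: str, tier: RiskTier) -> str:
    """Summarize the obligations attached to one AI system."""
    return f"{system_name} -> {tier.name}: {OBLIGATIONS[tier]}"

# Credit scoring is listed as high-risk in the Act's Annex III.
print(triage("credit-scoring model", RiskTier.HIGH))
```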

The EU AI Act prohibits the development, deployment, and use of certain AI systems, including:

–       Social scoring systems

–       Social engineering

–       Real-time remote biometric identification in public spaces

–       AI-based profiling to predict the likelihood of criminal behaviour

–       Untargeted scraping of facial images to expand facial recognition databases

–       AI-based manipulative techniques that undermine autonomy and free choice

Not all AI systems pose significant risks, especially if they don’t materially influence decision-making or substantially harm protected legal interests. AI systems with minimal impact on decision-making or minimal risk to legal interests, such as those performing narrow tasks or enhancing human activities, are considered low risk; for these, documentation and registration are emphasized for transparency. High-risk AI systems, by contrast, span several sectors, including banking and insurance (as well as medical devices, HR, education, and more).

Mandatory requirements for high-risk AI systems aim to ensure trustworthiness and mitigate risks, considering their purpose and context of use. Financial services and fintech firms, especially those dealing with customer data, should keep the following requirements for high-risk AI systems in mind:

–       Continuous, iterative risk management focused on health, safety, and fundamental rights, with ongoing updates, documentation, and stakeholder engagement

–       Conducting a fundamental rights impact assessment

–       Rigorous data governance to avoid discrimination and ensure compliance with data protection laws

–       Training and testing datasets must be representative, accurate, and free of biases to prevent adverse impacts on health, safety, and fundamental rights

–       Ensuring human oversight and transparency

–       Ensuring bias detection and correction (a minimal illustration follows this list)

–       Comprehensible documentation for traceability, compliance verification, operational monitoring, and post-market oversight, covering system characteristics, algorithms, data processes, and risk management in clear, up-to-date technical documents, plus automatic event logging throughout the AI system’s lifetime

–       High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness, and cybersecurity
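
As a concrete illustration of the bias-detection point above, the sketch below computes a demographic parity gap, i.e. the spread in approval rates across groups, over a hypothetical loan-decision log. The metric choice, data fields, and threshold are illustrative assumptions, not requirements prescribed by the Act.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return (gap, per-group rates) for a log of (group, approved) pairs.

    A large gap in approval rates across demographic groups is a signal
    to investigate the model and its training data for bias.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decision log: (demographic group, loan approved?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

gap, rates = demographic_parity_gap(log)
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, to be set by the firm's own policy
    print("Gap exceeds threshold: flag for review and document the finding.")
```

In practice a firm would apply established fairness toolkits and multiple metrics; the point is that bias checks can be automated, logged, and fed into the documentation the Act requires.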

Businesses must prioritize developing Responsible AI to comply with the new regulations and avoid hefty penalties for non-compliance. These are some of the steps firms should start with to ensure compliance:

  1. Establish AI governance early on, ensuring involvement and buy-in across stakeholders
  2. Educate and train your team in ethical principles of AI. Managing AI risks will require new skills, ranging from data analysis to security/privacy, legal, and much more.
  3. Perform an AI audit across the whole organization (not just engineering, but legal, HR, and other functions as well) to gain a full picture of where AI is used (a minimal inventory sketch follows this list)
  4. Check ongoing compliance
  5. Ensure your SaaS providers are using AI responsibly
  6. Ensure transparency, understanding, and explainability of the models that are used in your business
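
As a starting point for the audit in step 3 (and the vendor check in step 5), a firm might maintain a simple machine-readable inventory of every AI system in use. The record fields and names below are illustrative assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (illustrative fields)."""
    name: str
    owning_department: str        # engineering, legal, HR, risk, ...
    purpose: str
    uses_customer_data: bool
    risk_tier: str                # minimal / limited / high / unacceptable
    vendor: Optional[str] = None  # set when the AI comes from a SaaS provider

inventory = [
    AISystemRecord("credit-scoring model", "risk", "loan decisions", True, "high"),
    AISystemRecord("CV-screening tool", "HR", "candidate triage", True, "high",
                   vendor="ExampleHR"),  # hypothetical vendor name
]

# Surface the entries that need the high-risk compliance workstream.
for record in inventory:
    if record.risk_tier == "high":
        print(f"High-risk: {record.name} (owned by {record.owning_department})")
```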

While the Act is a much-needed step in the right direction, the devil is in the details, and it will have a substantial impact on the future of organizations, both traditional and AI-centric. As we navigate an era where AI’s impact is increasingly
profound, aligning with ethical standards and regulatory requirements is not just a matter of legal compliance but a strategic imperative. By focusing on Responsible AI, businesses not only safeguard themselves against significant fines but also position themselves
as trustworthy and forward-thinking entities in the rapidly evolving digital landscape. The journey towards Responsible AI is a challenging yet indispensable path that promises to redefine the future of technology, governance, and societal well-being.
