UK prepares AI rulebook two months after EU AI Act

The British government has frequently emphasized its goal of becoming a global “AI superpower.” Today, it revealed a new “AI rulebook” that it hopes will help regulate the industry, spur innovation, and increase public confidence in the technology. Under the AI rulebook, artificial intelligence regulation will be less centralized than in the EU; instead, existing regulatory authorities will be given the freedom to make decisions based on the circumstances at hand.

AI rulebook seeks to support innovation

The UK’s approach is built on six core principles that regulators must adhere to, with the intention of giving them the flexibility to apply those principles in ways that suit how AI is used in their particular industries.

Damian Collins, the digital minister, commented on the measures:

“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work.

It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”

At the moment, it can be challenging for businesses to navigate existing regulations and understand the extent to which they apply to AI. The government is also worried that innovation may be impeded, and that it will be harder for regulators to uphold public safety, if AI legislation does not keep up with the rate of technological advancement.

The AI rulebook portrays artificial intelligence as a “general purpose technology,” similar to electricity or the internet, that will have a significant impact on many aspects of our lives and whose effects will vary significantly by context and application, many of which we probably cannot even predict at this time.

According to the plans unveiled today, the UK’s approach seeks to give regulators and their industries as much latitude as possible. It remains to be seen whether this gives organizations the clarity they require, but the expectation is that the strategy will give them more freedom to invest.

This AI rulebook takes a different approach from the EU AI Act, which will be overseen by a single regulatory agency and seeks to standardize regulation across all member states. The rulebook describes the EU’s regulatory strategy as relying on a “relatively fixed definition in its legislative proposals.”

“Whilst such an approach can support efforts to harmonize rules across multiple countries, we do not believe this approach is right for the UK. We do not think that it captures the full application of AI and its regulatory implications. Our concern is that this lack of granularity could hinder innovation,” the AI rulebook states.

Instead, in what the UK calls a “Brexit seizing moment,” the AI rulebook outlines the fundamental characteristics of artificial intelligence to help define the framework’s breadth, while allowing regulators to develop more specific definitions of AI for their individual domains or industries.

“This is in line with the government’s view that we should regulate the use of AI rather than the technology itself – and a detailed universally applicable definition is therefore not needed. Rather, by setting out these core characteristics, developers and users can have greater certainty about scope and the nature of UK regulatory concerns while still enabling flexibility – recognising that AI may take forms we cannot easily define today – while still supporting coordination and coherence,” the AI rulebook adds.

In light of this, the AI rulebook suggests creating a “pro-innovation framework” for regulating artificial intelligence technologies. This framework would be supported by a set of principles that are:

  • Context-specific: They suggest regulating AI in accordance with its application and the effects it has on people, communities, and enterprises within a specific environment, and giving regulators the task of creating and enacting suitable legislative responses. This strategy will encourage innovation.
  • Pro-innovation and risk-based: They suggest concentrating on problems where there is demonstrable evidence of actual risk or missed opportunity, and they want regulators to focus on genuine threats rather than hypothetical or minor ones related to AI. They aim to promote innovation while avoiding erecting pointless obstacles in its path.
  • Coherent: A set of cross-sectoral principles customized to the unique properties of AI are proposed, and regulators are requested to understand, prioritize, and apply these principles within their respective sectors and domains. They will search for ways to assist and encourage regulatory cooperation in order to create coherence and boost innovation by making the framework as simple to use as possible.
  • Proportionate and adaptable: To keep their approach adjustable, they want to set out the cross-sectoral principles on a non-statutory basis at first, although they will keep this under review. They will ask regulators to take a light touch initially, with options such as voluntary measures or guidance.

“We think this is preferable to a single framework with a fixed, central list of risks and mitigations. Such a framework applied across all sectors would limit the ability to respond in a proportionate manner by failing to allow for different levels of risk presented by seemingly similar applications of AI in different contexts.

This could lead to unnecessary regulation and stifle innovation. A fixed list of risks also could quickly become outdated and does not offer flexibility. 

A centralized approach would also not benefit from the expertise of our experienced regulators who are best placed to identify and respond to the emerging risks through the increased use of AI technologies within their domains,” the government stated. 

Cross-sectoral principles

The rulebook acknowledges that the UK’s strategy carries risks and challenges of its own. Compared with a centralized model, the context-driven approach delivers less uniformity, which could cause confusion and give enterprises less assurance. As a result, the UK wants to make sure it handles “common cross-cutting challenges in a coherent and streamlined way” by adding a set of overarching principles to this strategy.

The cross-sectoral principles in the rulebook describe what the UK believes well-governed AI use should look like, and they build on the OECD Principles on AI. Existing regulators will interpret and put the principles into practice, and the government is exploring how it might strongly encourage the adoption of a “proportionate and risk-based approach.”

The AI rulebook summarizes the cross-sectoral principles under six headings:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Embed considerations of fairness into AI
  • Define legal persons’ responsibility for AI governance
  • Clarify routes to redress or contestability

The principles will be put into practice by regulators including Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency. They will be urged to consider “lighter touch” methods such as guidance, voluntary measures, and setting up sandboxes.

Conclusion

The EU AI Act’s approach will probably shape how AI regulation is handled globally, much as GDPR did. While context is crucial, there are numerous concerns linked with AI that could significantly affect people’s lives. Flexibility is desirable, but the general public and consumers need clear channels for reporting or contesting the use of AI, as well as access to information about the decision-making processes involved. The UK clearly wants a hands-off strategy that encourages investment, but one must hope that this will not come at the expense of decency, clarity, and equity.
