
A Primer on the EU AI Act: The World’s First AI Rulebook – Why It Could Drive Innovation

Over the past few years, artificial intelligence has gone mainstream, especially since the release of OpenAI’s ChatGPT – the breakthrough moment that kicked off AI’s integration into most industries. Every day, the media report on a new AI breakthrough – or publish an opinion piece warning about its dangers.

Either way, AI investments are pouring in. This should not come as a surprise, considering the value that AI is projected to bring to the global economy – over 15 trillion USD, according to a PwC report.

AI has almost limitless potential for bringing long-term growth. Organizations across the world are already using it to automate processes, make more accurate predictions, translate text more efficiently, build better recommendation systems, and generate content, whether audio, video, image, or text. The deployment of AI throughout companies’ processes offers a competitive advantage that few technologies can replicate.

However, with the impressive potential of AI also comes great risk. This is especially true when AI finds its way into the public sector. After all, AI used by governments can impact millions of people in key areas of their lives, like law enforcement or access to welfare. The latter has perhaps never been more relevant than in the Dutch government’s infamous childcare benefits scandal, in which an algorithm used by the Tax and Customs Administration incorrectly flagged numerous people for fraud – quite literally ruining the lives of thousands of people with an immigrant background.

Algorithmic discrimination scandals, such as the example above, and calls to take action (from, for instance, NGOs) put AI on the radar of EU regulators. After all, if AI can act autonomously or influence the decisions we make, then there should be rules governing those systems. If AI is going to become ever-present in our lives, then it can even infringe on our fundamental rights, including freedom from discrimination. And if AI is going to help control key aspects of society, such as public infrastructure, we had better have strong safeguards in place to make sure things do not go awry.

Adding to that, much of the discourse surrounding the AI Act, particularly when NGOs got involved, centered around not repeating the mistakes made in governing social media platforms and Big Tech companies. Many civil society actors argued that policymakers underestimated Big Tech and social media companies, such as Google and Meta, when they were still in their infancy (in part because they did not understand the technology), which has resulted in their extremely strong economic and political position today.

That is why the EU institutions have devoted considerable attention and resources to creating the world’s first dedicated rulebook on artificial intelligence. The original proposal by the European Commission (the only EU institution with the right to propose legislation) dates back to 2021, with negotiations lasting until December 2023, when a political agreement between the institutions was reached.

Next, policy officers worked diligently on ironing out the technical details, with the EU Member States giving the text the green light in early February 2024. Its final approval and publication are expected in the coming months.

The AI Act Itself

Let us take a closer look at what the AI Act (AIA) is. At its core, the AIA intends to protect EU citizens’ fundamental rights. That is why it groups AI systems into four risk categories and assigns obligations based on the category, as sketched in the code after the list below.

  • Unacceptable Risk: Systems that pose unacceptable risks to citizens’ fundamental rights and are therefore banned, except in narrow cases such as certain law enforcement uses. Examples include social scoring and credit systems (as used in China), systems that exploit or manipulate people’s vulnerabilities, and emotion recognition systems in schools or workplaces.
  • High Risk: Systems that pose significant risks without warranting a ban. These systems are subject to strict rules, such as risk management obligations and a conformity assessment, and they will have to be registered in an EU-wide public database. Examples include systems used to influence elections or to provide access to essential public services, employment, or healthcare. Had the Dutch SyRI (Systemic Risk Indication) algorithm been created under these rules, it would have been classified as “high-risk”, and its terrible consequences would likely have been avoided.
  • Limited Risk: Systems with a relatively low chance of directly harming citizens’ rights. Such systems are subject to transparency obligations, such as disclosing that certain content is AI-generated (in the case of chatbots and image generators, for example). The limited-risk category also includes systems in the same fields as high-risk systems (e.g., education) that only perform narrow, procedural tasks, such as adding a layer of improvement on top of a completed human activity (grammar checks, for instance) or detecting patterns in human decision-making to surface anomalies and trends.
  • Minimal or No Risk: Systems essentially considered to carry barely any risk. They can be developed and deployed without any specific obligations. Examples would include AI-powered spam filters.
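
For readers who think in controls rather than legal text, here is a minimal, hypothetical Python sketch of the four-tier structure described above. The tier names and obligation lists are simplified summaries of this article, not the Act’s own wording, and the code is purely illustrative.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the AI Act."""
    UNACCEPTABLE = "unacceptable"   # banned outright (with narrow exceptions)
    HIGH = "high"                   # strict obligations + conformity assessment
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Hypothetical, simplified mapping of tiers to headline obligations;
# the Act itself spells these out in far more detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "risk management system",
        "conformity assessment (CE marking)",
        "registration in the EU-wide public database",
    ],
    RiskTier.LIMITED: ["transparency (e.g. disclose AI-generated content)"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(tier.value, "->", obligations_for(tier) or ["none"])
```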

One key change that came relatively late in the legislative process is the addition of extra rules for general-purpose AI models like GPT-4 (which powers OpenAI’s ChatGPT). These rules include disclosure of information to authorities and downstream developers (companies building on the foundation model), with added rules for models deemed to have systemic risk.

These models are also subject to the transparency obligations of the limited-risk category. In addition, the most powerful models must perform an assessment of “systemic risk” because of their potential impact. In other words, more impactful models pose bigger risks and will therefore be subject to more stringent rules.
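
How is “most powerful” determined? The Act presumes a general-purpose model carries systemic risk once its cumulative training compute exceeds 10^25 floating-point operations, a threshold the Commission can adjust over time. The sketch below, with hypothetical function and variable names, shows roughly how that tiered logic might look in code; the obligation strings are shorthand for this article’s summary, not the legal text.

```python
# Illustrative sketch of the tiered logic for general-purpose AI (GPAI) models.
# The 10^25 FLOP figure is the Act's presumption threshold for systemic risk;
# everything else here is simplified for illustration.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def gpai_obligations(training_flops: float) -> list[str]:
    """Return illustrative obligations for a GPAI model provider."""
    obligations = [
        "technical documentation for authorities",
        "information for downstream developers",
        "transparency duties (limited-risk tier)",
    ]
    if training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD:
        # The most powerful models pick up the extra systemic-risk duties.
        obligations.append("systemic-risk assessment and mitigation")
    return obligations


print(gpai_obligations(3e25))  # a frontier-scale model triggers the extra duties
```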

One of the first questions risk managers ask about a regulation like this is: “Does it have enforcement mechanisms?” The AI Act, in fact, has a notably robust penalty framework. Penalties for non-compliance with the prohibitions (the unacceptable-risk category) can reach as high as €35 million or 7% of a company’s global annual turnover. Providing incorrect or misleading information to competent authorities may result in fines of up to €7.5 million or 1% of annual turnover.

Failure to comply with other obligations, including those on GPAI model providers, can lead to penalties of up to €15 million or 3% of annual turnover. To illustrate: with its 2023 turnover of roughly €1.48 billion (about 1.6 billion USD), OpenAI could face a fine of up to around €44 million.
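
As a back-of-the-envelope check on these figures, here is a small Python sketch. It assumes the “whichever is higher” rule that the Act applies to larger companies (SMEs are treated more leniently), and all names are illustrative rather than an official calculation method.

```python
# Simplified penalty caps from the text: (fixed cap in EUR, share of global turnover).
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # €35M or 7% of global turnover
    "other_obligations": (15_000_000, 0.03),      # €15M or 3% (incl. GPAI providers)
    "incorrect_information": (7_500_000, 0.01),   # €7.5M or 1%
}


def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a violation tier, taking whichever cap is higher."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)


# The OpenAI illustration from the text: 3% of roughly €1.48 billion.
print(f"{max_fine('other_obligations', 1_480_000_000):,.0f}")  # -> 44,400,000
```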

One detail on the penalty regime is interesting: the fines for Union institutions and agencies are much lower, between €750,000 and €1.5 million. To some, this could seem jarring, since the use of AI in the public sector (and particularly at the European level) can affect many more people more severely than in the private sector, where some companies’ activities only affect a limited number of citizens’ lives.

What It Means for Risk Management

If you are an experienced risk manager, especially one familiar with AI, the AIA does not bring too many new things to the table. It does not invent entirely new concepts out of thin air, and the obligations it imposes seem both sensible (in that they effectively limit risk) and not overly restrictive. Even though some proponents of a more laissez-faire approach warned that the Act would kill AI innovation in the EU, those fears seem overblown. In fact, to a seasoned risk professional, the AI Act’s demands do not look all that revolutionary – if anything, they look like common sense for any data-driven company.

Nonetheless, it is important to bear in mind that we are looking at the regulation through, in a manner of speaking, “risk-colored glasses”. If you are used to seeing, managing, and mitigating risks across entire organizations, you might think that every company is using healthy and sensible risk management practices. In reality, many companies do not have the right practices in place just yet, and the AIA’s obligations and steep fines pose a material risk to their operations.

In our experience, many companies have not been using the right risk management procedures for AI. The technology itself is different, in part because it takes away some agency from your employees or colleagues, and it sometimes limits control over your processes. In some cases, such as when employees quietly use ChatGPT for critical processes, AI can operate almost invisibly: it may already be in use within the very company whose risks you are supposed to have a firm grasp on, without you even knowing it.

Because of the enormous attention AI receives and the almost otherworldly hype surrounding it, companies with little digital transformation experience are also trying to leverage its power. Many of them lack the established, streamlined, and vetted risk management processes that could protect them from accidentally disrupting their business – by needlessly stopping an industrial production line, for example – or worse, harming people through discrimination.

AI Trust and Innovation

The AI Act really formalizes, streamlines, and hopefully enforces many of the good risk management and mitigation practices that some companies were already using. Crucially, it also forces those who have so far ignored AI risks to ensure the systems they develop and deploy are safe.

Tying real consequences to these rules has an added – and often underestimated – bonus. High-risk systems are subject to several requirements and are vetted through a conformity assessment that, if successful, earns them a CE marking: a certification showing that these systems conform to EU standards for consumer safety. The CE marking signals not only that a company has appropriate risk management strategies in place, but also that the AI tool itself is fundamentally trustworthy.

User trust happens to be a key driver of technology adoption in general, and this is true for AI as well. Hence, AI tools that are trusted more are more likely to attract many users. This is a direct market advantage that AIA-compliant companies can leverage immediately, so companies that adopt the AIA’s requirements quickly have a leg up on their competition.

Going even further, the AIA could stimulate innovation within companies. Instead of the Wild West, we now have clear guidelines and standards for new projects to adhere to. Technical teams and developers, for example, no longer have to worry that an AI project they have been working on will turn out to be impossible to deploy because the rules were set only after all the work was done.

The AIA also fosters better data management practices, which in turn sets the stage for more streamlined innovation efforts.

The jury is still out on what the effects of the AI Act will be, both on AI safety and on AI investment and innovation in Europe. What is certain, however, is that companies will have to start making moves to address their risk management practices. Even though many of the obligations will take one, two, or three years to come into force, it takes time not just to set up the right risk management and compliance processes, but, perhaps even more so, to get those processes to the point where they become second nature. After all, fine-tuning risk management and compliance within an organization is vital to avoiding the massive fines discussed above. That takes both patience and careful guidance.

Fortunately, there are many resources available for companies looking to start their AI compliance process. You can find more information at Cronos.ai.
