
Legal Considerations in Implementing the EU AI Act

In an era marked by exponential technological advancements, Artificial Intelligence (AI) stands as a transformative force shaping industries and societies worldwide. As AI gradually seeps into various aspects of our lives, concerns regarding its ethical, legal, and societal implications have become increasingly pronounced. Recognizing the necessity of regulating AI to ensure its responsible development and deployment, the European Union (EU) has taken a pioneering step in this regard with the AI Act.

The Act is tailored to address the complex challenges posed by AI technologies. It outlines the rights and responsibilities of various stakeholders while setting the criteria for determining AI systems’ compliance and the consequences for those who fail to comply. This article serves as an introductory guide to the AI Act, offering professionals insights into its key provisions, implications, and potential impacts on their respective domains.

In principle, the AI Act presents both challenges and opportunities for industry compliance, identifying room for innovation and growth within the regulatory framework. It should be said at the outset that the AI Act does not derogate from regulations already “established within the market”.

In the exact wording of Recital 9 of the adopted text: “The harmonized rules laid down in this Regulation should apply across sectors and, in line with the New Legislative Framework, should be without prejudice to existing EU law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary”. The AI Act itself simply creates a regulatory framework for a unique “subject”, the AI system, and consequently for anyone concerned through their participation in the AI lifecycle.

Scope

From a legal perspective, the scope of the AI Act is an interesting feature that distinguishes it from other, similar types of legislation. Its reach goes beyond the geographical boundaries of EU territory: it takes into consideration scenarios where the output produced by AI systems is utilized within the EU. Even if an AI system is operated or hosted outside the EU, or its manufacturer is established outside the EU, the AI Act will apply whenever its results are used within the European Union.

The regulator went even further, ensuring that the regulation also covers product manufacturers who place on the market or put into service an AI system together with their product, under their own name and trademark. The manufacturer does not need to have created the AI system itself; it is enough to brand it with their name and add it to their product. The logic behind this scope is simple: the legislator’s goal was to capture as many cases as possible (falling within the legal framework defined by EU law) in which AI “enters” the EU market or its results could affect it.

Risk-Based Approach

AI systems are rapidly evolving technologies that may contribute to various areas of our private or professional lives. However, making them an integral part of our daily lives may also generate risk and harm fundamental rights or interests protected by EU law. This is why the initial criterion for evaluating the impact of AI has shifted towards assessing the extent of risk associated with these systems, leading to their categorization into specific risk classes. In particular, the AI Act prohibits practices whose risk is deemed unacceptable, such as social scoring, exploitation of the vulnerabilities of persons using subliminal techniques, and emotion recognition in the workplace.

It introduces standardized requirements for AI systems based on their categorization, reflecting the level of risk they may represent:

  1. High-risk AI systems: These are either AI systems intended to be used as a safety component of a product, or AI systems that are products themselves, which are subject to obligatory third-party conformity assessment, or systems covered from a sectoral perspective by Annex III.
  2. AI systems of minimal or limited risk level: Although these are not explicitly mentioned in the law as a separate group, their determination can be inferred from the obligations imposed on all AI systems universally, i.e. obligatory requirements placed even on systems that are not formally categorized within the high-risk group. For instance, providers are obliged to design and develop systems in a way that people understand from the outset that they are interacting with an AI system (e.g., chatbots); deployers are obliged to inform and obtain the consent of people exposed to permitted emotion recognition or biometric categorization systems (e.g., safety systems monitoring driver attentiveness), and to disclose and clearly label visual or audio “deepfake” content that has been manipulated by AI.

In addition to the categorization based on risk levels, general-purpose AI (GPAI) systems were carved out as a distinct category due to their unique characteristics.

Following these categories, different subjects are assigned different requirements. The essential obligations of any operator of an AI system can be simplified into two basic steps (see the sketch after this list):

  1. Necessary assessment of whether the AI system is subject to any compliance obligations; if so
  2. Classify it within the correct risk level or within the GPAI group, using both the positive and the negative approach (Article 6(3) of the AI Act specifies which AI systems, despite their function, should under certain circumstances not be considered high-risk).
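
To make these two steps concrete, the following minimal sketch (in Python) models the assessment as a simple decision routine. It is purely illustrative: the flags and their names (prohibited_practice, annex_iii_use_case, art_6_3_derogation, etc.) are invented simplifications of the Act’s actual legal tests, not a compliance checklist.

    from dataclasses import dataclass
    from enum import Enum, auto

    class RiskClass(Enum):
        PROHIBITED = auto()       # banned practices, e.g. social scoring
        HIGH_RISK = auto()        # safety components or Annex III use cases
        GPAI = auto()             # general-purpose AI, own set of obligations
        LIMITED_MINIMAL = auto()  # transparency-style duties only

    @dataclass
    class AISystem:
        # Hypothetical flags standing in for the Act's real legal tests.
        prohibited_practice: bool = False
        general_purpose: bool = False
        safety_component: bool = False
        annex_iii_use_case: bool = False
        art_6_3_derogation: bool = False  # negative test: Annex III listed, but no significant risk

    def classify(system: AISystem) -> RiskClass:
        # Step 1: determine whether compliance obligations apply at all;
        # Step 2: place the system in a risk class (positive approach),
        # applying the Article 6(3) carve-out (negative approach).
        if system.prohibited_practice:
            return RiskClass.PROHIBITED
        if system.general_purpose:
            return RiskClass.GPAI
        if system.safety_component or (
            system.annex_iii_use_case and not system.art_6_3_derogation
        ):
            return RiskClass.HIGH_RISK
        return RiskClass.LIMITED_MINIMAL

    print(classify(AISystem(annex_iii_use_case=True)))  # RiskClass.HIGH_RISK
    print(classify(AISystem(annex_iii_use_case=True,
                            art_6_3_derogation=True)))  # RiskClass.LIMITED_MINIMAL

In practice, each of these flags corresponds to a legal analysis in its own right; the point of the sketch is only the order in which the tests are applied.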

Depending on the risk level, obliged subjects, i.e. operators of high-risk, minimal-risk, or limited-risk AI systems, will be required to comply with a different range of duties.

For instance, developers of high-risk AI systems, aside from conformity assessment, need to ensure that effective AI governance frameworks and compliance, quality, and risk management systems are in place. The same obligation applies to GPAI systems, followed by specific obligations imposed on providers of models with systemic risk, defined by the regulation as GPAI models trained with a total computing power of more than 10^25 floating-point operations (FLOPs).
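
As a back-of-the-envelope illustration of that systemic-risk threshold, the snippet below compares a model’s cumulative training compute against the 10^25 FLOP cut-off; the compute figure used here is invented.

    # Threshold stated in the regulation for GPAI models with systemic risk.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    training_compute_flops = 3.2e25  # hypothetical cumulative training compute

    if training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        print("GPAI model with systemic risk: additional obligations apply")
    else:
        print("GPAI model below the systemic-risk threshold")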

Non-compliance with the law may result in the imposition of penalties, ranging from €7.5 million or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request, up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements concerning prohibited practices or non-compliance with requirements on data.
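
Since each tier of fine is capped at the higher of a fixed amount and a share of worldwide annual turnover, the effective maximum is a simple max() of the two values, as in the illustrative calculation below (the turnover figure is invented).

    def max_fine(fixed_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
        # Upper bound of an AI Act fine: the higher of the fixed amount
        # and the percentage of total worldwide annual turnover.
        return max(fixed_eur, pct_of_turnover * turnover_eur)

    turnover = 2_000_000_000  # hypothetical worldwide annual turnover in EUR

    # Misleading information to authorities: EUR 7.5M or 1.5% of turnover
    print(max_fine(7_500_000, 0.015, turnover))   # 30000000.0

    # Prohibited practices: EUR 35M or 7% of turnover
    print(max_fine(35_000_000, 0.07, turnover))   # 140000000.0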

Ethical Challenges for Trustworthy AI

Because AI engages with data and draws insights from diverse datasets, the AI Act poses significant ethical and societal questions regarding the conscientious deployment of AI and its potential effects on individuals, communities, and society at large. The risk-based approach therefore serves as the foundation for a balanced and efficient framework of enforceable regulation, acknowledging that citizens’ trust in AI can only be built on an ethics-by-default and ethics-by-design regulatory framework.

Mindful of the necessity of an ethical approach, the Commission created the High-Level Expert Group on Artificial Intelligence (AI HLEG) in 2018, which compiled ethics guidelines for trustworthy AI to ensure the AI Act would serve this purpose. The guidelines define seven fundamental principles:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination, and fairness
  6. Societal and environmental well-being
  7. Accountability

These principles exist to ensure that any legislative action related to artificial intelligence, robotics, and related technologies is at the same time in line with the principles of necessity and proportionality.

All are of equal importance, support each other, and should be implemented and evaluated throughout the AI system lifecycle. For this purpose, both technical and non-technical methods can be employed.

Technical methods should begin with the system’s architecture, place an emphasis on explainable AI, and earn credibility through persistent testing and validation.

Non-technical methods consist of implementing codes of conduct, standardization, and certification, as well as ensuring accountability via governance frameworks.

Together, they should serve to communicate, in a clear and proactive manner, information to stakeholders about the AI system’s capabilities and limitations and about the implementation of requirements, setting realistic expectations. Only in this manner can we use the best of what AI systems offer and avoid the threats those systems may pose due to their opacity.

In this context, it is important to build AI systems that are trustworthy, since human beings will only be able to reap their benefits when the technology, including the processes and the people behind it, is trustworthy. Companies that prioritize these principles and invest in developing AI systems that adhere to the requirements of the regulation may differentiate themselves in the market and attract customers who value ethical considerations in AI.

This could drive innovation in the development of AI systems that are not only technically advanced but also ethical, trustworthy, and aligned with societal values. It could also enhance their reputation for responsible AI development and deployment, particularly in markets where ethical considerations are increasingly important. As an outcome, it could give European companies a competitive edge in international markets and strengthen Europe’s position as a leader in trustworthy AI.

Impact on Innovation and Competitiveness

Compliance with the requirements set out for operators of minimal/limited-risk, high-risk, or GPAI systems may entail significant costs for businesses, including investment in technology, data governance, documentation, and certification processes. At first glance, this may seem to create a barrier preventing startups and smaller companies from entering the market or competing with incumbents.

With that in mind, and given that the new regulatory requirements call for special expertise to be in place (whether internal or outsourced, in addition to other resources), the outcome could be that the regulation limits the ability of these SMEs and startups to innovate and leads to a consolidation of the AI industry, with larger companies dominating the market and potentially stifling innovation.

Therefore, it should be stated that although the AI Act imposes stricter requirements on certain AI systems, it also provides an opportunity for innovation in these areas. Companies that invest in developing AI systems for high-risk applications, such as healthcare or transportation, may similarly gain a competitive advantage by demonstrating compliance with the regulation and providing assurance of safety, reliability, and transparency to customers and stakeholders.

Together with ethical principles in the development of AI systems, this could stimulate innovation in areas where the potential benefits of AI are significant but where concerns about safety or ethics may previously have hindered adoption. More importantly, the AI Act creates opportunities for innovation by introducing regulatory sandboxes and real-world testing.

The purpose of their establishment is to foster AI innovation by providing a controlled environment for experimentation and testing in the development and pre-marketing phase. The aim of this setting is not only to ensure the compliance of innovative AI systems with the regulation, but also to:

  1. Enhance legal certainty for innovators
  2. Understand the opportunities, emerging risks (mitigating them at an early stage), and the impacts of AI use
  3. Support cooperation and the sharing of best practices (creating Networks of AI Excellence Centers and setting up world-class Testing and Experimentation Facilities (TEFs) for AI)
  4. Accelerate progress by removing access barriers for SMEs, including start-ups
  5. Enable the competent authorities’ oversight (in certain cases, where the rules set by the regulation are followed correctly, testing can be conducted without the involvement of a supervising authority)
  6. Facilitate regulatory learning for authorities and undertakings, including collaboration with all market stakeholders with a view to future adaptations of the legal framework (standardization and guidelines to be adopted)

Finally, it is worth noting that the regulation itself emphasizes self-conformity assessment for operators of high-risk AI systems. The regulator did not want to create an administrative burden that would lead to the suppression of development and competition in the market.

On the other hand, self-conformity assessment is also accessible to subjects who are not obligated to it by law (those presenting a lower risk level, if any) and may embody a competitive advantage through a declaration of compliance with the regulation. Moreover, the regulatory framework established by the AI Act may influence the global competitiveness of European AI-based/focused companies compared to their counterparts in other territories, such as the United States or China.

While compliance with the regulation may initially impose costs and administrative burdens on European businesses, the transparency and trust incorporated within the entire AI lifecycle may ultimately turn these factors into a competitive advantage.

Conclusion

The European Union’s enactment of the AI Act heralds a new age in AI governance, marking a significant step towards ensuring the responsible development and deployment of artificial intelligence technologies.

As AI permeates various facets of our lives, the need for ethical, legal, and societal considerations has become paramount. The AI Act, therefore, represents a significant milestone in global AI regulation, demonstrating the EU’s commitment to pioneering a comprehensive legislative approach that fosters the trustworthy and responsible utilization of AI systems following other major EU digital legislation, such as the General Data Protection Regulation (GDPR), the Digital Services Act, the Digital Markets Act, the Data Act, the Cyber Resilience Act, etc.

Overall, the AI Act has the potential to shape the landscape of AI innovation and competitiveness in the European Union by setting standards and requirements for AI development and deployment. While compliance with the regulation may pose challenges for businesses, the Act should not be seen merely as a deterrent issuing sanctions for failure to meet prescribed obligations, as it also presents opportunities for innovation, differentiation, and leadership in ethical and responsible AI. Understanding the potential impact of the AI Act on innovation and competitiveness is essential for businesses and policymakers alike to navigate the evolving regulatory landscape and harness the opportunities it presents.

In conclusion, the AI Act represents a landmark initiative in global AI regulation, hopefully positioning the EU as a leader in promoting trustworthy and responsible AI development.

By balancing innovation with ethical considerations and regulatory oversight, the regulation sets a precedent for ensuring AI benefits society while mitigating potential risks.
