
AI Under Watch: The EU AI Act

Artificial Intelligence (AI) is transforming many sectors and changing the way people live, work, and interact, and its influence on society is only growing stronger. However, the rapid spread and development of AI also raises concerns about bias, privacy protection, and other potential risks.

What is the EU AI Act?

To address these issues and the need for ethical and responsible AI development, the European Union (EU) has developed the EU AI Act, a proposed piece of legislation that aims to create a comprehensive legal framework for regulating AI systems.

The EU AI Act was first proposed by the European Commission in April 2021 and then approved by the European Parliament on June 14, 2023, representing a major first step toward regulating AI.

Currently, the document serves as a draft of the future law, which the European Parliament, the European Commission, and the Council of the EU are working together to finalize by the end of 2023.

The Motivation Behind the EU AI Act

The EU is committed to developing AI rules because it is aware of the risks the technology brings and is dedicated to protecting people’s rights. Given AI’s potential to invade privacy, discriminate, and undermine human control, the EU aims to promote the responsible and ethical development and use of AI by setting clear rules and standards.

Furthermore, by addressing the potential risks and promoting transparency, accountability, and human oversight, Europe aims to build confidence among individuals and encourage the adoption of AI technologies in a manner that is safe and beneficial for society.

The Risk-Based System of the EU AI Act

The EU AI Act outlines a risk-based approach, dividing AI systems into different categories depending on their potential impact, ranging from unacceptable risk to minimal risk.

AI systems considered to pose unacceptable risk, such as facial recognition in public places, predictive policing tools, and social scoring systems, will be banned outright. The Act also imposes transparency requirements, such as informing people when the content they see is AI-generated and ensuring that illegal content is not generated.

Some significant implications of the European Parliament’s draft text on AI regulations are:

  • Ban on emotion-recognition AI – The draft suggests prohibiting the use of AI to identify people’s emotions in policing, schools, and workplaces. AI-based facial detection and analysis, however, are not banned, and this distinction is expected to remain contentious.
  • Ban on real-time biometrics and predictive policing in public spaces – This proposed ban means it will be prohibited to use facial recognition and other biometric tools to track individuals or predict criminal behavior. It has sparked considerable debate, as some argue that these technologies are necessary for public safety and effective law enforcement.
  • Ban on social scoring – The draft proposal seeks to prohibit the practice of social scoring, which involves using data about individuals’ social behavior to build generalizations and profiles about them.
  • New restrictions for generative AI – The draft introduces new rules for generative AI and proposes that providers of large language models disclose any copyrighted material used to train them.
  • New restrictions on recommendation algorithms on social media – The draft proposes stricter regulation of the recommendation algorithms used on social media platforms, classifying them as “high risk,” which means they would be monitored and examined more closely.

On the other hand, AI systems that pose minimal risk, such as spam filters, will face far fewer requirements. The goal of the AI Act is to strike a balance between controlling risky AI systems and leaving room for innovation in safer ones.
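
To make the tiered approach more concrete, here is a minimal Python sketch of how an organization might triage its own systems against the Act’s risk categories. The tier names mirror the draft, but the example use cases, the EXAMPLE_USE_CASES mapping, and the classify_system helper are illustrative assumptions rather than anything defined in the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely mirroring the draft EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # permitted, but subject to strict obligations
    LIMITED = "limited"            # transparency duties (e.g., label AI-generated content)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

# Illustrative mapping only; the Act defines these categories in legal text,
# not as a lookup table.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "social media recommendation algorithm": RiskTier.HIGH,
    "chatbot that generates content for users": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify_system(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a use case.

    Unknown use cases default to HIGH so they get reviewed rather than ignored.
    """
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("spam filter", "social scoring", "automated CV screening"):
        print(f"{case}: {classify_system(case).value}")
```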

The Commission has suggested that those who break the EU AI Act’s rules could face fines of 30 million euros or 6% of their global annual turnover, whichever is greater, while the Parliament is advocating for penalties of up to 40 million euros or 7% of a company’s annual revenue. Both proposals exceed the fines set by the General Data Protection Regulation (GDPR), whose most serious violations carry fines of up to 20 million euros or 4% of an organization’s global annual revenue.
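
As a rough illustration of the “whichever is greater” rule, the sketch below compares the two proposed ceilings for a hypothetical company; the 2-billion-euro turnover figure and the max_fine helper are assumptions for illustration only, not anything prescribed by the Act.

```python
def max_fine(flat_cap_eur: int, pct_of_turnover: float, annual_turnover_eur: int) -> float:
    """Return the applicable ceiling: the flat cap or the percentage-based cap, whichever is greater."""
    return max(flat_cap_eur, pct_of_turnover * annual_turnover_eur)

# Hypothetical company with 2 billion euros in annual turnover.
turnover = 2_000_000_000

commission_cap = max_fine(30_000_000, 0.06, turnover)  # Commission proposal: 30M euros or 6%
parliament_cap = max_fine(40_000_000, 0.07, turnover)  # Parliament proposal: 40M euros or 7%

print(f"Commission proposal ceiling: EUR {commission_cap:,.0f}")  # EUR 120,000,000
print(f"Parliament proposal ceiling: EUR {parliament_cap:,.0f}")  # EUR 140,000,000
```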

EU AI Act’s Impact

Europe’s rules for AI could have a far-reaching impact. Researchers suggest that providers of larger AI models should bear transparency obligations, and that regulators will need sufficient technical resources to enforce the rules.

The main challenge is ensuring that AI companies genuinely change their practices to meet the regulations. Without strong enforcement pressure, they might make only superficial adjustments that satisfy the rules on paper without making a real difference. But if the AI Act becomes law and is enforced effectively, it could drive positive change, making AI more transparent and accountable.
