Q&A from the Webinar

AI RISK MANAGEMENT: ISO/IEC 42001, THE EU AI ACT, AND ISO/IEC 23894

As artificial intelligence rapidly advances, the need for robust AI risk management is more critical than ever. Navigating the complexities and regulations surrounding AI requires a deep understanding of emerging standards and legislative frameworks.

Delve into key insights from our recent webinar, which highlights the ISO/IEC 42001 standard that guides organizations in establishing and improving AI management systems, as well as the European Union’s landmark legislative proposal for AI regulation and the methodologies prescribed by ISO/IEC 23894 for identifying, assessing, and mitigating AI-related risks. In the article below, the speakers, Miriam Podskubova and Callum Wright, address some questions on the topic:

Q: To what extent does ISO/IEC 42001 help with compliance with the EU AI Act, and how much overlap is there between the two?

A: The AI management system should be integrated with the organization’s processes and overall management structure. Specific issues related to AI should be considered in the design of processes, information systems, and controls. ISO/IEC 42001 provides guidelines for deploying the applicable controls to support such processes and, overall, helps organizations achieve compliance with the provisions of the EU AI Act.

Q: Is the EU AI Act directed at EU citizens only, or at EU residents as well?

A: EU legislation, including the EU AI Act, applies (once it is valid and effective) throughout the territory of the EU, regardless of the citizenship of its inhabitants.

Q: If a health care provider outside the EU is using AI and one of its patients is an EU citizen, does the Act still apply?

A: Assuming that the patient is also situated outside the EU at the time the health care is provided, the application of the AI Act would depend on the character of the health care provided.

That is, if the health care center usually targets EU citizens to provide them with care using AI tools, it has to bring those tools into compliance with the AI Act. If the presence of an EU citizen is incidental, a health care center whose AI tools are not compliant with the AI Act should, to avoid any penalty for breach of law, refrain from providing health care via AI tools, unless it is a life-threatening event, in which case the lifesaving obligation would prevail over the provider’s interest in avoiding any sanction.

Q: Do cyber consultants have any place within legal firms?

A: Sure, but it depends. They might treat the law firm as a customer, providing it with cyber-related consultancy on setting up its systems and data privacy. Or they may collaborate with the firm to provide a client (a third party) with comprehensive advisory services in the field of cybersecurity.

Q: Is AI considered a risk in itself?

A: It depends. Based on an initial assessment, any AI system needs to be evaluated as to the level of risk it represents to the health, safety, or fundamental rights of natural persons, including whether or not it materially influences the outcome of decision-making. On that basis, the level of risk can be scaled from no, minimal, or limited risk, through high-risk AI systems, up to AI systems that are prohibited due to the unacceptable level of risk they pose.
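
As a purely illustrative sketch of that tiered taxonomy (the tier names follow the Act’s risk-based approach, but the mapping logic, parameter names, and function below are hypothetical, not a legal test):

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers reflecting the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk AI system"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal or no risk"

def classify(prohibited_practice: bool,
             covered_high_risk_use_case: bool,
             materially_influences_decisions: bool,
             interacts_with_people: bool) -> RiskTier:
    # Crude triage, not a legal test: the real assessment weighs the
    # risk to health, safety, and fundamental rights of natural persons.
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    # A covered use case that does not materially influence the outcome
    # of decision-making may fall outside the high-risk tier.
    if covered_high_risk_use_case and materially_influences_decisions:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For instance, `classify(False, True, True, True)` would land in the high-risk tier, while the same use case without a material influence on decisions would not.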

Q: Do you think lawyers will have to be educated in cyber and AI, and does this represent a gap?

A: I honestly think that lawyers, like all other professionals, need to be educated in, or at least interested in, the latest developments in technology so as not to become obsolete in their daily practice. But if the question is whether this is necessary for all of them, then the answer depends on each lawyer’s specialization. Those who do not face AI technology in any manner within their practice (either by using it or by representing clients who use it) might still lose market or business advantage to their competitors; AI can be as useful to lawyers as it can be to any other profession.

Q: Do you think the EU AI Act plays a positive role for LegalTech in AI cases?

A: It may seem otherwise, but I think that the structure of the Act itself and its framework of regulation, including all the tools it provides, might be helpful to that outcome: setting ground rules for large companies while giving smaller enterprises a chance to thrive as well. From a legal point of view, it does represent a set of rules and obligations imposed on the relevant subjects, but in my opinion the most relevant parts will be the guidelines and advisory opinions to be followed, as promised within its text.

Q: When using AI for shortlisting potential applicants, what could be considered “high-risk AI”?

A: In this case, the right candidate (one fulfilling all the set requirements) might be excluded from the list, without any rational grounds being provided, due to a biased AI system or its malfunctioning. Such an exclusion would be considered highly intrusive to the fundamental rights of the applicant, which is why the Act treats AI used in employment and recruitment as high-risk.

Q: Is AI by itself a risk to humanity, or is it only a risk in relation to human behavior? And regarding AI risks, what are the obvious risks of AI in relation to transhumanism?

A: I do understand that this question goes slightly beyond the scope of the AI Act’s applicability. Nevertheless, in my opinion the answer would be yes. AI may represent a risk to humanity if we consider its nature as uncontrollable or unpredictable, but such aspects occur in most if not all areas of life (not only in the virtual environment; machines or other things interacting with humans or animals may “react”, or rather produce outcomes, in unexpected ways).

If we look at AI as a product, there always will be, or should be, concern about bugs or other malfunctions. Especially when we take the robustness of the AI into consideration, the range of unexpected behavior broadens, for instance when the AI has to face novel or adversarial inputs. Moreover, if the input used in creating the AI system is biased or wrongly programmed, its operation may become misaligned with human values and may lead to many ethical dilemmas.

Secondly, AI in relation to human behavior may broaden the aspects already mentioned into an even greater impact. We may discuss malicious use, the economic or social impact that using AI systems may have, and also the level of dependence on AI or any similar program-operated system, which would increase the vulnerability of society as a whole. All these aspects would be amplified where the AI system was developed with bias from the beginning, or where such bias is non-reviewable.

To sum it up, while AI itself poses certain inherent risks due to its potential capabilities and unpredictability, the primary risks to humanity often emerge from how humans design, deploy, and interact with AI systems.

Ethical development, proper regulation, and responsible usage are crucial in ensuring that AI technologies benefit society while minimizing potential harms. Therefore, the risks are not solely intrinsic to AI but are significantly influenced by human behavior and decision-making.

Answer to your second question:

I would say the main obvious risks are the fear of a loss of privacy and the loss of human autonomy and moral agency. All these aspects may end up affecting mental health and relationships. Finally, there will always also be the concern of AI overpowering humans, but I do not think this represents a real threat for now.

What is important to keep in mind in this regard, in my opinion, is that while the intersection of AI and transhumanism holds the promise of significantly enhancing human capabilities, it also brings a host of risks that need to be carefully managed. These risks span ethical, social, psychological, and existential domains, requiring a comprehensive approach to ensure that the development and application of AI in transhumanism are aligned with the best interests of humanity. Robust ethical guidelines, effective regulation, and inclusive dialogue are essential to navigate these challenges.

Q: Is there a DPIA for the AI Act 2024, similar to the one under the GDPR?

A: It can be, if we consider the AI system as a tool for data processing; in that case a DPIA would apply to it as well.

Q: What are the current ethical principles which guide the use of AI at an international level? Are there international laws which guide the use of AI? How often is AI risk assessment done, according to EU laws relating to AI? What are the common risks related to AI, and how often do they occur?

A: These guidelines are still in the process of being prepared and published. As was said during the webinar, the AI Act is still at the beginning of its lifecycle. Only after the Act comes into force can the respective authorities enforce its application and adherence to it.

Risk assessment should be present throughout the lifecycle of an AI system. Producers are obliged to carry out a pre-market risk assessment, but their liability does not end there. If any malfunction occurs during the remaining lifecycle of an AI system already placed on the market, they are obligated to address it adequately and, if necessary, withdraw the system from use.
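
To picture the continuous nature of this obligation, here is a minimal sketch of a risk register that stays open after the system ships; the class and field names are hypothetical and are not drawn from the Act or from any standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk, tracked from discovery to mitigation."""
    description: str
    discovered: date
    mitigated: bool = False

@dataclass
class AISystemRiskRegister:
    """Hypothetical register spanning pre-market and post-market phases."""
    system_name: str
    on_market_since: date | None = None  # None while still pre-market
    entries: list[RiskEntry] = field(default_factory=list)

    def log(self, description: str, when: date) -> None:
        self.entries.append(RiskEntry(description, when))

    def open_risks(self) -> list[RiskEntry]:
        # Unmitigated risks must be addressed adequately; if they
        # cannot be, the system may have to be withdrawn from use.
        return [e for e in self.entries if not e.mitigated]
```

The point of the sketch is simply that logging does not stop at the `on_market_since` date: malfunctions found in the field enter the same register as pre-market findings.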

Q: How are ethical dilemmas in AI use, such as ensuring human oversight, preventing misuse, and aligning AI actions with societal values, controlled and managed in countries outside Europe, for example in Africa?

A: I am sorry, but being a European expert and attorney at law, I am not aware of the state of AI regulation in African countries. My recommendation, in order to gain a relevant business and market-strategy benefit, would be to align the current setup of AI management as closely as possible with the EU standard if an organization is aiming to collaborate with the EU market.

Q: At what point do organizations look at the AI standards and the AI Act? Depending on what AI systems you have, what is the trigger for the standards and the AI Act, i.e., as personal data processing is for the GDPR?

A: Any organization using or developing AI systems must evaluate and assess their integration within their business operations. Producers of AI systems must comply with all legal requirements throughout the entire lifecycle of the AI systems, both before and after they are placed on the market.

Organizations using AI systems must adhere to compliance requirements concerning how they use the systems, what kind of data they process, how the AI systems interact with their operations, etc.
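
As a rough illustration of such a trigger check (the question keys and the flag words below are entirely hypothetical; the real assessment is a legal and organizational exercise, not a script):

```python
# Hypothetical intake questions an organization might record when
# deciding whether the AI Act and standards such as ISO/IEC 42001
# become relevant to a given system. Illustrative only.
INTAKE_QUESTIONS = {
    "role": "Do we develop the AI system, deploy it, or both?",
    "data": "What kind of data does it process (personal, sensitive, none)?",
    "use_case": "Does the use case touch health, safety, or fundamental rights?",
    "integration": "How does the system interact with our business operations?",
}

def needs_detailed_assessment(answers: dict[str, str]) -> bool:
    """Crude triage: any mention of personal data or of health, safety,
    or rights should trigger a full assessment (a simplification)."""
    flagged = ("personal", "sensitive", "health", "safety", "rights")
    return any(word in answer.lower()
               for answer in answers.values()
               for word in flagged)
```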

Q: Is there an update to European legislation that takes into account the new directives on AI?

A: You mean the AI Liability Act (a directive)? This is still at the proposal stage; it follows on from the AI Act becoming valid and effective, and it is intended as a useful tool for the weaker, injured party to enforce claims for damages.

Q: With AI evolving so quickly, how can we make sure to keep up with that speed, as well as ensure that proper risk management is in place?

A: This is more a question of the market environment. The AI Act establishes the basic rules for the relevant parties, which, in my opinion, should already be incorporated into daily business operations if those parties wish to demonstrate transparent and trustworthy AI.

Q: If an AI car has an accident, who is responsible for the damage caused by the car?

A: It would depend on the cause of the accident, which would need to be examined by experts. When considering a car accident, we must account for potential injuries and property damage.

It is important to recognize the human factor, such as people suddenly stepping into or crossing the driving path, which differs from other scenarios. If liability cannot be linked to human actions, we need to investigate other potential causes of the harm.

If the only reasonable option is to attribute liability to the car, its operation must be thoroughly examined. It is crucial to understand that the AI Act does not address liability for damage caused by AI. It only outlines the rights and obligations of those involved in AI development.

Therefore, if an AI-driven car causes damage, it must be investigated whether the accident resulted from a system malfunction or another cause. The AI Liability Act (an EU directive, still a proposal) is intended to provide effective tools for all parties involved to facilitate such investigations, especially in cases where there is a lack of cooperation from the producer of the high-risk AI system.
