Generative AI is revolutionary: we are not simply talking about point-in-time solutions or systems; it provides the ability to re-imagine an end-to-end service delivery model. In a short span of time, it has proven to be a groundbreaking technology, pushing the boundaries of what can be achieved in terms of creativity, problem-solving, and human-like interactions.
The introduction of ChatGPT in late 2022 marked a pivotal moment in the evolution of AI. This revolutionary chatbot, developed by OpenAI, brought the power of AI into the mainstream, moving it from the realm of enterprise jargon into the consumer domain.
Artificial Intelligence (AI) has been a subject of research and development for decades, but recent advancements have propelled it into the spotlight. The convergence of factors such as increased computing power, vast amounts of data, and sophisticated algorithms has fueled AI’s rapid growth. AI encompasses a wide range of technologies, including machine learning, natural language processing, computer vision, and robotics. These technologies enable machines to perform tasks that were previously considered exclusive to humans, such as recognizing patterns, making decisions, and communicating in natural language.
AI systems can be broadly categorized into two main types: discriminative and generative.
Discriminative AI: This type of AI is designed to identify patterns based on training data. It learns from historical data to make inferences and classify new data points. Discriminative AI is commonly used in classification tasks, such as product recommendation, image recognition, spam filtering, and fraud detection.
Generative AI (GenAI): In contrast, generative AI is capable of creating new data, content, or “artifacts” that are similar to, but not identical to, content in its training dataset.
Generative models are often used for tasks that involve creativity or the generation of new ideas or patterns. Generative AI has demonstrated remarkable capabilities in tasks such as generating new text, creating images and videos, identifying new molecular structures in drug research, and voice generation. However, “with great power comes great responsibility,” and as this technology continues to evolve, it brings potential risks that demand careful consideration. “Responsible AI” reinforces trust and confidence with customers, shareholders, regulators, and partners within the ecosystem. It is about incorporating deliberate and thoughtful considerations into the design, development, and deployment of AI solutions.
Every new technology comes with risks, but with AI and GenAI, these risks expand in impact and scale:
- Data Privacy Risk: In a recent survey by Deloitte, data privacy ranked as the top pressing ethical concern with using Generative AI. The data privacy risks associated with AI are multifaceted and stem from the technology’s inherent capabilities, the vast amounts of data it processes, and the evolving nature of its applications. Given that GenAI can create realistic images and audio, it also raises concerns around fake digital footprints and the possible misuse of consumers’ personal information. Respecting data privacy means avoiding the use of AI beyond its intended purpose.
- Fairness Risk: The risk of fairness stems from the challenge of ensuring that AI systems do not discriminate against individuals based on their ethnicity, race, or gender. As humans, we inherently carry societal biases, and the data we have, or collect, is not always representative of diverse perspectives. As a result, AI models trained on or consuming such data replicate these biases and amplify them. Hence, we need to ensure that the systems we develop or deploy reduce biases rather than worsen them (a minimal fairness-metric sketch follows this list).
- Transparency Risk: AI models are complex, and with GenAI’s self-learning ability it is extremely challenging to understand how a model arrived at a given decision. This lack of explainability can hinder trust and accountability. A critical component of transparency is making users aware when they are interacting with AI and being clear about what data and variables went into the system to help make a decision.
- Safety and Security Risk: Security risks are heightened exponentially with AI and GenAI. GenAI systems that accept open-ended prompts, such as ChatGPT, can inadvertently process and reveal sensitive, private, or proprietary information. Generative AI can also be used for malicious activities, such as crafting sophisticated phishing emails, writing malware, manipulating source code, corrupting data, causing systems to malfunction, and mounting other cyber threats. Because GenAI produces convincing and realistic content, it raises concerns around deepfakes (images, audio, or video that have been convincingly altered to misrepresent someone as doing or saying something they did not actually do or say). These AI-powered attacks can bypass traditional security measures, making them more difficult to detect and prevent.
- Robustness Risk: Robustness concerns the reliability of an algorithm’s or system’s outputs and its ability to learn from feedback. To foster trust, an AI system must work dependably across conditions. For example, in a manufacturing plant, it is important for a model to produce outputs that are both consistent and reliable; likewise, robustness is extremely critical for self-driving vehicles.
- Accountability Risk: The lack of accountability in autonomous decision-making is quite significant with GenAI. As this technology evolves, decision-making without sufficient human oversight raises concerns; hence, developing responsible AI systems requires a commitment to accountability across organizations, from developers and data scientists to business users and end users. It also engages our ethical compass: just because we can do something with data does not mean that we should. Accountability also enables trust and supports transparency and auditability.
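To make the fairness checks above concrete, here is a minimal Python sketch that computes the demographic parity difference, one common fairness metric; the group labels, sample data, and the 0.1 tolerance are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: demographic parity difference for a binary classifier.
# The 0.1 tolerance and the group/label names are illustrative assumptions,
# not a regulatory or organizational standard.

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels of the same length (e.g., a demographic attribute)
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    positive_rates = {g: p / t for g, (t, p) in counts.items()}
    return max(positive_rates.values()) - min(positive_rates.values())

preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance
    print("Warning: positive-prediction rates differ notably across groups")
```

Checks like this do not prove a system is fair, but running them routinely makes bias measurable and gives governance teams a concrete signal to act on.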
Generative AI, with its ability to create novel data and content, introduces additional risks that compound the concerns associated with discriminative AI. These risks include:
- Harms at Scale: Generative AI has the potential to generate harmful content at an unprecedented scale. This includes fake news, malicious software, and deepfakes. The widespread dissemination of such content can have detrimental effects on individuals, organizations, and society as a whole.
- Inaccuracies and Hallucinations: Generative AI can produce convincing but false or misleading content, whether deliberately or through “hallucinations,” posing challenges to education and the fight against misinformation. Students may rely on AI-generated content for research or assignments, potentially leading to the spread of inaccurate information.
- Copyright and Intellectual Property Issues: Generative AI’s ability to create unique content raises questions about copyright and intellectual property rights. Determining ownership and usage rights of AI-generated content can be complex and challenging.
AI Regulations and Policy Landscape
The risks underscore the urgent need for responsible AI practices. The desire to innovate at any cost must be balanced with prudence and a commitment to mitigating potential harms. Responsible AI involves developing and deploying AI systems that are fair, accountable, transparent, and respectful of human values.
Recognizing the significance of responsible AI, governments, regulators, academia, civil society organizations, enterprises, and industry bodies worldwide have collaborated to develop frameworks and toolkits to guide the ethical development and deployment of AI systems.
The dilemma faced by policymakers in the era of GenAI is “how to balance innovation with effective controls that guard against unintended consequences and guide an AI-enabled future that works for everyone.” Policymakers and governments, along with others in the ecosystem, have a critical role to play in achieving this goal.
Refer: AI regulation | Deloitte Insights
Some notable examples include:
EU AI Act: On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), which the European Commission proposed on April 21, 2021; it is expected to enter into force at the end of the legislative term in May. The act aims to establish a comprehensive framework for responsible AI. The EU’s AI Act is the first legal framework for AI in the world, promoting a risk-based approach that focuses on establishing rules on data quality, transparency, human oversight, and accountability across Europe. The act also aims to ensure that AI systems respect fundamental rights, safety, and ethical principles, and it addresses the risks of very powerful AI models.
NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) in the United States has developed the AI Risk Management Framework to assist organizations in identifying, assessing, and mitigating AI-related risks.
IEEE 7000 Series: The Institute of Electrical and Electronics Engineers (IEEE) has published a series of standards, known as the IEEE 7000 series, that provide guidance on various aspects of responsible AI, including transparency, accountability, and fairness.
ISO/IEC 42001: The International Organization for Standardization (ISO) has developed ISO/IEC 42001, which provides a comprehensive framework for managing AI systems, addressing key elements such as transparency, explainability, and autonomy. By adhering to ISO/IEC 42001, organizations can effectively navigate the complexities of AI and ensure that their AI systems are developed and used responsibly.
NYC Local Law 144: NYC Local Law 144 represents a first-of-its-kind law that regulates the use of Automated Employment Decision Tools (AEDTs) in the workplace, requiring an independent “bias audit” of each AEDT.
How to Get Started?
There is a lot to consume, and the evolution of this technology is happening at lightning speed.
The fear of being left behind is real, and questions come up, such as, “What can we do to capitalize on GenAI technology while balancing the risks it poses?” Here is what you can consider:
- Establish an AI Governance Framework
Establish an end-to-end AI governance operating model where experts can help and guide the organization to use AI solutions appropriately by assessing the implications for existing processes, proactively enhancing policies, and incorporating necessary safeguards and guidelines into those processes. Clearly articulate your definition of AI and how you expect it to be designed and implemented. Ensure a risk- and impact-based governance process supports the development and execution of AI solutions.
- AI Literacy Programs
Initiate educational sessions for business and tech leaders, along with data users, to foster a responsible and ethical understanding of AI applications and to enhance their knowledge of ethical practices in generative AI usage.
- Establishing Appropriate Audit Processes for Vendors
Enhance your existing third-party due diligence processes and audit processes for vendors to ensure thorough testing and validation of AI models.
- Implement an appropriate level of testing and validation, whether independent or internally led, to allow for visibility and transparency into the impact of risks when generative AI solutions are embedded within third-party software (a minimal output-screening sketch follows).
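As one illustration of what such testing might include, the Python sketch below screens model responses for obvious data leakage before release; the regex patterns and pass/fail rules are illustrative assumptions, and a real validation suite would cover far more (bias, toxicity, grounding, and so on).

```python
import re

# Minimal sketch of an output-validation check for a GenAI integration.
# The patterns and rules are illustrative assumptions, not a complete suite.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def validate_output(text: str) -> list[str]:
    """Return a list of issues found in a model response."""
    issues = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            issues.append(f"possible {label} leaked in output")
    if not text.strip():
        issues.append("empty response")
    return issues

# Usage: run every test-prompt response through the validator and log failures.
for response in ["Contact me at jane.doe@example.com", "All clear."]:
    problems = validate_output(response)
    print(problems or "passed")
```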
- Embed “Human in the Loop” Protocols
These technologies, for now, are best enabled with ‘human in the loop’ operating models, so that business users can monitor and review the outputs generated and act as the feedback loop that refines the outcomes; a minimal routing sketch appears after this item.
Ensure data lineage is well understood for all data that serves as input to any AI solution, so that data risks and quality issues are controlled at the source.
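Here is a minimal sketch of one possible ‘human in the loop’ gate, assuming a confidence score accompanies each output; the 0.8 threshold and the review function are hypothetical placeholders, not a prescribed design.

```python
# Minimal sketch of a "human in the loop" gate: model outputs below a
# confidence threshold are routed to a reviewer instead of being released.
# The 0.8 threshold and the review function are illustrative assumptions.

REVIEW_THRESHOLD = 0.8

def human_review(item: dict) -> dict:
    # Placeholder: in practice this would enqueue the item for a reviewer
    # and capture their decision as labeled feedback to refine the model.
    item["status"] = "pending_human_review"
    return item

def release(item: dict) -> dict:
    item["status"] = "released"
    return item

def route(item: dict) -> dict:
    """Release high-confidence outputs; escalate the rest to a human."""
    if item["confidence"] >= REVIEW_THRESHOLD:
        return release(item)
    return human_review(item)

outputs = [
    {"text": "Q3 summary draft", "confidence": 0.93},
    {"text": "Contract clause rewrite", "confidence": 0.55},
]
for result in map(route, outputs):
    print(result["status"], "-", result["text"])
```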
- Continuous Monitoring, an End-to-End AI Development Lifecycle Process, and Transparent Communication
Establish an approach for monitoring and regularly assessing your AI models. Reflect on the risks and impacts to ensure the right guardrails are implemented and reflected within the development lifecycle process.
Account for auditability requirements and prepare model cards and data cards for these systems, which also support transparency. These documents provide detailed information about a system’s purpose, data sources, algorithms, and performance metrics (a minimal model-card sketch follows this item).
Organizations should also establish transparent communication channels to inform users, customers, employees, and regulators about AI system changes and updates. This includes ongoing updates on the system’s performance and adherence to responsible AI principles.
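To illustrate what a model card can capture, here is a minimal sketch of one as a structured record; the field names and the “claims-triage” example are hypothetical, following the elements named above (purpose, data sources, algorithms, performance metrics) rather than any formal standard.

```python
from dataclasses import dataclass, field, asdict
import json

# Minimal sketch of a model card as a structured record. The fields and the
# example values are illustrative assumptions, not a formal standard.

@dataclass
class ModelCard:
    model_name: str
    version: str
    purpose: str
    data_sources: list[str]
    algorithm: str
    performance_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="claims-triage",  # hypothetical system
    version="1.2.0",
    purpose="Prioritize incoming insurance claims for review",
    data_sources=["claims_2019_2023", "adjuster_notes"],
    algorithm="gradient-boosted trees",
    performance_metrics={"auc": 0.91, "recall_at_top10pct": 0.78},
    known_limitations=["underrepresents claims filed in languages other than English"],
)

# Publishing the card as JSON supports the auditability and transparency goals above.
print(json.dumps(asdict(card), indent=2))
```

Keeping cards like this versioned alongside the model makes system changes auditable and gives regulators and users a consistent reference point.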
In conclusion, responsible AI is a critical imperative for organizations seeking to leverage AI technology in an ethical and sustainable manner. By embracing responsible AI practices, organizations can mitigate risks, ensure compliance with emerging regulations, and build trust with stakeholders. The time to act is now, and organizations that proactively adopt responsible AI principles will be well-positioned to thrive in the AI-driven future.