Everywhere we look, we find stories about Artificial Intelligence and how it is transforming the business environment or enabling companies to dramatically cut costs. However, a growing number of real-world case studies caution business decision makers to proceed with AI deployments only after careful consideration of the use cases and risks.
In fact, a recent Harvard Business Review case study suggests that roughly 95% of corporate AI initiatives are not paying off. The root causes can typically be traced to one or more of the following:
- Siloed or Piecemeal Approaches: Many companies treat AI initiatives as isolated pilot projects instead of integrating them with core business processes, data, and software infrastructure.
- Focus on Experimentation over Strategy: Most investments are funneled into sales and marketing or pilot experiments rather than into foundational, back-end transformations that generate sustainable value.
- Lack of Maturity and Integration: Only about 4–5% of organizations have mature AI capabilities that deliver competitive advantage, while over half lack developed processes or talent for productive deployment.
- Automating Broken Processes: Companies attempt to automate fundamentally broken processes instead of redesigning them first (remember this from Business Process Re-Engineering?).
In summary, most AI investments aren’t paying off because of fragmented strategies, overemphasis on experimentation, and unrealistic short-term expectations.
The few organizations delivering real business value are following focused, integrated, and risk-aware approaches anchored in business strategy and process transformation.
With those guidelines firmly in mind, let’s turn our attention to the risks of deploying AI in today’s business environment. The objective of course is to have the highest possible confidence in a successful outcome, taking both return on investment and sensible risk management into account.
Let’s consider some of the significant risks associated with AI in today’s business environment, which include (but are not limited to) the following:
- Data leakage, including inadvertent disclosure of customer or sensitive data – especially PII (Personally Identifiable Information) – as well as business-critical data or intellectual property shared via third-party and fourth-party relationships (more on this below).
- Lack of review or sanity checks on AI output, especially with regard to hallucinations (see the widely reported personal injury suit against a major airline, in which a legal brief contained AI-generated citations to nonexistent cases).
- Overdependence on AI for automating tasks, leading to process errors or incorrect conclusions (some estimates suggest that 25–30% of AI-generated responses contain significant errors). What if we take action based on an AI recommendation and it turns out to be wrong? Who is liable?
- The risk that your AI system may be classified under local or state regulatory requirements, including but not limited to the following:
- Operating an “Automated Employment Decision Tool” (New York City)
- A “High-Risk Artificial Intelligence System” (Colorado Artificial Intelligence Act SB-205)
- An “Automated-Decision System” according to California Civil Rights Department regulatory action 2025-0515-01
- A final risk is the proliferation of “rogue AI” in the organization – that is, AI use that bypasses the well-documented AI use cases and may circumvent corporate policy, industry regulations, and governance requirements. Managing this risk requires a careful balance between deploying controls and not stifling innovation.
Proposed Mitigation Approaches
So, now that we’ve talked about the risks, what strategies and specific guidance can we suggest to mitigate them?
While gaining an industry-recognized certification in AI governance (such as ISO/IEC 42001) can help address customer concerns, the reality is that few absolute security controls exist to enforce guidelines around AI usage in the enterprise.
ISO/IEC 42001, for example, requires that businesses develop a policy for appropriate use, generate an inventory of AI use cases and models or tools in use, train employees, perform annual reviews, and investigate findings and exceptions.
While certifications such as ISO/IEC 42001 are helpful in communicating your organization’s stance with regard to AI risk management, there are a number of specific recommendations to better manage the use of AI in the workplace, such as those proposed below:
- Require all employees to attend AI Competency Training based on a detailed corporate policy. The policy and training should take into account:
- Data classification and restrictions on what is shared with partners or third parties
- Approved use cases and AI models
- Specific guidelines, such as: do not enter client or sensitive data as either prompts or background information (see the screening sketch following this list)
- All employees must acknowledge the training, and that acknowledgement should be tracked for compliance purposes.
- Perform an Annual Inventory of all AI use cases throughout the organization, and a comprehensive risk assessment for each. This is an especially important step if your organization’s AI tool is flagged under state or local regulations. Exposure to any of these regulations (and there are others) will require that you perform an AI Risk and Impact assessment, or pursue a certification such as ISO/IEC 42001.
- Disclose to clients and customers when AI has been used to generate reports, recommendations, or contract clauses, or require a two-level review of AI-generated material used in client-facing communications, reports, or contracts.
- Lastly, and perhaps most important, consider a non-public LLM rather than the popular public offerings. Leading law firms, for example, are deploying private LLMs that are “air gapped” from the public internet. In this way those firms can use their internal case files as training data without privacy concerns or fear of public disclosure. This approach can be an excellent alternative to public-facing services such as ChatGPT or Claude, particularly if your company wishes to use internal data such as contracts or source code as training data.
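As a concrete illustration of the “no client or sensitive data in prompts” guideline above, the following is a minimal Python sketch of a pre-submission screen. The pattern set and the screen_prompt function are hypothetical, not taken from any product; a production deployment would use a vetted PII-detection library and far richer rules:

```python
import re

# Illustrative patterns only -- a real screen would use a vetted
# PII-detection library and patterns tuned to the business.
PII_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in a prompt.

    An empty list means the prompt may be forwarded to an approved
    LLM endpoint; a non-empty list means block it and notify the user.
    """
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize the dispute for client jane.doe@example.com, SSN 123-45-6789."
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
    else:
        print("Prompt forwarded to approved model")
```

In practice such a screen would run in a gateway or browser extension in front of the approved models; as discussed below, pattern matching of this sort inevitably produces false positives that require human review.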
Discussion
Considering the various risks above, what can be done to take advantage of the emerging state of AI technology while minimizing the adverse effects or consequences?
First, establish a comprehensive AI policy with clear, direct guidelines for acceptable use of AI technology in the workplace, including specific instructions such as “do not enter customer or privacy data as a prompt into any AI model”.
Another good recommendation is to direct all business-related AI traffic to a small number of “approved” LLMs. Better still is to purchase an “enterprise” grade offering from OpenAI or others, which carries stronger privacy expectations than the publicly available models.
As suggested above, before including any AI-generated content in customer communications, and particularly in contractual documents, perform a detailed review to ensure that there are no “hallucinations” or other unintended content.
Third-party disclosure, in which data is shared with a partner or vendor and that partner in turn exposes the company’s data to its own vendor’s AI, is a major risk area that is very difficult to control; it is sometimes referred to as “fourth-party risk”.
A simple example would be allowing AI to generate meeting summaries from a Zoom call. If proprietary data, intellectual property, or other sensitive information is disclosed in the call and then summarized by AI, that content now resides with the vendor, where it may be retained, used to train future models, or surfaced to later queries through Retrieval Augmented Generation (RAG), which retrieves stored content at inference time.
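To see why this kind of leakage is so hard to contain, consider a deliberately simplified sketch of the retrieval step. The meeting content below is invented, and simple word overlap stands in for the embedding models and vector databases real RAG systems use, but the retrieval principle is the same:

```python
# Toy illustration of RAG-style leakage: once a meeting summary is
# chunked and indexed, any later query with overlapping terms can
# surface it -- even from an unrelated user or workflow.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

# A sensitive meeting summary lands in the vendor's index...
index = [
    "Q3 roadmap: acquisition of Acme Corp planned for November",
    "Weekly standup: sprint 14 velocity and open bugs",
]

def retrieve(query: str) -> str:
    """Return the indexed chunk with the greatest word overlap."""
    q = tokenize(query)
    return max(index, key=lambda chunk: len(q & tokenize(chunk)))

# ...and an unrelated question pulls it back out.
print(retrieve("are there any acquisition plans?"))
# -> "Q3 roadmap: acquisition of Acme Corp planned for November"
```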
It’s important to note that once this form of data leakage occurs, it’s practically impossible to roll back, particularly on the major LLM platforms (OpenAI’s GPT models, Anthropic’s Claude, Meta’s Llama, and so on). These models are so large and complex that tracking and removing individual data elements is simply not feasible.
For this reason many companies, particularly those in highly regulated industries such as healthcare, financial services, and law, have disabled the “AI Assistants” offered by Slack, Zoom, and others – because the risk of inadvertent disclosure of sensitive or private data outweighs any notional benefit of automated meeting summaries or email responses.
It should also be pointed out that at present there are very few technical controls that can monitor, analyze, alert on, and if necessary report on employees’ interactions with AI models. The ideal tool would be an extension of the data loss prevention (DLP) tools commonly used in financial services, law firms, and other highly regulated businesses. Today’s DLP tools are problematic, however, because they flag suspect content by pattern matching on number formats (such as credit card numbers) and on keywords or sentence structures, which produces a high rate of false positives. Each false positive must then be investigated and resolved in the company’s incident ticketing system or SIEM (Security Information and Event Management) platform. And there are currently few if any tools that can record and log employee interactions with AI for subsequent review and audit.
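To illustrate the false-positive problem, here is a minimal sketch (the sample numbers are fabricated) of how naive DLP pattern matching works and how a checksum pre-filter can suppress some, but not all, of the noise. Real payment card numbers satisfy the Luhn checksum; arbitrary 16-digit identifiers usually do not:

```python
import re

# Why naive DLP pattern matching over-alerts: any 16-digit number
# looks like a credit card. The Luhn checksum (which real card
# numbers satisfy) filters out many, but not all, innocent matches.
CARD_LIKE = re.compile(r"\b\d{16}\b")

def luhn_valid(number: str) -> bool:
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

text = "Card on file 4556737586899855 confirmed; tracking id 1234567890123456."
for match in CARD_LIKE.findall(text):
    verdict = "likely card (alert)" if luhn_valid(match) else "false positive (suppress)"
    print(match, "->", verdict)
```

Even with the checksum, a legitimate invoice or tracking number will occasionally pass the Luhn test, which is why DLP alerts still require human triage.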
Conclusion
A good analogy for the state of risk assessment in the AI domain can be found in automobile traffic fatalities. All of us drive (or ride in) cars as part of our daily lives, and traffic accidents claim over 30,000 lives per year in the United States. According to the National Highway Traffic Safety Administration (NHTSA), 29% of traffic fatalities are attributable to excessive speed, and another 8% to distracted driving. Yet we drive anyway: this is normalization of risk in our daily lives.
Normalization of risk also describes how many business decision makers are approaching AI in today’s business environment. They are forging ahead with large-scale AI deployments, in many cases without adequate risk management or security controls in place, largely in pursuit of anticipated profits, radical cost cutting (including staff reductions), and the new competitive advantages that AI promises.
Based on the above, we can see that there are many clear benefits to deploying AI across a wide range of use cases and business opportunities. In doing so, responsible leaders will assess the risks, ensure that appropriate risk mitigations are in place, and maintain a clear view of how effective those mitigations are, as measured by overall risk reduction and decreased liability and regulatory exposure.
Organizations can layer a multi-level approach that translates high-level ethical principles and AI risk management frameworks into concrete AI management controls and design standards by combining ISO/IEC 42001, ISO/IEC 27001, the NIST AI Risk Management Framework (AI RMF), and ISO 9001. This approach helps decision makers align AI assurance programs with binding laws and best practices.
Adopting ISO/IEC 42001 can help decision makers tackle the concerns and obstacles associated with the diligent deployment of AI technologies. The standard provides criteria for establishing, maintaining, and continuously improving an AI management system (AIMS) in support of business objectives, while ensuring that AI systems are developed and used responsibly according to five fundamental elements of responsible AI use:
- Security: Protecting AI systems from unauthorized access and threats
- Safety: Safeguarding that AI operations do not pose risks to humans or property
- Fairness: Promoting unbiased decision-making and preventing discrimination
- Transparency: Providing clear insights into AI processes and decisions
- Data quality: Overseeing the accuracy and integrity of data used by AI systems
The AIMS governance umbrella provides organizations with significant benefits, including enhanced trust and stakeholder confidence, competitive advantage, reduced financial and reputational risk from AI failures, and improved operational efficiency through streamlined quality processes.