ChatGPT has hit the headlines again as serious data privacy concerns have surfaced about potentially damaging personal information being used to feed the world’s first freely available, general-purpose Artificial Intelligence (AI) chatbot.
ChatGPT is a web-based virtual assistant that can understand and respond to questions and conversations like a human, and it sent tidal waves around the world when its creator, OpenAI, released it without warning.
To make the ChatGPT brain work, OpenAI developers allowed the AI to gorge on 45 terabytes of text data – the equivalent of billions of words and sentences scraped from a wide variety of publicly available sources, such as books, articles, and websites.
When ChatGPT appeared, Privacy Regulators – like most of the world – were stunned by its performance but immediately came under pressure to understand how this complex technology worked and to determine whether it was safe.
Italy’s Privacy Regulator – the Italian Data Protection Authority – was quick to investigate and the first to respond publicly. It found that the data used to train ChatGPT contained personal information of Italian citizens and immediately issued a ban, citing concerns over inappropriate content and violations of the General Data Protection Regulation (GDPR) – a law in force across Europe since 2018 to protect individuals’ personal data and privacy.
One step behind the Italian Regulator, Canada announced on April 5th that it had launched an investigation into allegations that personal information was collected, used, and disclosed without consent, contrary to its privacy laws. Its Privacy Commissioner, Philippe Dufresne, stated:
“We need to keep up with – and stay ahead of – fast-moving technological advances.”
Keeping up with AI appears to be a much harder job when the way ChatGPT’s black-box brain processes data and spits out its responses stumps even the brightest scientists. Furthermore, ChatGPT is not alone; it is just one of a host of new generative AIs racing to market, and each is trained, computes, and responds differently depending on its algorithms, the data it is trained on, and the controls set by its developers.
Across Europe, Data Protection Authorities have yet to unite and decide how to react to ChatGPT and other generative AIs, but there may be pressure from the central coordinating European Data Protection Board in Brussels for all 27 EU Regulators to quickly consider a joint approach.
Back on British soil, the UK Government is almost certain not to support any type of ban on ChatGPT or any other AI technology.
The Government’s Department for Science, Innovation, and Technology and the UK Data Privacy Regulator (the ICO, which reports directly to Parliament and is sponsored by the Department for Digital, Culture, Media, and Sport) are aligned, even sharing the same sentiment when describing British compliance laws as a “burden.”
On March 29, the UK Government launched a white paper on the use of Artificial Intelligence stating that: “… organizations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.” However, the white paper also emphasises that AI must comply with “existing laws” and not discriminate against individuals or create unfair commercial outcomes.
This is all very laudable, but this is where the white paper loses its brilliance.
The Role of the Government and the ICO
The UK’s privacy law is known as the Data Protection Act 2018 (DPA18), and it is, essentially, the UK’s implementation of the EU General Data Protection Regulation (GDPR).
If the Government truly has the DPA18 in its crosshairs, intending changes for the betterment of business, what does the independent UK Regulator, the Information Commissioner’s Office (ICO), have to say about it?
The answer came just 24 hours later, on March 30, in an email newsletter from the ICO – the body responsible for enforcing the DPA18 (UK-GDPR).
In that, the ICO stated: “In our ICO25 strategic plan, we said we would create a practitioner forum as part of our efforts to reduce the burden and cost for organizations of complying with the laws we regulate.”
It is uncanny that the Department for Science, Innovation, and Technology, and the entirely separate body of the ICO, seem to share the same narrative – even down to the same word – and that both consider the current UK Privacy law a “burden” to business. The fact is, the DPA18 (UK-GDPR) was never designed to create business benefits; it was established to protect the fundamental rights and the privacy of personal data of individuals within the UK.
A Necessary Change
Technological progress is essential to the development of Britain’s economy. However, AI feels very different in terms of speed to market, speed of adoption, and its claims of accelerated productivity. It is also a threat to jobs, incomes, our rights to privacy, and the fundamental way we conduct our lives.
An open letter calling for a six-month pause on the development of AI systems “more powerful” than GPT-4 attracted more than 1,800 signatories, including Elon Musk, scientist Gary Marcus, Apple co-founder Steve Wozniak, and computer engineers from Amazon, DeepMind, Google, Meta, and Microsoft. What this letter will achieve is unknown; it would seem a fair request, but only if every country signed up and AI development were actually halted – which they will not, and it has not been.
Without a doubt, AI is here to stay and will have a significant impact on stimulating future growth and the competitiveness of UK businesses, but the trade-off in terms of privacy and its potential to affect lives negatively is critically important to debate, and it places us all in a moral dilemma.
The Achilles Heel of the Government’s Plan
It is not hard to understand the Government’s motivation to move Britain ahead of the competition, but tinkering with the Data Protection Act to shoehorn AI benefits for business into the law may be both deeply flawed and a naïve ambition.
Even if the UK adjusts its Privacy laws to make it easier for British businesses to experiment with AI, the UK does not exist in a vacuum: any organization within Britain that processes, stores, collects, or sucks up the personal data of citizens of another country may need to comply with that country’s Privacy laws.
GDPR has extraterritorial reach beyond its own geographic boundary, and several other data privacy laws worldwide have the same global powers. Thus, no matter what UK officials alter within the DPA18 (UK-GDPR), British companies will still be obliged to manage any third country’s personal data in compliance with all those other foreign laws. Why not, therefore, just stick with the global gold standard of the GDPR and be done with it?
Currently, countries with extraterritorial reach include: Canada, with its Personal Information Protection and Electronic Documents Act (PIPEDA); Brazil, with its Lei Geral de Proteção de Dados (LGPD); California, with its Consumer Privacy laws (CCPA/CPRA); and myriad other privacy laws with global effect in Australia, Japan, and China – and the list is ever increasing.
UK Adequacy Also at Risk
Additionally, there are further and deeper concerns that may dampen the enthusiasm of the UK Government and British businesses keen to tinker with the UK version of the GDPR.
Currently, Britain sits in a privileged but very precarious position in terms of EU personal data flowing in, out, across, and between the UK and the European Union. Post-Brexit, the European Commission granted the UK an adequacy decision permitting open data flows with the EU to continue because, at that time, the UK’s privacy laws were consistent with the EU’s General Data Protection Regulation (GDPR), which Britain had implemented as the DPA18.
However, this adequacy decision is not permanent: it was granted with a four-year sunset clause and is due for review in 2025. If the European Commission gets an inkling that Britain is making changes to the DPA18 before that deadline – changes that no longer provide a similar level of data protection for EU citizens – it can revoke or suspend the adequacy decision immediately, leaving British businesses facing a new and endless waltz of paperwork, red tape, and rubber stamps that will cost them dearly in time and money.
Twenty-Seven EU Regulators on the Hunt
In addition to battling the complexity of retrofitting laws to the tune of AI, another thunderous data privacy storm is about to blow across Europe – this time driven by real humans: the European Data Protection Board (EDPB).
The EDPB in Brussels is leading a coordinated campaign with 27 European Data Protection Authorities charged with targeting and assessing any organizations they deem fair game – large, medium, and super-sized – to find out which are failing to provide company Data Protection Officers with the adequate resources, seniority, tools, training, staff, budgets, and executive support needed to meet their day-to-day data privacy tasks as mandated by the GDPR.
This is a clear signal that the EDPB is displeased that GDPR has been taken off the boil for too long, and by too many organizations, since its launch in 2018 – and that it knows companies are failing to resource their privacy teams properly, resulting in poor compliance with the GDPR.
Part of the drop in focus on GDPR and data privacy can be blamed on COVID-19, the pressures of business change, and remote working. Additionally, the volume of hacks, ransomware, and other IT vulnerabilities seems to have won far more attention and budget from Boards nowadays, and funds have likely been reallocated away from the Data Protection Officer (DPO) to the Chief Information Security Officer (CISO).
To Privacy Regulators, however, these commercial challenges are of little concern. Privacy Regulators are charged with focusing on protecting the privacy rights of individual data subjects – not business operating costs – and they want to see organizations invest appropriately by using Articles 37 to 39 of the GDPR as the stick to take organizations to task for not supporting their DPOs effectively.
What to Do Next
There was a reason why GDPR was launched to a grand fanfare five years ago, and that purpose is in front of us today.
GDPR is fundamentally about protecting human rights and the privacy that we all want and value. Right now, we need to trust that GDPR – and DPA18 in the UK – is fit for purpose, and we must trust in Regulators to enforce this for the benefit of individuals and society as a whole.
In terms of the next steps for businesses considering what to do:
AI technologies like ChatGPT are here to stay, so understand their value and innovate consciously, ensuring progress remains consistent with corporate values and compliant with the GDPR and any other privacy laws that apply.
In terms of decision-making, involve not only the technical teams usually engaged in innovation but also ethics, corporate social responsibility, risk, and compliance – and ask not only “What do we think?” but also “How do we feel?”, as there are moral dilemmas to debate as well.
Data privacy has never been more important than in this AI evolution, and the guiding regulation is the GDPR. There is great benefit – and risk mitigation – in bringing a DPO with a proactive, positive, and commercial mindset to the top table to offer advice and help navigate this.
A great deal has changed since GDPR came into effect five years ago, so look again at your internal Privacy Office and assess whether it can still deliver against today’s essential compliance requirements and whether privacy by design is integral to any innovation or change.
Most useful of all, carry out a high-level Privacy Audit to help the Board understand what gaps exist, how to close them, what the priorities are, and where future efforts, funds, and personnel should be deployed.
AI has significantly increased the stakes in terms of opportunity and risk – so rather than do what you have done before, seek out new ways to face these new challenges.