Moments That Do Not Belong To Us
When you unlock your phone in the morning, scroll through a news feed, pause over an ad, or click yes on a cookie popup, who owns that moment? When you linger on a headline or a video, or hesitate for half a second before clicking “like,” who claims that tiny slice of your life? You might think it is just yours, nothing more than a casual browser visit. But in many ways that gesture, that data point, is already someone else’s to use. Who decides what happens to it? Who writes the rules?
Because data is personal. It is your heartbeat on a smart watch, your child’s voice recorded on a smart toy, your private chats with a friend. And in the age of artificial intelligence, that data is being collected, reused, traded, and transformed en masse into something that can follow you for years. What feels private to you is already public property.
The Illusion of Ownership
Artificial intelligence doesn’t exist without data. It lives because people generate data in their daily routines: your chats, your photos, your emails, your GPS location, your voice commands, what you buy, and what you search for. Every translation app, every facial recognition camera, every recommendation engine is built on the choices, behaviors, and identities of millions of people. You. Me. All of us.
And you might think generating that data means you own it. But in most cases that is not true. We generate it, but we rarely own it. It sits in the servers of global companies, wrapped in legal terms no one reads, sold to advertisers, or shared with partners you never knew existed.
And that raises uncomfortable questions. When your health data is used to train a new AI diagnostic tool, should you share in the benefits? When your photos are scraped from the internet to build facial recognition models, do you deserve a say? When algorithms decide which job candidates are worthy or which neighborhoods deserve more police patrols, who is accountable for the mistakes, biases, or harm?
Accountability, meanwhile, is blurry at best. We live in a world where invisible systems can influence your mortgage application, your career, your reputation, even your freedom. Do you feel in control of that? Or do you feel the rules are being written somewhere else, without you at the table?
When the Walls Have Ears
Let me share a personal story that shows exactly what I mean.
A few months ago, I was talking with my wife about planning a short trip to Spain. We never searched for it, never booked anything, just tossed around the idea while making dinner. The next morning, my phone lit up with travel deals, flight suggestions, and hotel offers in Málaga. Coincidence? Maybe. But it felt like our walls had ears.
I didn’t click those ads, but the message was clear: somewhere, somehow, data from a private conversation had been captured, inferred, or assumed. And suddenly, I wasn’t just browsing the internet. The internet was browsing me.
It left me with one question I couldn’t shake: who gave permission for my life to be turned into a sales pitch?
Laws, Fines, and the Search for Accountability
So who is making sure that the cost of privacy is not paid entirely by ordinary people, while corporations reap the benefits?
In Europe, the answer has been to legislate early and hard. The General Data Protection Regulation (GDPR) remains the world’s most powerful privacy law, giving people the right to know how their data is used, to access it, and to demand its deletion. It has teeth. Vodafone in Germany was fined €45 million for failing to protect customer data. OpenAI in Italy was fined €15 million for mishandling personal data used to train its models. Replika, an AI chatbot company, was fined €5 million for failing to protect minors and for lacking transparency in how it used people’s information.
These are not symbolic penalties. They show that regulators are watching, and that data protection is not negotiable. The new EU AI Act goes even further. It classifies AI systems by risk, from minimal and limited up to high and unacceptable, and demands transparency, human oversight, and strict accountability when personal data is processed to drive decisions. The message is blunt: not all AI is welcome in Europe, especially if it feeds on people’s lives without consent.
Across the Atlantic, the picture is more fragmented. The United States does not yet have a federal privacy law to match GDPR, but states are stepping in. California’s Consumer Privacy Act (CCPA) is the strongest of these, and it has begun to show teeth. In July this year, Healthline was fined $1.55 million for mishandling sensitive health data and sharing it in ways consumers never agreed to.
Regulators in California, Colorado, and Connecticut are now coordinating “privacy sweeps” to ensure that companies honor opt-outs. At the same time, the Federal Trade Commission (FTC) has signaled that AI models trained on unlawfully obtained data may constitute an unfair practice under U.S. law. That is a turning point, because it means the training set itself can now be a legal liability.
Beyond laws, there are frameworks trying to steer companies before regulators arrive. The NIST AI Risk Management Framework in the United States and ISO/IEC 42001, the world’s first AI management system standard, both emphasize transparency, accountability, and security by design. They are voluntary, but they give organizations a vocabulary and a map for building AI responsibly. I do not think they are enough on their own, but they point toward a future where power can be balanced, if enough companies take them seriously and implement them competently.
So at the end of the day the challenge is far from over. For every new rule, there is a loophole waiting to be found. For every fine, there is a company that quietly treats it as just another cost of doing business. For every framework that sets a higher bar, there are organizations that choose the bare minimum. Which brings us back to the question that refuses to go away.
The Question That Refuses to Go Away
So we return to where we began. Who owns the data, and who writes the rules?
Is it the companies that harvest it, the governments that regulate it, or the people who generate it? Should it be shared, like a public resource? Should it be protected, like a human right? Or should it be left to markets, even if that means concentration of power and erosion of freedom?
I believe the rules cannot be written by corporations alone, nor by governments acting in isolation, nor by citizens who remain silent. They have to be written together. And the ownership of data must be tied to the dignity of the people who create it.
So the next time you open that app, or click accept on a privacy notice, or share a photo, or chat with a bot, ask yourself: do I know who owns what I am giving up? Do I know who will decide what happens to it?
Demand access to your data. Understand what rights you have. Support laws and movements that give individuals a seat at the table where the rules are written. Do not let your privacy be something others bargain away. Let it be something you control.
This is not just a technical question. It is a moral one. It is about power, about dignity, about trust, about the kind of society we choose. Because how we answer it will decide whether AI is a tool that serves humanity, or a system that exploits it. It will decide whether people are citizens of the digital age or its raw material.