
The prevalence of generative AI raises important questions about who owns our data and whether we have a say in how it’s used. Almost every platform now markets AI features. In our professional lives, AI pop-ups offer to summarise Zoom meetings or guide us through Salesforce’s CRM, while users of Meta’s Instagram and WhatsApp have compared removing its AI chatbot to “opting out of a bad blind date”.
Social media feeds are populated by so-called AI slop, and advertisers are using AI for low-effort image generation. In the enterprise arena, tech companies promise productivity gains and better decision-making with GenAI. Those efficiency gains have proved elusive so far – most respondents to McKinsey’s The State of AI report have yet to see bottom-line impact from GenAI – but this hasn’t deterred companies from investing in the technology.
People can choose not to type prompts into ChatGPT, or ban the technology altogether – as a quarter of businesses have – but AI is now a standard feature on many platforms. All of which raises the question: is it even possible to opt out of GenAI?
GenAI: intrusive by design
“What does it mean to opt out of a technology that is used by others to interact with you?” asks Bruce Schneier, a public-interest technologist and chief of security architecture at decentralised data platform Inrupt. “Let’s say it’s used to write fundraising emails from political candidates. Can you tell your email program to delete all messages written by AI?”
“Can you opt out of handguns?” he asks. “You can choose not to buy one, but you can’t choose not to get shot by one.”
GenAI is intrusive by design. The platforms can only work by hoovering up enormous pools of data to train the large language models that power them. And where does the data come from? Long-forgotten forum posts, half-baked opinions or asides posted to social media, real conversations between friends or strangers, artwork, poems, blog posts and online media. If it’s on the public internet, it’s all up for grabs.
And it’s not only the public internet that is fuelling GenAI. Meta allegedly downloaded copyrighted material from the digital book repository LibGen in order to train its AI systems. Meanwhile, the UK’s creative community recently criticised the government for pressing ahead with an ‘opt-out’ model for art and generative AI – meaning the onus would be on creatives and copyright-holders to tell GenAI platforms they don’t want their work used to train AI models.
This opt-out approach continues the existing privacy model where the onus is on the end user to keep their data private.
“It seems to me that it’s impossible to opt out of generative AI,” says Carissa Veliz, an author and associate professor of philosophy and ethics at the University of Oxford. “That is a huge problem, because it means that these systems do not respect privacy laws. We’re supposed to have the right to ask companies to delete our data, but these companies don’t even know which data they use.”
Even if companies did know what data was used and where it was located, they’d be unlikely to acquiesce and delete it, she claims. Doing so would mean scrapping their models and starting from scratch, excluding any contested data. “It’s not going to happen,” Veliz adds.
“It’s near impossible to opt out of GenAI in any meaningful way on an individual basis,” adds Jaya Klara Brekke, chief strategy officer at Nym, a privacy technology company. “The nature of AI and LLMs is that they work in the aggregate, meaning even if you manage to opt out of the training data, someone else won’t. The collection of everyone else’s data nevertheless sets a norm. It’ll be hard to avoid the use of these technologies in your day-to-day working and social life.”
Why GenAI needs citizen participation
The good news is that there may be a technical solution to this issue, says Matthew Hodgson, cofounder and CEO of Element, a private messaging application. If a precedent is set for defining the integrity of information – a mechanism to separate AI-produced content from material made by humans – the problem could be “solved the other way around”, he explains.
He imagines a future where people own the data they produce and can prove that it was made by them. They could then flag whether or not they want that data used to train LLMs – and, if it surfaces in a model anyway, prove it was taken against their wishes, Hodgson explains.
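To make that idea concrete, here is a minimal sketch of how such a provenance flag might work. Everything in it is an assumption for illustration – the Ed25519 keypair, the JSON ‘manifest’ format and the ai_training_permitted field are invented, not an established standard, and a real provenance scheme would be considerably more involved:

```python
# A minimal sketch of Hodgson's provenance idea, assuming a hypothetical
# JSON "manifest" format and Ed25519 signatures (via the 'cryptography'
# library). Illustrative only, not an established standard.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator generates a keypair once; the public key identifies them.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The manifest binds a piece of content to the author's wishes on AI training.
manifest = {
    "content": "An original blog post...",  # the work itself, or a hash of it
    "author": "example-author-id",          # hypothetical identifier
    "ai_training_permitted": False,         # the opt-out flag
}
payload = json.dumps(manifest, sort_keys=True).encode()

# Signing proves the manifest came from the keyholder and hasn't been altered.
signature = private_key.sign(payload)

# Anyone auditing a model's training set can verify the claim later;
# verify() raises InvalidSignature if the manifest was tampered with.
public_key.verify(signature, payload)
print("Verified - training permitted:", manifest["ai_training_permitted"])
```

If a person’s writing later surfaced in a model’s output, a signed manifest along these lines would let them demonstrate that the flag was set to False when the work was published – the proof-of-theft mechanism Hodgson describes.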
“Regulation is just not enough to protect the digital rights of people and that includes privacy, creative rights and many other violations the AI industry commits regularly,” adds Klara Brekke.
Digital rights, she says, have to be part of the infrastructure by default to be effective. But, more importantly, alternatives have to be “supported, funded and developed so it’s clear that there are alternatives”.
Regulation should go hand-in-hand with major investment. In the case of GenAI, privacy-preserving technologies and alternative models of data ownership, such as blockchain-based systems, could introduce intriguing new possibilities. “The EU is starting to wake up to the fact that big tech can’t just be regulated away but that investment is needed in alternatives,” Klara Brekke says.
Generative AI appears to be here to stay. If we can’t yet opt out of it, then perhaps the technology companies that create it should be held more accountable through regulation and stronger data ownership rights.
