The Internet is (Deeply) Fake
This year started with a lot of people grokking. The term ‘grok’ comes from Robert Heinlein’s sci-fi novel ‘Stranger in a Strange Land’, in which a man raised by Martians comes to Earth and struggles to adapt to human culture. To grok, in Martian, means to understand something so completely that you become one with it. Given the book’s libertarian streak, and how closely grokking resembles the first-principles thinking tech leaders love so much, it’s no surprise Elon Musk has cited it as a favorite and named the LLM built by his AI company after it. Because of this chatbotification of Grok, the word has taken on a new meaning altogether: generating naked images of people.

Grok was launched and branded as the AI that would let users "use it as they like." Musk made the rounds on podcasts, laughing about how amazingly Grok could roast people. 'Listen to it Joe, it's swearing!'. As it turns out, users aren’t interested in clever roasts at all; they want to see steamy pictures of people they know, whether personally or from a distance, without clothes. The absurdity of it all was neatly captured by the Dutch news program NOS, where the anchor first demonstrated harmless edits of herself (adding cats to an image) before awkwardly concluding that "a lot of other things are possible". Nude pictures of celebrities, political leaders, and regular people like you and me were made en masse. Public pressure mounted sharply this past month, though, as child pornography was being generated as well. This has forced the Grok team into the ironic position of adding safeguards and complying with existing laws.
Generating hyperrealistic images or videos of real people, better known as deepfakes, has been technically possible for a while now. However, where you once needed serious Photoshop skills and hours of work to construct one, now every idiot with a Chromebook can do it. The financial cost of creating a deepfake is also becoming negligible, and it will only drop further in the coming years. The democratization of this technology has clear downsides on a personal level, but it is also undermining democracy itself.
State actors like Russia, China, and Iran have weaponized these tools at scale, with Meta having to disrupt dozens of their networks since 2017. Nor is this limited to text or imagery: given a small sample of your voice, minimal expertise is needed to fabricate convincing audio of you saying anything. In Slovakia in 2023, fabricated audio of the liberal leader was circulated to convince voters he was committing electoral fraud. He lost by a narrow margin. Recently, and closer to home for me, members of a prominent Dutch political party used deepfakes to discredit the opposition leader. All of this misinformation appears to be effective, as the untrained eye can't tell fake from real.
Things get even more worrying if you look beyond generative AI. Even the faintest information you give up online helps malicious actors predict whether you're a swing voter or what content might turn you anti-democratic. You might think "I have nothing to hide" when you share your age, location, or which articles you click on. But that's really all that is needed: a few innocent data points reveal your political leanings, or which messages will make you angry or afraid. You're not hiding anything criminal; you're giving away the manual for manipulating you.
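To make this concrete, here is a toy sketch with entirely synthetic data and invented feature names, showing how a basic off-the-shelf model can guess a political leaning from exactly the kind of innocuous signals mentioned above:

```python
# Toy illustration (synthetic data, invented correlations): three innocuous
# data points are enough for a simple model to guess a leaning above chance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 80, n)
urban = rng.integers(0, 2, n)        # 1 = lives in a city
clicks = rng.integers(0, 20, n)      # clicks on articles from one outlet

# Invented "ground truth": leaning loosely correlated with the features.
signal = 0.04 * (age - 45) - 0.8 * urban + 0.15 * (clicks - 10)
leaning = (signal + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([age, urban, clicks])
model = LogisticRegression(max_iter=1000).fit(X[:800], leaning[:800])
print(f"Held-out accuracy: {model.score(X[800:], leaning[800:]):.0%}")
```

Nothing here requires big-tech resources: a laptop and a public ML library suffice, which is exactly the point.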
Feel like throwing your laptop out the window already? While all of this is scary, I believe we are in a transition period, not a collapse. This isn't the end of truth or democracy; it is the end of naively accepting digital content as real by default. We need to learn, collectively and painfully, that the internet is fake. Or at least, that it can be. The flood of synthetically generated content is now amplifying every negative effect social media already had. Yes, society is very late in acting on this, and I feel it all around me. I'm shocked by how long people my age have been openly saying they would love to quit the modern opium that is Meta's products, but simply can't. This is partly due to a fundamental mismatch: the people writing regulations often lack technical expertise, while those with deep platform knowledge are often the ones profiting from the current system. Just listen to how TikTok's CEO or Facebook's main Lizard were questioned in American legislative hearings: it’s really painful. Their European counterparts are luckily much better, showing that Europe is reaching a point where digitally native generations are moving into positions of power (e.g. Kaja Kallas, Simon Harris, Alice Bah Kuhnke, Kim van Sparrentak, Barbara Kathmann). This is something we have to take full advantage of, on both a cultural and an institutional level.
Culturally, we need to rebuild our relationship with digital content. This means treating screenshots, viral videos, and audio clips with the same skepticism we'd apply to an email from the Nigerian Prince. Since 2018, digital literacy has been part of Swedish primary and secondary schooling; let the rest of Europe quickly follow our Scandinavian friends' example. Let’s have more ‘press and media weeks’ like they have in France, where students are taught to investigate articles. Let it become a journalistic standard to openly fact-check debates. Maybe most urgently: let’s get phones out of all European schools, as quickly as possible, following the concrete guidelines provided by the UK’s Smartphone Free Childhood and its Dutch counterpart Smartphone Vrij opgroeien.
Institutionally, governments and platforms need to move faster and more decisively. Consumers cannot protect themselves here, and I believe government is the only actor that can. The EU has already deployed three major pieces of legislation: the Digital Services Act (DSA), targeting disinformation, hate speech, and illegal content; the Digital Markets Act (DMA), designed to limit abuses linked to market dominance; and the AI Act, which regulates high-risk AI systems, including those used for deepfakes and social media recommendation algorithms. As a result, the EU has been able to nuke Apple, Meta, Google, and X with fines. Let's continue this, and expand on it.

The EU should mandate that platforms offer chronological feeds as the default option, not buried in settings. This indeed hurts their engagement metrics and advertising revenue, and that’s perfectly fine. Platforms have built business models on algorithmic manipulation; regulation should force them to choose democracy and user safety over profits. The DSA framework provides the foundation for holding platforms accountable, and the AI Act already classifies technologies like social media recommendation systems as high-risk AI and sets specific rules that generative AI systems must adhere to. Now we need to make enforcement easier and push these systems to become less rotten at their core. A few ideas of mine:
- Platforms should be required to publish regular transparency reports showing how their algorithms perform against known manipulation campaigns, with independent auditors having API access to verify claims.
- New foundational LLMs should be tested against standardized benchmarks to establish whether their behavior falls within what we deem acceptable.
- Mandate techniques that make abuse harder at the source, e.g. cryptographic watermarks for generated content, and Zero Knowledge Proofs to mark human content (see the sketch after this list).
None of this is impossible to hack around, but it raises the currently very low barrier nonetheless.
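As a rough illustration of the watermarking idea, here is a minimal sketch of its simplest possible variant: the generator attaches a cryptographic tag (an HMAC) to everything it produces, which anyone holding the key can later verify. The names and key handling are invented for illustration; real schemes, such as C2PA provenance manifests or watermarks embedded in the pixels themselves, are far more robust:

```python
# Minimal sketch (illustrative only): tag generated content with an HMAC
# so its provenance can be checked later. Real deployments use embedded
# watermarks or signed provenance manifests that survive re-encoding.
import hmac
import hashlib

PROVIDER_KEY = b"example-secret-key"  # hypothetical key held by the generator

def tag_generated_content(content: bytes) -> bytes:
    """Produce a provenance tag over the generated bytes."""
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).digest()

def verify_tag(content: bytes, tag: bytes) -> bool:
    """Check a tag against the content, using a constant-time comparison."""
    expected = hmac.new(PROVIDER_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"...generated image bytes..."
tag = tag_generated_content(image)
print(verify_tag(image, tag))         # True: untouched content verifies
print(verify_tag(image + b"x", tag))  # False: any edit breaks the tag
```

The obvious weakness, and the reason pixel-level watermarks are preferred in practice, is that a detached tag like this breaks on any re-encode or crop; the sketch shows the mechanism, not a deployable scheme.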
At the same time, we should not stop at building barriers: we can also improve our lives with this tech. The same LLMs that bots use to write hate-threads can also summarize page-long terms of service and help users navigate the web more privately. Browser extensions and AI assistants can block trackers, detect phishing attempts, and warn users before they engage with suspicious or coordinated content. Europe should aggressively subsidize this layer of consumer-facing infrastructure, creating incentives for companies to build tools that protect users rather than exploit them. The goal should be to make investing in protective technology more profitable than simply treating content moderation as a Bangladeshi/Filipino cost center.
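The terms-of-service example is already a few lines of code today. Here’s a minimal sketch, assuming the openai Python client and an illustrative model name; any capable model and client would do:

```python
# Minimal sketch: let an LLM translate legalese into plain language.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name is an illustrative choice, not a recommendation.
from openai import OpenAI

client = OpenAI()

def summarize_terms(terms_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": ("Summarize these terms of service in plain language. "
                         "Flag clauses on data sharing, arbitration, and "
                         "unilateral changes.")},
            {"role": "user", "content": terms_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(summarize_terms(open("terms.txt").read()))
```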
So keep going, Europe, but go much faster. The threat of misinformation is real and accelerating faster than ever, and we need to start acting today, as a European front, to safeguard democracy and our cultural values. Technology isn't inherently moral; its impact depends entirely on who uses it and how. We must make harmful applications as difficult as possible while enabling beneficial ones. Perhaps then, in the near future, we can all agree that generative AI, the internet, and maybe even social media are pretty cool.