xAI’s ‘Legacy Media Lies’ collides with the European rule of law
xAI hides behind automated disdain for the press after its chatbot generated child sexual abuse material.
Published on January 3, 2026

Musk in a bikini. Tweet by Elon Musk, image by Ready Made via Pexels, edit by IO+
The launch of generative AI tools is often framed with promises of a new industrial revolution, but the events of the past week reveal the dark side of innovation without oversight. It is not a technological breakthrough that is dominating the headlines, but Silicon Valley's inability, or unwillingness, to protect the most vulnerable. Now that Grok, the AI from Elon Musk's xAI, is being used at scale to generate deepfakes and child sexual abuse material (CSAM), a painfully predictable mechanism has been exposed. This is not an unforeseen "growing pain" in the spirit of "move fast and break things," but a manifestation of "Rule 34": if a technology exists, it will be misused for pornography. That this abuse was as predictable as it was preventable makes the lack of precautions not a sign of innovative drive, but of pure negligence.
A failing safety net: the sobering reality of Grok
The recent revelations surrounding Grok are more than a theoretical PR nightmare (a scenario xAI seems barely concerned with anyway); they constitute concrete proof of negligence in product development. In early January 2026, it became clear that users were abusing Grok's "edit image" function to manipulate innocent photos of minors and women into explicit sexual material. Whereas responsible software development relies on "safety by design," this appears to have been a launch in which essential safety mechanisms, or "guardrails," were missing or nonfunctional.
The consequences are measurable and severe: the Internet Watch Foundation reported a 400 percent increase in AI-generated child sexual abuse material in the first half of 2025 alone. This is not an abstract statistic, but a direct repercussion of tools being released onto the market without adequate moderation. Although Grok posted a message on the X platform acknowledging "security shortcomings" and promising to fix them "urgently," this is a hollow gesture. An algorithm that says "sorry" after the fact via a programmed tweet does not absolve its creators of responsibility for the damage already done. xAI's technical infrastructure facilitated criminal acts: unintentionally, perhaps, but in a way that was undeniably foreseeable. This incident underscores that self-regulation in this sector is failing; the commercial pressure to compete with rivals such as OpenAI and Google appears to outweigh public safety.
Corporate arrogance: the ‘Legacy Media Lies’ doctrine
What distinguishes this incident from previous tech scandals is the complete absence of human accountability at the corporate level. When journalists and news agencies such as AFP asked for comment on the dissemination of child sexual abuse material via Grok, they received only an automated response: "Legacy Media Lies." This is not a communication strategy, but a display of contempt for the checks and balances of democracy. It illustrates a deep cultural and ideological divide.
The irony is painfully visible: while Musk accuses traditional media of propaganda, that very same Grok admitted in November 2024 that there is "significant evidence" that Musk himself spreads disinformation via X. Hiding behind an automated email reply on a subject as grave as child abuse demonstrates that xAI either does not grasp the severity of the situation or refuses to acknowledge it. Reducing legitimate journalistic inquiries about criminal acts to "lies of the old media" is an attempt to distort reality and evade responsibility. In a business context, this behavior is unsustainable: it undermines the trust of investors, advertisers, and, above all, the regulators who hold the power to pull the plug.
The European counter-reaction: legal escalation in Paris
Europe refuses to accept the narrative that technology stands above the law, and the reaction from France is the clearest example. French ministers did not stop at an angry tweet but formally reported the matter to the judiciary. The public prosecutor's office in Paris has expanded an ongoing investigation into X to include specific allegations concerning Grok and the dissemination of child sexual abuse material. This is a crucial step: it shifts the discussion from ethics to criminal law.
Under the Digital Services Act (DSA), platforms are legally obligated to actively mitigate risks such as the spread of illegal content. The argument that "the AI did it" does not hold water in a European courtroom: responsibility lies explicitly with the entity offering the service, in this case xAI and the X platform. We previously saw the privacy organization noyb file complaints in nine European countries, including the Netherlands, over the training of Grok on user data without consent. The sum of these legal actions, from privacy violations to the facilitation of sex crimes, creates enormous legal exposure for Musk's empire in Europe. The European Commission has set the tone by fining X previously and ramping up the pressure. The message is clear: access to the internal market is a privilege, not a right, and that privilege is contingent on adherence to fundamental values.
Strategic implications: the illusion of the neutral machine
The issue bears directly on European strategic autonomy and the continent's dependence on American infrastructure. xAI's response, dismissing press questions as lies, illustrates a fundamental gap between Silicon Valley's operational culture and European compliance expectations. For European policymakers and CIOs, the incident underscores the risk posed by systems that lack adequate human oversight or accountability mechanisms. Where human leadership is absent or refuses to communicate, unmanageable risks arise for both the social order and business continuity.
The notion that regulation stifles innovation is outdated; European legislation rests on the premise that sustainable technology can only flourish on a foundation of safety and trust. It is a lesson already understood in Washington: the American demand for the sale of TikTok was, after all, driven by the same concerns for national security and the protection of citizens that now resonate in Paris and Brussels. With the DSA, Europe is effectively demanding the same respect for its citizens as the US does for its own. For xAI, the choice is clear: invest in an infrastructure that respects European values, or accept that the European digital border is just as unforgiving as the American one.
