European Union enforcers of the bloc’s online governance regime, the Digital Services Act (DSA), said Thursday they’re closely monitoring disinformation campaigns on the Elon Musk-owned social network X (formerly Twitter) following the Wednesday shooting of Slovakia’s prime minister, Robert Fico.
The bloc has been formally investigating X since last December over disinformation in civic discourse and the effectiveness of the platform’s crowdsourced ‘Community Notes’ content moderation feature, among a bundle of other concerns — though, so far, no sanctions have been forthcoming.
Yesterday Musk personally responded to — and thus amplified — a post on X by the right-wing political influencer Ian Miles Cheong, which sought to link the shooting to Fico's purported rejection of the World Health Organization's pandemic prevention plan.
Asked to respond to the development during a Q&A with press — part of a background briefing the EU held to discuss the two Meta DSA probes it announced earlier today — a senior Commission official confirmed they are monitoring content on the platform and analyzing whether there is "any additional evidence" vis-à-vis the effectiveness of X's disinformation mitigation measures to feed the EU's ongoing investigation.
Reminder: Breaches of the DSA can attract fines of up to 6% of global annual turnover, so Musk's penchant for shitposting could eventually prove expensive for the company as enforcement cycles play out.
A Commission official also told TechCrunch that the director-general for the EU department focused on regulating comms and tech, Roberto Viola, has sent a letter to X and the other roughly two dozen very large online platforms (VLOPs) designated under the DSA urging vigilance following the attempt on Fico’s life.
The EU wants platforms to be ready to take mitigating measures in case bad actors seek to manipulate a video of the shooting that’s circulating on social media in a bid to exploit the situation to spread disinformation. VLOPs are also being invited to share details of measures they’re taking to limit sharing or amplification of any manipulated media related to the incident at a Commission election roundtable event tomorrow.
Grok election watch
Also on Thursday X announced that premium users in the EU can finally get their typing fingers on Musk’s generative AI chatbot, Grok — a tool that’s essentially been trained to be politically incorrect (versus the perceived political correctness of rival efforts like OpenAI’s ChatGPT). Musk posted briefly to trumpet the development, writing caveman-style: “Grok now available in Europe.”
Turns out Grok is also on the EU’s DSA watch list: The senior Commission official said today that the EU is in “very close contact with X on launch of Grok”.
The official suggested X has delayed the launch of some elements of Grok in the region until after the upcoming European Parliament election, without specifying exactly which features have been disabled.
“X has delayed the launch of part of the Grok feature until after the election,” the official told journalists. “Which I think is a recognition on their side that some of these features may have risks in the context of civic discourse and elections in the context of the ongoing investigation.”
We reached out to the Commission for clarification about which Grok features it believes X has put on ice in the EU. A spokesperson told us: “We take note of the phased deployment of Grok in the EU, with a first version only visible and accessible to X Premium subscribers, and not the wider public.”
“We reserve our position on the compatibility of Grok with the requirements under the DSA,” they added.
We also contacted X about Grok’s EU launch — but at press time it had not responded to our questions.
It’s worth noting Grok is a premium-only feature everywhere X offers it, so it remains unclear what feature concessions may have been made for the AI chatbot’s EU launch.
In wider remarks addressing obligations on generative AI chatbots that are integrated into a designated VLOP or very large online search engine (VLOSE), the Commission spokesperson noted platforms have a legal duty to diligently assess risks and put in place effective mitigations, such as against so-called “hallucinations”, where these GenAI tools fabricate information while presenting it as fact.
“This has been reiterated recently in the DSA election guidelines,” the spokesperson added, emphasizing that the DSA requires ex ante risk assessment for “critical features”, such as GenAI assistants — an area where the Commission launched a recent inquiry ahead of upcoming European elections.
“Questions on risk assessment and mitigation measures related to risks stemming from content that was produced by generative AI were subject to ad hoc requests for information sent in March to 8 VLOP/SEs, including X,” they noted.
This report was updated with additional information from the Commission.