Illicit Content on Elon Musk’s X Draws E.U. Investigation

The European Union announced a formal investigation Monday into X, the social media platform owned by Elon Musk, accusing it of failing to counter illicit content and disinformation, lacking transparency about advertising and using “deceptive” design practices.

The inquiry is perhaps the most substantial regulatory move against X since it scaled back its content moderation policies after Mr. Musk bought the service, once known as Twitter, last year. The company’s new policies have led to a rise in incendiary content on the platform, according to researchers, causing brands to scale back advertising.

In going after X, the European Union is for the first time using the authority gained after last year’s passage of the Digital Services Act. The law gives regulators vast new powers to force social media companies to police their platforms for hate speech, misinformation and other divisive content. Other services covered by the new law include Facebook, Instagram, Snapchat, TikTok and YouTube.

The European Commission, the 27-nation bloc’s executive branch, had signaled its intention to look more closely at X’s business practices. In October, regulators initiated a preliminary inquiry into the spread of “terrorist and violent content and hate speech” on X after the start of the Israel-Gaza conflict.

“The evidence we currently have is enough to formally open a proceeding against X,” Margrethe Vestager, the European Commission’s executive vice president overseeing digital policy, said in a statement. “The Commission will carefully investigate X’s compliance with the DSA, to ensure European citizens are safeguarded online.”

X said it “remains committed to complying with the Digital Services Act and is cooperating with the regulatory process.”

“X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly towards this goal,” the company said.

The investigation highlights a major difference between the United States and Europe in policing the internet. While online posts are largely unregulated in the United States as a result of free speech protections, European governments, for historical and cultural reasons, have put more restrictions in place around hate speech, incitement to violence and other harmful material.

The Digital Services Act was an attempt by the E.U. to compel companies to establish procedures to comply more consistently with rules around such content online.

The announcement Monday is the beginning of an investigation without a specified deadline. The inquiry is expected to include interviews with outside groups and requests for more evidence from X. If found to have violated the Digital Services Act, the company could be fined up to 6 percent of its global revenue.

When Mr. Musk took control of the platform, he dissolved its trust and safety council, overhauled its content moderation practices and welcomed scores of banned users back to the platform. Dozens of studies published since then have described a near-instantaneous rise in antisemitic content and hateful posts.

The Institute for Strategic Dialogue, a nonprofit focused on monitoring extremism and disinformation, found that antisemitic posts in English more than doubled on X after Mr. Musk’s takeover. The European Commission found that engagement with pro-Kremlin accounts grew 36 percent at the beginning of this year after Mr. Musk lifted mitigation measures.

As a rash of natural disasters took place around the world this summer, climate misinformation spread widely on X. A scorecard evaluating social media companies awarded X a single point out of a possible 21 for its work defending against climate-related falsehoods.

E.U. officials said X may not be in compliance with rules that require online platforms to respond quickly after being made aware of illicit and hateful content, such as antisemitism and incitement to violence and terrorism. The law also requires companies to conduct risk assessments about the spread of harmful content on their platforms and mitigate it.

Officials raised concerns about X’s content moderation policies in languages other than English, particularly as elections across the continent approach in 2024.

In addition, the investigation will examine X’s efforts to address the spread of false information. The company relies on a feature, called Community Notes, that lets users add context to posts that they believe are misleading, an approach that E.U. officials said may not be sufficient. Regulators will also look into the ways in which posts by X users who pay to be authenticated, signified by a blue check mark, are given more visibility.

The investigation will test the E.U.’s ability to force large internet platforms to change their behavior. Mr. Musk has been an outspoken proponent of free speech rights and, in May, pulled X out of the E.U.’s voluntary code of practice intended to combat disinformation.

In October, after the E.U. initiated its preliminary inquiry, Mr. Musk challenged regulators to share evidence of illicit content on X. “Please list the violations you allude to on X, so that the public can see them,” he said.

Stuart A. Thompson contributed reporting.