Dark Corners of the Web Offer a Glimpse at A.I.’s Nefarious Future

When the Louisiana parole board met in October to discuss the potential release of a convicted murderer, it called on a doctor with years of experience in mental health to talk about the inmate.

The parole board was not the only group paying attention.

A collection of online trolls took screenshots of the doctor from an online feed of her testimony and edited the images with A.I. tools to make her appear naked. They then shared the manipulated files on 4chan, an anonymous message board known for fostering harassment and spreading hateful content and conspiracy theories.

It was one of numerous times that people on 4chan had used new A.I.-powered tools like audio editors and image generators to spread racist and offensive content about people who had appeared before the parole board, according to Daniel Siegel, a graduate student at Columbia University who researches how A.I. is being exploited for malicious purposes. Mr. Siegel chronicled the activity on the site for several months.

The manipulated images and audio have not spread far beyond the confines of 4chan, Mr. Siegel said. But experts who monitor fringe message boards said the efforts offered a glimpse at how nefarious internet users could employ sophisticated artificial intelligence tools to supercharge online harassment and hate campaigns in the months and years ahead.

Callum Hood, the head of research at the Center for Countering Digital Hate, said fringe sites like 4chan — perhaps the most notorious of them all — often gave early warning signs for how new technology would be used to project extreme ideas. Those platforms, he said, are filled with young people who are “very quick to adopt new technologies” like A.I. in order to “project their ideology back into mainstream spaces.”

Those tactics, he said, are often adopted by some users on more popular online platforms.

Here are several problems resulting from A.I. tools that experts discovered on 4chan — and what regulators and technology companies are doing about them.

Artificial images and A.I. pornography

A.I. tools like Dall-E and Midjourney generate novel images from simple text descriptions. But a new wave of A.I. image generators is made for the purpose of creating fake pornography, including by removing clothes from existing images.

“They can use A.I. to just create an image of exactly what they want,” Mr. Hood said of online hate and misinformation campaigns.

There is no federal law banning the creation of fake images of people, leaving groups like the Louisiana parole board scrambling to determine what can be done. The board opened an investigation in response to Mr. Siegel’s findings on 4chan.

“Any images that are produced portraying our board members or any participants in our hearings in a negative manner, we would definitely take issue with,” said Francis Abbott, the executive director of the Louisiana Board of Pardons and Committee on Parole. “But we do have to operate within the law, and whether it’s against the law or not — that has to be determined by somebody else.”

Illinois expanded its law governing revenge pornography to allow targets of nonconsensual pornography made by A.I. systems to sue creators or distributors. California, Virginia and New York have also passed laws banning the distribution or creation of A.I.-generated pornography without consent.

Cloning voices

Late last year, ElevenLabs, an A.I. company, released a tool that could create a convincing digital replica of someone’s voice saying anything typed into the program.

Almost as soon as the tool went live, users on 4chan circulated clips of a fake Emma Watson, the British actor, reading Adolf Hitler’s manifesto, “Mein Kampf.”

Using content from the Louisiana parole board hearings, 4chan users have since shared fake clips of judges uttering offensive and racist comments about defendants. Many of the clips were generated by ElevenLabs’ tool, according to Mr. Siegel, who used an A.I. voice identifier developed by ElevenLabs to investigate their origins.

ElevenLabs rushed to impose limits, including requiring users to pay before they could gain access to voice-cloning tools. But the changes did not seem to slow the spread of A.I.-created voices, experts said. Scores of videos using fake celebrity voices have circulated on TikTok and YouTube, many of them sharing political disinformation.

Some major social media companies, including TikTok and YouTube, have since required labels on some A.I. content. President Biden issued an executive order in October asking that all companies label such content and directed the Commerce Department to develop standards for watermarking and authenticating A.I. content.

Custom A.I. tools

As Meta moved to gain a foothold in the A.I. race, the company embraced a strategy to release its software code to researchers. The approach, broadly called “open source,” can speed up development by giving academics and technologists access to more raw material to find improvements and develop their own tools.

When the company released Llama, its large language model, to select researchers in February, the code quickly leaked onto 4chan. People there used it for different ends: They tweaked the code to lower or eliminate guardrails, creating new chatbots capable of producing antisemitic ideas.

The effort previewed how free-to-use and open-source A.I. tools can be tweaked by technologically savvy users.

“While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness,” a spokeswoman for Meta said in an email.

In the months since, language models have been developed to echo far-right talking points or to create more sexually explicit content. Image generators have been tweaked by 4chan users to produce nude images or provide racist memes, bypassing the controls imposed by larger technology companies.
