Fear of generative AI-fueled disinformation grew even more following events like the use of a voice clone of President Joe Biden in an attempt to discourage voting in New Hampshire.
Generative AI tools could be used maliciously for everything from creating misleading news articles and social media posts, to fabricating video “evidence” of people trying to rig elections, to tying up government offices by spamming them with mass records requests, according to recent research from panelist Valerie Wirtschafter. Political candidates might also try to dismiss authentic but unflattering recordings by asserting that the recordings are deepfakes.
But while AI-fabricated content is a challenge to information integrity, it doesn’t appear to have hit a crisis point. Per Wirtschafter’s research, in the time since ChatGPT’s launch, AI-generated media has accounted for only 1 percent of the posts that X users flagged as misleading under the social media platform’s Community Notes program. Such findings suggest that online spaces are not currently seeing “an overwhelming flood” of generated content. They also underscore that more traditional forms of mis- and disinformation, such as images and videos taken out of context, continue to present a real problem.
A core piece of fighting misinformation is ensuring the public has access to reliable, trustworthy sources of information — which often includes the local newspaper. But local journalism has long been embattled, and Jurecic said one fear is that media companies eager to use GenAI to cut costs will worsen that problem.
“I’m not a person who thinks that we’re going to be able to replace all reporters with AI. But I am worried that there are people who own media companies who think that,” Jurecic said.
Society is in a phase in which people are still readjusting their understanding of what “fake” looks like in a world where generative AI exists. But people have been through such shifts before, resetting their expectations after Photoshop emerged, and after earlier methods of photograph trickery came to light, said Northeastern University Assistant Professor Laura Edelson, who studies “the spread of harmful content through large online networks.”
In today’s media environment, how realistic an image or video seems is no longer an indicator of how authentic it is, said Arvind Narayanan, a Princeton computer science professor and director of the Center for Information Technology Policy. Instead, people will likely look to the credibility of the source to determine whether to trust the content. Some social media platforms are taking steps that can help users assess credibility, he said. For example, X’s Community Notes feature lets qualifying users attach clarifying, contextualizing notes to images and videos that appear in posts. That’s “a big step forward,” Narayanan said, even if the degradation of X’s blue checkmarks was “a big step backward.”
Meta has also promised to start labeling AI-generated images, and Jurecic said it’ll be important to study the impact of such interventions. For example, researchers will want to find out whether people start to automatically trust anything without a label or remain wary that the system could miss flagging something, and whether people still re-share content marked as GenAI-created. Even so, what matters most in fighting deception isn’t whether content was created with the aid of AI, but whether it’s being framed and presented in an honest manner, she added.
Perhaps one of the most helpful aspects of labeling generative AI content in social media feeds is that it helps the average person stay aware of just how realistic the latest synthetic media has become, Narayanan said. GenAI is rapidly improving, and not everyone can easily keep up to date on its newest capabilities, but this kind of intervention can help by reaching people in their day-to-day lives.
Panelists also pointed to some early explorations into whether generative AI can be used to help improve the trustworthiness of online information. For example, former OpenAI Trust and Safety team lead Dave Willner and former Meta civic integrity product team lead Samidh Chakrabarti suggest in a recent paper that large language models (LLMs) might eventually be able to help sites enforce their content moderation policies at scale. But policies have to be rewritten in exacting ways to be understandable to LLMs, and new technological developments are needed before such an application is practical, the authors said.
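To make that idea concrete, here is a minimal, hypothetical sketch of what LLM-based policy enforcement could look like. The policy wording, labels, prompt format, and choice of model and API client are illustrative assumptions, not the approach described in Willner and Chakrabarti’s paper; the point it illustrates is that a platform’s policy must first be rewritten into precise, decision-ready language before a model can apply it consistently.

```python
# Hypothetical sketch of LLM-assisted content moderation.
# The policy text, labels, and prompt format below are illustrative assumptions,
# not the method from Willner and Chakrabarti's paper.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# A policy written in exacting, decision-ready terms, as the paper suggests.
POLICY = """\
Label a post VIOLATING if it does at least one of the following:
1. Claims a specific election date, polling place, or voting method is invalid
   without citing an official election authority.
2. Presents synthetic audio, image, or video of a real person as authentic.
Otherwise label it ALLOWED. Reply with exactly one word: VIOLATING or ALLOWED.
"""

def classify_post(post_text: str) -> str:
    """Ask the model to apply the written policy to a single post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post_text},
        ],
        temperature=0,  # reduce variance so enforcement is more consistent
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_post(
        "Breaking: the election has been moved to Wednesday, so stay home Tuesday!"
    ))
```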