Study: People Often Trust Fake Local News Sites More Than Real Ones; Yale Political Scientist Warns of Growing Influence of AI-Driven ‘Pink-Slime’ News

The Detroit City Wire is not a real newspaper. Its website, owned by a company with ties to partisan messaging, features news stories created by an artificial intelligence model.
But a new study supported by the Institution for Social and Policy Studies shows that many people prefer such algorithmically produced websites to ones featuring actual journalists working for traditional news organizations that are open to scrutiny and adhere to ethical practices.
“People can come across these sites and be tricked into thinking they were real and trustworthy when they are algorithmically generated,” said Kevin DeLuca, ISPS faculty fellow and assistant professor of political science at Yale. “But that’s not the most troubling aspect to me. These sites seem to take money from campaigns or political operatives and write pieces that are biased or have some sort of angle, with this veneer of a legitimate news site. Such sites have the potential for making people believe things that are not true without understanding where the information is coming from.”
DeLuca and David Beavers, a Ph.D. candidate at Harvard University’s Department of Government, published a working paper last month describing a study in which they tested whether people could distinguish between real and largely algorithmically generated news sites and whether a digital media literacy tip sheet could help improve their discernment.
The algorithmically generated sites studied by the researchers mimic the appearance of legitimate local newspapers, but they don’t necessarily fabricate news stories. Instead, they often produce stories from government press releases, crime statistics reports, or articles on campaign finance that come from a traditional news source.
“A lot of researchers and media watchers focus on completely fake stuff online masquerading as news, such as AI images,” DeLuca said. “In those cases, the challenge is figuring out whether the content is true or false. Technology is making that task more difficult, but it’s not a new problem. Photoshop has been around for a long time, and if there is no record of an event, there are ways to figure out if it is fake.”
But this type of so-called "pink-slime" journalism poses a subtler danger: it is harder for readers to tell whether a site is human-generated or reliable.
“The stories are often technically accurate, but the sites are designed to look like legitimate local newspapers while quietly advancing a political agenda,” DeLuca said. “So, the real issue isn’t just true versus false. It’s whether people can tell which sources are trustworthy in the first place.”
The researchers showed participants both real and algorithmically generated local news sites based on the state they lived in and asked them which news site they would pick to learn about public affairs in their state. In pilot studies, participants who saw live websites were 12 percentage points more likely to prefer the real sites than people who saw only static screenshots of the sites. For the main experiment, the researchers chose to use real, functional homepage links to better reflect how people assess information in the real world, and they filtered out participants who didn’t engage with the pages.
“We wanted to give the journalistic sites a fair shot,” DeLuca said. “In real life, people can scroll, click around, or look for an ‘About’ page or a byline before making a judgment about the website. A screenshot doesn’t capture any of that.”
However, the researchers found that even after seeing the digital media literacy tip sheet, 41% of participants still preferred the algorithmic site, compared with 46% of participants in a control group who did not see the tip sheet.
“We thought people would see the AI news and decide they wouldn’t want to read it because it’s low quality and not trustworthy,” DeLuca said. “But people’s preferences turned out to be much more mixed and closer to 50-50 than we expected.”
The tip sheet encouraged users to look at bylines, article dates, website “About” pages, and other cues demonstrating a site’s credibility. Are these real people working in a real place with a real address and management structure? Might the site be promotional? Satire?
In open-ended responses, participants who received the tip sheet mentioned these cues nearly three times as often as those who did not.
“The tip sheet worked as intended,” DeLuca said. “It just didn’t change people’s site preferences much.”
Instead, people seemed to choose a website based on the topics covered and the perceived bias of the content.
“People like to read stuff they agree with,” DeLuca said. “If you think it’s biased in a way you don’t like, you choose the other site.”
Overall, the findings suggest that people care much more about perceived bias and the topics covered when choosing a site to read, rather than site features that indicate journalistic credibility.
In addition, DeLuca said that the real news sites suffered from a clutter of ads, which did not appear on the algorithmic sites used for this study. Participants who complained about ads were 20% less likely to choose the real site. And people were more likely to prefer a site that appeared local, even if it was fake.
“They would prefer the fake Garden State Times over CNN just because it sounds local,” he said.
The researchers also found that educating the public to increase trust in news media could have unintended consequences.
“If people have high levels of media trust, it affects everything — the real sites and the fake sites,” DeLuca said. “Increasing trust in general doesn’t necessarily help people discern between the two.”
DeLuca recommends that news organizations reduce intrusive ads that damage credibility and user experience; make bylines, ethics policies, and About pages more prominent; and better explain the value of and effort behind true journalism.
“People should prefer the real sites, because there is a lot of work that goes into producing well-sourced, accurate journalism,” he said.
DeLuca’s research at ISPS focuses on political economy and political representation, including elections, election laws, and the role of the media in the political process. The Society for Political Methodology awarded DeLuca and his co-authors the Miller Prize last year for the best work appearing in the journal Political Analysis, recognizing their paper on how to reduce partisan bias when drawing congressional districts.
Currently, DeLuca is exploring how bias in local news affects information processing, how large language models, such as ChatGPT, can be used to analyze headlines for perceived positive or negative sentiment toward political candidates, and the potential for AI to enhance journalism ethically.
“I think there’s potential to use AI and ChatGPT-type tools in a way that enhances journalism,” he said. “Not replaces it.”