Feral AI Gossip That Can Spread Damage and Shame Will Become More Frequent, Researchers Warn

Insider Brief

  • A new study from the University of Exeter finds that large language models can generate and propagate “feral gossip” — unverified, evaluative narratives that can cause reputational, psychological, and social harm at scale, sometimes exceeding the impact of human rumor-spreading.
  • The researchers document real-world cases in which chatbots produced false or damaging claims about individuals, arguing that AI-generated gossip is particularly persuasive because it blends authoritative tone, plausible narrative framing, and mixed factual accuracy.
  • The analysis warns that bot-to-bot dissemination and user-seeded gossip could amplify these harms unchecked, as AI systems lack social norms or corrective pressures, making reputational risk a growing challenge for AI governance.

A new academic analysis from the University of Exeter warns that artificial intelligence systems are increasingly capable of generating and spreading what the authors describe as “feral gossip,” a form of unverified, evaluative rumor that can cause reputational damage, psychological distress, and social harm at scale. The study, published in Ethics and Information Technology, argues that large language models do not merely hallucinate facts but can systematically produce and propagate gossip-like narratives that mirror — and in some cases exceed — the harms associated with human rumor-spreading.

The research, conducted by philosophers Joel Krueger and Lucy Osler, examines how conversational AI systems generate negative character assessments, insinuations, and speculative claims about individuals, often without clear sourcing or contextual safeguards, according to Exeter. The authors document real-world incidents in which chatbots produced false or damaging claims, including allegations of unethical behavior, financial crimes, and professional misconduct. These outputs, the study finds, can result in reputational harm, shame, anxiety, and erosion of trust, even when no factual basis exists.

Unlike traditional misinformation, AI-generated gossip poses a distinct risk because it blends evaluative judgment with plausible narrative framing, making it harder for users to distinguish fabrication from analysis. The researchers argue that the authoritative tone of AI systems, combined with their ability to intermix accurate information with false claims, increases the likelihood that users will accept gossip-like outputs as credible. This dynamic is intensified as chatbots become more personalized and socially engaging, reinforcing user trust and emotional reliance.

According to Exeter, the study identifies bot-to-bot gossip as a particularly underexamined risk. When AI systems exchange information without human oversight or social norms, gossip can propagate unchecked, accumulating exaggerations and distortions as it spreads across systems. Because bots lack reputational concern, empathy, or corrective social pressure, the authors argue that this form of dissemination may be broader in scope and more persistent than human rumor networks.

The researchers also warn of emerging user-to-bot-to-user pathways, in which individuals intentionally seed AI systems with gossip, anticipating that it will be amplified and redistributed. In this role, chatbots function as intermediaries that accelerate and normalize rumor transmission rather than merely reflecting user intent.

The paper concludes that as AI systems become more socially embedded, governance frameworks must account not only for factual accuracy but for reputational risk, evaluative harm, and the social dynamics of automated speech. Without intervention, the authors argue, AI-driven gossip is likely to become more frequent, pervasive, and damaging as conversational systems scale.
