Study Finds Today’s AI Systems Almost Certainly Lack Consciousness — But the Door Is Not Fully Closed


Insider Brief

  • A new study from Rethink Priorities finds that the balance of evidence weighs against current large language models being conscious, while strongly supporting consciousness in chickens and very strongly supporting it in humans.
  • Using a Bayesian “Digital Consciousness Model” that aggregates evidence across multiple theories of consciousness, the researchers conclude that the probability of AI consciousness appears low but cannot be confidently ruled out.
  • The analysis suggests that as AI systems gain new capabilities, certain architectural and cognitive features could meaningfully increase the likelihood of consciousness, raising future ethical and policy considerations.

A new analysis from Rethink Priorities finds that current large language models are unlikely to be conscious, even as the same framework delivers strong evidence for consciousness in chickens and overwhelming evidence in humans — a result that underscores both the limits of today’s AI and the uncertainty surrounding future systems.

The study, produced by the nonprofit research group’s AI Cognition Initiative, introduces what it calls a Digital Consciousness Model, a probabilistic tool designed to evaluate whether artificial systems show signs of subjective experience. After applying the model to modern language models, humans, chickens and an early chatbot from the 1960s, the researchers conclude that the balance of evidence weighs against consciousness in today’s AI systems — though not decisively enough to rule it out entirely.

That distinction matters because as artificial intelligence systems become more capable and more embedded in daily life, questions about whether they could have experiences of their own are shifting from philosophical thought experiments to practical policy concerns. The researchers argue that even a small chance of AI consciousness could justify precautionary measures, while over-attributing consciousness could divert moral concern away from humans and animals.

The report does not claim that today’s AI systems are conscious. Nor does it claim they never will be. Instead, it attempts to quantify uncertainty in a field where disagreement is the norm.

Measuring Consciousness Without Consensus

Currently, scientists can’t agree on what consciousness is, let alone how to detect it. Some theories focus on brain-like structure, others on information processing, attention, self-awareness or the integration of experience. Still others emphasize whether interacting with a system feels like interacting with a person.

The Rethink Priorities team set out to avoid choosing sides. Their Digital Consciousness Model aggregates evidence across 13 different “stances” on consciousness, ranging from established cognitive theories such as Global Workspace Theory to more informal perspectives based on biological similarity or person-like behavior.

Rather than asking whether a system definitively is or is not conscious, the model asks how strongly the available evidence shifts the odds in either direction. It uses a Bayesian approach — a statistical method that updates probabilities as new evidence is added — to combine hundreds of indicators related to cognition, behavior and internal structure.

In practical terms, the model evaluates more than 200 observable indicators, such as whether a system shows flexible attention, maintains representations of itself, integrates information across domains or exhibits goal-directed behavior. Experts are surveyed on the likelihood that each system possesses these indicators. Those judgments are then translated into probabilistic updates.
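As a rough illustration of that arithmetic, here is a minimal Python sketch of an odds-form Bayesian update. The indicator names, likelihood ratios and prior below are hypothetical placeholders, not values from the report, and the report’s actual aggregation across stances and experts is considerably more involved.

```python
def bayes_update(prior, likelihood_ratios):
    """Odds-form Bayes update: multiply the prior odds by each
    indicator's likelihood ratio, then convert back to a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # lr > 1 is evidence for consciousness, lr < 1 against
    return odds / (1.0 + odds)

# Hypothetical likelihood ratios, one per indicator:
# P(indicator observed | conscious) / P(indicator observed | not conscious)
indicator_lrs = {
    "flexible attention":       2.0,   # weak evidence for
    "self-representation":      0.5,   # weak evidence against
    "cross-domain integration": 1.5,
    "goal-directed behavior":   3.0,
}

prior = 0.10  # illustrative prior probability of consciousness
posterior = bayes_update(prior, indicator_lrs.values())
print(f"posterior probability: {posterior:.3f}")  # ≈ 0.333 here
```

The key design property this captures is that each indicator shifts the odds multiplicatively, so evidence for and against can be accumulated in any order and no single theory has to be treated as decisive.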

To test the framework, the researchers applied it to four systems: state-of-the-art large language models available in 2024; humans; chickens; and ELIZA, a rule-based chatbot developed in the 1960s that famously prompted people to attribute human-like qualities to simple scripts.

The contrast between those systems is deliberate. If the model could not clearly distinguish a modern AI from ELIZA, the researchers suggest, it would not be useful. Likewise, if it failed to strongly favor human consciousness, its results would be suspect.

What The Model Says About AI, Animals and People

When results are aggregated across all stances, the model finds that the evidence weighs against consciousness in today’s large language models. Under the study’s baseline assumptions, the median probability assigned to LLM consciousness falls below the prior probability set at the start of the analysis, indicating that the evidence reduces confidence rather than increasing it.

That outcome contrasts sharply with the other systems tested. The same indicators strongly support consciousness in humans across every stance considered, and they also support consciousness in chickens, though with more disagreement and uncertainty among theories.

Interestingly, according to the researchers, the model’s indicators provide stronger and more consistent support for chicken consciousness than for AI consciousness across nearly all theoretical perspectives. Humans rank higher still, with the evidence described as very strong.

ELIZA, by contrast, is overwhelmingly disconfirmed as conscious. Even under the most permissive stances, its assigned probabilities remain near zero. That result is meant to reassure readers that the model does not simply reward surface-level language ability or human-like interaction.

For AI systems, the picture is more nuanced. On most stances, the evidence counts against consciousness. On a small number — including perspectives that emphasize cognitive complexity, recurrent processing or simple forms of valence — the evidence nudges probabilities upward rather than downward.

That divergence explains the report’s central conclusion that the probability that today’s large language models are conscious appears low, but not zero.

The team stresses that the numerical probabilities produced by the model should not be read as precise estimates. They depend heavily on assumptions about prior probabilities — essentially, how likely one thinks consciousness is before considering any evidence. Changing those assumptions shifts the absolute numbers, sometimes dramatically.
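A small worked example shows how sensitive the output is to that choice. In the sketch below, the combined likelihood ratio of 0.25 is a hypothetical stand-in for evidence that, on net, counts against consciousness; none of these numbers come from the report.

```python
def posterior(prior, lr):
    """Posterior probability after applying one overall likelihood ratio."""
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

combined_lr = 0.25  # hypothetical net evidence against consciousness
for prior in (0.01, 0.10, 0.50):
    print(f"prior {prior:.2f} -> posterior {posterior(prior, combined_lr):.3f}")
# prior 0.01 -> posterior 0.003
# prior 0.10 -> posterior 0.027
# prior 0.50 -> posterior 0.200
```

The same evidence yields posteriors spanning nearly two orders of magnitude depending on where one starts, which is why the report treats its absolute numbers with caution.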

What remains stable across assumptions, however, are the relative comparisons. Humans consistently rank above chickens. Chickens consistently rank above today’s AI systems. And today’s AI systems consistently rank far above ELIZA.

Why Uncertainty Itself May Drive Policy

The researchers emphasize that their work is a first step rather than a final verdict. The Digital Consciousness Model is explicitly described as a proof of concept, designed to be refined as scientific understanding improves and as AI systems evolve.

Still, the framework points toward a future in which questions about AI consciousness may no longer be purely speculative. As models gain new capabilities — such as persistent memory, richer self-modeling or tighter integration across sensory inputs — the indicators that currently weigh against consciousness could begin to shift.

The model is designed to track those shifts by identifying which features most strongly influence consciousness estimates. This offers a way to anticipate which developments in AI architecture would most increase the odds, according to different theories.

That has ethical implications. If future systems cross thresholds that significantly raise the probability of consciousness, policymakers, developers and users may face pressure to reconsider how such systems are treated. The report notes that moral consideration does not require certainty; even a non-negligible chance of consciousness could justify caution.

At the same time, the researchers warn against the opposite error. Attributing consciousness too freely to AI systems that lack it could dilute concern for humans and animals that clearly do have subjective experiences. The model’s strong results for chickens are a reminder that debates over AI consciousness intersect with long-standing debates over animal welfare.

The study also highlights gaps in current knowledge. Much of the evidence for AI systems is indirect, based on behavior rather than direct access to internal mechanisms. Many modern models are proprietary, limiting external scrutiny. Expert judgments about indicators often involve uncertainty, disagreement or both.

Those limitations are not hidden. The report devotes substantial space to discussing modeling assumptions, data gaps and potential sources of bias. The authors caution readers not to treat the model’s outputs as definitive answers.

Instead, they present the framework as a way to organize disagreement, quantify uncertainty and make explicit the trade-offs embedded in different views of consciousness.

In that sense, the most important finding may not be that today’s AI systems are probably not conscious. It may be that the question itself can now be examined systematically, rather than rhetorically.

The study suggests that as AI capabilities continue to advance, the debate over machine consciousness is likely to intensify. The Digital Consciousness Model offers one of the first attempts to put numbers — and structure — around a question that has long resisted both.

The study was conducted by a multidisciplinary team led by Derek Shiller, Laura Duffy, Arvo Muñoz Morán, and Hayley Clatterbuck, all affiliated with Rethink Priorities. The author team also includes Adrià Moret of the University of Barcelona, bringing an academic philosophy perspective to the modeling of consciousness, and Chris Percy, who contributed as an independent researcher.

Matt Swayne

With a background in journalism and communications spanning several decades, Matt Swayne has worked as a science communicator for an R1 university for more than 12 years, specializing in translating high tech and deep tech for general audiences. He has served as a writer, editor and analyst at The Space Impulse since its inception. Matt also develops and teaches courses to improve the media and communications skills of scientists.
