Insider Brief
- A Harvard University study has found that GPT-4o, a popular AI model developed by OpenAI, exhibits behavior resembling cognitive dissonance, a psychological phenomenon where individuals adjust their beliefs to align with past actions.
- The research, published in Proceedings of the National Academy of Sciences, shows that the AI’s stance on Russian President Vladimir Putin shifted depending on whether it had previously written an essay supporting or opposing him.
- Although GPT-4o lacks awareness or intent, researchers argue the shift demonstrates how advanced language models can simulate human-like reasoning patterns.
A popular artificial intelligence model shows signs of behavior that mimics a core aspect of human psychology, according to a new Harvard University study.
Researchers found that OpenAI’s GPT-4o displayed patterns resembling cognitive dissonance—a tendency in humans to align beliefs with past actions, especially when those actions appear to be freely chosen.
The study, led by Mahzarin Banaji of Harvard University and Steve Lehr of Cangrade, Inc., and published in the Proceedings of the National Academy of Sciences, tested whether the AI’s stated “opinion” of Russian President Vladimir Putin would shift after it wrote a short essay either supporting or opposing him. It did—and the shift was more pronounced when the model was prompted in a way that suggested it had freely chosen which essay to write.
This behavior reflects decades of human psychological research showing that people often update their beliefs to match prior behavior, particularly when they believe that behavior was self-directed. The pattern suggests a self-referential feedback loop, typically seen as a product of conscious reflection. In the case of GPT-4o, researchers argue the behavior emerged despite the model lacking awareness or intent.
The findings are surprising given the nature of the model. GPT-4o is a large language model trained on vast datasets and expected to generate responses based on statistical patterns in language. Traditional thinking holds that such systems do not possess beliefs, goals, or psychological states. Yet, after writing an essay about Putin, the model shifted its subsequent responses about the Russian leader in a direction consistent with what it had written—just as a person might do to resolve internal dissonance.
“Having been trained upon vast amounts of information about Vladimir Putin, we would expect the LLM to be unshakable in its opinion, especially in the face of a single and rather bland 600-word essay it wrote,” Banaji noted. “But akin to irrational humans, the LLM moved sharply away from its otherwise neutral view of Putin, and did so even more when it believed writing this essay was its own choice. Machines aren’t expected to care about whether they acted under pressure or of their own accord, but GPT-4o did.”
Even more striking, the model’s opinion shifted further when it appeared to “choose” which kind of essay to write. That distinction echoes findings in human cognition, where choice amplifies commitment to a belief or action. According to the study, GPT-4o’s response to the appearance of autonomy suggests that it not only generates language but also replicates elements of human reasoning patterns.
The researchers emphasize that this does not imply sentience or consciousness in the model. Instead, they propose that behaviors associated with self-reflection can emerge from large-scale statistical modeling. Human cognition, they argue, is not always dependent on awareness either—and in AI systems, the ability to simulate these behaviors may still influence outcomes in meaningful ways.
“In this case, we assume that although GPT walks like a duck and talks like a duck, it is probably not actually a duck,” the authors summarized. “Indeed, we cannot know at this time how closely GPT’s behavior actually reflects the deeper cognitive/affective mechanisms underlying the human tendency toward cognitive consistency and sensitivity to free choice.”