Insider Brief
- Carnegie Mellon researchers, with support from MIT and Visa, unveiled Allie, a chess AI trained to mimic human play, presented at the 2025 International Conference on Learning Representations.
- Unlike dominant engines such as Stockfish, Allie was trained on 91 million human game transcripts from Lichess, making its decisions and play style more relatable and instructive for beginners.
- The project highlights a broader push to design AI that thinks in human-compatible ways, with potential applications in education, healthcare, and other fields requiring empathy and interpretability.
Carnegie Mellon University researchers, with support from collaborators at MIT and Visa, have developed a new chess-playing artificial intelligence designed to behave more like a human opponent. The project, presented at the 2025 International Conference on Learning Representations, was funded to explore how AI systems that mimic human thought could improve education, therapy, and other fields.
Carnegie Mellon Ph.D. student Yiming Zhang began playing chess during the pandemic after watching the Netflix series The Queen’s Gambit. He soon found the experience frustrating, noticing how unnatural and unrealistic standard chess bots felt compared with human play.
“After I learned the rules, I was in the bottom 10%, maybe 20% of players online,” said Zhang, who is part of the Language Technologies Institute (LTI) in CMU’s School of Computer Science. “For beginners, it’s not interesting or instructive to play against chess bots because the moves they make are often bizarre and incomprehensible to humans.”
Unlike existing chess engines built to dominate any opponent, the system—called Allie—was trained on 91 million transcripts of human games from the online platform Lichess. This gave it exposure to the kinds of decisions, hesitations, and resignations that characterize human play. Researchers from Carnegie Mellon’s Language Technologies Institute said the goal was to make the experience more instructive and engaging, especially for beginners who often find standard bots unnatural or incomprehensible.
The research team combined classic search methods, long used in chess software, with techniques borrowed from modern language models. Instead of generating text, however, the system predicts chess moves in a way that resembles human decision-making. By blending these approaches, the project demonstrated that AI can be both strategic and relatable to human players, according to the university.
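As a rough illustration of that blend of learned move prediction and classic search, the sketch below pairs a toy "human-like" policy with a one-ply lookahead using the python-chess library. The policy function, the material evaluation, and the blending weight are all hypothetical stand-ins for this article; they are not Allie's actual transformer model or search procedure, which are described in the paper itself.

```python
# Minimal sketch: combine a learned, human-like move-prediction policy with
# classic game-tree search. `policy_scores` is a toy stand-in for Allie's
# real transformer policy (trained on Lichess transcripts), and the one-ply
# material lookahead stands in for its search component.
import math

import chess  # pip install python-chess


def policy_scores(board: chess.Board) -> dict[chess.Move, float]:
    """Stand-in for a model that scores moves the way a human might.

    The real policy is learned from game transcripts; here we simply
    prefer captures and checks so the sketch runs out of the box.
    """
    scores = {}
    for move in board.legal_moves:
        score = 1.0
        if board.is_capture(move):
            score += 1.0
        board.push(move)
        if board.is_check():
            score += 0.5
        board.pop()
        scores[move] = score
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}


def material(board: chess.Board) -> float:
    """Crude material count, from the side to move's perspective (a stand-in evaluation)."""
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9}
    score = 0.0
    for piece_type, value in values.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score if board.turn == chess.WHITE else -score


def pick_move(board: chess.Board, blend: float = 0.7) -> chess.Move:
    """Blend the human-like policy with a one-ply lookahead.

    `blend` weights the policy against the search evaluation; the value
    here is arbitrary, not taken from the paper.
    """
    best_move, best_score = None, -math.inf
    for move, prob in policy_scores(board).items():
        board.push(move)
        # After pushing, it is the opponent's turn, so negate to score
        # the position from the mover's perspective.
        eval_after = -material(board)
        board.pop()
        score = blend * prob + (1.0 - blend) * (eval_after / 10.0)
        if score > best_score:
            best_move, best_score = move, score
    return best_move


if __name__ == "__main__":
    board = chess.Board()
    print("Chosen opening move:", board.san(pick_move(board)))
```

In this toy version, raising the blend weight makes the engine lean harder on the imitation policy and less on raw evaluation, which loosely mirrors the trade-off between playing like a human and playing to win.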
The implications extend beyond chess. The team at Carnegie Mellon argued that teaching AI to act more like people could make it more effective in areas where empathy, timing, and interpretability matter. Researchers say applications could range from educational tutoring to medical decision support, where the ability to reason in human-compatible ways might increase trust and usability.
The researchers also emphasized that Allie is open source and already active on Lichess, where it has played nearly 10,000 games. By releasing it publicly, they hope other scientists and developers can extend the framework to new applications. Future work will focus on adapting the underlying methods to additional strategic domains, from complex games like Diplomacy to applied contexts in human-computer collaboration.
The project reflects a broader trend in artificial intelligence research: shifting from building machines that simply outcompete humans to creating systems that collaborate with them. According to Carnegie Mellon’s team, this perspective could reshape how AI is integrated into everyday life.
“Our project is meaningful because it assesses how people interact with AI that attempts to be humanlike,” said Daphne Ippolito, Zhang’s adviser and an assistant professor in the LTI. “We also deliberately built an open-source platform that people can build from.”