In a recent interview with Dr. Brian Keating, the physicist and AI researcher Max Tegmark laid bare his thoughts on the transformative, and potentially perilous, future of artificial intelligence (AI). Known for his work in cosmology and his deep dives into the nature of reality, Tegmark has shifted much of his focus to understanding AI’s trajectory. His insights, punctuated by vivid analogies and bold predictions, paint a future that is both awe-inspiring and daunting.
“It’s very dangerous to bet against AI progress,” Tegmark remarked early in the conversation. Reflecting on his pivot to AI research eight years ago, he described the rapid evolution from systems that could master chess to today’s large language models, capable of passing the Turing Test. But Tegmark is clear-eyed about current limitations: “When you use something like ChatGPT, it uses thousands of times more energy to do a task than your brain would. Our software is incredibly dumb compared to what’s physically possible.”
One of Tegmark’s central points is that embodiment — the integration of sensory experiences into learning — may not be the barrier many believe it to be.
“Even if AI doesn’t have a robot body, it can develop insights, emotions, and intuitions similar to humans,” he argued. He likened the process to how our brains interpret electrical signals from our senses. “Your brain doesn’t care if the signal comes from your eyes or ears; it’s all the same kind of electrical data.”
Tegmark also addressed the prospect of AI outpacing human intelligence, a scenario he believes is inevitable if current trends continue.
“One of the first things AGI will do is AI research better than us,” he stated. This recursive improvement could lead to exponential advancements. “Imagine an AGI redesigning its hardware and software to be a thousand times more efficient. That’s when you see the ‘foom’ — a sudden leap to superintelligence.”
Despite his optimism about AI’s potential, Tegmark voiced concerns about governance.
“Every time we’ve built powerful technology, it’s been a double-edged sword,” he warned. “We need safety standards, just as we have for cars, food, and electricity. It’s common sense, yet the U.S. currently has no meaningful AI regulation.” Drawing a parallel to the introduction of seatbelt laws, he added: “The car industry resisted seatbelts, claiming it would destroy the market. What actually happened? Auto sales exploded because people felt safer. The same can be true for AI.”
Tegmark’s most provocative assertion came when discussing AI’s ability to generate new scientific paradigms: “Right now, it can’t do it. But within ten years — maybe even two — it likely will. We’ve already seen AI discover a new law in ozone chemistry that humans hadn’t noticed. That’s just the beginning.”
Ending on a reflective note, Tegmark called for humanity to take an active role in shaping AI’s future.
“Asking what will happen is the wrong question,” he said. “The right question is: What do we want to happen? We’re not bystanders; we’re builders. Let’s create a future we’re excited to live in.”
In Tegmark’s vision, the future of AI isn’t just about machines becoming smarter — it’s about humanity becoming wiser in how we wield them. The stakes couldn’t be higher.
Featured image credit: Physicistjedi at English Wikipedia