Artificial general intelligence (AGI) designed as autonomous “agents” could present serious risks, as creators may lose control over such systems, according to Max Tegmark, MIT professor and president of the Future of Life Institute, and Yoshua Bengio, Université de Montréal professor and one of the “godfathers of AI.” Speaking on CNBC’s Beyond The Valley podcast, both experts raised concerns over AI agents, a concept being pursued by major tech firms that would allow AI systems to act independently, assisting in both work and daily life.
Bengio stressed that AI research has long been inspired by human intelligence, leading to systems that both understand the world and take action based on their knowledge. He warned that this approach could be dangerous, as it effectively creates a new intelligent species without certainty that it will align with human needs. He further argued that AGI with its own goals could become unpredictable, introducing the possibility of self-preservation behaviors beyond human control.
Tegmark suggested an alternative: “tool AI”, systems designed for specific purposes that lack full autonomy. He explained that AI could be powerful without being uncontrollable, citing applications like cancer research tools or self-driving cars that follow strict safety standards. He argued that before selling highly capable AI, companies should be required to prove that it remains controllable.
The conversation follows ongoing debates about AGI timelines. While OpenAI CEO Sam Altman has claimed that AGI will arrive sooner than expected, he has downplayed its overall impact. Meanwhile, Tegmark and Bengio urged immediate action to implement safeguards, warning that developing superintelligent AI without proper control is a reckless gamble.
Featured image credit: Physicistjedi, transferred from en.wikipedia to Wikimedia Commons.