AI Could Take Control & Make Us Irrelevant, Warns Nobel Prize Winner Hinton

In a recent interview, Geoffrey Hinton, often referred to as the “Godfather of AI,” shared his thoughts on how the future of artificial intelligence (AI) will unfold. A professor at the University of Toronto and now a Nobel Prize winner, Hinton has played an integral part in the advancement of AI, in particular by teaching networks of simulated brain cells how to learn. While that work has earned him global acclaim, he now voices deep concerns about the technology’s long-term dangers.

Hinton’s warning comes as AI continues to rapidly evolve, with systems becoming more sophisticated and powerful.

“The biggest long-term danger is that once these artificial intelligences get smarter than we are, they will take control. They’ll make us irrelevant, and that’s quite worrying,” Hinton stated. He believes this scenario is not just science fiction but a real possibility if AI development is not properly regulated and controlled.

Having quit Google to focus on these potential risks, Hinton stressed the need for immediate action.

“Nobody knows how to prevent that for sure, so we need to do lots of research on that right now,” he urged. Hinton believes the responsibility lies with larger institutions. While the conversation around AI often centers on short-term implications such as automation and job displacement, he stressed that the longer-term existential risks are far greater. For concerns of that scale, he added, there is little the average person can do at an individual level.

“It’s the big companies that need to do stuff. They need to do more safety research, and governments need to regulate it effectively,” he explained. He added that this is not a situation where individuals can make a significant impact, comparing it to environmental efforts such as recycling, which are often oversimplified as solutions to larger systemic problems.

Despite these concerns, Hinton is not entirely pessimistic about AI. He acknowledges that the technology has numerous beneficial applications, particularly in healthcare.

“There are many very good uses of AI,” he said, citing ongoing work at Toronto’s Vector Institute, which he co-founded. “We can make much better healthcare using AI,” Hinton noted, pointing to its potential to revolutionize industries and improve lives.

Hinton cautioned that predicting the future of AI is incredibly difficult due to the exponential nature of its development.

“It’s like fog. You can see very clearly for a certain distance, and then the wall comes down. After that, you can’t see anything,” he explained. Given how rapidly the field has advanced in just the past decade, he warns that AI may develop capabilities beyond what anyone can foresee today.

Hinton’s remarks come at a time when discussions about the ethical implications and risks of AI are intensifying. His unique perspective, shaped by decades of pioneering work, serves as both a celebration of AI’s potential and a stark warning about its future.

“In ten years’ time, there’ll be things that I would very confidently predict we wouldn’t have yet, and we’ll have them,” he concluded.

Featured image credit: Vaughn Ridley/Collision via Sportsfile (Collision Conf), https://www.flickr.com/photos/collisionconf/53803195889/, via Wikipedia
