Geoffrey Hinton is a prominent British-born Canadian computer scientist and cognitive psychologist, renowned for his work on artificial neural networks. Often referred to as the "Godfather of AI," he, along with Yoshua Bengio and Yann LeCun, received the 2018 Turing Award, often dubbed the "Nobel Prize of Computing," for their contributions to deep learning and modern AI.
A professor emeritus at the University of Toronto, Hinton was a scientist at Google until he resigned in May 2023 so he could speak openly about the dangers associated with AI. Throughout his career, his groundbreaking research has significantly influenced the fields of machine learning (ML) and AI.
Hinton, who was recently featured on CBS' "60 Minutes," said during the interview that rapidly advancing AI technologies could gain the ability to outsmart humans "in five years' time." If that happens, he added, AI could evolve beyond humans' ability to control it.
“One of the ways these systems might escape control is by writing their own computer code to modify themselves,” said Hinton. “And that’s something we need to seriously worry about.”
Humans, including scientists like himself who helped build today's AI systems, still don't fully understand how the technology works and evolves, Hinton said.
Hinton explained that scientists design learning algorithms that let AI systems extract information from data sources such as the internet.
“When this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things,” he said. “But we don’t really understand exactly how they do those things.”
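For readers who want to see what "a learning algorithm interacting with data" looks like in practice, here is a minimal, hypothetical Python sketch (not Hinton's or Google's code, and vastly smaller than any real system). A tiny neural network is shown examples of the XOR function, and gradient descent adjusts its randomly initialized weights until the correct behavior emerges, even though nobody wrote rules for XOR by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: every input pair and its XOR target.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases: the network's "knowledge"
# ends up here, not in any hand-written rules.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The learning algorithm: repeatedly nudge the weights to shrink the error.
lr = 0.5
for step in range(10000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # the network's current predictions
    err = out - y                 # how wrong those predictions are

    # Backpropagation: gradients of the squared error for each parameter.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

# Typically prints values close to [0, 1, 1, 0]: the XOR behavior was
# learned from data, not programmed explicitly.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The point of the toy example mirrors Hinton's remark: the programmer writes only the learning procedure, while the behavior itself is encoded in weights the procedure produces, which is why even the people who built such systems cannot fully explain how they do what they do.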
Google CEO Sundar Pichai and AI specialists like LeCun appear less alarmed than Hinton. LeCun has labeled claims about AI overtaking humans as "absurdly exaggerated," emphasizing that humans can always halt overly perilous technology.
Hinton stressed that the grimmest outcome isn’t guaranteed, highlighting the significant advantages AI has already brought to sectors like health care. He expressed concerns over the proliferation of AI-driven disinformation, counterfeit images, and videos on the internet. Hinton advocated for deeper research into AI, the implementation of governmental controls over the technology, and a global prohibition on AI-equipped military robots.
Hinton added that AI safeguards, whether implemented by tech firms or mandated by the U.S. federal government, must be established promptly.
Humanity is likely at “a kind of turning point,” said Hinton, adding that tech and government leaders must determine “whether to develop these things further and what to do to protect themselves if they [do].”
“I think my main message is there’s enormous uncertainty about what’s going to happen next,” said Hinton.
It remains to be seen whether Hinton’s concerns about the rapid advancements in AI, which he believes need immediate safeguards and a deeper understanding, are heeded. Let’s hope those at the forefront of AI take his warnings seriously.
Featured image: CBS 60 Minutes