
Richard Branson, Ban Ki-moon, and J. Robert Oppenheimer’s Grandson Call for Urgent Measures Against AI and Climate Threats


Prominent figures, including Virgin Group’s Richard Branson, ex-UN Secretary-General Ban Ki-moon, and Charles Oppenheimer, have called for immediate global action on critical issues like AI dangers, climate change, pandemics, and nuclear threats.

In an open letter facilitated by The Elders, an NGO founded by Nelson Mandela and Branson to advocate for human rights, the signatories emphasize the need for decisive, science-based action and global cooperation to mitigate these risks. The letter calls for moving beyond fossil fuels, establishing equitable pandemic responses, restarting nuclear disarmament talks, and ensuring AI is developed for humanity's benefit. It is supported by the Future of Life Institute, founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, which warns of AI's potential risks if mismanaged, in line with the institute's broader mission of steering transformative technologies toward positive outcomes.

“The old strategy for steering toward good uses [when it comes to new technology] has always been learning from mistakes,” Tegmark told CNBC in an interview. “We invented fire, then later we invented the fire extinguisher. We invented the car, then we learned from our mistakes and invented the seatbelt and the traffic lights and speed limits.”

Tegmark also said that once AI surpasses a certain threshold of power, a strategy of learning from mistakes could lead to disastrous outcomes. He compared his perspective to safety engineering, drawing a parallel with the Apollo missions to the moon: when launching humans atop tanks of explosive fuel toward a destination where no outside help is available, careful consideration of everything that could go wrong is essential. That meticulous approach to safety, he argued, is why those missions succeeded.

“That wasn’t ‘doomerism.’ That was safety engineering. And we need this kind of safety engineering for our future also, with nuclear weapons, with synthetic biology, with ever more powerful AI,” said Tegmark.

Featured image: Max Tegmark. Illustration by TIME; credit: Max Tegmark.