
The Dawn of Superintelligence? Former OpenAI Superalignment Team Member Says Race to AGI Has Begun



Insider Brief

  • The conversation about superintelligence and Artificial General Intelligence has shifted almost overnight from science fiction fans to real-world policymakers and entrepreneurs.
  • Leopold Aschenbrenner, a former member of the Superalignment team at OpenAI, writes that the AGI race has officially begun.
  • Aschenbrenner expects managing this transition will be extremely tense and complex.

“More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.” — Woody Allen

I’m only kidding about this quote, but it does reflect the growing angst about the expanding power of artificial intelligence. Fortunately, we have some guides who can help us select a third road that Woody may not have seen.

In fact, just a few years ago, conversations about superintelligent Artificial General Intelligence (AGI) and trillion-dollar compute clusters were reserved for science fiction and Comic-Con attendees, along with a few Ray Kurzweil fans. These days, the conversations — even among the tech elite living in future-forward cities like San Francisco — have shifted dramatically, according to a former member of the Superalignment team at OpenAI. It’s no longer a sci-fi-only club.

Leopold Aschenbrenner writes in Situational Awareness: The Decade Ahead that industrial power is being marshaled to build machines that can think and reason far beyond human capabilities.

In other words, the artificial general intelligence (AGI) race has officially begun, writes Aschenbrenner. By 2025 or 2026, experts expect these machines to surpass college graduates in intelligence. By the end of the decade, they could be smarter than any human.

Put simply, Aschenbrenner suggests we’re on the brink of achieving true superintelligence.

He writes in the piece’s introduction: “Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.”

NVIDIA analysts might believe that 2024 is near the peak of AI advancements, but the truth is far more profound, Aschenbrenner says. Mainstream pundits see only hype and business-as-usual, not realizing the seismic shifts underway.

“Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them,” writes Aschenbrenner.

While a thorough reading of Situational Awareness is advised, we’ll try to break down the main points and help you follow the roadmap.

From GPT-4 to AGI

Aschenbrenner’s projections for AGI by 2027 seem strikingly plausible when tracing the trendlines in compute and algorithmic efficiencies.

“GPT-2 to GPT-4 took us from preschooler to smart high-schooler abilities in just four years,” he notes. If this trend continues, we should expect another leap in intelligence by 2027.
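Aschenbrenner’s argument rests on counting orders of magnitude (“OOMs”) of effective compute, combining raw compute scaling with algorithmic efficiency gains. A minimal sketch of that extrapolation logic — with an assumed, illustrative rate of roughly one OOM per year, not a figure quoted from the essay — might look like this:

```python
# Illustrative sketch of the "counting the OOMs" trendline argument.
# The ooms_per_year rate here is a hypothetical assumption for
# illustration, not a number taken verbatim from Situational Awareness.

def effective_compute_ooms(years_elapsed, ooms_per_year=1.0):
    """Orders of magnitude (powers of 10) of effective compute gained,
    assuming a steady combined trend of physical compute scaling plus
    algorithmic efficiency improvements."""
    return years_elapsed * ooms_per_year

# GPT-2 (2019) to GPT-4 (2023): roughly four years of trend.
gpt2_to_gpt4 = effective_compute_ooms(4)
# Extrapolating the same trend another four years, to 2027.
gpt4_to_2027 = effective_compute_ooms(4)

print(f"GPT-2 -> GPT-4: ~{gpt2_to_gpt4:.0f} OOMs "
      f"(~{10 ** gpt2_to_gpt4:,.0f}x effective compute)")
print(f"GPT-4 -> 2027, if the trend holds: ~{gpt4_to_2027:.0f} more OOMs")
```

The point of the sketch is only that if the same multiplier that separated GPT-2 from GPT-4 is applied again, another qualitative leap by 2027 follows from the arithmetic, not from any new assumption.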

GPT-4’s capabilities shocked many: it could write code and essays, reason through difficult math problems, and ace college exams. This progress has been relentless. For those of you who remember rudimentary visual identification tasks as an ironclad way to keep bots off websites, the advance is startling. From barely identifying simple images of cats and dogs a decade ago to now saturating all known benchmarks, the progress is unmistakable.

The models just want to learn, and as they are scaled up, they learn more.

Aschenbrenner writes, “It is strikingly plausible that by 2027, models will be able to do the work of an AI researcher or engineer.”

AI systems acting as AI researchers sounds very close to the iterative path to an intelligence explosion.

The Intelligence Explosion

Indeed, according to Aschenbrenner, the leap from AGI to superintelligence is not just a possibility but an expected progression.

For some of you, it might be important to recognize the difference between AGI and superintelligence, although they are — as we’ll find out — closely related.

AGI refers, broadly, to machines that can understand, learn, and apply knowledge across a broad range of tasks at a level comparable to human intelligence. For example, Aschenbrenner describes AGI as machines that “can think and reason” and are expected to outpace college graduates by 2025/26.

Superintelligence refers to AI systems that possess intelligence far surpassing that of the brightest human minds across virtually all fields, including scientific creativity, general wisdom, and social skills. According to Aschenbrenner, superintelligence is the next step beyond AGI.

As we move into the superintelligence era, hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into just one year. This rapid acceleration would lead to vastly superhuman AI systems.
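The “decade in a year” claim is, at bottom, a multiplication: many automated researchers, each working faster than a human. A back-of-the-envelope sketch — where every parameter value (workforce size, speed ratio) is a hypothetical assumption for illustration, not a figure from the essay — shows how aggressive even the conservative version is:

```python
# Naive back-of-the-envelope sketch of the "compressed decade" claim.
# All parameter values are hypothetical assumptions for illustration.

def naive_speedup(num_agis, human_researchers=10_000, speed_ratio=10):
    """Crude multiplier for how much faster algorithmic progress might
    run with num_agis automated researchers, relative to an assumed
    human field of human_researchers, each AGI working speed_ratio
    times faster. Ignores diminishing returns, coordination overhead,
    and compute bottlenecks entirely."""
    return num_agis / human_researchers * speed_ratio

# Hundreds of millions of AGIs vs. an assumed ~10,000-person field:
print(f"Naive speedup: {naive_speedup(100_000_000):,.0f}x")
```

The naive product comes out at 100,000x, which is why the “decade per year” framing — a mere ~10x — already implies heavy discounting for bottlenecks like serial experiments and limited compute, and still yields vastly superhuman systems within a year.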

The implications of superintelligence are profound. Such power would dramatically reshape the economic and military landscape. It’s a race, and the stakes are incredibly high, Aschenbrenner writes, adding that the free world must prevail, as losing this race to authoritarian regimes would be disastrous.

The challenge has been accepted, and the industrial mobilization required for this AI future is already underway. Trillions of dollars are being invested in GPU, data center, and power buildouts. By the end of the decade, U.S. electricity production will have grown significantly to support this demand.

Locking down the labs is another crucial aspect of this race, according to Aschenbrenner. Current security measures at leading AI labs are inadequate, practically handing AGI secrets to the Chinese Communist Party on a silver platter. Securing these secrets against state-actor threats will require immense effort, and we are not currently on track to achieve this.

The Challenge of Superalignment

One of the most pressing technical challenges is superalignment. Controlling AI systems that are much smarter than humans is an unsolved problem. While it is a solvable issue, the rapid intelligence explosion could easily lead to catastrophic failures. Managing this will be extremely tense and complex.

Aschenbrenner writes: “But I also want to tell you why I’m worried. Most of all, ensuring alignment doesn’t go awry will require extreme competence in managing the intelligence explosion. If we do rapidly transition from AGI to superintelligence, we will face a situation where, in less than a year, we will go from recognizable human-level systems for which descendants of current alignment techniques will mostly work fine, to much more alien, vastly superhuman systems that pose a qualitatively different, fundamentally novel technical alignment problem; at the same time, going from systems where failure is low-stakes to extremely powerful systems where failure could be catastrophic; all while most of the world is probably going kind of crazy. It makes me pretty nervous.”

Obviously — or not — superintelligence will provide a decisive economic and military advantage. China is still very much in the game, Aschenbrenner warns, and the survival of the free world depends on maintaining our preeminence over authoritarian powers.

He writes: “If and when the CCP wakes up to AGI, we should expect extraordinary efforts on the part of the CCP to compete. And I think there’s a pretty clear path for China to be in the game: outbuild the US and steal the algorithms.”

Because energy is a critical factor in building AGI, Aschenbrenner suggests China has a distinct advantage. He writes that in the last decade, China has added roughly as much new electrical capacity as the entire existing U.S. grid, while U.S. capacity has remained flat.

The race to AGI is not just about technological superiority but about the very survival of our values and way of life.

As the race to AGI intensifies, the national security establishment will become involved. The U.S. government will awaken from its slumber, and by 2027 or 2028, we can expect some form of government AGI project.

Aschenbrenner believes that no startup can handle the complexities of superintelligence alone.

Aschenbrenner concludes with an important question: “What if we’re right?” If the predictions about AI advancements are accurate, we are in for a transformative decade. The potential for AI to reshape our world is immense, and the journey from GPT-4 to superintelligence is just beginning.

To return to the original premise — that until recently superintelligence was a conversation point only for sci-fi fans at a Comic-Con convention — Aschenbrenner challenges readers not to discount this scenario.

“At this point, you may think that I and all the other SF-folk are totally crazy,” he writes. “But consider, just for a moment: what if they’re right? These are the people who invented and built this technology; they think AGI will be developed this decade; and, though there’s a fairly wide spectrum, many of them take very seriously the possibility that the road to superintelligence will play out as I’ve described in this series.”

Each essay is meant as a standalone article, but you may want to read the entire piece. You can read the collection here.