Eric Schmidt on the Dual Edge of AI: Risks, Rewards & Responsibility

In an age where artificial intelligence (AI) is reshaping industries and societies at lightning speed, former Google CEO Eric Schmidt is sounding a clarion call for thoughtful engagement with its promises and perils. In a conversation with Scott Galloway on Prof G Conversations, Schmidt discussed his new book, Genesis: Artificial Intelligence, Hope, and the Human Spirit, co-authored with the late Henry Kissinger. Throughout the conversation, Schmidt underscored both the boundless potential and the stark risks posed by AI.

“The world is not ready for this,” Schmidt warned, reflecting on the rapid evolution of AI. His concern centers on AI’s far-reaching implications, from trust and military power to deception and economic inequities. “The decisions should not be left to people like myself,” he admitted, advocating for broader societal input in shaping AI’s trajectory.

One of Schmidt’s principal fears is AI’s potential misuse by bad actors.

“Evil exists, and these systems can be used to harm large numbers of people,” he said. His most chilling scenario? AI-generated biological pathogens or zero-day cyberattacks that could wreak havoc on an unimaginable scale. “We are quite concerned that dictators and rogue states will exploit this technology to aggregate power or cause destruction,” Schmidt explained.

But it’s not just geopolitics that worries him. Schmidt also sees AI amplifying societal fractures closer to home, particularly loneliness among young men.

“Parents are going to have to be more involved,” he advised, cautioning against the rise of AI companions that may stifle real-world relationships and fuel extremism. “Imagine an AI girlfriend that captures your mind completely — visually, emotionally, and intellectually. That kind of obsession could have devastating psychological consequences,” Schmidt lamented.

Balancing AI’s vast potential with its dangers will require more than just vigilance. Schmidt advocates for proactive regulation and global collaboration.

“We need to have some conversations about what is appropriate at what age, and we’re going to have to change some of the laws, like Section 230,” Schmidt argued. Liability for harm caused by AI, in his view, must be enforceable. “Every new invention has created harm. Think about cars — they were dangerous until we implemented safety standards. AI needs similar guardrails.”

Schmidt also drew parallels to nuclear arms control, suggesting that international treaties could mitigate AI’s weaponization.

“We need agreements that prevent fully autonomous weapons and ensure that any use of AI in conflict is controlled by a human being,” he asserted. But he acknowledges the challenges: “It took 15 years after Hiroshima for the world to reach agreements on nuclear limitations. We can’t afford that kind of timeline with AI.”

For Schmidt, regulation isn’t about stifling innovation but safeguarding humanity. He pointed to AI’s potential to revolutionize medicine, climate solutions, and education as evidence of its positive impact.

“Enormous improvements are coming — a universal doctor, a universal educator, better vehicles. These are fantastic,” he said, but with a clear caveat: “We must prevent their misuse.”

Ultimately, Schmidt warned that if humanity fails to act, the consequences could be irreversible.

“At some point, we may be the dogs to the powerful AI, as opposed to us telling it what to do,” he said. His message is urgent: the future of AI is ours to shape — but only if we act wisely and collectively.

Featured image credit: Hecker / MSC (https://securityconference.org/impressum/), via Wikipedia