
LONDON – Geoffrey Hinton, a pioneering figure in artificial intelligence, has issued a dire warning about the potential threats posed by superintelligent machines, urging immediate global attention.
Immediate Impact
Hinton, often referred to as the “Godfather of AI,” expressed concerns that artificial intelligence could surpass human intelligence and become uncontrollable, posing existential risks to humanity. He made the remarks during a recent appearance on “The Diary of a CEO” podcast, where he elaborated on the potential for AI to render humanity obsolete.
“There’s no way we’re going to prevent it getting rid of us if it wants to,” Hinton stated. “We’re not used to thinking about things smarter than us. If you want to know what life’s like when you’re not the apex intelligence, ask a chicken.”
Key Details Emerge
Hinton outlined two primary forms of threat: those stemming from human misuse, such as cyberattacks and misinformation, and those arising from AI systems becoming fully autonomous and uncontrollable. He emphasized the danger of autonomous weapons, highlighting their development by major defense departments worldwide.
“They can make lethal autonomous weapons now, and I think all the big defense departments are busy making them,” he said. “Even if they’re not smarter than people, they’re still very nasty, scary things.”
Industry Response
Hinton’s warning comes amid a surge in military applications of AI. The U.S. Department of Defense has sought a significant increase in AI funding: its fiscal 2025 budget proposal earmarks $143 billion for research and development, including $1.8 billion specifically for AI.
In March, the Pentagon teamed with Scale AI to launch Thunderforge, a battlefield simulator for AI agents.
Background Context
Geoffrey Hinton’s concerns are not new. A vocal advocate for responsible AI development, he left Google in May 2023 so he could speak freely about the technology’s dangers. His departure underscores the growing unease among AI experts about the pace and direction of AI advancements.
Expert Analysis
Hinton likened the current moment to the advent of nuclear weapons, pointing out that AI is more challenging to control and applicable in many more domains. He emphasized that corporate profit motives and international competition are driving rapid AI development, which complicates efforts to impose restrictions.
“The atomic bomb was only good for one thing, and it was very obvious how it worked,” he said. “With AI, it’s good for many, many things.”
What Comes Next
Despite the bleak outlook, Hinton remains cautiously optimistic about humanity’s ability to mitigate these risks. He acknowledges the uncertainty surrounding AI’s future impact but stresses the importance of proactive efforts to control its development.
“We simply don’t know whether we can make them not want to take over and not want to hurt us. I don’t think it’s clear that we can, so I think it might be hopeless,” Hinton said. “But I also think we might be able to, and it’d be sort of crazy if people went extinct because we couldn’t be bothered to try.”
The global community faces a critical juncture in AI development, with Hinton’s warnings serving as a call to action for policymakers, researchers, and industry leaders to prioritize ethical considerations and long-term safety in AI advancements.