Geoffrey Hinton warns AI development is like a nuclear arms race: driven by shareholder pressure, lacking ethical guardrails, and at risk of surpassing human control

Experts allege that the biggest driving force behind AI development today is shareholder pressure. (Image: Pixabay)
Geoffrey Hinton, widely regarded as the “father of artificial intelligence”, has issued a stark warning against the uncontrolled race in AI development, comparing it to an arms race that could end in catastrophic consequences. Speaking to Fortune magazine, Hinton accused leading tech giants of prioritising shareholder profits and competitive advantage over human safety and ethical responsibility.
Hinton, who resigned from Google last year to speak freely about the dangers of AI, told Fortune that the industry’s current trajectory is “not about humanity” but about market domination. Companies are focused on building more powerful models faster than their rivals, he said, warning that the real threat is not limited to misinformation or job losses. The graver danger, he argued, is the possibility of AI systems surpassing human control. “We’re not ready,” he cautioned, “and we’re not even trying to be.”
Hinton alleged that the biggest driving force behind AI development today is shareholder pressure. He described the current competition as a profit-fuelled sprint, in which companies are rushing to outdo one another in creating more powerful AI models with little consideration for the societal risks. “This is not progress for humanity, this is progress for corporate interests,” he remarked.
He further stressed that without safeguards and ethical alignment, the release of superintelligent systems could be as destructive as “a nuclear arms race”.
According to Hinton, one of the most glaring weaknesses of today’s AI strategy is the absence of an ethical framework. While billions are being poured into model development and data monetisation, very few companies are willing to address the existential dangers posed by artificial general intelligence. He argued that just as the world established treaties and safeguards around nuclear proliferation, AI too requires global oversight, international agreements, and shared ethical standards to prevent disaster.
“This challenge cannot be left to individual companies chasing profits,” Hinton said. “It demands international cooperation.”
Hinton urged governments, regulators, and researchers to slow the pace of AI development, insisting that the technology has far outstripped society’s regulatory capacity. He appealed for a shift towards safety, transparency, and long-term thinking, rather than reckless acceleration, warning that the day AI moves beyond human control, the consequences will be severe.
Hinton’s warning comes at a time when others in the AI industry are also raising red flags. Microsoft AI CEO Mustafa Suleyman recently cautioned about a new psychological risk he calls “AI psychosis”. As reported by Business Insider, Suleyman described it as a condition where individuals lose touch with reality after excessive interactions with AI systems, blurring the line between human and machine. He called it a “real and emerging risk” that could especially affect vulnerable people deeply immersed in conversations with AI agents.
With multiple voices from within the AI community sounding alarms, the debate over whether to pause, regulate, or accelerate AI development is only intensifying.