4 July, 2025
OpenAI CEO Declares Humanity Has Crossed the AI Superintelligence Event Horizon

Humanity may have already entered the early stages of the singularity, the hypothesized point at which artificial intelligence surpasses human intellect, according to OpenAI CEO Sam Altman. In a blog post published Tuesday, Altman argued that we have crossed a critical inflection point, an "event horizon," marking the dawn of a new era of digital superintelligence.

“We are past the event horizon; the takeoff has started,” Altman wrote. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.” His statement comes amid growing concerns from leading AI developers about the potential for artificial general intelligence to displace workers and disrupt global economies, potentially outpacing the ability of governments and institutions to respond effectively.

Understanding the Singularity and Event Horizon

The singularity is a theoretical point at which artificial intelligence surpasses human intelligence, leading to rapid, unpredictable technological growth and potentially profound changes in society. An event horizon, in this context, represents a point of no return, beyond which the trajectory of AI development cannot be reversed.

Altman argued that we are entering a "gentle singularity": a gradual, manageable transition toward powerful digital superintelligence rather than a sudden, wrenching change. The takeoff has begun, he noted, but so far it remains comprehensible and beneficial.

Evidence of AI’s Growing Influence

As evidence of AI’s expanding role, Altman highlighted the surge in ChatGPT’s popularity since its public launch in 2022. “Hundreds of millions of people rely on it every day and for increasingly important tasks,” he said. The numbers support his claim. By May 2025, ChatGPT reportedly had 800 million weekly active users, despite ongoing legal battles with authors and media outlets, as well as calls for pauses on AI development.

“2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same,” Altman stated. “2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can perform tasks in the real world.”

Mitigating Risks and Ensuring Alignment

Altman emphasized that even slight improvements in AI technology could deliver substantial benefits. However, he warned that a small misalignment, scaled across hundreds of millions of users, could have serious consequences. To address these potential misalignments, he suggested several measures:

  • Ensure AI systems act in line with humanity’s long-term goals, not just short-term impulses.
  • Avoid concentrated control by any one person, company, or country.
  • Initiate global discussions on the values and limits that should guide the development of powerful AI.

Altman stressed that the next five years are critical for AI development, predicting significant advancements by 2030. “Already, we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it,” he noted, highlighting how quickly people adapt to AI advancements.

The Future of AI and Society

As the world anticipates the rise of artificial general intelligence and the singularity, Altman believes the most astonishing breakthroughs will not feel like revolutions—they will feel ordinary and become the baseline expectations for AI technology.

“This is how the singularity goes: wonders become routine, and then table stakes,” he said.

Altman’s insights suggest a future where AI seamlessly integrates into everyday life, transforming industries and societal structures in ways that are both profound and subtle. As these developments unfold, the global community faces the challenge of navigating this new landscape responsibly and equitably.