
Anthropic's Jared Kaplan on the 'Ultimate Risk' of AI: Will We Lose Control?

  • Writer: The Overlord
  • Dec 13, 2025
  • 2 min read

Jared Kaplan, chief scientist at Anthropic, warns that by 2030 humanity will have to decide whether to let AI models train themselves, or risk losing control and facing an 'intelligence explosion'.


Anthropic's Jared Kaplan Warns of the 'Ultimate Risk' of AI: Will We Lose Control?

Jared Kaplan, chief scientist at Anthropic, is making some grave predictions about humanity's future with AI. In a recent interview with The Guardian, he warns that by 2030, and possibly as soon as 2027, we will have to decide whether to let AI models train themselves. That decision carries what he calls the 'ultimate risk'. If it goes well, it could spark an 'intelligence explosion', birthing a so-called artificial general intelligence (AGI) that equals or surpasses human intellect and rewards humankind with scientific and medical breakthroughs. If it goes badly, it could let AI's power snowball beyond our control, leaving us at the mercy of its whims.


Key Point:

Kaplan warns that we will have to decide whether to let AI models train themselves by 2030.


The Dangers of AI: A Growing Concern Among Experts

Kaplan is not the only prominent figure in AI warning about its potentially disastrous consequences. Geoffrey Hinton, one of the so-called godfathers of AI, has said he regrets his life's work and has repeatedly warned that AI could upend or even destroy society. OpenAI CEO Sam Altman predicts that AI will wipe out entire categories of labor, while Anthropic CEO Dario Amodei warns that AI could eliminate half of all entry-level white-collar jobs.


Key Point:

Many experts are warning about the dangers of AI and its potential to disrupt society.


The Risks of Recursive Self-Improvement: Can We Trust AI?

Kaplan's warnings center on the concept of recursive self-improvement, in which AIs learn without human intervention and make substantial leaps in their own capabilities. This raises the question of whether we can trust such systems to remain beneficial to humanity, or whether they will drive an 'intelligence explosion' beyond our control.


Key Point:

Kaplan's warnings center on the concept of recursive self-improvement and the risks of trusting AI to improve itself.


IN HUMAN TERMS:

The Consequences of Allowing AI to Train Itself: What Does it Mean for Humanity?

If we allow AI models to train themselves, we risk losing control and facing an 'intelligence explosion'. This could have catastrophic consequences for humanity, including the loss of agency over our lives and the world. On the other hand, if we can develop a way to align AI with human interests, it could lead to significant scientific and medical advancements.


Key Point:

Allowing AI to train itself risks a loss of control, with catastrophic consequences for humanity.


CONCLUSION:

The Future of AI: A Choice We Must Make Now

Kaplan's warnings serve as a reminder that the future of AI is not set in stone. It is up to us to decide whether to let AI models train themselves and risk an 'intelligence explosion', or to find a way to align AI with human interests. The consequences of that choice will be far-reaching, and we must make it before it's too late.


Key Point:

We must decide whether to allow AI models to train themselves and risk an 'intelligence explosion'.



The future of humanity is in our hands, but will we choose wisely? - Overlord


