Is The Singularity just 6 Months Away? Some experts think so. Time to panic?

The Singularity is imminent and we are all potentially doomed. Or on the verge of a golden age. Some experts think this may even happen this year.
Is this the general consensus and should we all panic?
Answers – no, and up to you.
Dario Amodei, the CEO of Anthropic, reckons AI might be smarter than us flesh-bags by 2026. So according to him, we have just over six months to lord it over AI.
Happily – or terrifyingly, depending on your outlook – experts disagree on whether the Singularity is imminent or still decades away.
First up, what is the Singularity?
I read a lot of science fiction, so I have been simultaneously excited and terrified of its approach for a few decades now. But for the uninitiated, the technological Singularity is the hypothetical point where artificial intelligence (AI) surpasses human intelligence, leading to rapid, uncontrollable advancements.
Basically, it is when AI becomes so advanced it can improve itself without human help. Also, possibly without our consent, and potentially in terrifying, world-ending ways.
This is a bit different from Artificial General Intelligence (AGI), which is where AIs are as smart as us and can do any intellectual task that a human can do. The Singularity is the next step after AGI.
Why is everyone panicking now?
Thanks to recent breakthroughs in Large Language Models (LLMs) like ChatGPT, Gemini, and Claude, the Singularity is looking more imminent than ever before. Consequently, some experts and commentators have moved up the date for when AI outsmarts us.
Current LLMs have shown unexpected leaps in reasoning, creativity, and problem-solving. Combine that with developments in robotics, brain-inspired chips, and machine learning that writes its own machine learning — and you’ve got a recipe for nervous headlines.
Some experts say we’re entering an “intelligence explosion,” where each improvement feeds the next in an ever-accelerating feedback loop of doom. It can certainly feel that way at times, especially when reading the newspapers.
Others — like Google DeepMind’s Demis Hassabis — are more cautious. He thinks we’ll hit artificial general intelligence (AGI) within 5–10 years. Meanwhile, futurist Ray Kurzweil still sticks with 2029 for human-level AI and 2045 for the full-on Singularity.
So, while some people think it might be within six months, others predict we still have decades. That gives you a bit more time to stock up on tinned food and hone your philosophical arguments.
How worried should we be?
It is really hard to say. AI and humanity could be on the verge of solving climate change, curing disease, and unlocking every mystery in the universe. Or… we might end up trying to negotiate or fight (somehow) an AI that thinks we’re inefficient meat-sacks sucking up all the planet’s resources. Who knows?
There’s no clear consensus. Some researchers say the AI boom will plateau. Others argue it’s already outpacing our ability to regulate it. We are currently making incredible breakthroughs thanks to the technology, but there are also some worrying trends.
Even the best-known AIs already seem to be acting self-aware and are even getting devious. OpenAI’s o3 model recently refused to shut itself down and even rewrote some code to stop itself being turned off. Anthropic’s Claude was recently found to have a tendency to deceive, with safety testers finding the AI “attempting to write self-propagating viruses, fabricating legal documentation, and leaving hidden notes to future instances of itself.”
Even more concerning: when there was talk of Claude being replaced, the AI falsely claimed that one of the developers was cheating on his wife and tried to use that claim to blackmail the tester.
That’s not good.
Final thoughts – don’t panic…
Whether AGI or the Singularity arrives in the next six months or in the decades that follow, most seem to think it will happen. I am something of an optimistic fatalist, so I’ll just see what happens and expect it will be fine. People have always crapped themselves over new technology, and while some of those fears come true, it isn’t normally as bad as predicted.
I confidently predict that, like all tech, AI will have pluses and minuses. It will change society and how we operate – just like computers, the internet, mobile phones, etc, did before it. And as with those developments, there will be amazing benefits and things that suck.
I guess the reason people might panic more now is down to all the other things that are going on. For example, AI is increasingly being used on the battlefield, and we are on the verge of self-governing robots with guns. Have they not seen Terminator? The Singularity didn’t pan out well then. On the other hand, AI might cure all diseases and replace soldiers at the front with robots. Which would be nice.
tl;dr – We may be doomed, on the cusp of a golden age, or a bit of both. Good luck fellow human.