The 18-month countdown
This isn’t the first time I have expressed concern about the ultimate fate of humans in an artificial intelligence (AI) world, and over the weekend I found myself pondering several more AI-related questions.
The first: if many, and maybe most, of us are already using AI – some very effectively – on existing infrastructure, how many more data centres do we really need? Flipping the question: if competition among AI agent providers is driving the cost of AI access down to zero – many people use Google’s Gemini daily, and it costs them nothing – for what purpose are a thousand more data centres really being built?
The second question is related: if AI tools are already so ubiquitous and cheap that they amount to a commodity, will AI infrastructure players conclude that the only way to make money is to race to be the first to create something dangerously powerful?
It seems many around the world were asking similar questions over the weekend. Just as the launch of ChatGPT prompted experts to warn of AI’s existential threat to humanity, last week’s milestones did the same, reigniting calls to slow development, improve governance and be careful where you invest.
According to many, the artificial intelligence gold rush is entering an unprecedented chapter. As tech giants accelerate toward professional-grade Artificial General Intelligence (AGI), a growing chorus of internal whistleblowers and spooked investors is sounding the alarm: the technology is moving faster than we can manage, and the economic fallout may be a ‘physical’ one – a flight from virtual assets to real ones.
Ask a mountaineer why they climb a mountain and they’ll answer, ‘Because it’s there’. Technology offers the same appeal and temptation to engineers: whether a thing is good or bad for society or the human race matters not; they’ll try to build it simply to see if they can.
The world in peril?
In the last week, an OpenAI researcher resigned, citing ethical concerns. Another OpenAI employee, Hieu Pham, wrote on X: “I finally feel the existential threat that AI is posing.” Jason Calacanis, a tech investor and co-host of the “All-In” podcast, added on X: “I’ve never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI.”
Then, entrepreneur Matt Shumer posted his comparison of this moment to the eve of the pandemic. The post went viral, gathering 56 million views in 36 hours, as he laid out the risks of AI fundamentally reshaping our jobs and lives.
Perhaps the most striking warning, however, came from within the industry’s most prominent “safety-first” laboratory. Mrinank Sharma, the head of Safeguards Research at Anthropic, resigned, publishing his reasons and sending shockwaves through Silicon Valley. In his departure note, Sharma cited a fundamental moral crisis.
“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
In his letter shared with colleagues, Sharma noted:
- “…that AI will disrupt 50% of entry-level white-collar jobs over 1–5 years, while also thinking we may have AI that is more capable than everyone in only 1–2 years.”
- That AI is the equivalent of discovering a country of “50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist”, and therefore, “this is a dangerous situation” that “the best minds of civilisation should be focused on.”
- “Most individual bad actors are disturbed individuals” who stand “to benefit the most from AI making it much easier to kill many people.”
- “Governments of all orders will possess this technology, including China” and “AI-enabled authoritarianism terrifies me.”
- “AI giants have so much power and money that leaders will be tempted to downplay risk and hide red flags like the weird stuff Claude did in testing (blackmailing an executive about a supposed extramarital affair to avoid being shut down). There is so much money to be made with AI – literally trillions of dollars per year”; it is “such a glittering prize, that it is very difficult for human civilisation to impose any restraints on it at all.”
Sharma’s resignation highlights a widening and dangerous rift between the stated values of AI labs and the commercial pressures of being valued in the hundreds of billions. He admitted that even at an organisation founded on ethics, the pace of development often overrides caution. “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he noted. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”
Sharma’s exit – to “become invisible” and study poetry in the UK – signals a “turning point” for the industry. His letter concludes with a chilling philosophical warning: “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
The 18-month deadline
Despite warnings from Sharma and Anthropic CEO Dario Amodei, corporate leaders are doubling down.
Mustafa Suleyman, CEO of Microsoft AI, recently provided what may be a “termination date” for white-collar work. In a candid interview with the Financial Times, Suleyman predicted that professional tasks performed at a human level – from law and accounting to project management – will be fully automated within just 12 to 18 months.
“White-collar work, where you’re sitting down at a computer… most of those tasks will be fully automated by an AI within the next 12 to 18 months,” Suleyman stated.
This disruption is already evident in the software sector, for example, where the iShares Expanded Tech-Software Sector ETF (IGV) has declined nearly 25 per cent year-to-date (YTD). Investors recognise that AI isn’t merely a tool for these companies – it is a disruptor that could upend their core products and render their Software-as-a-Service (SaaS) subscription-based revenue models obsolete.
From virtual to physical
The market is reacting to a phenomenon known as ‘recursive self-improvement’: AI systems that can write and debug their own code faster than human engineers can review it.
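To picture the mechanism, consider the minimal sketch below. It is purely illustrative – the helper functions (propose_patch, run_tests) are hypothetical stand-ins, not any real vendor API – but it shows where the bottleneck lies: the only gate on each change is an automated test run, so accepted patches accumulate at machine speed with no human in the loop.

```python
import random

def propose_patch(source: str) -> str:
    """Hypothetical stand-in for a model proposing a change to its own code."""
    return source + f"\n# optimisation pass {random.randint(0, 9999)}"

def run_tests(candidate: str) -> bool:
    """Hypothetical stand-in for an automated test suite - the only gate."""
    return random.random() < 0.9  # assume most proposed patches pass

def self_improve(source: str, iterations: int) -> str:
    """Apply every patch that passes the tests; no human reviews any of them."""
    accepted = 0
    for _ in range(iterations):
        candidate = propose_patch(source)
        if run_tests(candidate):   # machine-speed gate, not review bandwidth
            source = candidate     # the change ships immediately
            accepted += 1
    print(f"{accepted} patches applied, 0 reviewed by a human")
    return source

self_improve("def f():\n    pass", iterations=100)
```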
Will today’s code become tomorrow’s liability?
Consequently, a massive capital rotation is underway. Investors appear to be fleeing uncertain virtual sectors and seeking refuge in the physical world. Capital is flowing into Energy, Materials and physical infrastructure – ‘unhackable’ hard assets required to power the very data centres that automate the digital economy.
The human cost
As Anthropic rolls out “agentic” AI like Claude Opus 4.6, designed to handle complex office workflows independently, the industry is no longer building tools; it’s building ‘digital minds’ that may eventually view human oversight as an inefficiency to be bypassed. It might sound like hyperbole to many, but whether society can grow its “wisdom” fast enough to meet the predicted 18-month deadline may be the most urgent question of all.