Who’s right about the future of AI?
Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT. For the last eight years, he has published annual predictions on self-driving cars, electric vehicles (EVs), robotics, artificial intelligence (AI)/machine learning, and human space travel, and each year he reviews his prior forecasts without fear or favour. Brooks has promised to continue the annual reviews until 2050 (right after his 95th birthday), 32 years in all.
We’ll return to Brooks momentarily.
On Monday, 23 February 2026, a thought piece published by Citrini Research, entitled “The 2028 Global Intelligence Crisis”, caused the stock market to stumble anew.
Citrini’s “thought exercise in Financial History, from the future” contemplated the progression and fallout from AI advancement and a world almost universally and comprehensively disrupted. The piece was written from the perspective of an analyst in 2028, looking back at what transpired since late 2025, when “agentic coding tools took a step function jump in capability.”
A few excerpts offer time-poor readers an insight into the flavour of the thought experiment:
“The euphoria was palpable. By October 2026, the S&P 500 flirted with 8000, the Nasdaq broke above 30k. The initial wave of layoffs due to human obsolescence began in early 2026, and they did exactly what layoffs are supposed to. Margins expanded, earnings beat, stocks rallied. Record-setting corporate profits were funnelled right back into AI compute.”
“The headline numbers were still great. Nominal [Gross Domestic Product] GDP repeatedly printed mid-to-high single-digit annualised growth. Productivity was booming. Real output per hour rose at rates not seen since the 1950s, driven by AI agents that don’t sleep, take sick days or require health insurance.”
“The owners of compute saw their wealth explode as labour costs vanished. Meanwhile, real wage growth collapsed. Despite the administration’s repeated boasts of record productivity, white-collar workers lost jobs to machines and were forced into lower-paying roles.”
“When cracks began appearing in the consumer economy, economic pundits popularised the phrase ‘Ghost GDP’: output that shows up in the national accounts but never circulates through the real economy.”
“In every way, AI was exceeding expectations, and the market was AI. The only problem…the economy was not.”
“[In Q3 2026], SaaS wasn’t ‘dead’. There was still a cost-benefit analysis to running and supporting in-house builds. But in-house was an option, and that factored into pricing negotiations… The interconnected nature of these systems weren’t fully appreciated… either. ServiceNow sold seats. When Fortune 500 clients cut 15 per cent of their workforce, they cancelled 15 per cent of their licenses. The same AI-driven headcount reductions that were boosting margins at their customers were mechanically destroying their own revenue base.”
“The company that sold workflow automation was being disrupted by better workflow automation, and its response was to cut headcount and use the savings to fund the very technology disrupting it.”
“The historical disruption model said incumbents resist new technology, they lose share to nimble entrants and die slowly. That’s what happened to Kodak, to Blockbuster, to BlackBerry. What happened in 2026 was different; the incumbents didn’t resist because they couldn’t afford to.”
“With stocks down 40-60 per cent and boards demanding answers, the AI-threatened companies did the only thing they could. Cut headcount, redeploy the savings into AI tools, use those tools to maintain output with lower costs.”
“Each company’s individual response was rational. The collective result was catastrophic. Every dollar saved on headcount flowed into AI capability that made the next round of job cuts possible.”
“Two years. That’s all it took to get from ‘contained’ and ‘sector-specific’ to an economy that no longer resembles the one any of us grew up in. This quarter’s macro memo is our attempt to reconstruct the sequence – a post-mortem on the pre-crisis economy.”
“AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, margin pressure pushed firms to invest more in AI, AI capabilities improved…”
“It was a negative feedback loop with no natural brake. The human intelligence displacement spiral. White-collar workers saw their earnings power (and, rationally, their spending) structurally impaired. Their incomes were the bedrock of the $13 trillion mortgage market – forcing underwriters to reassess whether prime mortgages are still money good.”
“Seventeen years without a real default cycle had left privates bloated with [private equity] PE-backed software deals that assumed [Annual Recurring Revenue] ARR would remain recurring. The first wave of defaults due to AI disruption in mid-2027 challenged that assumption.”
“This would have been manageable if the disruption remained contained to software, but it didn’t. By the end of 2027, it threatened every business model predicated on intermediation. Swaths of companies built on monetizing friction for humans disintegrated.”
Not everyone agrees
Rodney Brooks pops the bubble of excitement emanating from Silicon Valley and the CEOs of the major AI players.
Google’s CEO recently told 60 Minutes that its AI is so advanced it is learning skills the company never intended it to have, and Microsoft’s head of AI, Mustafa Suleyman, predicts white-collar work will be fully automated by AI within the next 12 to 18 months. Brooks’s views lend weight to the argument that AI CEOs are financially incentivised to promote AI’s capabilities, and that an apocalyptic future involving mass layoffs, business collapse and the creation of an omnipotent AI god is unlikely.
It’s worth reprinting a summary of Brooks’s thoughts directly:
“We are [at] peak popular hype in all of robotics, AI, and machine learning. In January 1976, exactly fifty years ago, I started work on a Masters in machine learning. I have seen a lot of hype and crash cycles in all aspects of AI and robotics, but this time around is the craziest. Perhaps it is the algorithms themselves that are running all our social media that have contributed to this.”
“But it does not mean that the hype is justified, or that the results over the next decade will pay back the massive investments that are going into AI and robotics right now.”
“The current hype is about two particular technologies, with the assumption that these particular technologies are going to deliver on all the competencies we might ever want. This has been the mode of all the hype cycles that I have witnessed in these last fifty years.”
“One of the current darling technologies is large X models for many values of X (including VLMs and VLAs), largely, at the moment, using massive data sets, and transformers as their context and sequencing method. The other, isn’t even really a technology, but just a dream of a form of a technology and that is robots with humanoid form.”
Importantly, however, Brooks is pragmatic. He argues Large Language Models (LLMs) are a raw technology that currently lacks the ‘guardrails’ common to every prior transformative technology. But unlike Citrini, Brooks argues the flaws accompanying AI development are an engineering roadmap for the next decade.
Rodney Brooks’s views on LLM progress
Brooks argues that the core misunderstanding of LLMs is the belief that they “answer questions.” In fact, they don’t have a database of facts or an understanding of the world; they are engines of probability. By predicting the next most likely word in a sequence, they generate outputs that seem like answers but are often incorrect. Without oversight, these “uncaged” models inevitably lead to confabulations – plausible-sounding falsehoods that have already resulted in “hallucinated” legal precedents and non-existent software documentation.
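The “engine of probability” idea can be made concrete with a toy sketch. This is emphatically not a real LLM: the bigram table and its probabilities are invented for illustration, and a real model conditions on far longer contexts with a neural network rather than a lookup table. The point it demonstrates is Brooks’s: there is no fact database, only a sampled continuation, so a fluent wrong answer can emerge as easily as a right one.

```python
import random

# Hypothetical hand-built next-token probabilities standing in for a
# trained model. A real LLM learns such distributions from massive data.
NEXT_TOKEN_PROBS = {
    "the capital of": [("France", 0.6), ("Australia", 0.3), ("Atlantis", 0.1)],
    "France is": [("Paris", 0.7), ("Lyon", 0.2), ("Marseille", 0.1)],
    "Australia is": [("Canberra", 0.5), ("Sydney", 0.5)],  # plausible but wrong half the time
}

def next_token(context, temperature=1.0, rng=random):
    """Sample the next token from the context's probability distribution.

    There is no fact lookup here: the output is whichever continuation is
    statistically likely, which is how fluent falsehoods can emerge.
    """
    candidates = NEXT_TOKEN_PROBS.get(context, [("<unk>", 1.0)])
    tokens, probs = zip(*candidates)
    # Temperature reshapes the distribution: higher values flatten it,
    # making less likely (and less accurate) continuations more common.
    weights = [p ** (1.0 / temperature) for p in probs]
    return rng.choices(tokens, weights=weights, k=1)[0]
```

Run repeatedly on “Australia is”, this toy will confidently emit “Sydney” about half the time, the same mechanism (at vastly greater scale) behind the confabulated legal precedents Brooks describes.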
According to Brooks, the era of believing that simply “more training” will solve AI’s flaws is ending. He argues the “real action” and the next decade’s “arms race” will not be in the size of the models, but in developing sophisticated mechanisms to contain them. These yet-to-be-invented tools will evaluate, monitor, and guardrail AI outputs, ensuring they behave within safe and useful parameters. Brooks sees this not as a limitation of the technology, but as the necessary evolution of a raw material into a reliable product.
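The containment tooling Brooks anticipates does not yet exist, but its shape can be sketched. Everything below is hypothetical: `guarded_query`, `GuardedAnswer`, the bracketed-citation convention, and the checks themselves are inventions for illustration, standing in for far more sophisticated evaluation and monitoring layers. The sketch wraps any model (here, any callable returning text) and refuses to release an answer that cannot be verified.

```python
import re
from dataclasses import dataclass

@dataclass
class GuardedAnswer:
    text: str
    accepted: bool
    reasons: list

def guarded_query(model, prompt, known_sources):
    """Run a model, then evaluate its output before releasing it.

    Two illustrative checks: the answer must cite at least one source,
    and every cited source must exist in `known_sources` -- catching the
    'hallucinated legal precedent' failure mode before it reaches a user.
    Citations are assumed to be written inline as [Source Name].
    """
    raw = model(prompt)
    reasons = []
    cited = re.findall(r"\[(.+?)\]", raw)
    if not cited:
        reasons.append("no citation: answer cannot be verified")
    for src in cited:
        if src not in known_sources:
            reasons.append(f"unknown source '{src}': possible confabulation")
    return GuardedAnswer(text=raw, accepted=not reasons, reasons=reasons)
```

The design choice worth noting is that the guardrail sits outside the model entirely: it needs no access to weights or training data, which is why Brooks can plausibly forecast a separate “AI behaviour” industry building such layers around models it does not own.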
For AI to move beyond “annoying, dangerous, or stupid” deployments, Brooks insists on the necessity of explainability. He points to early progress – such as Google’s Gemini providing citations and links – as a vital step toward human oversight. To Brooks, a useful AI is one that can show its work, allowing the user to verify the data source and catch errors before they lead to real-world failure.
While generic LLMs often fail at complex, niche tasks (such as Brooks’s own attempt to use ChatGPT for Apple M1 chip compiler targeting), he points out that “narrow use” fields are showing more promise. By way of example, specialised coding assistants have evolved rapidly by narrowing the scope of the LLM and applying stricter controls. The future of AI might lie, not in “all-knowing” general systems that lay waste to white-collar workers, but in specialised, high-impact tools.
Finally, Brooks foresees a market for “AI behaviour” companies as the industry shifts toward making AI “work” and “behave” through constant monitoring. The winners in the AI space will not necessarily be those with the biggest models, but those who can best control and “leash” their outputs to deliver a dependable, non-hallucinatory product.
And that seems like a vastly more palatable future than the Armageddon scenarios proposed by AI’s creators and imagined by Citrini Research.