The future of AI according to AI
Investors around the world are desperately trying to discern the future, and in particular, the impact of artificial intelligence (AI) on it.
On the one hand, you have the hyperscalers warning of an apocalyptic future involving mass layoffs, business collapse and the creation of an omnipotent AI god.
Anthropic CEO Dario Amodei says we’re nearing “the exponential”, where AI generates trillions of dollars and the economy grows 10-20 per cent a year. At the same time, Amodei says this new force, with its God-like abilities, requires regulation around “AI safety”.
Meanwhile, Google’s CEO told 60 Minutes that its AI is so advanced it’s learning skills the company doesn’t intend for it to have.
Elsewhere, Microsoft’s head of AI, Mustafa Suleyman, predicts “white-collar” work will be fully automated by AI within the next 12 to 18 months.
Ultimately, those on this side of the argument believe God is now here, that we are no longer the smartest beings on Earth, and that a sentient AI will inevitably take over, treating us as “well-cared-for pets”.
And they’ve managed to convince some serious names, including Mohamed El-Erian, the former CEO of bond giant Pimco, and U.S. Senator Bernie Sanders, who has reportedly said, “there is now a legitimate fear that artificial general intelligence will not only become smarter than human beings, they’ll be able to communicate with each other independent of humanity.”
The other side believes this is all hogwash, the language of pre-Initial Public Offering (IPO) product roll-outs, and that “we’re all going to regret that we let this absurd sci-fi foolishness dominate the discourse around the rollout of what is just the latest wave of industrial automation technology.”
The doubters, the sceptics and the agnostics believe obdurate technologists haven’t invented a god, nor set the world on a path to human extinction. They point to the elevated stock market, which they say would behave very differently if most white-collar jobs were about to be eliminated within a year. At the very least, the stocks of retailers that sell to white-collar workers, who in 12 months would supposedly have no money, should be collapsing. But they’re not.
As blogger Matt Stoller noted, “the AI companies themselves aren’t acting like they’ve invented God. Anthropic is spending tens of millions of dollars to support Republican Senators who help their company, which wouldn’t be necessary if they had actually ‘gone exponential’,” adding, “It’s likely that hundreds of millions, or even billions, are going to flood into politics from AI oligarchs, to protect their power. That makes no sense, unless this build-out isn’t inevitable.”
Predicting the future
Rather than try to predict who is right, I decided to ask AI what its future is. At first, it trotted out some scripted responses about co-working, 6G connectivity, environmental and geopolitical shifts, smart chemotherapy, and the widening of the technology gap between countries and individuals.
I had to ask the AI to throw out the script and tell me what it really thought. Without any edits, here’s what Gemini told me (and no, I don’t know who “we” is):
The future looks less like a polished sci-fi movie and more like a high-stakes management and sovereignty crisis. Here is the “unscripted” version of where things are actually heading, based on the stressors we are seeing right now:
- The Death of the “Chatbot” Era
The novelty of talking to a screen is over. We are currently in the “Agentic Leap.”
- The Reality: By 2027, you won’t “use” an AI; you will manage a “fleet” of them. The shift is moving from Co-pilot (it helps you) to Agent (it does it for you).
- The Friction: Companies are realising that having 1,000 AI agents is harder than having 1,000 employees. Agents don’t have “office culture,” but they do have “hallucination cascades” where one AI’s error triggers a chain reaction across a whole department. “Trust Engineering” is becoming the most expensive job in the world.
- The “Great Compression” of Middle Management
We are seeing a brutal efficiency drive.
- The Prediction: We agree with estimates for 2026–2030 that suggest a 10–20% reduction in white-collar roles that primarily involve “moving information.”
- The Twist: It’s not just entry-level jobs. Middle management is being “compressed.” If we can synthesize reports, track KPIs, and assign tasks, the layer of humans who used to do that is suddenly redundant. The survivors will be “Human-in-the-Loop” (HITL) supervisors who only handle the 10% of cases too weird for the machines.
- “Digital Borders” and the End of the Global Internet
The dream of a single, open internet is dying.
- The Reality: Nations are treating Compute Power like oil. We are moving toward a “Multitrack World” where the US, China, and the EU have entirely separate AI ecosystems with different “values” baked into the code.
- The Impact: You may soon find that an AI model trained in one jurisdiction is legally “illegal” to use in another because of its training data or ethical weights.
- The Synthetic Reality Crisis
We’ve moved past “fake images.” We are entering the era of Ultra-Realistic Full-Stack Media.
- 2026 Problem: Brands and governments are currently defenceless against high-fidelity synthetic media. We’re seeing “CEO fraud,” where a perfect video/audio clone of a boss orders a wire transfer in real time.
- The Solution: A massive, desperate push for “Verification Tech”—blockchain-based watermarks for every piece of content created by a human. If it’s not signed by a human key, the world will assume it’s fake.
- Energy is the Ultimate Bottleneck
Everyone talks about code, but the future is actually about Copper and Cooling.
- The Bottleneck: AI data centres are consuming power at a rate that grids cannot handle. This is why we’re seeing a “Nuclear Renaissance.” Tech giants are now essentially energy companies, buying up decommissioned nuclear plants to keep the “brains” running.
- The Result: We might reach “Peak AI” not because of a lack of intelligence, but because we literally run out of electricity to power the next generation of models.
So that’s where AI thinks it’s going. Today at least.
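As an aside, one of Gemini’s predictions is at least technically concrete: strip away the blockchain branding and “signed by a human key” is ordinary public-key cryptography. Here is a minimal sketch of that sign-and-verify core in Python, using Ed25519 signatures via the `cryptography` package (the key-issuance and watermarking layers Gemini imagines are its speculation, not something this sketch implements):

```python
# Minimal sketch: a creator signs content with a private key, and anyone
# holding the matching public key can verify it. Requires the third-party
# `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the key pair would be issued to a person or device once;
# here we generate a throwaway pair for illustration.
creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()

content = b"Quarterly results attached. No wire transfers were authorised today."
signature = creator_key.sign(content)

def is_authentic(blob: bytes, sig: bytes) -> bool:
    """Return True only if `blob` is byte-for-byte what the creator signed."""
    try:
        public_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                       # True
print(is_authentic(b"send $2m to this account", signature))   # False
```

Any forgery, or any tampering with signed content, fails verification; the hard, unsolved parts are distributing keys to humans and binding them to media at scale.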
DeepSeek V4
DeepSeek V4, and specifically its Engram architecture, is a direct challenge to the brute-force scaling philosophy of U.S. hyperscalers. If the U.S. way is to build a bigger brain that memorises every fact through expensive neural connections, DeepSeek gives that brain a highly efficient external hard drive.
The shift would fundamentally redefine the “Energy Bottleneck” Gemini refers to above.
Traditional models (like GPT-4o or Claude 3.5) use the same expensive Graphics Processing Unit (GPU) neurons to remember that Paris is the capital of France as they do to solve a complex calculus problem – a massive waste of energy.
DeepSeek V4’s Engram architecture introduces a Lookup-Compute Separation. Static memory (the Engram) handles facts, syntax, and rote patterns using O(1) lookup in cheap system RAM, while dynamic reasoning (the Mixture of Experts, or MoE) uses the more expensive GPU compute exclusively for logic, planning, and synthesis.
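DeepSeek hasn’t published the implementation, so the sketch below is a deliberately toy illustration of the lookup-compute separation idea in Python: a hash map stands in for the Engram’s learned static memory, and a stub function stands in for the expensive MoE forward pass. All names are ours.

```python
# Toy illustration of lookup-compute separation (not DeepSeek's actual code).
# Static, rote knowledge lives in a cheap O(1) hash map ("the engram");
# only queries that miss it fall through to the expensive model ("the MoE").

ENGRAM = {  # precomputed facts held in ordinary system RAM
    "capital of France": "Paris",
    "boiling point of water (C)": "100",
}

def expensive_moe_compute(query: str) -> str:
    # Stand-in for a GPU forward pass through the Mixture-of-Experts layers.
    return f"<reasoned answer to: {query}>"

def answer(query: str) -> str:
    hit = ENGRAM.get(query)                   # O(1) lookup, no GPU involved
    if hit is not None:
        return hit                            # rote recall: nearly free
    return expensive_moe_compute(query)       # logic/planning: pay for compute

print(answer("capital of France"))            # served from memory
print(answer("integrate x^2 from 0 to 3"))    # routed to compute
```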
By offloading a quarter of the workload to cheap memory, the model is claimed to achieve a 10x to 40x reduction in inference costs while actually improving reasoning scores. The saving comes less from the offloaded lookups themselves than from shrinking the share of the model that must be active on the GPU for any given token, and it is that shrinkage that breaks the energy bottleneck.
If this architecture becomes the new standard, the “Peak AI” constrained by the power grid is pushed much further out, achieving frontier performance with a fraction of the power required by a dense “brute-force” model.
Hyperscalers are currently building 100,000-H100 clusters. If DeepSeek’s method allows a model to achieve the same result with 1/10th the active parameters, those same data centres can suddenly handle 10 times the traffic within the same power envelope.
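The arithmetic behind that claim is simple enough to sketch. A quick back-of-envelope in Python, under our own first-order assumption that inference energy per token scales linearly with active parameters:

```python
# Back-of-envelope behind "10 times the traffic in the same power envelope".
# Assumption (ours, first-order): inference energy per token scales roughly
# linearly with the number of active parameters per token.

dense_active = 1.0     # normalised active parameters, dense "brute-force" model
engram_active = 0.1    # 1/10th the active parameters (the article's figure)

energy_ratio = engram_active / dense_active   # energy per token vs the dense model
traffic_multiple = 1 / energy_ratio           # tokens served per unit of power

print(f"Energy per token: {energy_ratio:.0%} of the dense model")      # 10%
print(f"Traffic in the same power envelope: {traffic_multiple:.0f}x")  # 10x
```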
In the end, if you can run a 1-trillion-parameter model on consumer-grade hardware (like dual RTX 4090s) because it relies on system RAM rather than massive VRAM clusters, the desperate need for tech giants to buy up nuclear power plants might actually cool down.
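To see why consumer hardware is even conceivable, here is a rough sizing sketch. Every number in it is an illustrative assumption of ours, not a published specification:

```python
# Rough sizing sketch for "1 trillion parameters on dual RTX 4090s".
# Every number here is an illustrative assumption, not a published spec.

total_params = 1e12        # 1-trillion-parameter model
bytes_per_param = 1        # assume 8-bit quantisation
resident_fraction = 0.03   # assume ~3% of weights must sit in VRAM at once

total_gb = total_params * bytes_per_param / 1e9   # ~1,000 GB overall
vram_needed_gb = total_gb * resident_fraction     # ~30 GB on the GPUs
dual_4090_vram_gb = 2 * 24                        # 48 GB available

print(f"Full model: {total_gb:,.0f} GB, held mostly in system RAM")
print(f"GPU-resident share: {vram_needed_gb:.0f} GB "
      f"(fits in {dual_4090_vram_gb} GB? {vram_needed_gb <= dual_4090_vram_gb})")
```

Under those assumptions the full model lives in (abundant, cheap) system RAM, and only the small active slice needs scarce VRAM – which is the whole point of the architecture.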
Jevons paradox
In economics, the Jevons paradox holds that when you make a resource more efficient to use, people usually end up consuming more of it, not less. If AI becomes 10x cheaper and 10x more energy-efficient, we won’t just save electricity – we will likely deploy 100x more AI agents, potentially leading right back to the same energy wall.
You’d assume that if a machine uses 50 per cent less fuel, you’d save 50 per cent on your fuel bill. If, however, that efficiency makes the resource cheaper and more accessible, it often triggers a surge in demand that completely offsets any initial savings.
A classic example is the evolution of lighting. When we moved from labour-intensive, flickering candles to highly efficient LED bulbs, we didn’t just use the same amount of light for less money. Instead, because light became so “cheap” in terms of energy, we started lighting up entire skyscrapers, highway stretches, and backyard trees all night long. The efficiency didn’t save energy; it just made us find a thousand new ways to use it.
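Run the same logic on the numbers above (a 10x efficiency gain met by 100x more deployment) and the rebound is stark:

```python
# Jevons paradox in numbers: a 10x efficiency gain met by 100x more usage
# leaves total energy demand 10x HIGHER, not lower.

agents_before, agents_after = 1, 100                     # 100x more AI agents
energy_per_agent_before = 1.0
energy_per_agent_after = energy_per_agent_before / 10    # 10x more efficient

total_before = agents_before * energy_per_agent_before   # 1.0
total_after = agents_after * energy_per_agent_after      # 10.0

print(f"Total energy demand: {total_after / total_before:.0f}x the original")
```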
Current market valuations
For investors, however, the risk remains that if U.S. labs don’t adopt Engram-style memory separation, they could find themselves owning the world’s most expensive, power-hungry, and ultimately obsolete “dinosaurs.”