The Australian – Four questions AI can’t answer yet
Investors have their eyes firmly focused on developments in the Middle East, and that’s entirely appropriate. At some point, however, the conflict will be resolved, and investors will turn their attention to other matters. One of those will be what to make of AI and its impact on economies, employment and even on humanity.
I have been challenging my own thinking on this subject, and I am keen to distil the debate into its primary arguments, which are defined by a profound division.
That division mainly pits a sceptical public, the media, and some investors against an optimistic and arguably self-serving Silicon Valley, populated by tech pioneers and billionaires.
This article was first published in The Australian on 09 April 2026.
While much of the debate revolves around whether the technology is vapourware or something godlike, I see four primary disagreements.
The first is whether the technology is practically useful across industries.
The second is whether AI truly thinks and reasons, or merely synthesises human-generated content.
The third is whether the massive capital investment constitutes a financial bubble – something I have written extensively about in this column.
And finally, whether its ultimate impact on humanity will be our salvation or our destruction.
While San Francisco technologists reckon their postcodes are the centre of world progress, others see Silicon Valley as a den of plutocrats whose work deserves profound distrust, especially given their newly intertwined relationship with government.
On the one hand, there’s the future being built in real time, while on the other hand, there’s a series of overhyped promises driven by the base desire to further enrich a handful of billionaires.
1. Is AI useful?
The first major debate is about utility. Is AI actually useful?
Unlike a train or a light bulb, which provide a consistent service to all users, AI’s efficacy currently depends entirely on the specific task, the model, and the user’s skill. This could change, of course, when AI agents are as easy to use as an app, but until then, a software engineer or financial analyst might find their productivity transformed by AI, while a journalist or a marketer finds the output underwhelming or even hackneyed.
There are, at present, markedly different conclusions about the technology’s maturity. Take the law, for example; current AI tools might limit job prospects for transactional contract writers, but AI is a long way from replacing an experienced litigation lawyer.
2. Can AI think?
Then there’s the question of whether AI can think. There are those who believe these tools are engaging in something like human thought – combining memory and prediction – and those who see them as vector-driven, word-predicting, blunt instruments for synthesising mediocrity.
If AI can help a scientist draft a paper, even if it doesn’t meet our neurological definition of “thinking”, it can be useful without being technically thoughtful. However, for those who value originality, AI merely stupefies human thought and stunts the generation of new ideas.
3. What about an AI bubble?
The third source of division is the possibility of an economic bubble, which is primarily a matter of adoption speed and revenue growth.
The hyperscalers and frontier labs are spending hundreds of billions of dollars training and running AI, and this year the pressure to demonstrate ferocious revenue growth is reaching its peak.
If companies cannot generate sufficient returns on time, we might experience a market correction – even if the technology itself is performing economically meaningful work.
4. Is AI good or bad for us?
The fourth debate is the most existential: is AI good or bad for humanity?
At one end of this spectrum, venture capitalists like Marc Andreessen proclaim AI “will save the world” by solving every major global problem, from disease to democracy to climate change.
On the other hand, rationalists like American computer scientist and researcher Eliezer Yudkowsky argue that if anyone builds a superintelligent AI, “everyone dies”.
And between these two extremes is a middle ground where AI might not kill us all, but it will make the most beautiful things in life, such as art, movies, and human relationships, more slop-filled and generic.
The real AI transition
I think the idea that old jobs lost will immediately be replaced by new jobs created is simplistic. The transition from an agrarian economy to an industrial one took 80 years.
Potato pickers didn’t just leave the farm one day and start fitting car parts the next. That said, it’s currently hard to see any meaningful aggregate effect of AI on employment.
And when examining the future of employment, history usually favours a slow rollout. General-purpose technologies (GPTs) like the telephone and electricity took decades to achieve 50 per cent adoption because they required massive physical infrastructure and the overcoming of vested interests.
The counterargument is that AI is fundamentally different because it’s recursive: the software can increasingly teach itself and roll itself out, effectively removing the human bottleneck that slowed previous industrial revolutions.
As always, time will tell whether the audacious predictions of tech CEOs become reality. In the meantime, the ultimate question for investors to answer is whether the ground is shifting faster than our ability to adapt.
Hopefully, investigating these four debates provides a useful framework for reaching appropriate conclusions about each “candidate” company’s investment potential.