Why the people building AI won’t touch it
Bernie Sanders, the U.S. Senator from Vermont, and Professor Geoffrey Hinton – widely regarded as the ‘Godfather of AI’ – argued last week at Georgetown University that artificial intelligence (AI) and robotics aren’t inherently bad, but that the people driving this technological revolution are the richest people in the world. They suggested that Musk and Bezos, for example, aren’t staying up at night worrying about ordinary working people. They’re not spending hundreds of billions of dollars to extend life expectancy, address global warming, shorten the workweek, or guarantee high-quality health care. They simply want more wealth and more power.
Clearly, Sanders thinks their motivations are incompatible with a just future. The issue, then, isn’t whether AI is inherently good or bad; it’s who controls it and who benefits from it.
Of course, as Professor Hinton noted, if we had a political system run for the benefit of the people, we would want to invest all our money and time in developing a very powerful AI that solves all our problems and does everything for us. But we don’t live in such a world, so perhaps we need to be very careful.
Ask the people at the coal face
If a chef refused to eat their own cooking, a boat builder refused to board their own boat, or the chief executive of an aircraft manufacturer never flew in their own planes, you’d have second thoughts too.
According to a Guardian investigation published this week (23 Nov 2025), that’s essentially the situation we now have in generative artificial intelligence (GenAI).
The article interviewed more than a dozen frontline AI workers – the human raters, tutors, and data labellers who literally teach ChatGPT, Google Gemini, Meta AI, Grok, and others how to sound coherent and “safe.”
You might be surprised to hear that training AI is actually a very human, manual and grubby business.
And almost without exception, the people at the coal face are the very people now telling their families, friends, and even their own children to stay away from the technology they help build.
According to the Guardian, the humans closest to the models have zero trust in them. They see the constant hallucinations, hidden biases, and outright dangerous outputs up close. Consequently, many have banned generative AI in their own homes and actively discourage others from using it.
The interviewees also noted that speed is crushing safety. Relentless pressure for fast turnarounds, vague instructions from AI industry bosses, and unrealistic deadlines mean quality is being sacrificed so companies can release new features and versions, maintain the perception of rapid progress, and keep stock valuations rising.
According to the interviewees, AI hallucination rates are worsening, not improving. Indeed, an independent NewsGuard audit comparing August 2024 with August 2025 found the top 10 models have stopped saying “I don’t know” altogether (their non-response rate fell from 31 per cent to 0 per cent), while the rate at which they confidently repeat dangerous falsehoods nearly doubled (from 18 per cent to 35 per cent).
Worryingly, sensitive topics, including medical questions, historical controversies, and hate-speech detection, are being handled by low-paid contractors with no domain expertise, paid only cents per task and racing against the clock. It stands to reason that if the data fed into these models is rushed, inconsistent, and frequently toxic, the people cleaning it must conclude the final product can never be truly trustworthy.
According to the same frontline workers, AI’s feedback loops are broken, and reported biases and errors remain unaddressed. The Guardian flagged an example where Gemini refused to answer basic questions about Palestinian history while happily giving long answers about Israel.
For investors
It’s obviously difficult to forecast many years out, but we can reasonably conclude that regulators will eventually respond. Regulatory risk, therefore, has just moved from “possible” to “when, not if”.
Some suggest we should expect an EU AI Act enforcement wave in 2026, plus probable U.S. federal legislation in 2026-27 once the new Congress is seated.
And if foundation models are deemed harmful, regulators will eventually demand mandatory third-party auditing, sweeping transparency requirements and, potentially, legal liability for hallucinations that cause damage.
The idea that the best data, the most money, and the biggest compute will inevitably produce an unassailable lead can be challenged by a simple observation: if the data pipeline is being rushed, then the data going in could be garbage. If that’s right, then scaling – which is what the AI boom is all about right now – won’t be a ‘get-out-of-jail-free’ card.
That’s because enterprise adoption could slow rather than accelerate: if CIOs, already nervous about hallucinations, regard the Guardian’s findings as legitimate, they may hit pause for even longer.
The hype-driven cheap funding for a land grab – the “land-and-expand leads to total platform domination” flywheel – will only produce decent returns for investors if enterprise customers believe the outputs are reliable enough for mission-critical workflows. The Guardian’s reporting suggests that trust could erode.
It’s worth remembering, however, that stock momentum can remain decoupled from fundamentals for a long time. Already in this, the greatest hype cycle in history, bad news has routinely been shrugged off. All eyes are on the north-easterly trajectory of monthly active users and compute capital expenditure (capex) guidance, and that can continue for quarters, possibly even a couple of years.
Still, it’s worth keeping an eye on the ‘half-life’ of each new AI release, asking yourself whether the ‘wow’ factor is diminishing.
We’ve already noted that, throughout history, general-purpose technologies (GPTs) have generated wild excitement, lowering the cost of capital and driving massive scaling that led to oversupply. The Guardian’s evidence suggests AI companies are adopting a “move fast and ship” model that treats safety and quality as a public relations (PR) checkbox. Commoditisation and regulation are not improbable.
For many investors, it won’t make any difference, but if you’re long a stock trading at 80-120 times revenue, it will. Our suggestion is not to exit but to rebalance. After taking advice, consider taking some profits off the table and diversifying them into other areas, such as equities and fund managers with exposure to defensive, growing sectors that aren’t reliant on the AI boom.