144-year-old mystery exposes an AI bubble
While investors look to the next Nvidia earnings call or the latest OpenELM (Open-source Efficient Language Models) release to predict the future of artificial intelligence (AI), Michael Burry – made famous for making billions shorting markets ahead of the 2008 subprime crisis – recently turned to an article entitled Thought without Language, The Narrative of a Deaf-Mute, His First Thoughts and Experiences, in the June 19, 1880, edition of the New York Times.
“The case study is of a teacher at the Columbia Institute for the Instruction of the Deaf and Dumb. This particular teacher, Melville Ballard, is also a deaf mute and a graduate of the National Deaf Mute College.”
“Mr Ballard says that in his infancy, he communicated with his parents and brothers by natural signs or pantomime. His father, believing that observation would help to develop his faculties, frequently took him riding.”
“He continues that it was during a ride two or three years before he was initiated into the rudiments of written language that he began to ask himself the question, “How came the world into being?” and his curiosity was awakened as to what was the origin of human life, its first appearance, the cause of the existence of earth, sun, moon, and stars. At one time, seeing a large stump, he asked himself the question, “Is it possible that the first man that ever came into the world rose out of that stump? But that stump is only a remnant of a once magnificent tree; and how came that tree? Why, it came only by beginning to grow out of the ground, just like these little trees now coming up;” and he dismissed from his mind as absurd the connection between the origin of man and a decaying old stump.”
“He had no idea of what suggested to him the question as to the origin of things, but he had gained ideas of the descent from parent to child, of the propagation of animals and the production of plants from seeds.”
Burry’s latest missive suggests the multi-trillion-dollar “scaling myth” is built on a fundamental architectural error.
Burry suggests the story of Melville Ballard, who achieved profound philosophical reasoning long before he ever acquired a single word of language, reveals we have mistaken the output of artificial intelligence (language) for the engine of it (reason). In other words, today we believe that because AI writes, it must be intelligent.
The “Language-First” fallacy
There’s a conga-line of experts predicting AI will upend white-collar employment, lay waste to economies and even threaten humanity’s existence. Indeed, companies are lining up to announce the redundancy of thousands of employees. But Burry believes we are trading genuine reason for statistical simulation. What if true understanding is a silent intuition beyond mere words?
In essence, Burry disagrees with the “brute-force” logic of today’s Large Language Models (LLMs) – the idea that feeding a machine enough text will eventually make it “know” things. I think there’s merit, therefore, in Burry’s argument that we are currently trapped in a “Parameter Trap” – building increasingly sophisticated mirrors that reflect human patterns without ever possessing a true “System 2” capacity for reason.
Howard Marks
As an aside, legendary U.S. value investor, Howard Marks, likewise believes AI’s power lies not in its ability to think – indeed, he questions whether AI will ever be able to come up with truly new ideas. However, he does believe that what AI does, it does better than most humans, while the speed of change under AI won’t vastly outstrip society’s ability to adjust.
Back to Burry
Reason exists in the silence. It observes the world, deduces the origin of life from a tree stump, and only then uses language to label what it already understands.
Like a snake eating itself by the tail, the Silicon Valley Model reverses this: language is ingested at a massive scale to simulate reason. The result? A “primitive form of reason” prone to hallucinations because it lacks a grounding in reality.
From “Bonanza” to “Silly Valley”
On the very same page of that 1880 newspaper lies a report on San Francisco’s “Bonanza Speculators.” It describes a population “imbued with the passion for speculation,” leaping into fortunes that inevitably vanish when the “boom” returns to its normal condition.
The irony isn’t lost on Burry.
If Burry is right, we’re pouring trillions of dollars, chips and megawatts into models that “simulate” understanding but never “see” the world. And if understanding truly transcends language, then the current AI architecture isn’t a bridge to the future – it’s an epochal and harrowingly expensive dead end.
The Ballard Case: Thought before words
The 144-year-old story about Melville Ballard is one of a deaf-mute child who, despite having no access to formal language, engaged in profound metaphysical reasoning. Ballard contemplated the origins of the universe, rejected the idea of humans originating from tree stumps as “absurd,” and deduced the mechanics of the solar system through pure observation.
The insight from this story is that complex thought exists in the silence before words. As Professor Samuel Porter argued in 1880, the capacity for reason is the foundation; language is merely the tool that unlocks and scales it.
Burry argues that modern Large Language Models (LLMs) are fundamentally flawed because they reverse this natural order. By putting language first, we aren’t building intelligence; we are building an “increasingly sophisticated mirror.”
There’s merit in that. How do LLMs deliver answers to your prompts? They scour the universe of human-written material – material first generated by human reasoning – and regurgitate it, simulating reason through statistical inference.
Table 1. Reason-first vs. Language-first

| | The human model (Ballard/Porter) | The Large Language Model (LLM) (Modern AI) |
| --- | --- | --- |
| Starting point | Capacity for reason (silent) | Language (data created by human reasoning) |
| Process | Reason → Language → Understanding | Language → Simulated reason |
| Outcome | Transcendent understanding | Sophisticated mirror / hallucination |
| Efficiency | High (intuitive) | Low (the “Parameter Trap”) |
And because LLMs lack the underlying capacity for reason (the “System 2” architecture), they hit “ragged edges” of knowledge and hallucinate. They simulate the output of understanding without ever possessing the engine.
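The “sophisticated mirror” idea can be made concrete with a toy sketch (my own illustration, not Burry’s): a model that learns nothing but which word tends to follow which in a corpus, then “answers” by replaying those statistics. It reflects the patterns in the text it ingested without any grounding in the things the words describe – the language-first process in miniature.

```python
import random
from collections import defaultdict, Counter

# Toy "language-first" model: it learns only word co-occurrence
# statistics, with no grounding in the world the words describe.
corpus = (
    "the tree grew from the ground "
    "the tree grew from a seed "
    "the stump is a remnant of a tree"
).split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    """Return the statistically most likely continuation --
    pattern-matching over text, not reasoning about trees or seeds."""
    counts = bigrams[prev]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("tree"))  # → grew
```

The model “knows” that “grew” follows “tree” only because humans who already understood trees wrote sentences saying so. Scale the same mechanism up by trillions of parameters and, on Burry’s argument, you still have a mirror of human reasoning rather than the reasoning itself.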
Understanding transcends language
Drawing on his background as a trained physician, Burry offers a powerful analogy: expert surgeons and nurses often communicate through their eyes alone. This “intuitive grasp of reality” occurs because they have attained “understanding” – a state in which all parts of a complex web connect, rendering formal language redundant.
“The expression of the eye is the language which cannot be misunderstood.”
For Burry, the “eye” represents a direct connection to reality that AI, trapped in a loop of its own linguistic parameters, simply cannot replicate.
Conclusion
Burry links the current AI hype to the second story on the same 1880 page, about San Francisco’s “Bonanza Speculators.” He notes that the Bay Area has a long, cyclical history of what he calls “Supply Side Gluttony”, where speculation rises far beyond actual end-user demand. I have written about this on the blog in Summing up the bear case for AI, describing it as overcapacity – a misallocation of resources that is the inevitable result of hype-fuelled cheap funding.
If Burry’s right, then true intelligence is the source of language, not its result. The takeaway is therefore a warning for investors: the world is betting trillions on the wrong starting point.
Burry notes San Francisco repeatedly enters “flush times” driven by a gambling fever (whether for Nevada silver mines in 1880 or AI today), only to suffer when the “big bonanza” fails to deliver. He concludes the multi-trillion-dollar “scaling myth” of AI is likely another speculative bubble, suggesting the future isn’t in more data and power, but in compression – returning to the silence of pre-linguistic reason.