Questioning artificial intelligence’s hold on imagination

Artificial intelligence (AI) is supposed to have all the answers. It’s supposed to save us time and money. It’s supposed to be better than us. These are the reasons the AI boom exists, both on the ground and on the stock market.

The NASDAQ, however, trading near an all-time high, sits in stark contrast to the questions now being asked about AI infrastructure spending and the future customer spending required to justify it.

Enterprise AI demand has lagged consumer adoption. Back in July, MIT (Massachusetts Institute of Technology) found 95 per cent of enterprises were producing zero return on US$30-40 billion of AI investment.

“Tools like ChatGPT and Copilot are widely adopted. Over 80 per cent of organisations have explored or piloted them, and nearly 40 per cent report deployment. But these tools primarily enhance individual productivity, not P&L (profit and loss) performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty per cent of organisations evaluated such tools, but only 20 per cent reached pilot stage and just 5 per cent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.”

So, what are enterprise customers waiting for? Maybe better AI? 

It’s now 2025. ChatGPT is on its umpteenth version, as are other Large Language Models (LLMs) like Grok, Claude and Perplexity, and yet major LLMs are still getting the basics wrong.

This morning, I entered the following prompt into Grok.

  • “Ahead of Qantas’s Annual General Meeting (AGM) on Friday, do you anticipate a strike against the remuneration report, and why?”

Grok offered the following answer:

  • “No, I do not anticipate a strike against Qantas’s 2025 remuneration report at the upcoming AGM on November 7.”

It then offered several key reasons, one of which was (emphasis Grok’s):

  • “The Australian Shareholders’ Association (ASA), a key influencer for retail investors, explicitly recommends voting for the remuneration report. They highlight that the plan aligns with their guidelines, including rigorous short term incentive (STI) measures, appropriate hurdle rates for long term incentives (LTIs), and board discretion to claw back bonuses (e.g., for the recent cyber incident).”

A little unsure that Grok was referencing the ASA’s 2025 recommendation, I prompted Google’s AI with the following question:

  • “Has The Australian Shareholders’ Association (ASA), explicitly recommended voting for Qantas’ FY25 remuneration report?”

It offered the following response (emphasis Google’s):

  • “No, the Australian Shareholders’ Association (ASA) explicitly recommended voting against the Qantas FY25 remuneration report. This was due to unplanned accelerator payments to senior management made outside the shareholder-approved structure.”

This isn’t version 1 or even version 2 of Grok. Multiple iterations of LLMs have been released, and yet they still cannot be relied upon. A human must still check the original sources, and that doesn’t help productivity.

The if-you-build-it-they-will-come justification for the trillions being spent on the AI infrastructure build-out must also be relying on necessary improvements in LLMs to secure greater and sustained adoption.

LLM designers assure us enhancements are coming and they will be earth-shattering. 

By way of example, here’s how Blood in the Machine’s Brian Merchant described the excitement ahead of the release of OpenAI’s GPT-5 model:

“The thing to remember about GPT-5 is that it’s been OpenAI’s big north star promise since GPT-4 was released way back in the heady days of 2023. It’s no hyperbole to say that GPT-5 has for that time been the most hyped and most eagerly anticipated AI product release in an industry thoroughly deluged in hype. For years, it was spoken about in hushed tones as a fearsome harbinger of the future. OpenAI CEO Sam Altman often paired talk of its release with discussions about the arrival of AGI, or artificial general intelligence, and has described it as a significant leap forward, a virtual brain, and, most recently, “a PhD-level expert” on any topic.”

But the launch of GPT-5 flopped.

GPT-5 couldn’t produce an accurate map of the United States, couldn’t count the number of ‘b’s in blueberry, couldn’t identify how many fingers were on a picture of a human hand, and failed at basic arithmetic.

Fans of OpenAI were disappointed, while Reddit AI communities were hostile in their feedback.

The improvements over GPT-4, which was released in 2023, took two years to arrive, and yet they aren’t up to scratch for enterprise-grade reliability. In the absence of vastly greater improvements, MIT’s conclusions may be as true next year as they are this year.

That puts a question mark over the achievement of the revenue targets required to justify the boom in the AI build-out.

How much do consumers need to spend on AI tools for the AI infrastructure spendthrifts to generate a return that covers their cost of capital and yields a reasonable margin?

Microsoft sells less than US$80 billion of Windows and Office 365 subscriptions per year – arguably the most ubiquitous software on the planet. In 2024, total global software spending reached US$675 billion. The U.S. spent US$368.5 billion, more than half the world’s total and nearly six times the next-largest spender.

Meanwhile, total global 2025 spend on IT services is estimated at US$1.7 trillion, and that includes cloud services (US$400 billion), cybersecurity, AI or machine learning (ML) solutions, and software for areas like enterprise resource planning (ERP), customer relationship management (CRM), and business intelligence. What if spending on AI tools also needs to have a ‘$T’ in the number? Is that additive? If so, where does the money come from? Does it displace existing spending? If so, how much?

What multiple of current global software spend must be assumed for end-user spending on AI tools to 1) cover the cost of capital for the AI infrastructure ‘spendthrifts’, and 2) generate a decent return on investment (ROI) for all of them?

When investors start asking these questions, the answers may raise new questions about the sustainability of current stock multiples.

I am sensing the required end-consumer spend on AI tools to produce a decent return on the $3 trillion Morgan Stanley estimates will be invested in AI infrastructure in 2028 is an unrealistic, unachievable number. 

If the rest of the market senses this, we might be in for a period of ‘adjustment’.


Roger Montgomery is the Founder and Chairman of Montgomery Investment Management. Roger has over three decades of experience in funds management and related activities, including equities analysis, equity and derivatives strategy, trading and stockbroking. Prior to establishing Montgomery, Roger held positions at Ord Minnett Jardine Fleming, BT (Australia) Limited and Merrill Lynch.

He is also author of best-selling investment guide-book for the stock market, Value.able – how to value the best stocks and buy them for less than they are worth.

Roger appears regularly on television and radio, and in the press, including ABC radio and TV, The Australian and Ausbiz.

This post was contributed by a representative of Montgomery Investment Management Pty Limited (AFSL No. 354564). The principal purpose of this post is to provide factual information and not provide financial product advice. Additionally, the information provided is not intended to provide any recommendation or opinion about any financial product. Any commentary and statements of opinion however may contain general advice only that is prepared without taking into account your personal objectives, financial circumstances or needs. Because of this, before acting on any of the information provided, you should always consider its appropriateness in light of your personal objectives, financial circumstances and needs and should consider seeking independent advice from a financial advisor if necessary before making any decisions. This post specifically excludes personal advice.



8 thoughts on “Questioning artificial intelligence’s hold on imagination”

  1. Added note:

    OpenAI’s Revenue Forecast: A Counterpoint to AI Skepticism
    The article from NextBigFuture (https://www.nextbigfuture.com/2025/11/openai-forecast-100-billion-in-revenue-by-2027.html#more-207035) outlines OpenAI’s aggressive infrastructure and revenue trajectory, which directly challenges the notion that AI spending is disconnected from financial returns:
    • OpenAI projects a $20 billion annual run rate by the end of 2025, with expectations to surpass $100 billion in annual revenue by 2027.
    • This growth is underpinned by a $1.4 trillion infrastructure commitment spread across major vendors like Broadcom, Oracle, Microsoft, Nvidia, AMD, AWS, and CoreWeave.
    • Estimated annual compute spending is forecast to grow from $6 billion in 2025 to $295 billion by 2030, with gross profit margins improving from 48% in 2025 to 70% by 2029.
    • These figures suggest that AI infrastructure is not being built on blind faith but is backed by detailed financial modelling and aggressive monetization strategies. If OpenAI alone can generate $100 billion annually, it implies that end-user AI spending may not need to reach trillions across the board to justify infrastructure investments. Instead, a handful of dominant platforms could drive outsized returns, much like cloud computing or mobile operating systems did in prior tech waves.
    So maybe the real question isn’t whether consumers will spend trillions on AI but which platforms will win big by capturing most of that value, and how fast they can do it. That’s the race investors should be watching.

    • Hi Joe,

      It’s great that there are challenges to the bearish thesis, just as there are challenges to the bullish thesis. That’s what makes a market. Brian’s article that you cite produces a total spend of just over a trillion dollars. McKinsey have arrived at triple that number by 2028, and others have US$5T by 2030. The reality is nobody knows. The idea that OpenAI’s revenue rises from $13B at the beginning of this year to $20B at the end of this year, and then to $100B by 2027, is just an estimate. It’s worth remembering OpenAI is loss-making at current revenue levels. That 2027 revenue estimate suggests they will generate more revenue from AI tools than Microsoft generates from selling Windows and Office 365. That might occur, but we will have to wait and see. I can almost guarantee that any 2029 estimate, whether profit, revenue or margin, will be 100 per cent wrong. It could be higher or lower, but it WILL NOT be that number. That the build-out is, in point of fact, built on blind faith has been confirmed by Sam Altman and Mark Zuckerberg themselves. Altman also admitted, perhaps unintentionally, that there’s a huge ‘trust me’ component to projected demand for AI tools, including OpenAI’s yet-to-be-released AI-powered consumer hardware.

      “We are taking a forward bet that it will continue to grow, and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing, that AI that can automate science will create huge value.”

      ‘Trust me’ was also implied recently by Meta CEO Mark Zuckerberg, who said his company was intentionally “front-loading” compute, noting, in the “very worst case”, Meta will have “pre-built for a couple of years.” Zuckerberg has also been quoted as saying even misspending $US200 billion is “worth it”, because if they move too slowly, they’re out of the competition. There’s a huge amount of ‘building on spec’ going on. I am with Burry (Michael) on this one.

  2. As for tracking the evolution and capabilities of large language models (LLMs), I often turn to the work of Dr. Alan D. Thompson. His website lifearchitect.ai – offers one of the most comprehensive and meticulously curated timelines of LLM performance, breakthroughs, and benchmarks. It’s an invaluable resource for understanding not just what these models can do today, but how rapidly they’ve progressed—and where they’re headed.

    • Thanks for sharing, Joe. I agree the enterprise value will be found through experimentation and refinement. Nevertheless, the argument is about whether the prices we are paying for these businesses can be justified based on future profits, and I am not sure there is sufficient $$ freely floating around the globe to support the revenue and ROIs required to justify valuations. Time will tell, of course.

  3. Here is GPT-5’s response to your article above.

    The claim that enterprise AI is failing is outdated and contradicted by recent evidence.
    The article raises valid concerns about the ROI of enterprise AI and the reliability of LLMs, but it leans heavily on outdated data, anecdotal examples, and flawed comparisons. Here’s a breakdown of the key faults and a more balanced perspective.
    1. Outdated and Selective Data
    The article cites a July 2023 MIT study claiming 95% of enterprises saw zero ROI on AI investments. However, more recent data paints a very different picture:
    • A 2025 Wharton study found that 75% of enterprises now report positive returns on GenAI investments.
    • 82% of business leaders use GenAI weekly, and nearly half use it daily.
    • The number of organizations scaling GenAI has increased fivefold since 2023, with 30% now fully or partially scaling solutions.
    These figures suggest that enterprise AI is no longer stuck in pilot purgatory—it’s maturing rapidly.
    2. Misrepresentation of LLM Capabilities
    The article mocks GPT-5 for failing basic tasks like counting letters or identifying fingers in images. But these examples are misleading:
    • LLMs are optimized for language understanding, not symbolic precision or visual analysis.
    • Failures in tasks like counting or mapping are not representative of their enterprise value in customer service, document summarization, code generation, and more.
    • The Qantas/ASA example highlights a sourcing issue, not a fundamental flaw in LLMs. When integrated with real-time data sources, LLMs perform far more reliably.
    3. False Equivalence Between Consumer and Enterprise AI
    The article conflates consumer tools like ChatGPT with enterprise-grade systems. But enterprise AI is evolving:
    • The bottleneck isn’t AI capability—it’s strategic alignment, change management, and integration.
    • Enterprises with formal AI strategies report 80% success rates, compared to 37% without.
    • Nearly 6 in 10 organizations expect AI to act as a team member or supervisor within 12 months.
    4. Misguided Financial Comparisons
    Comparing AI infrastructure investment to Microsoft Office revenue is apples to oranges:
    • The $3 trillion investment spans cloud, cybersecurity, ERP, CRM, analytics, and more—not just LLMs.
    • AI infrastructure supports automation, personalization, fraud detection, supply chain optimization, and other high-value enterprise functions.
    • The ROI horizon for infrastructure is multi-decade, not annualized like consumer software subscriptions.
    5. The Real State of Enterprise AI in 2025
    Enterprise AI is not failing—it’s accelerating:
    • US enterprise AI adoption jumped from 3.7% in 2023 to 9.7% in 2025.
    • 72% of leaders now track ROI metrics like productivity and profitability.
    • Enterprises are moving beyond pilots to operational integration and strategic deployment.
    Sources
    • Wharton School of Business, 2025 GenAI ROI Study
    • MIT Sloan Management Review, 2023 AI Investment Report
    • McKinsey & Company, 2025 State of AI Adoption
    • Gartner, 2025 CIO Survey on GenAI Scaling
    • IDC, Global IT Spending Forecast 2025
    • Australian Shareholders’ Association (ASA) public statements on Qantas FY25 remuneration
    • Reddit AI communities and OpenAI developer forums

    • That’s a comprehensive response, Joe, so I will endeavour to reply to each point…

      The claim that enterprise AI is failing is outdated and contradicted by recent evidence.
      The article raises valid concerns about the ROI of enterprise AI and the reliability of LLMs, but it leans heavily on outdated data, anecdotal examples, and flawed comparisons. Here’s a breakdown of the key faults and a more balanced perspective.
      1. Outdated and Selective Data
      The article cites a July 2023 MIT study claiming 95% of enterprises saw zero ROI on AI investments. However, more recent data paints a very different picture:

      RogerM) The article cites an August 2025 MIT study, NOT July 2023. I would argue that is recent rather than “outdated”.

      • A 2025 Wharton study found that 75% of enterprises now report positive returns on GenAI investments.

      RogerM) The Wharton study does produce a contradictory finding. What’s interesting is that one of the major criticisms of the August 2025 MIT study is that it is too early to expect measurable returns from AI. When email was first available, it took many years to produce measurable and acceptable returns. Despite it being ‘too soon’ to measure the success of AI, the Wharton study finds there are satisfactory measurable returns.

      • 82% of business leaders use GenAI weekly, and nearly half use it daily.

      RogerM) Yes, the MIT study did say that individuals are using AI personally. It’s the enterprise usage and projects they are questioning.

      • The number of organisations scaling GenAI has increased fivefold since 2023, with 30% now fully or partially scaling solutions.
      These figures suggest that enterprise AI is no longer stuck in pilot purgatory—it’s maturing rapidly.

      RogerM) Where did that stat come from? Mind you, ‘scaling’ AI does not mean there’s a positive return, unless you assume those scaling are doing so only AFTER they have proven a positive return.

      2. Misrepresentation of LLM Capabilities
      The article mocks GPT-5 for failing basic tasks like counting letters or identifying fingers in images. But these examples are misleading:
      • LLMs are optimized for language understanding, not symbolic precision or visual analysis.
      • Failures in tasks like counting or mapping are not representative of their enterprise value in customer service, document summarization, code generation, and more.
      • The Qantas/ASA example highlights a sourcing issue, not a fundamental flaw in LLMs. When integrated with real-time data sources, LLMs perform far more reliably.
      3. False Equivalence Between Consumer and Enterprise AI
      The article conflates consumer tools like ChatGPT with enterprise-grade systems. But enterprise AI is evolving:
      • The bottleneck isn’t AI capability—it’s strategic alignment, change management, and integration.
      • Enterprises with formal AI strategies report 80% success rates, compared to 37% without.
      • Nearly 6 in 10 organizations expect AI to act as a team member or supervisor within 12 months.

      RogerM) I agree with all of the above. I have used AI for many productive tasks at a business level, but I have also found it to be inefficient at many tasks. These are anecdotes only, anyway. The bigger story perhaps is not whether LLM limitations limit uptake, but whether energy and water usage will.

      4. Financial Comparisons
      Comparing AI infrastructure investment to Microsoft Office revenue is apples to oranges:
      • The $3 trillion investment spans cloud, cybersecurity, ERP, CRM, analytics, and more—not just LLMs.

      RogerM) Joe, let’s be very clear here. I am not comparing AI infrastructure investment to Microsoft Office revenue. I am comparing Office revenue to the amount we, as consumers, will be required to spend (US$3T) on AI tools in order for the AI infrastructure players to generate a sufficient, or even acceptable, return.

      • AI infrastructure supports automation, personalization, fraud detection, supply chain optimization, and other high-value enterprise functions.

      RogerM) yep, yep and yep. Spend big.

      • The ROI horizon for infrastructure is multi-decade, not annualized like consumer software subscriptions.

      RogerM) Markets don’t wait ‘multi-decade’. Markets hate slowing or low ‘asset turnover’ – with the exception of Amazon.

      5. The Real State of Enterprise AI in 2025
      Enterprise AI is not failing—it’s accelerating:

      RogerM) What does ‘accelerating’ mean in this context?

      • US enterprise AI adoption jumped from 3.7% in 2023 to 9.7% in 2025.

      RogerM) You need to define ‘adoption’ for this to have any relevance in a discussion of returns.

      • 72% of leaders now track ROI metrics like productivity and profitability.

      RogerM) Again, this is meaningless. More meaningful is how many demonstrate a positive ROI.

      • Enterprises are moving beyond pilots to operational integration and strategic deployment.

      RogerM) That’s great. How much are they willing to pay?

      Sources
      • Wharton School of Business, 2025 GenAI ROI Study
      • MIT Sloan Management Review, 2023 AI Investment Report
      • McKinsey & Company, 2025 State of AI Adoption
      • Gartner, 2025 CIO Survey on GenAI Scaling
      • IDC, Global IT Spending Forecast 2025
      • Australian Shareholders’ Association (ASA) public statements on Qantas FY25 remuneration
      • Reddit AI communities and OpenAI developer forums

      • Hi Roger,
        I appreciate the thoughtful response. I’ll try to respond to some of your points and then attempt to do the math you’re asking for: what level of end-customer AI spend would plausibly underwrite a $3T infra build.
        • “Scaling increased 5x; 30% now scaling” — without a source and a definition of “scaling” (e.g., >3 production use cases, >10% of employees, or >$X monthly spend), that stat isn’t decision-grade. We should see the original survey design before using it.
        • “Accelerating” — I’d define it in ways investors can track: YoY growth in paid AI seats, AI-compute consumed (GPU-hours), AI-derived revenue mix in cloud/SaaS segments, and net retention uplift attributable to AI features. Anything else is hand-wavy.
        • “Adoption” — for ROI discussions, adoption should mean: ≥1 production use case tied to a P&L owner, with a budgeted run-rate and observed change in cost or revenue. Pilots and “enabled tenants” don’t count.
        • “Tracking ROI is meaningless; show positive ROI” — agreed. Useful metric: percent of production use cases with independently validated ≥10% IRR vs next-best alternative.
        • “How much are they willing to pay?” — Today’s enterprise price points for copilots sit roughly in the $20–50 per user per month band at list (Microsoft Copilot, Google Duet, Salesforce Einstein add-ons, etc.), with discounts. That caps seat-driven revenue unless ARPU rises materially or usage-based pricing takes over.
        The math: how much AI spend is needed to underwrite $3T of infra?
        Let’s reverse the problem and treat it like a mini reverse-DCF.
        Assumptions (you can adjust these):
        • Capital invested (K): $3T by 2028 (Morgan Stanley).
        • Required return (WACC): 8–12% (hyperscalers are lower; smaller players higher).
        • Economic life of the assets (n): 7–10 years blended (accelerators 3–5, facilities/power longer).
        • Ecosystem net margin on AI end-customer spend (m): 20–35% across layers (chips, cloud, software, services). This is consolidated net, not gross.
        I’ll use a capital-recovery factor to convert K into the annual free cash flow (FCF) needed to both service the cost of capital and amortize principal over n years: CRF = r(1+r)^n / [(1+r)^n − 1]
        Scenarios (annual values, rounded):
        • WACC 8%, life 10y: CRF ≈ 0.149 → FCF needed ≈ $447B
        • WACC 10%, life 7y: CRF ≈ 0.205 → FCF needed ≈ $616B
        • WACC 12%, life 7y: CRF ≈ 0.219 → FCF needed ≈ $657B
        Translate FCF into required end-customer AI spend S, using ecosystem net margin m: S ≈ FCF / m
        Likely outcomes:
        • 8%/10y, 35% margin → $447B / 0.35 ≈ $1.28T per year (~1.9x 2024 global software spend of $675B)
        • 8%/10y, 25% margin → ≈ $1.79T (~2.7x software; ~1.05x total IT services at $1.7T)
        • 10%/7y, 35% margin → ≈ $1.76T (~2.6x software)
        • 10%/7y, 25% margin → ≈ $2.46T (~3.6x software; ~1.45x IT services)
        • 10%/7y, 20% margin → ≈ $3.08T (~4.6x software; ~1.8x IT services)
        • 12%/7y, 25% margin → ≈ $2.63T (~3.9x software)
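        The capital-recovery arithmetic above can be sketched in a few lines of Python. The WACC, asset-life and margin figures are the same illustrative assumptions used in the scenarios, not forecasts:

```python
# Reverse-DCF sketch of the scenarios above. K and the (WACC, life, margin)
# triples are the comment's illustrative assumptions, not forecasts.
K = 3.0e12  # capital invested by 2028 (Morgan Stanley estimate, US$)

def crf(r: float, n: int) -> float:
    """Capital-recovery factor: r(1+r)^n / ((1+r)^n - 1)."""
    g = (1 + r) ** n
    return r * g / (g - 1)

def required_spend(r: float, n: int, margin: float, capital: float = K) -> float:
    """Annual end-customer AI spend S = (capital * CRF) / ecosystem net margin."""
    return capital * crf(r, n) / margin

for r, n, m in [(0.08, 10, 0.35), (0.10, 7, 0.25), (0.12, 7, 0.25)]:
    fcf = K * crf(r, n)       # annual FCF needed to service and amortize K
    s = required_spend(r, n, m)
    print(f"WACC {r:.0%}, life {n}y, margin {m:.0%}: "
          f"FCF ~ ${fcf / 1e9:,.0f}B -> spend ~ ${s / 1e12:.2f}T")
```

        Adjusting the triples reproduces the other scenario rows (e.g. 10%/7y at a 20% margin gives roughly $3.1T/year of required spend).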
        SUMMARY
        • Even in friendlier cases (low WACC, long life, high net margin), you’re still asking for $1.3–1.8T/year in AI end-customer spend to underwrite $3T of capital.
        • That’s roughly 2–3x today’s entire global software market, or near all of global IT services.
        • If only half of the $3T is truly incremental (the rest is refresh/shared infra), required incremental AI spend still lands around $0.6–1.3T/year in the friendliest scenarios.
        Basic sanity check
        • If we tried to fund $2.0–2.5T through “copilot-like” seats:
        • 500M seats → $2.0T requires $4,000/seat/year (~$330/month) on average.
        • 1B seats → $2.0T requires $2,000/seat/year (~$165/month).
        • Today’s list is $20–50/month; even $100/month for broad populations is a stretch without extraordinary, provable ROI. So most of the revenue would have to come from:
        1. Usage-based AI compute (PaaS/IaaS)
        2. High-ARPU, outcome-priced verticals (contact centers, fraud/risk, supply chain)
        3. AI-native apps that replace entire categories, not features bolted onto existing SaaS
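        The seat arithmetic above is simple division, but making it explicit shows how sensitive the per-seat price is to the seat count (the $2.0T target and seat counts are the same hypothetical figures):

```python
# Sanity-check arithmetic for the seat-funding bullets above; the $2.0T
# revenue target and the seat counts are hypothetical illustrations.
def per_seat(annual_target: float, seats: float) -> tuple:
    """Return (per-seat yearly price, per-seat monthly price) in dollars."""
    yearly = annual_target / seats
    return yearly, yearly / 12

for seats in (500e6, 1e9):
    yearly, monthly = per_seat(2.0e12, seats)
    print(f"{seats / 1e6:,.0f}M seats: "
          f"${yearly:,.0f}/seat/year (~${monthly:,.0f}/month)")
```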
        Additive vs displacement
        • In the near term, much of AI spend will displace or compress other budgets (SaaS, analytics, RPA, QA, manual services). For the total to be additive by $1–2T/year, you’d need either:
        • Entirely new, high-value categories (autonomous agents doing revenue-generating work at scale), or
        • A massive price umbrella (i.e., sustained pricing power per token/task), or
        • Dramatic unit-cost declines that expand feasible use cases by an order of magnitude (so you win on volume).
        Energy and water as limiting factors
        • Power availability is already gating buildouts; timelines for new transmission, generation, and interconnects are multi-year.
        • Cooling is moving to direct-to-chip and immersion; water use is under mounting scrutiny in stressed regions. Both trends increase capex per MW and complicate siting.
        • These constraints don’t kill the thesis, but they push out ramps and keep unit costs higher for longer—working against the revenue-multiple story unless efficiency improves quickly.
        On the LLM reliability examples (Qantas/ASA, counting b’s)
        You’re right that a human must still check sources. For enterprise-grade reliability, the pattern that works is:
        • Retrieval-augmented generation with citations to authoritative systems of record
        • Tool use/function-calling for math, search, and verification
        • Structured outputs with validators, confidence scoring, and human-on-exception
        The conclusion: LLMs are a component inside a system; naked model answers to fresh-news questions will continue to hallucinate without retrieval and guardrails.
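        As a purely illustrative sketch of that human-on-exception routing — the `Answer` shape, the `route()` helper and the 0.8 threshold are invented for illustration, not any vendor’s API:

```python
# Purely illustrative sketch of the "structured outputs + validators +
# human-on-exception" pattern described above. The Answer shape, route()
# helper and 0.8 threshold are invented here, not any vendor's API.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    citations: list    # systems-of-record the claim must trace back to
    confidence: float  # validator-assigned score in [0, 1]

def route(answer: Answer, threshold: float = 0.8):
    """Auto-approve only cited, high-confidence answers; escalate the rest."""
    if answer.citations and answer.confidence >= threshold:
        return ("auto", answer.text)
    return ("human_review", answer.text)

# A cited, confident answer passes; an uncited one escalates regardless of score.
print(route(Answer("ASA recommends voting against.", ["asa_fy25_notice"], 0.92)))
print(route(Answer("ASA recommends voting for.", [], 0.95)))
```

        The point is that the model’s raw text is never the final output: a cheap, deterministic check gates what reaches the user, and everything else goes to a human.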
        What would have to be true for $3T to “pencil” without a major multiple reset?
        • Hardware and model efficiency improve 10–20x over 3–4 years (so cost per task shrinks and viable use cases explode).
        • Clear outcome-priced killer apps emerge (e.g., autonomous agents that reliably deflect calls, collect cash, or sell product) with ARPU measured in hundreds per seat or thousands per workload.
        • Utilization stays high (no stranded GPU pools); supply/demand balance prevents a race-to-zero in $/token.
        • Power constraints ease via on-site generation, long-term PPAs, or nuclear; siting friction declines.
        What to watch for so as to validate or falsify the “adjustment” thesis
        • Cloud segment disclosure: AI-attributed revenue and gross margin trend (AWS/Azure/GCP).
        • Price curves: $/1k tokens and effective $/task; are prices dropping faster than costs?
        • Paid seat counts and ARPU for copilots and vertical AI apps (disclosures, if any).
        • GPU utilization and order pushouts/cancellations from hyperscalers and large DC operators.
        • Grid interconnect queues and MW-permit velocity in key regions.
        • Enterprise case studies with audited, realized ROI (not modeled), especially outside dev-tools and support.
        Answering your direct questions
        • “What does ‘accelerating’ mean?” Measurable YoY growth in paid AI revenues, AI-compute consumed, and mix shift toward production workloads (not pilots), with stable or rising unit economics.
        • “Define adoption.” Production use with P&L ownership and observed financial impact; not trials or enabled tenants.
        • “How much are enterprises willing to pay?” Today: roughly $20–50/seat/month for general copilots; much higher on usage or outcome pricing for specific, high-value workflows. To fund $1.5–3T/year, pricing and/or volume must be far above current seat-price norms, or shift heavily to consumption and outcome contracts.
        CONCLUSION
        I agree your intuition holds: under reasonable assumptions, the end-customer AI spend needed to underwrite $3T of infrastructure lands around $1.3–3.1T per year, which is roughly 2–5x today’s entire global software market. That doesn’t mean it can’t happen over a long horizon, but it does mean current multiples likely presume longer asset lives, materially better margins, major efficiency gains, and/or genuinely new, large, outcome-priced categories. If those don’t materialize on schedule, an “adjustment” is a rational expectation.

      • Thanks for taking the time to show your workings. I am unsurprised you’ve reached the same conclusion as others who are worried we are in a hype-driven boom. While detailed workings can give us some comfort, my three decades of experience have time and time again demonstrated that attempts to be precise produce results that tend to be ‘precisely wrong’. You are wise to accept that all we can do is make a best guess. It’s better to be roughly right, and I think you are. Check out Nvidia’s massive jump in receivables, which are growing much faster than revenue – it’s another sign there’s some hot air being breathed into prices. Of course, time will tell. And while I wait and see, I am taking a little profit and rebalancing/diversifying into uncorrelated alternatives that are market neutral or income-focused. I am still exposed to equity markets, applying rebalancing to weightings that reflect my risk tolerance and time horizon.
