
The Emperor Has No ROI

Or: How to Cut Headcount and Multiply Costs


The Pitch vs. The Spreadsheet

You’ve seen the headlines. You’ve attended the boardroom presentations. Some consultant with expensive slides told you AI will “transform your workforce” and you nodded along while mentally calculating severance packages.

Before you start drafting those redundancy letters, let’s talk numbers. Not the shiny projected ones. The real ones.


95% of AI Projects Return Exactly Nothing

MIT’s 2025 report “The GenAI Divide” dropped a truth bomb: despite $30-40 billion in enterprise AI investment, 95% of organizations report zero measurable return on their AI pilots.

Not negative return. Zero.

The report, based on 150 executive interviews and analysis of 300 AI deployments, found that most GenAI systems don’t retain feedback, adapt to context, or improve over time. They’re expensive parrots with excellent grammar.

“Most GenAI systems do not retain feedback, adapt to context, or improve over time.” - MIT Media Lab, July 2025

Only 5% of companies are extracting value. And those winners share a common trait: they didn’t try to replace humans. They augmented specific, bounded tasks.


The Jobs Are Disappearing… But Why?

UK entry-level jobs have dropped 32% since ChatGPT launched in November 2022. Retail positions alone fell by 78%. IT entry-level roles down 55%.

Looks damning, right?

Not so fast. Even the Institute of Student Employers admits it’s “too early for AI to impact graduate job numbers.” The real culprits? Rising employer National Insurance contributions, flat economic growth, and post-pandemic labor market corrections.

Correlation isn’t causation. Companies are blaming AI for cuts they’d make anyway. It’s PR-friendly to say “we’re automating” rather than “we’re just cutting costs.”


The Economics Are Bonkers

Let’s visualize how the money actually flows:

flowchart TD

    subgraph Enterprises
      E[Actual Enterprise<br>Revenue Outside<br>Hyperscalers] -->|<$1B| F[???]
    end

    subgraph Hyperscalers
      A[OpenAI] -->|Buys chips| B[NVIDIA]
      B -->|Invests in| A
      A -->|Commits $300B| C[Oracle]
      C -->|Stock jumps $200B<br>on announcement| C
      A -->|$22.4B| D[CoreWeave]
      D -->|Depends on A<br>paying| D
    end

The infrastructure costs are staggering:

IBM’s CEO Arvind Krishna put it bluntly: the industry has announced roughly $8 trillion in planned capacity. That would require $800 billion in annual profit just to service the cost of capital.

Current AI industry profits: approximately zero.
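The arithmetic behind Krishna’s figure is simple. Here is a back-of-envelope sketch, assuming the implied ~10% blended cost of capital (my assumption, not a number from the report):

```python
# Back-of-envelope check on the capacity math (assumed ~10% cost of capital, illustrative only)
planned_capacity = 8e12            # ~$8 trillion in announced AI infrastructure plans
cost_of_capital = 0.10             # assumption: blended ~10% annual cost of capital
required_profit = planned_capacity * cost_of_capital
print(f"Annual profit needed just to service capital: ${required_profit / 1e9:,.0f}B")
# -> Annual profit needed just to service capital: $800B
```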


The Metaverse Playbook (But Worse)

Remember when Meta was going to revolutionize work with virtual offices? After $73+ billion in losses from Reality Labs, they’re now cutting 30% of metaverse budgets and pivoting to… AI.

And it’s getting worse. In January 2026, Meta shut down Horizon Workrooms entirely - their flagship “future of work” VR collaboration tool - effective February 16th. They’re also killing business sales of Quest headsets and recommending users switch to… Microsoft Teams and Zoom. The very platforms they claimed to be disrupting.

graph TD

    subgraph "AI Industry 2023-?"
      A1[Vision: AGI Everywhere] --> A2[$1.4T in Commitments]
      A2 --> A3[95% Zero ROI]
      A3 --> A4[???]
    end

    subgraph "Meta's Metaverse 2021-2025"
      M1[Vision: Future of Work] --> M2[$73B+ Losses]
      M2 --> M3[Horizon Worlds<br>~100K users]
      M3 --> M4[Mass Layoffs 2026]
      M4 --> M5[Pivot to AI]
    end

But here’s why AI’s bubble is potentially worse:

| Factor | Metaverse | AI |
| --- | --- | --- |
| Escape hatch | Meta still has ads empire | OpenAI, Anthropic = pure AI plays |
| Systemic exposure | One company’s problem | Microsoft, Google, Amazon ALL committed trillions |
| Revenue claims | Openly speculative R&D | Claiming near-term profitability |
| Correction timeline | Slow fade over years | Earnings pressure NOW |

Meta could absorb $50B+ in metaverse losses because Instagram prints money. OpenAI has no backup plan.


What LLMs Actually Are (And Aren’t)

Let’s be precise about the technology you’re betting your workforce on:

What they do: Predict the most statistically likely next word based on training data. Do this very, very well.

What they don’t do: Retain feedback, adapt to context, or improve over time - the exact gaps the MIT report documented.
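To make “predict the next word” concrete, here’s a toy sketch - a hand-written probability table, nothing learned from data - of what that prediction step amounts to:

```python
# Toy next-token prediction: pick the statistically likeliest continuation.
# Probabilities are invented for illustration; a real LLM estimates them from training data.
next_token_probs = {
    ("the", "cat", "sat", "on"): {"the": 0.62, "a": 0.21, "my": 0.09, "mars": 0.01},
}

def predict_next(context: tuple) -> str:
    candidates = next_token_probs[context]
    return max(candidates, key=candidates.get)  # the most likely word - no understanding involved

print(predict_next(("the", "cat", "sat", "on")))  # -> "the"
```

Scale that table up to trillions of learned parameters and you get something remarkably fluent. It is still a prediction machine, though, not a reasoner.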

The improvement curve has flattened. Post-GPT-4, benchmark gains have been marginal. The transition from GPT-3.5 to GPT-4 was massive. From GPT-4 to GPT-5? According to multiple analyses, plateau territory.

As one researcher noted: “The easy gains from merely scaling model size have been largely exhausted.”


The Diseconomy of Scale

Traditional SaaS scales beautifully: more users, same infrastructure, bigger margins.

AI is inverted: more users means more queries, more GPU time, and more cost - expenses scale with usage instead of flattening out.

Every time someone asks follow-up questions to fix a wrong answer, OpenAI loses money. At scale, this is a margin-destroying machine.
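A toy comparison of the two cost structures - every number here is an illustrative assumption, not a vendor figure:

```python
# Toy unit economics: fixed-cost SaaS vs. per-query inference (all numbers are assumptions).

def saas_margin(users, price=20.0, fixed_infra=50_000.0):
    """Classic SaaS: infrastructure is largely fixed, so margins improve with scale."""
    revenue = users * price
    return (revenue - fixed_infra) / revenue

def llm_margin(users, price=20.0, queries_per_user=300, cost_per_query=0.08):
    """Inference-heavy AI: every query (and every follow-up to fix a wrong answer) burns GPU time."""
    revenue = users * price
    compute = users * queries_per_user * cost_per_query
    return (revenue - compute) / revenue

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9,} users   SaaS margin: {saas_margin(n):7.1%}   AI margin: {llm_margin(n):7.1%}")
```

Under these made-up numbers the SaaS margin climbs toward 100% while the AI margin stays flat - and negative - no matter how many users sign up. The exact figures don’t matter; the shape of the curve does.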


The Trigger: Q4 Earnings

Here’s what’s coming:

Companies have been vague about actual AI revenue contributions. “AI-powered” features get announced. Revenue attribution stays fuzzy.

Starting this quarter, earnings calls require specificity. Investors will ask: “What’s the actual revenue from AI, not AI-adjacent or AI-enabled?”

If the 95% failure rate shows up in margin compression, the narrative breaks. And unlike the metaverse, which one company could quietly wind down, a recalibration hits every major tech firm simultaneously.


What Smart Leaders Actually Do

So you want to use AI without joining the 95% failure club? Here’s what the 5% do differently:

  1. Self-host or stay abstracted. The original research shows external partnerships achieve 66% deployment success vs. 33% for internal builds - but there’s a catch. API providers retire models on their schedule, not yours. When OpenAI deprecates your model, you get 90 days to migrate, re-evaluate, and pray your prompts still work. Self-hosted open models (Llama, Mistral, Qwen) let you freeze what works. A validated 2023 model running bounded tasks is better than a 2026 frontier model you haven’t evaluated. If you must use APIs, wrap them in an abstraction layer so switching providers is a config change, not a rewrite (see the sketch after this list).

  2. Augment bounded tasks. Data entry, first-draft writing, code scaffolding. Not strategic thinking. Well-designed workflows with defined inputs and predictable outputs can run on stable, older models indefinitely - no forced upgrades, no surprise behavior changes.

  3. Budget for evaluation. Every model change - even the same architecture with different quantization - requires full evaluation. New model version? Eval. Prompt tweak? Eval. Provider forces migration? Emergency eval. If you can’t systematically test your AI workflows, you can’t safely run them in production. This cost almost never appears in vendor ROI projections.

  4. Measure business outcomes. Not “AI adoption rate.” Not “prompts per day.” Actual P&L impact.

  5. Keep the humans. The companies seeing value use AI to make workers 20% more productive. They don’t fire the workers.

  6. Budget for reality. That junior employee you’re considering replacing costs maybe €40K/year. An enterprise AI implementation with integration, customization, and ongoing costs? Substantially more, with no guarantee it works.
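Here’s a minimal sketch of the abstraction layer from point 1. Everything in CONFIG is a placeholder - the point is that moving between vendors, or from a hosted API to a self-hosted endpoint, becomes a config edit rather than a rewrite, as long as both sides speak an OpenAI-compatible protocol (which vLLM and llama.cpp servers do):

```python
# Minimal provider-abstraction sketch. URLs, model names, and keys are placeholders.
from dataclasses import dataclass
from typing import Protocol

import requests


class Completion(Protocol):
    def complete(self, prompt: str) -> str: ...


@dataclass
class OpenAICompatBackend:
    """Any OpenAI-compatible endpoint: a hosted API or a self-hosted vLLM / llama.cpp server."""
    base_url: str
    model: str
    api_key: str = ""

    def complete(self, prompt: str) -> str:
        resp = requests.post(
            f"{self.base_url}/v1/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


# Swapping providers is a config change, not a code change.
CONFIG = {
    "hosted": {"base_url": "https://api.your-provider.example", "model": "frontier-model-of-the-month"},
    "self_hosted": {"base_url": "http://localhost:8000", "model": "llama-3.1-8b-instruct"},
}

def get_backend(name: str) -> Completion:
    return OpenAICompatBackend(**CONFIG[name])
```

Your application code only ever calls get_backend(...).complete(...). When a provider deprecates a model, the blast radius is one config entry plus a re-run of your evaluation suite (point 3), not a rewrite.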


The Bottom Line

You can fire your entry-level staff and announce an AI transformation. It’ll look innovative in the press release.

Twelve months later, when the measurable ROI still rounds to zero, the model you built on has been deprecated, and the integration bill has outgrown the salaries you saved, you’ll be hiring consultants to help you figure out what went wrong.

The consultants will probably use AI to write their findings.

Or you could work with teams that have been building stable, self-hosted AI systems since before the hype peaked - designing for predictable outputs and operational control rather than chasing the latest model announcement. Some of us saw this coming.


TL;DR for Those Who Skipped to the End

Before you replace your workforce, replace your hype with arithmetic.


This article contains no AI-generated hallucinations. Just uncomfortable mathematics.

#Ai #ROI #En