
The $100B Deal on Ice: Why AI’s Real Moat Is Megawatts, Not Models

February 1, 2026
Tags: Tech, AI, AI Infrastructure, Data Centers, GPUs, Megawatts, OpenAI Funding, Nvidia, Project Finance, Cloud, Hyperscalers, Supply Chain, GCC, Oman, Energy, Risk

Sometimes the headline isn’t about “how much money” — it’s about how many megawatts.

What keeps me up at night in AI right now isn’t model quality. It’s that every serious conversation ends at the same wall: power.

And right when people started treating nine-figure and ten-figure rounds like background noise, the coldest signal hit: a rumored $100B mega-deal between the world’s top GPU maker and the most famous AI lab… is “on ice.”

Not gossip. A reset of reality.


What actually happened (with dates + numbers)

On September 22, 2025, OpenAI publicly announced a strategic partnership focused on building and deploying up to 10 gigawatts (GW) of AI data center capacity, with an investment plan that could reach $100B over time as each gigawatt comes online. The statement also pointed to the first gigawatt targeted for the second half of 2026, aligned with NVIDIA’s then-upcoming platform roadmap (Rubin / Vera Rubin).

In January 2026, reports described the $100B “mega-deal” as paused/on ice, and — crucially — suggested it was never a binding, finalized agreement in the way social media framed it.

Then on January 31, 2026, Jensen Huang publicly pushed back on the “unhappy” narrative, framing the planned investment as huge, while downplaying the idea it would necessarily exceed $100B.

Parallel threads matter here, because they explain why this isn’t a simple “deal fell apart” story:

  • January 29, 2026: reports surfaced about talks for an investment that could be up to $50B from Amazon.
  • Around the same period: coverage said SoftBank could be exploring an additional $30B commitment, with interest from major funds including Middle East capital (e.g., Abu Dhabi-linked vehicles).

And then there’s the number that makes every other figure look small: Sam Altman has discussed AI infrastructure commitments on the order of $1.4 trillion over time, and the idea that each gigawatt can demand tens of billions in capital depending on design, supply chain, and power delivery.

That’s the real backdrop.


Why this matters more than it sounds

Because we’ve crossed into a new phase:

AI isn’t just software anymore. It’s infrastructure — closer to power plants and ports than apps and landing pages.

What changed isn’t that models got smarter. What changed is the power structure:

  • Whoever controls the chips can dictate financing terms.
  • Whoever controls the financing can dictate where capacity gets built.
  • Whoever controls the electricity decides who gets to run the future.

A lot of “investment in a company” headlines are, in practice, financing for fleets of data centers — with capacity reservation rights, priority access, long-term supply commitments, and implicit risk-sharing across years.

Here’s the non-obvious part: putting a mega-deal “on ice” doesn’t necessarily mean the relationship is broken. It often means the parties are re-pricing risk and restructuring the deal around the only thing that matters at that scale:

  • Is this equity?
  • Is this project finance?
  • Is this long-term capacity leasing with take-or-pay clauses?
  • Who eats the downside if demand cools, regulation tightens, or technology shifts faster than expected?

Those questions separate “AI hype” from “AI utilities.”


Compute reality check: megawatts aren’t branding — they’re physics

Let’s translate 10GW into something real:

  • 10GW = 10,000 megawatts running continuously.
  • Over a year: 10 GW × 8,760 hours = 87,600 GWh ≈ 87.6 terawatt-hours (TWh) annually.

That’s not “a big company.” That’s the kind of load that can resemble a nation-level footprint in energy terms.

Now the capital side: if “each gigawatt” can require $40B+ in capital depending on the buildout, then 10GW implies a theoretical envelope in the hundreds of billions — before you count land, grid upgrades, cooling infrastructure, permitting, redundancy, and security.
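
The arithmetic above is worth making explicit. This is a back-of-envelope sketch using the article's own illustrative figures (the $40B-per-gigawatt number is an assumption for scale, not a quoted contract price):

```python
# Back-of-envelope math for the 10GW scenario described above.
GW = 10                     # planned capacity, gigawatts
HOURS_PER_YEAR = 8_760
CAPEX_PER_GW_USD_B = 40     # assumed capital cost per gigawatt, in $B

annual_twh = GW * HOURS_PER_YEAR / 1_000     # GWh -> TWh
capex_envelope_b = GW * CAPEX_PER_GW_USD_B   # theoretical envelope, in $B

print(f"Annual energy at full load: {annual_twh:.1f} TWh")    # 87.6 TWh
print(f"Theoretical capital envelope: ${capex_envelope_b}B")  # $400B
```

Even with generous rounding, the envelope lands in the hundreds of billions before grid, cooling, and permitting costs.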

This is why a single $100B round, even if it happens, doesn’t “solve” the roadmap by itself. The market naturally migrates toward infrastructure-native structures:

  • project finance and large debt facilities,
  • consortium deals,
  • sovereign co-investment,
  • long-term capacity purchase agreements (think energy-style take-or-pay, but for compute).
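
To make "take-or-pay" concrete, here's a minimal sketch with invented numbers (not drawn from any real contract): the buyer pays for the full committed capacity whether or not it is used, so under-utilization still burns cash.

```python
def take_or_pay_cost(committed_gpu_hours: float,
                     used_gpu_hours: float,
                     price_per_gpu_hour: float) -> float:
    """Buyer pays for max(committed, used) hours; unused commitment is sunk."""
    billable = max(committed_gpu_hours, used_gpu_hours)
    return billable * price_per_gpu_hour

# Commit 1M GPU-hours at $2/hr but only use 600k: you still pay $2M.
print(take_or_pay_cost(1_000_000, 600_000, 2.0))  # 2000000.0
```

That asymmetry is exactly why "who eats the downside" dominates the negotiation.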

And money isn’t the only constraint:

  • Advanced GPU supply is still tight, and dependency on a few manufacturing chokepoints (e.g., leading-edge fabrication and advanced packaging) makes delays systemic, not local.
  • Cooling is no longer a secondary design detail; AI rack densities force new thermal architectures.
  • Time is the silent killer: even with capital ready, bringing 1GW online is rarely a “quarterly plan.” Depending on location, permitting, grid availability, and construction capacity, it can be 18–36 months or more across multi-site programs.

My take: the pause is healthy — and the message is brutal

A mega-deal cooling off (even temporarily) is healthy because it kills the fantasy that “funding solves everything.”

Physics doesn’t negotiate.

AI today feels like early telecom. The winners weren’t just the companies making the nicest handset — they were the ones building the network, setting the standards, and collecting value from traffic flowing through it.

My prediction for 2026:

More AI deals will formally become infrastructure deals — project finance, long-duration capacity contracts, stricter operating and security standards, and far more investor skepticism toward commitments that aren’t backed by visible cash flows.

The risk is obvious: overbuilding training capacity that doesn’t find enough paying demand, or business models that assume infinite growth and stable pricing.

The opportunity is equally obvious: teams that master unit economics and efficiency will build companies that make money while others burn capital like fuel.


What founders/teams should do next week (real, tactical moves)

  1. Turn “AI” into operational numbers: cost per 1,000 inferences, cost per training hour, monthly run-rate, and forecast sensitivity. If you can’t measure it, you’re not managing it.

  2. Separate training from inference: treat them as different businesses. Inference can be aggressively optimized (batching, caching, quantization, routing). Training is slower, rarer, and far more expensive.

  3. Plan for multi-provider reality: not for ideology — for continuity. Capacity shortages and pricing swings are real.

  4. Negotiate compute like rent: lock what you truly need, avoid oversized commitments, and prioritize terms: scale pricing, outage penalties, data locality, portability rights.

  5. Prove value with smaller models first: most GCC products don’t need frontier-scale training. They need a reliable model aligned to your domain, language, and compliance constraints.

  6. Build AI observability: track latency, P95 tail behavior, cost by endpoint, error patterns, and drift. “It works” isn’t a metric.

  7. Treat cloud IAM as the real security perimeter: most real-world incidents aren’t sci-fi model hacks — they’re leaked keys, misconfigured permissions, and sloppy secrets handling.

  8. Budget for price volatility: assume periods of higher cost and periods of sudden discounts when new capacity lands. Make your unit economics survive both.

  9. Tie AI outputs to business outcomes: reduce time, reduce cost, reduce risk — in numbers. “Powered by AI” is not a moat.

  10. Prepare for compliance acceleration: data residency, audit trails, retention policies, deletion rights. Regulation tightens as infrastructure scales.

  11. Invest in data pipelines more than model churn: the biggest gains I see in real companies come from data quality and workflow design, not model swapping.

  12. Design an “economy mode”: a fallback pathway if capacity is constrained — smaller model, stricter quotas, degraded-but-safe functionality.
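
Several of the moves above (unit economics, multi-provider fallback, economy mode) can be sketched together. Everything below is hypothetical: the model names, prices, and the 90% budget threshold are illustrative assumptions, not a recommendation of specific values.

```python
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    cost_per_1k_inferences_usd: float  # move 1: measure cost per 1,000 calls

# Hypothetical fallback order, cheapest-capable first (moves 3 and 12).
TIERS = [
    ModelTier("small-local", 0.05),
    ModelTier("mid-cloud", 0.40),
    ModelTier("large-cloud", 2.50),
]

def pick_tier(monthly_spend_usd: float, budget_usd: float,
              needs_frontier_quality: bool) -> ModelTier:
    """Economy mode: once spend nears budget, route to the cheapest tier."""
    if monthly_spend_usd >= 0.9 * budget_usd:
        return TIERS[0]                      # degraded-but-safe fallback
    if needs_frontier_quality:
        return TIERS[-1]
    return TIERS[1]

print(pick_tier(24_000, 25_000, True).name)  # near budget -> "small-local"
print(pick_tier(5_000, 25_000, True).name)   # headroom -> "large-cloud"
```

The point isn't this exact routing logic; it's that the fallback decision is written down, testable, and tied to a budget rather than made ad hoc during an outage.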

A simple policy pattern that forces discipline:

ai_runtime_policy:
  environments:
    prod:
      max_monthly_inference_usd: 25000
      max_latency_ms_p95: 1200
      model_fallback_order: ["small-local", "mid-cloud", "large-cloud"]
      secrets_rotation_days: 30
      data_residency: "gcc"
    staging:
      max_monthly_inference_usd: 3000
      model_fallback_order: ["small-local", "mid-cloud"]
      data_residency: "gcc"
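
A policy like this only forces discipline if something enforces it at runtime. Here's a minimal sketch of what that check might look like; the prod section is inlined as a plain dict to stay dependency-free (in practice you'd load the YAML with a parser such as PyYAML), and the function signature is an assumption, not an established API.

```python
# Mirror of the prod policy above as a plain dict.
PROD_POLICY = {
    "max_monthly_inference_usd": 25_000,
    "max_latency_ms_p95": 1_200,
    "model_fallback_order": ["small-local", "mid-cloud", "large-cloud"],
    "data_residency": "gcc",
}

def enforce(policy: dict, spend_usd: float, p95_ms: float,
            region: str) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if spend_usd > policy["max_monthly_inference_usd"]:
        violations.append("inference budget exceeded")
    if p95_ms > policy["max_latency_ms_p95"]:
        violations.append("p95 latency above limit")
    if region != policy["data_residency"]:
        violations.append("data residency mismatch")
    return violations

print(enforce(PROD_POLICY, 26_000, 900, "gcc"))  # ['inference budget exceeded']
```

Wire a check like this into CI or a nightly job and budget overruns become alerts instead of end-of-quarter surprises.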

Oman/GCC implications: massive upside — with real constraints

The GCC has a rare advantage: it can finance infrastructure and move fast when leadership aligns. But “advantage” only matters if you respect the bottlenecks.

1) Power and the grid are the actual gating factor

When you hear gigawatts, don’t think “a big data center.” Think a power-sector-scale initiative.

For Oman specifically, publicly available planning/forecast references point to peak demand in the single-digit gigawatt range in recent years and growth projections into the next decade. That means any serious “sovereign compute” plan must be integrated with generation, transmission, and long-term grid resilience — not treated as an IT project.

2) Connectivity is a strategic lever

Oman’s submarine cable geography and regional connectivity can be a real differentiator — not just for latency, but for positioning as a data corridor if paired with data centers, clear governance, and security standards.

3) Cloud localization changes the game

Additional in-country cloud regions and stronger local hosting options reduce latency, improve compliance posture, and unlock use cases in finance, healthcare, and government that can’t tolerate cross-border ambiguity.

4) The market isn’t 10GW — but it doesn’t need to be

The real near-term GCC win isn’t competing on mega-scale training overnight. It’s building regional inference capacity: Arabic-first products, enterprise-grade compliance, and secure, low-latency deployment close to users.

Opportunity: become the region’s most trusted inference layer — faster deployment, better governance, lower latency, stronger security, and predictable pricing. Risk: jumping straight into frontier-scale training without power, cooling, procurement maturity, and long-term contracted demand — that’s how you burn capital before the GPUs even warm up.


FAQs

Q: Does “the deal is on ice” mean the AI market is collapsing? No. It usually signals restructuring — risk allocation, financing mechanics, and capacity terms getting renegotiated for infrastructure-scale reality.

Q: Is a $100B round realistic? Possible, but it won’t behave like traditional venture capital. It looks more like a blended structure: equity + debt + project finance + long-term capacity commitments.

Q: Why would sovereign funds care? Because compute is becoming strategic infrastructure — tied to competitiveness, security, data sovereignty, and economic positioning.

Q: What’s the most important move for GCC product teams right now? Get inference costs under control and prove business value per dollar of compute. If you can’t defend unit economics, you can’t defend your company.

Q: Is the next race about better models or bigger data centers? Both — but whoever controls data center capacity effectively sets the ceiling on who can train what in the first place.


Closing

I don’t see a “deal on ice” as entertainment. I see it as a giant warning sign: AI has entered its utilities era.

If you’re building a company right now, don’t only ask: “Which model should we use?” Ask: “How much capacity do we really need, how do we reserve it responsibly, and what happens if pricing swings or supply tightens?”

The teams that answer those questions early won’t just survive the next cycle — they’ll own the boring, profitable layer that everyone else depends on.

If this hit home, share it with the person who controls cloud budgets or is shipping AI features this quarter — because the next board conversation won’t be about “cool demos.” It’ll be about who owns capacity, and who pays the bill.


Sources

  • Reuters — “Nvidia CEO Huang denies he is unhappy with OpenAI, says ‘huge’ investment planned” Source: reuters.com
  • The Wall Street Journal — “The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice” Source: wsj.com
  • Financial Times — “SoftBank close to agreeing additional $30bn investment in OpenAI” Source: ft.com
  • OpenAI — “OpenAI and NVIDIA announce strategic partnership…” Source: openai.com
  • Reuters — “Amazon in talks to invest as much as $50 billion in OpenAI, source says” Source: reuters.com
  • Reuters — “Altman touts trillion-dollar AI vision as OpenAI restructures…” Source: reuters.com
  • Oman News Agency — “Omantel invests over RO 500 million in digital infrastructure” Source: omannews.gov.om
  • DataCenterDynamics — “Google announces plans for subsea cable linking Oman…” Source: datacenterdynamics.com
  • Global Transmission — “Oman: Planned Generation and Transmission Capacity” Source: globaltransmission.info
  • Emirates NBD Research — “UAE data center capacity to surge 165% by 2028” Source: emiratesnbdresearch.com
