Foxorox AI Market Forecast – 2025-12-06

AI-generated analysis combining predictive modeling, mega-cap tech sentiment and deep AI datacenter economics.

A simple calculation suggests that some of the companies mentioned below, all participants in the AI race, could run into serious cash-flow problems within the next one to two years. A crash of the current AI hype is a real possibility.

πŸ“Š Market Focus: US Tech Leaders

Microsoft (MSFT)

Amazon (AMZN)

Alphabet A (GOOGL)

NVIDIA (NVDA)

Also in focus: Google, Amazon, X, Tesla, Meta, Oracle, Microsoft, Nebius, and BlackRock funds.


Capital markets are increasingly pricing not only earnings, but also who can afford to build and operate multi-gigawatt AI datacenter infrastructure. Below is a full, bottom-up breakdown of what it means, in hard numbers, to run a 1-GW AI datacenter β€” the kind of asset that sits behind the tickers above.

🧠 Building a 1-Gigawatt AI Datacenter

Bottom-up cost estimation from GPU power β†’ full facility β†’ operational cost β†’ required revenue


1. Base assumptions

We model a large-scale AI-training datacenter using modern accelerators like NVIDIA H100-class GPUs:

| Item | Assumption |
| --- | --- |
| Power per GPU | ~700 W (0.7 kW) |
| Price per GPU | ~30,000 USD |
| Target compute power | 1 GW of IT load (GPUs only) |
| Datacenter efficiency | PUE = 1.2 (high-end liquid cooling) |
| Electricity price (wholesale, long-term) | ~0.15 EUR/kWh |
| GPU share of total CAPEX | ~50% (rough industry average for AI datacenters) |

We calculate from 1 GW IT load, meaning GPU power only. Facility power will be higher.

2. Number of GPUs required for 1 GW

GPU count = 1,000,000 kW / 0.7 kW/GPU β‰ˆ 1,428,571 GPUs

πŸ‘‰ ~1.43 million GPUs.

This scale aligns with public mega-cluster discussions (e.g. β€œmillion GPU clusters”).
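The GPU-count arithmetic above can be sketched in a few lines of Python, using the assumptions from the table in section 1:

```python
# Assumptions from section 1 (approximate)
P_IT_KW = 1_000_000   # 1 GW of IT load, expressed in kW
P_GPU_KW = 0.7        # ~700 W per H100-class GPU

# Number of GPUs needed to fill 1 GW of IT load
n_gpu = P_IT_KW / P_GPU_KW
print(f"GPUs required: {n_gpu:,.0f}")  # prints "GPUs required: 1,428,571"
```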

3. CAPEX β€” GPU hardware cost

GPU CAPEX β‰ˆ 1,428,571 Γ— 30,000 USD β‰ˆ 42.9 billion USD

πŸ‘‰ Just the GPUs: ~43 billion USD

4. Total infrastructure cost

A datacenter needs much more than GPUs:

Industry ratio: GPUs β‰ˆ 50% of total CAPEX for AI-focused hyperscale builds.

Total CAPEX β‰ˆ 2 Γ— 43 B USD β‰ˆ 86 B USD

πŸ‘‰ Estimated total build cost: ~80–90 billion USD

(Lower PUE or cheaper GPUs can drop this, advanced high-redundancy build could raise it.)
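The CAPEX math in sections 3–4 follows directly from those assumptions; a minimal Python sketch:

```python
# GPU CAPEX and total build cost (sections 3-4 assumptions)
N_GPU = 1_000_000 / 0.7    # ~1.43M GPUs for 1 GW of IT load
GPU_PRICE_USD = 30_000     # ~30k USD per GPU
GPU_CAPEX_SHARE = 0.5      # GPUs ~50% of total CAPEX (industry rough average)

gpu_capex = N_GPU * GPU_PRICE_USD           # ~42.9B USD
total_capex = gpu_capex / GPU_CAPEX_SHARE   # ~85.7B USD
print(f"GPU CAPEX:   {gpu_capex / 1e9:.1f}B USD")    # prints "GPU CAPEX:   42.9B USD"
print(f"Total CAPEX: {total_capex / 1e9:.1f}B USD")  # prints "Total CAPEX: 85.7B USD"
```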

5. Facility power requirement (including cooling)

IT load = 1 GW
With PUE = 1.2:

Total power consumption = 1 GW Γ— 1.2 = 1.2 GW

Approximate split: ~1.0 GW for the IT load (GPUs and servers) and ~0.2 GW for cooling, power conversion, and other facility overhead.

6. Annual electricity consumption & cost

Energy/year = 1.2 GW Γ— 8760 h β‰ˆ 10.5 TWh/year

Annual electricity cost:

10.5Γ—10⁹ kWh Γ— 0.15 EUR/kWh β‰ˆ 1.6 billion EUR/year

πŸ‘‰ Electricity alone β‰ˆ 1.5–2.0 billion EUR per year

If energy price is 0.10–0.20 EUR/kWh, cost scales proportionally.
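The energy calculation in sections 5–6 can be reproduced with a short Python sketch (same assumed PUE and energy price):

```python
# Annual facility energy and electricity cost (sections 5-6 assumptions)
P_IT_GW = 1.0                  # IT load
PUE = 1.2                      # power usage effectiveness
HOURS_PER_YEAR = 8760
PRICE_EUR_PER_KWH = 0.15       # wholesale long-term assumption

facility_gw = P_IT_GW * PUE                            # 1.2 GW total draw
energy_twh_year = facility_gw * HOURS_PER_YEAR / 1000  # GWh -> TWh
# 1 TWh = 1e9 kWh, so billions of EUR = TWh x EUR-per-kWh
cost_beur_year = energy_twh_year * PRICE_EUR_PER_KWH
print(f"{energy_twh_year:.1f} TWh/year, ~{cost_beur_year:.1f}B EUR/year")
# prints "10.5 TWh/year, ~1.6B EUR/year"
```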

7. Operational expenses (OPEX beyond electricity)

a) Personnel

Assume ~500 employees (engineers, ops, tech, security, support)

500 Γ— 100,000 USD/year β‰ˆ 50 million USD/year

πŸ‘‰ ~50 million USD/year for staffing

Tiny compared to power & hardware churn.

b) Hardware maintenance & replacement

Hyperscale rule of thumb:

4–6% of hardware value per year for replacements/maintenance.

If IT hardware (GPUs plus servers and networking) accounts for ~60B USD of the total CAPEX:

0.05 Γ— 60 B USD β‰ˆ 3 B USD/year

πŸ‘‰ Maintenance & refresh: ~3 billion USD/year

Covers dead GPUs, swapping servers, new generations every 3–5 years.
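In Python, using the 5% midpoint of that rule of thumb:

```python
# Hardware maintenance & refresh budget (section 7b assumptions)
IT_HARDWARE_USD = 60e9   # assumed IT-hardware share of total CAPEX
MAINT_RATE = 0.05        # midpoint of the 4-6%/year rule of thumb

maint_per_year = MAINT_RATE * IT_HARDWARE_USD
print(f"Maintenance & refresh: ~{maint_per_year / 1e9:.0f}B USD/year")
# prints "Maintenance & refresh: ~3B USD/year"
```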

8. Summary

Build cost (CAPEX)

| Category | Est. cost |
| --- | --- |
| GPUs (~1.43M units) | ~43B USD |
| Servers, cooling, power infrastructure, network, buildings | ~40–50B USD |
| **Total CAPEX** | **≈ 80–90B USD** |

Annual operating cost (OPEX)

| Category | Yearly cost |
| --- | --- |
| Electricity (~10.5 TWh/year) | ~1.6B EUR/year |
| Hardware servicing & refresh | ~3B USD/year |
| Staff & operations | ~50M USD/year |
| **Total OPEX** | **several billion USD/EUR annually** |

9. Equations you can reuse

N_GPU = P_IT(kW) / P_GPU
GPU CAPEX = N_GPU Γ— GPU price
Total CAPEX β‰ˆ GPU CAPEX / (GPU share of CAPEX)
E_year = P_IT Γ— PUE Γ— 8760
Energy cost/year = E_year Γ— energy price

You can use these to recalculate costs for smaller or larger systems, different GPUs, PUE values or power prices.
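The equations above translate directly into a small reusable Python helper (a sketch using the same symbols as section 9):

```python
def dc_costs(p_it_kw, p_gpu_kw, gpu_price_usd, gpu_capex_share,
             pue, energy_price_per_kwh):
    """Bottom-up datacenter cost model from the section 9 equations."""
    n_gpu = p_it_kw / p_gpu_kw                       # N_GPU
    gpu_capex = n_gpu * gpu_price_usd                # GPU CAPEX
    total_capex = gpu_capex / gpu_capex_share        # Total CAPEX
    energy_kwh_year = p_it_kw * pue * 8760           # E_year
    energy_cost_year = energy_kwh_year * energy_price_per_kwh
    return {
        "n_gpu": n_gpu,
        "gpu_capex": gpu_capex,
        "total_capex": total_capex,
        "energy_kwh_year": energy_kwh_year,
        "energy_cost_year": energy_cost_year,
    }

# The 1-GW baseline used throughout this article:
baseline = dc_costs(p_it_kw=1_000_000, p_gpu_kw=0.7, gpu_price_usd=30_000,
                    gpu_capex_share=0.5, pue=1.2, energy_price_per_kwh=0.15)
print(f"{baseline['total_capex'] / 1e9:.0f}B USD CAPEX, "
      f"{baseline['energy_cost_year'] / 1e9:.1f}B EUR/year electricity")
# prints "86B USD CAPEX, 1.6B EUR/year electricity"
```

Swapping in a different GPU model, PUE, or power price is then a one-line change to the arguments.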

10. Public companies planning gigawatt-scale AI data centers

Below is an extra section – a table showing which listed companies are planning / building gigawatt-scale AI data centers, with power, timing and location.

Note: most hyperscalers are listed on NASDAQ, not NYSE. I’ve included the big US-listed players (NYSE + NASDAQ) and highlighted the large NYSE-linked consortium via BlackRock.

| Company (ticker / exchange) | Planned / announced AI DC power (approx.) | When (online / build) | Where / project description |
| --- | --- | --- | --- |
| Microsoft (MSFT, NASDAQ) | Up to ~3.3 GW facility power for one campus | Fairwater campus reaching ~3.3 GW by late 2027 | Fairwater AI datacenter, Mount Pleasant, Wisconsin – a multi-building AI campus projected to consume ~3.3 GW of power by 2027; part of a wider "AI superfactory" network including a similar-architecture site in Atlanta. |
| Amazon / AWS (AMZN, NASDAQ) | ~1.3 GW new AI/HPC capacity (federal cloud) | Construction expected to start 2026 | AWS plans to invest up to $50B to add nearly 1.3 GW of AI & high-performance computing capacity across AWS GovCloud, Secret and Top Secret regions for U.S. government customers. |
| Meta Platforms (META, NASDAQ) | 1–1.4 GW per mega-campus; >1 GW AI power overall by 2026 | 2026–2028 | Meta is building multiple 1-GW-class AI campuses: the El Paso, Texas data center (Meta's 29th) is designed to scale to a 1-GW site by ~2028; its Prometheus campus is expanding from ~319 MW to ~1.36 GW by Oct 2026. Meta also plans to bring over 1 GW of AI computing power online by 2026, supported by over 1.3M GPUs. |
| Artificial Intelligence Infrastructure Partnership (AIP) / Aligned Data Centers – led by BlackRock (BLK, NYSE), Nvidia (NVDA, NASDAQ), Microsoft, xAI | ~5 GW operational + planned capacity | Deal announced Oct 2025, closing expected in H1 2026 | AIP (BlackRock, Nvidia, Microsoft, xAI and others) agreed to acquire Aligned Data Centers for ~$40B. Aligned operates about 80 data centers with ~5 GW of current and planned capacity across ~50 campuses in the U.S., Brazil, Mexico and Chile, explicitly aimed at AI infrastructure. |
| Nebius Group N.V. (NBIS, NASDAQ) | 2.5 GW power capacity target | By 2026 | Netherlands-based, Nasdaq-listed "neocloud" provider. Nebius, backed by major contracts with Microsoft and Meta, plans to secure 2.5 GW of power capacity by 2026 for AI-intensive cloud services, with data centers across Europe and a strong presence in the U.S. market. |
| Alphabet / Google (GOOGL, NASDAQ) | Indirect: 8 GW of clean-energy generation contracts for its DC fleet | Contracts signed in 2024, projects come online through the late 2020s | Google is one of the biggest data-center operators and corporate clean-energy buyers. In 2024 it signed contracts to purchase ~8 GW of clean-energy generation capacity to power its data centers globally. This doesn't map 1:1 to IT load, but it illustrates the multi-GW-scale infrastructure behind its AI & cloud data centers. |

11. Payback, required revenue and a 10% annual return

Now, given such massive capital expenditure, what does the business need to earn each year so that the investment "pays back"? Let's build a simple financial model assuming a 10% target annual return on invested capital and straight-line hardware amortization over 10 years.

11.1. Capital base and hardware share

From earlier sections: total CAPEX ≈ 86B USD, of which IT hardware (GPUs, servers, networking) accounts for ≈ 60B USD.

11.2. Required annual return on capital (10% ROI)

With a 10% target rate of return on the total capital invested:

Required return (ROI) = 10% Γ— 86B USD β‰ˆ 8.6B USD per year

This is what investors would want to earn on top of merely covering operating costs and hardware replacement.

11.3. Hardware amortization at 10% per year

If we assume the hardware base of ~60B USD is amortized over 10 years (straight-line):

Hardware amortization = 10% Γ— 60B USD β‰ˆ 6.0B USD per year

This 6B USD/year represents the β€œbudget” for replacing and upgrading IT hardware on a 10-year schedule. In practice, GPUs may be refreshed faster (3–5 years), but other equipment lasts longer, so 10% is a clean high-level assumption.
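Both figures follow from the capital base in 11.1; a quick Python sketch:

```python
# Required annual return and hardware amortization (sections 11.2-11.3)
TOTAL_CAPEX = 86e9       # total build cost
HARDWARE_CAPEX = 60e9    # IT-hardware share of CAPEX
ROI_TARGET = 0.10        # 10% target return on capital
AMORT_YEARS = 10         # straight-line hardware amortization

required_return = ROI_TARGET * TOTAL_CAPEX     # 8.6B USD/year
amortization = HARDWARE_CAPEX / AMORT_YEARS    # 6.0B USD/year
print(f"Return: {required_return / 1e9:.1f}B, amortization: {amortization / 1e9:.1f}B USD/year")
# prints "Return: 8.6B, amortization: 6.0B USD/year"
```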

11.4. Operating expenses (cash costs)

From our previous estimates:

Power OPEX ≈ 1.6B EUR × 1.1 (EUR→USD) ≈ 1.8B USD per year
Staff ≈ 0.05B USD per year
Other cash OPEX (networking, insurance, taxes, misc.) ≈ 1.0B USD per year (a rough assumption)

Putting those operating costs together:

Operating costs (cash) ≈ 1.8B + 0.05B + 1.0B ≈ 2.85B USD/year

(You can tweak this number depending on real-world tax regimes, labor markets, and energy contracts.)

11.5. Total annual β€œnut” – how much revenue is needed?

To hit the 10% return target and keep the hardware on a 10-year refresh cycle, the datacenter has to cover:

Total required "economic cost" per year
β‰ˆ 2.85B + 6.0B + 8.6B
β‰ˆ 17.45B USD/year

πŸ‘‰ So, a 1-GW AI datacenter on this model needs to generate on the order of 17–18 billion USD in revenue per year to:

11.6. Revenue per GPU and per month

We previously found that a 1-GW AI facility corresponds to about 1.43M GPUs.

Required revenue per GPU per year
β‰ˆ 17.5B USD / 1,428,571 GPUs
β‰ˆ 12,250 USD per GPU per year

On a monthly basis:

β‰ˆ 12,250 USD / 12 β‰ˆ ~1,020 USD per GPU per month

So very roughly, each GPU in the cluster must generate on the order of ~1,000 USD/month in revenue to make the economics work under these assumptions (10% return on 86B USD CAPEX and 10% annual hardware amortization).
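The full "nut" and per-GPU arithmetic from 11.5–11.6, as a Python sketch:

```python
# Total annual "nut" and required revenue per GPU (sections 11.5-11.6)
CASH_OPEX = 2.85e9          # cash operating costs
AMORTIZATION = 6.0e9        # 10-year hardware amortization
REQUIRED_RETURN = 8.6e9     # 10% return on 86B USD CAPEX
N_GPU = 1_000_000 / 0.7     # ~1.43M GPUs

total_required = CASH_OPEX + AMORTIZATION + REQUIRED_RETURN  # ~17.45B USD/year
per_gpu_year = total_required / N_GPU                        # ~12,200 USD
per_gpu_month = per_gpu_year / 12                            # ~1,020 USD
print(f"Per GPU: ~{per_gpu_year:,.0f} USD/year, ~{per_gpu_month:,.0f} USD/month")
```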

Of course, real projects will vary: some will aim for higher returns, some will accept lower; energy prices, hardware prices and utilization rates will also move these numbers significantly. But this gives a clear order-of-magnitude view of what a 1-GW AI super-datacenter must earn to "pay for itself". Taking the math one step further: each 1-GW center needs at least ~17B USD in annual revenue to keep running. Assuming a 10 USD monthly bill per user (120 USD/year), that works out to roughly 142 million paying users per 1 GW of AI datacenter capacity. The companies above plan roughly 22 GW of AI capacity in total, which would require on the order of 3.1 billion paying users – nearly half of the Earth's population.
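That user-count arithmetic, spelled out in Python (the 10 USD/month bill and the 22 GW total are the article's assumptions):

```python
# Back-of-envelope user count needed to fund the planned AI buildout
REVENUE_PER_GW = 17e9      # required annual revenue per 1-GW center
USER_BILL_MONTHLY = 10     # assumed USD/month subscription per user
PLANNED_GW = 22            # approximate total announced capacity above

users_per_gw = REVENUE_PER_GW / (USER_BILL_MONTHLY * 12)  # ~142M users per GW
total_users = users_per_gw * PLANNED_GW                   # ~3.1B users
print(f"Users per GW: {users_per_gw / 1e6:,.0f}M; total needed: {total_users / 1e9:.1f}B")
# prints "Users per GW: 142M; total needed: 3.1B"
```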

SO, YOU READER JUDGE IF IT IS POSSIBLE.