AI-generated analysis combining predictive modeling, mega-cap tech sentiment and deep AI datacenter economics.
Our simple calculation suggests that some of the companies mentioned below that are participating in the AI race could run into serious cash problems within one to two years. A crash of this AI hype may be coming.
Capital markets are increasingly pricing not only earnings, but also who can afford to build and operate multi-gigawatt AI datacenter infrastructure. Below is a full, bottom-up breakdown of what it means, in hard numbers, to run a 1-GW AI datacenter: the kind of asset that sits behind the tickers above.
Bottom-up cost estimation: from GPU power → full facility → operational cost → required revenue
Here is the full breakdown, calculated from the bottom up based on GPU power, cooling, infrastructure and operational costs: no local currencies, only USD and EUR.
We model a large-scale AI-training datacenter using modern accelerators like NVIDIA H100-class GPUs:
| Item | Assumption |
|---|---|
| Power per GPU | ~700 W (0.7 kW) |
| Price per GPU | ~30,000 USD |
| Target compute power | 1 GW of IT load (only GPUs) |
| Datacenter efficiency | PUE = 1.2 (high-end liquid cooling) |
| Electricity price (wholesale long-term) | ~0.15 EUR/kWh |
| GPUs are ~50% of total CAPEX | Rough industry average for AI datacenters |
We calculate from 1 GW IT load, meaning GPU power only. Facility power will be higher.
GPU count = 1,000,000 kW / 0.7 kW/GPU ≈ 1,428,571 GPUs
👉 ~1.43 million GPUs.
This scale aligns with public mega-cluster discussions (e.g. "million GPU clusters").
GPU CAPEX ≈ 1,428,571 × 30,000 USD ≈ 42.9 billion USD
👉 Just the GPUs: ~43 billion USD
A datacenter needs much more than GPUs:
Industry ratio: GPUs ≈ 50% of total CAPEX for AI-focused hyperscale builds.
Total CAPEX ≈ 2 × 43B USD ≈ 86B USD
👉 Estimated total build cost: ~80–90 billion USD
(Lower PUE or cheaper GPUs can drop this, advanced high-redundancy build could raise it.)
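The CAPEX arithmetic above can be reproduced as a minimal Python sketch; every input is one of the article's rough assumptions, not measured data:

```python
# Assumptions from the table above (rough industry figures)
power_per_gpu_kw = 0.7      # ~700 W per H100-class GPU
gpu_price_usd = 30_000
it_load_kw = 1_000_000      # 1 GW of IT load (GPUs only)
gpu_capex_share = 0.5       # GPUs ~50% of total CAPEX

n_gpus = it_load_kw / power_per_gpu_kw      # ~1.43M GPUs
gpu_capex = n_gpus * gpu_price_usd          # ~43B USD
total_capex = gpu_capex / gpu_capex_share   # ~86B USD

print(f"{n_gpus:,.0f} GPUs, {gpu_capex/1e9:.1f}B USD in GPUs, "
      f"{total_capex/1e9:.1f}B USD total CAPEX")
```

Changing any single assumption (GPU wattage, price, or CAPEX share) scales the result linearly.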
IT load = 1 GW
With PUE = 1.2:
Total power consumption = 1 GW × 1.2 = 1.2 GW
Approximate split: ~1 GW of IT load plus ~0.2 GW for cooling and facility overhead.
Energy/year = 1.2 GW × 8760 h ≈ 10.5 TWh/year
Annual electricity cost:
10.5×10⁹ kWh × 0.15 EUR/kWh ≈ 1.6 billion EUR/year
👉 Electricity alone ≈ 1.5–2.0 billion EUR per year
If energy price is 0.10–0.20 EUR/kWh, cost scales proportionally.
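A quick sketch of the energy math, using the PUE and electricity-price assumptions above:

```python
it_load_gw = 1.0
pue = 1.2                       # high-end liquid cooling assumption
hours_per_year = 8760
price_eur_per_kwh = 0.15        # wholesale long-term assumption

facility_gw = it_load_gw * pue                        # 1.2 GW total draw
energy_twh = facility_gw * hours_per_year / 1_000     # ~10.5 TWh/year
cost_eur = energy_twh * 1e9 * price_eur_per_kwh       # ~1.6B EUR/year
print(f"{energy_twh:.1f} TWh/year -> {cost_eur/1e9:.2f}B EUR/year")
```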
Assume ~500 employees (engineers, ops, tech, security, support)
500 × 100,000 USD/year ≈ 50 million USD/year
👉 ~50 million USD/year for staffing
Tiny compared to power & hardware churn.
Hyperscale rule of thumb:
4–6% of hardware value per year for replacements/maintenance.
If IT hardware ≈ 60B USD of total CAPEX:
0.05 × 60B USD ≈ 3B USD/year
👉 Maintenance & refresh: ~3 billion USD/year
Covers dead GPUs, swapping servers, new generations every 3–5 years.
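The 4–6% rule of thumb is a band, not a point estimate; a small sketch of that range, assuming the ~60B USD IT hardware base above:

```python
it_hardware_usd = 60e9          # assumed IT hardware share of total CAPEX
rates = (0.04, 0.05, 0.06)      # hyperscale 4-6%/year rule of thumb
annual_costs = {r: r * it_hardware_usd for r in rates}
for r, cost in annual_costs.items():
    print(f"{r:.0%}: {cost/1e9:.1f}B USD/year")
```

The midpoint, 5%, gives the ~3B USD/year figure used in the rest of the model.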
| Category | Est. cost |
|---|---|
| GPUs (~1.43M units) | ~43B USD |
| Servers, cooling, power infra, network, buildings | ~40–50B USD |
| Total CAPEX | ≈ 80–90B USD |
| Category | Yearly cost |
|---|---|
| Electricity (~10.5 TWh/year) | ~1.6B EUR/year |
| Hardware servicing & refresh | ~3B USD/year |
| Staff & operations | ~50M USD/year |
| Total OPEX | roughly 5B USD per year (electricity + refresh + staff) |
N_GPU = P_IT(kW) / P_GPU
GPU CAPEX = N_GPU × GPU price
Total CAPEX ≈ GPU CAPEX / (GPU share of CAPEX)
E_year = P_IT × PUE × 8760
Energy cost/year = E_year × energy price
You can use these to recalculate costs for smaller or larger systems, different GPUs, PUE values or power prices.
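Those formulas translate directly into a small parametric helper. The defaults below are this article's assumptions, and the example call (a hypothetical 100 MW facility at 0.10 EUR/kWh) is only an illustration:

```python
def dc_model(it_load_kw, gpu_power_kw=0.7, gpu_price_usd=30_000,
             gpu_capex_share=0.5, pue=1.2, energy_price_eur=0.15):
    """Back-of-envelope AI datacenter model from the formulas above.

    Defaults are this article's assumptions; all outputs are rough estimates.
    """
    n_gpu = it_load_kw / gpu_power_kw                  # N_GPU = P_IT / P_GPU
    gpu_capex = n_gpu * gpu_price_usd                  # GPU CAPEX
    total_capex = gpu_capex / gpu_capex_share          # Total CAPEX
    e_year_kwh = it_load_kw * pue * 8760               # E_year
    energy_cost_eur = e_year_kwh * energy_price_eur    # Energy cost/year
    return {"gpus": n_gpu, "gpu_capex_usd": gpu_capex,
            "total_capex_usd": total_capex,
            "energy_kwh": e_year_kwh, "energy_cost_eur": energy_cost_eur}

# Hypothetical example: a 100 MW facility with cheaper power
small = dc_model(100_000, energy_price_eur=0.10)
print(f"{small['gpus']:,.0f} GPUs, "
      f"{small['total_capex_usd']/1e9:.1f}B USD CAPEX, "
      f"{small['energy_cost_eur']/1e9:.2f}B EUR/year energy")
```

Calling `dc_model(1_000_000)` with the defaults reproduces the 1-GW numbers derived earlier.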
Below is an extra section: a table showing which listed companies are planning / building gigawatt-scale AI data centers, with power, timing and location.
Note: most hyperscalers are listed on NASDAQ, not NYSE. I've included the big US-listed players (NYSE + NASDAQ) and highlighted the large NYSE-linked consortium via BlackRock.
| Company (ticker / exchange) | Planned / announced AI DC power (approx.) | When (online / build) | Where / project description |
|---|---|---|---|
| Microsoft (MSFT, NASDAQ) | Up to ~3.3 GW facility power for one campus | Fairwater campus reaching ~3.3 GW by late 2027 | Fairwater AI datacenter, Mount Pleasant, Wisconsin: a multi-building AI campus projected to consume ~3.3 GW of power by 2027; part of a wider "AI superfactory" network including a similar-architecture site in Atlanta. |
| Amazon / AWS (AMZN, NASDAQ) | ~1.3 GW new AI/HPC capacity (federal cloud) | Construction expected to start 2026 | AWS plans to invest up to $50B to add nearly 1.3 GW of AI & high-performance computing capacity across AWS GovCloud, Secret and Top Secret regions for U.S. government customers. |
| Meta Platforms (META, NASDAQ) | 1–1.4 GW per mega-campus; >1 GW AI power overall by 2026 | 2026–2028 | Meta is building multiple 1-GW-class AI campuses: the El Paso, Texas data center (Meta's 29th) is designed to scale to a 1-GW site by ~2028; its Prometheus campus is expanding from ~319 MW to ~1.36 GW by Oct 2026. Meta also plans to bring over 1 GW of AI computing power online by 2026, supported by over 1.3M GPUs. |
| Artificial Intelligence Infrastructure Partnership (AIP) / Aligned Data Centers, led by BlackRock (BLK, NYSE), Nvidia (NVDA, NASDAQ), Microsoft, xAI | ~5 GW operational + planned capacity | Deal announced Oct 2025, closing expected in H1 2026 | AIP (BlackRock, Nvidia, Microsoft, xAI and others) agreed to acquire Aligned Data Centers for ~$40B. Aligned operates about 80 data centers with ~5 GW of current and planned capacity across ~50 campuses in the U.S., Brazil, Mexico and Chile, explicitly aimed at AI infrastructure. |
| Nebius Group N.V. (NBIS, NASDAQ) | 2.5 GW power capacity target | By 2026 | Netherlands-based, Nasdaq-listed "neocloud" provider. Nebius, backed by major contracts with Microsoft and Meta, plans to secure 2.5 GW of power capacity by 2026 for AI-intensive cloud services, with data centers across Europe and a strong presence in the U.S. market. |
| Alphabet / Google (GOOGL, NASDAQ) | Indirect: 8 GW of clean-energy generation contracts for its DC fleet | Contracts signed in 2024, projects come online through late-2020s | Google is one of the biggest data-center operators and corporate clean-energy buyers. In 2024 it signed contracts to purchase ~8 GW of clean energy generation capacity to power its data centers globally. This doesn't map 1:1 to IT load, but it illustrates multi-GW-scale infrastructure behind its AI & cloud data centers. |
Now, given such massive capital expenditure, what does the business need to earn each year so that the investment "pays back"? Let's build a simple financial model assuming:
From earlier sections: total CAPEX ≈ 86B USD, of which the IT hardware base is ~60B USD.
With a 10% target rate of return on the total capital invested:
Required return (ROI) = 10% × 86B USD ≈ 8.6B USD per year
This is what investors would want to earn on top of merely covering operating costs and hardware replacement.
If we assume the hardware base of ~60B USD is amortized over 10 years (straight-line):
Hardware amortization = 10% × 60B USD ≈ 6.0B USD per year
This 6B USD/year represents the "budget" for replacing and upgrading IT hardware on a 10-year schedule. In practice, GPUs may be refreshed faster (3–5 years), but other equipment lasts longer, so 10% is a clean high-level assumption.
From our previous estimates:
Power OPEX ≈ 1.6B EUR × 1.1 (EUR→USD conversion) ≈ 1.8B USD per year
Putting those operating costs together:
Operating costs (cash) ≈ 1.8B (power) + 0.05B (staff) + 1.0B (other: network, insurance, taxes, misc.) ≈ 2.85B USD/year
(You can tweak this number depending on real-world tax regimes, labor markets, and energy contracts.)
To hit the 10% return target and keep the hardware on a 10-year refresh cycle, the datacenter has to cover:
Total required "economic cost" per year
≈ 2.85B + 6.0B + 8.6B
≈ 17.45B USD/year
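The required-revenue sum can be checked in a few lines; each input is a figure derived above in this model:

```python
# Figures derived above (rough model inputs)
cash_opex = 2.85e9         # power + staff + other operating costs, USD/year
amortization = 6.0e9       # 10% straight-line on the ~60B USD hardware base
required_return = 8.6e9    # 10% target return on ~86B USD total CAPEX

total_required = cash_opex + amortization + required_return
print(f"Required revenue: {total_required/1e9:.2f}B USD/year")  # 17.45B
```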
👉 So, a 1-GW AI datacenter on this model needs to generate on the order of 17–18 billion USD in revenue per year to cover its cash operating costs, fund the hardware refresh, and deliver the 10% return on capital.
We previously found that a 1-GW AI facility corresponds to about 1.43M GPUs.
Required revenue per GPU per year
≈ 17.5B USD / 1,428,571 GPUs
≈ 12,250 USD per GPU per year
On a monthly basis:
≈ 12,250 USD / 12 ≈ 1,020 USD per GPU per month
So very roughly, each GPU in the cluster must generate on the order of ~1,000 USD/month in revenue to make the economics work under these assumptions (10% return on 86B USD CAPEX and 10% annual hardware amortization).
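The per-GPU arithmetic, as a sketch using the figures above:

```python
required_revenue_usd = 17.5e9
n_gpus = 1_428_571                             # from the 1-GW sizing above

per_gpu_year = required_revenue_usd / n_gpus   # ~12,250 USD/GPU/year
per_gpu_month = per_gpu_year / 12              # ~1,020 USD/GPU/month
print(f"{per_gpu_year:,.0f} USD/year = {per_gpu_month:,.0f} USD/month per GPU")
```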
Of course, real projects will vary: some will aim for higher returns, some will accept lower; energy prices, hardware prices and utilization rates will also move these numbers significantly. But this gives a clear order-of-magnitude view of what a 1-GW AI super-datacenter must earn to "pay for itself". Taking the simple math further: each 1-GW center needs at least ~17B USD in annual revenue to keep going, so at a 10 USD monthly bill per user, one 1-GW AI center needs about 142 million paying users. Altogether, the companies above plan to build roughly 22 GW of AI centers, which would require about 3.1 billion users, nearly 40% of the Earth's population.
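The user math in that paragraph, sketched in Python (the ~8.1B world population figure is an added assumption of mine):

```python
# Rough model inputs; world population is an assumption, not from the model
revenue_per_gw_usd = 17e9      # required yearly revenue per 1-GW center
monthly_bill_usd = 10
planned_gw = 22                # total gigawatts announced in the table above
world_population = 8.1e9

users_per_gw = revenue_per_gw_usd / (monthly_bill_usd * 12)  # ~142M users
total_users = users_per_gw * planned_gw                      # ~3.1B users
share = total_users / world_population                       # roughly 40%
print(f"{users_per_gw/1e6:.0f}M users per GW, "
      f"{total_users/1e9:.2f}B users total ({share:.0%} of world population)")
```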
SO, READER, YOU JUDGE WHETHER THAT IS POSSIBLE.