AI prompt:
Jargon. This was a note by an analyst yesterday: "The bottleneck in AI has shifted from models to power, cooling & GPU capacity. Control of compute has become the real moat." I understand the concept of "moat" when it comes to investing, and I've recently read more about the concept of "compute" in the current AI revolution. What does "compute" mean when used in that context?
AI reply:
1️⃣ What “compute” means in general
In the simplest sense, compute is short for computing power — the ability of hardware (chips, servers, and data centers) to perform calculations.
In traditional computing, this meant how fast your CPU could process instructions.
But in AI, “compute” refers to the total amount of computational capacity available to train and run large AI models, usually supplied by high-end GPUs (graphics processing units) such as NVIDIA’s H100, or by specialized accelerators like Google’s TPUs.
So when someone says “control of compute,” they mean control over the physical and infrastructural resources that make AI possible — the chips, the servers, the data centers, and the power to run them.
2️⃣ Why “compute” has become the new moat
In the early years of AI, the moat (the protective advantage) was about who had the best models or most data.
But today, state-of-the-art models (like GPT-5, Gemini, or Claude) require staggering amounts of computational power to train: thousands of GPUs running for weeks or months.
That means:
- Access to enormous energy supplies,
- Cooling systems to manage the heat,
- And capital to build or lease specialized data centers.
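To put rough numbers on that scale, here is a back-of-envelope sketch using the common rule of thumb that training compute ≈ 6 × parameters × training tokens. The parameter count, token count, per-GPU throughput, and cluster size are illustrative assumptions, not published figures for any particular model:

```python
# Rough estimate of the compute needed to train a large model.
# Rule of thumb: total training FLOPs ≈ 6 × parameters × training tokens.
# All numbers below are illustrative assumptions, not real model specs.

params = 1e12            # assumed: 1 trillion parameters
tokens = 10e12           # assumed: 10 trillion training tokens
total_flops = 6 * params * tokens        # ≈ 6e25 floating-point operations

gpu_throughput = 4e14    # assumed: ~400 TFLOP/s of sustained useful work per GPU
num_gpus = 10_000        # assumed: a 10,000-GPU cluster

seconds = total_flops / (gpu_throughput * num_gpus)
days = seconds / 86_400
print(f"Roughly {days:.0f} days of continuous training on {num_gpus:,} GPUs")
# → on these assumptions, about 170 days, i.e. months of round-the-clock use
```

Even with generous assumptions about hardware utilization, the answer comes out in months of continuous use of a ten-thousand-GPU cluster, which is why energy, cooling, and capital dominate the list above.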
Only a handful of companies — think NVIDIA, Microsoft, Amazon, Google, and Meta — can afford that scale of compute.
So “control of compute” has become the strategic bottleneck and advantage — the moat — because if you don’t control that capacity, you can’t build or run frontier AI models, no matter how good your ideas or data are.