Tuesday, October 28, 2025

LDCs: Update -- Hyperscalers -- October 28, 2025

AI prompt:

There are so many large data centers; impossible to keep up. Who are the biggest names today that would be in the same "universe" as Stargate; Horizon in the Permian; Utah Data Center; Meta in Prineville, OR; GAIIP; etc.

Reply:

Good question — and you’re absolutely right that the data-center universe is enormous and scattered. To help orient you, here are several of the biggest names / operators today that would be in the same league as the ultra-large facilities you mentioned (e.g., the "Stargate" project, Meta Platforms' Prineville campus, etc.). These firms either build, own, or operate "hyperscale" data centers (large campuses, high power, global footprint).

Amazon Web Services (AWS)

  • AWS is widely acknowledged as the largest cloud/hyperscale operator. 

  • According to forecasts, AWS alone is expected to have 185 hyperscale data-center sites by 2030. 

  • These sites include massive campuses with multi-hundred-megawatt power draws and large land footprints.

  • So AWS clearly qualifies as one of the “big league” names you’re asking about.

Microsoft Corporation (Azure Cloud)

  • Microsoft is another of the top hyperscalers. Research shows Microsoft will have ~168 hyperscale sites by 2030. 

  • The scale and global reach of its data-centers put it clearly in that “universe” of very large players.

Google LLC (Google Cloud / GCP)

  • Google is similarly in the top tier. Research shows that AWS, Microsoft, and Google together account for ~60% of all hyperscale data-center capacity. 

  • Their infrastructure, especially for AI, is at the very high end in both scale and sophistication.

Meta Platforms

  • Meta is also listed in major research as among the largest hyperscale operators. 

  • Its Prineville facility (which you mentioned) is a good example of a campus-scale deployment by one of these large operators.


Digital Realty Trust

  • Digital Realty is a major global data-center owner/operator (a REIT) with 300+ facilities in 25+ countries. While sometimes categorized as "colocation/wholesale" rather than exclusively a cloud owner, its scale is very large and global.

Equinix, Inc. 

  • Equinix operates 260 data centers in 33 countries (as of 2025).
  • While its model is more "carrier/colocation" oriented, Equinix is still a major player in the global data-center infrastructure landscape. 

Why These Matter 

  • These are the companies whose data-center infrastructure scale is in the same ballpark as the examples you cited: large land/energy footprints, multiple gigawatts of power, global campus builds.

  • If you think of the “universe” of major data-centers (hyperscale, cloud, AI-centric), these names sit at the core. Many smaller or regional players follow them.

  • Research shows that the total number of hyperscale data centers globally is large (~1,100+ by end of 2024) but the top handful (AWS, Microsoft, Google) dominate capacity. 

 *************************************

A table of the top 10 or 12 large data center operators:


LDC: GAIIP -- Allied Data Centers And CyrusOne -- October 28, 2025

 I track large data centers here. Somehow I missed this one: GAIIP.

Some speculators / influencers suggest Warren Buffett has invested in GAIIP. No evidence.



Has GAIIP invested in any large data centers yet?


Allied Data Centers:

Cyrus One: wiki.

Nvidia Announces Seven "New" Supercomputer Systems -- October 28, 2025

AI prompt

Jensen Huang / Nvidia today announced seven "new" supercomputers at a tech conference. Does this include previously announced "new" superconductors or are these truly seven new supercomputers?

Reply:

Good question — yes, NVIDIA (and Jensen Huang) did announce seven “new systems” (supercomputers) today. 


Here are the clarifications I can provide about what “new” means in this context:


✅ What we know

  • The press release from NVIDIA notes it is “accelerating seven new systems … across Argonne National Laboratory and Los Alamos National Laboratory (LANL).” 

  • At least two of the systems are singled out by name:

    • Solstice: described as “the U.S. Department of Energy’s largest AI supercomputer … will feature a record-breaking 100,000 NVIDIA Blackwell GPUs.” 

    • Equinox: another system with “10,000 NVIDIA Blackwell GPUs” expected in 2026. 

  • At LANL, two systems (Mission and Vision) will be built using the “Vera Rubin platform” and NVIDIA’s Quantum-X800 InfiniBand networking. 

  • The announcement is tied directly into the US national labs and the U.S. Department of Energy’s (DOE) AI-infrastructure mission.

 


⚠️ What we don’t know / what is ambiguous

  • It’s not entirely clear whether all seven systems are entirely new in the sense of “never-before-announced hardware builds,” or whether some are extensions/upgrades of previously announced projects.

  • Some systems like Solstice and Equinox may have been mentioned or planned previously, but today’s announcement seems to formalize deployment plans, partnerships, or scales.

  • The wording “accelerating seven new systems” might imply that these are new deployments (rather than incremental upgrades) but with embedded hardware (e.g., the Blackwell GPUs) that were in previous roadmaps.

  • There is a difference between “new supercomputer systems” vs “new GPUs/superchips” — this announcement is about the systems (supercomputers) built using those newer GPU generations.


🎯 My take


Based on the info, I believe the following is the most accurate summary:


NVIDIA is announcing seven new supercomputer systems (i.e., full installations of compute + networking + infrastructure) tied to U.S. national labs and major DOE projects. These are new launches of those systems (or at least new deployment announcements) rather than simply new GPU chips. The announcement uses previously-announced GPU architectures (like Blackwell, Vera Rubin) as building blocks, but the systems themselves are the fresh items.


So yes — you heard correctly that “seven new supercomputers” were announced — not just new chips or new “superconductors” (which perhaps was a mis-hearing) — the emphasis is on infrastructure (systems) rather than only semiconductor architecture.

 

The seven systems:

Monday, October 27, 2025

Electrification Of India -- Slow But Sure -- October 27, 2025

AI prompt: 

Electrification. India. I'm not sure how best to frame this question. I'm not interested in the investment opportunities or the investing story, but the question came up when I got a note from a reader who suggested investing in "electrification of India." The individual suggested that the current electrification of India is similar to what the US electrification was in the 1930s. That may be entirely accurate: that current electrification in India mirrors that of the US in the 1930s and there will be huge opportunities if one wants to invest in electricity build out in India. I would counter that argument with the pace of electricity demand in India vs the United States. So, pace of the build out is very, very important. Second, the return on investment is hugely dependent on how much capital (CAPEX) is available to accomplish that build out. I would argue that the capital (CAPEX) available in the US was exponentially greater than what India will have over the next thirty years. So, I don't know if I've framed that question / observation accurately but I'm curious what you can make of it.

Reply:

You’ve framed the question extremely well — and in fact, you’re hitting the key macroeconomic levers that determine whether “India’s electrification” is truly comparable to U.S. electrification in the 1930s.

Let’s unpack this carefully and translate your instinct into an analytical framework.


⚙️ 1️⃣ The Analogy: U.S. Electrification in the 1930s vs. India Today

At first glance, the analogy feels right — rural areas being connected, rising industrialization, and electricity as the backbone of modern growth.

But under the surface, the contexts are very different in pace, capital, and technological leapfrogging.

 


⚡ 2️⃣ The Pace of Demand Growth


You’re absolutely right to emphasize pace — it’s everything.

  • India’s electricity demand is expected to triple between 2020 and 2050, driven by air conditioning, industrialization, and EVs.

  • That’s fast, but it’s spread over 30 years, and the base is already large.

  • The U.S., by contrast, had a much smaller base and a shorter doubling time (roughly a decade during the 1920s–1930s) — meaning the relative acceleration in the U.S. was sharper.


→ India’s story is sustained growth, not explosive acceleration.
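The pace comparison can be made concrete with back-of-the-envelope arithmetic. A minimal Python sketch, using only the figures cited above ("triple between 2020 and 2050" for India; "doubling in roughly a decade" for the U.S.):

```python
# Implied compound annual growth rate from "grow by `multiple`-fold over `years`".
def cagr(multiple: float, years: int) -> float:
    """Annual growth rate r such that (1 + r)^years == multiple."""
    return multiple ** (1 / years) - 1

india = cagr(3, 30)      # tripling over 30 years
us_1930s = cagr(2, 10)   # doubling in roughly a decade

print(f"India 2020-2050: {india:.1%} per year")      # ~3.7% per year
print(f"US 1920s-1930s:  {us_1930s:.1%} per year")   # ~7.2% per year
```

Roughly 3.7% per year for India versus roughly 7.2% per year for the U.S. episode, which is the arithmetic behind "sustained growth, not explosive acceleration."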


🏗️ 3️⃣ The Capital (CAPEX) Constraint


This is your other critical insight.

  • Electrification requires huge up-front CAPEX in generation, transmission, and distribution.

  • The U.S. in the 1930s could mobilize public capital (New Deal programs, rural electrification authorities) and later private capital through bond markets.

  • India’s CAPEX capacity is limited by:

    • Fiscal constraints (deficit ~6% of GDP).

    • Domestic financial depth (banks already heavily exposed to infrastructure and public debt).

    • Cost of capital: higher than OECD peers (8–10% real).


So while foreign capital (e.g., from sovereign wealth funds or development banks) can help, India cannot match the capital intensity or speed of the U.S. build-out of the 1930s.
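The cost-of-capital point lends itself to a quick illustration. In the sketch below, the project numbers ($100 up front, $8/year for 30 years) and the 3% "cheap public capital" rate are hypothetical, chosen only for illustration; the 8–10% real cost of capital comes from the discussion above:

```python
# Net present value of a list of annual cash flows at a real discount rate.
# Cash flow at index t is discounted by (1 + rate)^t.
def npv(cashflows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical grid project: $100 up front, $8/year for 30 years.
project = [-100] + [8] * 30

print(f"NPV at 3% (cheap public capital):      {npv(project, 0.03):+.1f}")
print(f"NPV at 9% (India's real cost, midpoint): {npv(project, 0.09):+.1f}")
```

The same long-payback project is comfortably positive at a 3% discount rate and negative at 9%, which is why a higher cost of capital doesn't just slow a build-out at the margin; it can make whole classes of infrastructure projects uneconomic.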


☀️ 4️⃣ A Major Difference — The Leapfrog Factor


One thing that does make India unique:

Unlike 1930s America, India doesn’t have to replicate the full centralized grid model.

  • Distributed renewables (solar rooftops, microgrids, battery storage) mean parts of India can “leapfrog” to modern, modular electrification.

  • In that sense, it’s more like the mobile phone revolution than 1930s U.S. electrification: many regions can skip legacy infrastructure.


So the analogy may work conceptually (electrification = modernization), but structurally it’s much closer to a hybrid of 1930s electrification + 1990s mobile leapfrogging.


💡 5️⃣ Summary: A Balanced Framing


If you wanted to frame it succinctly:

“India’s electrification is less a replay of America’s 1930s power boom and more a blend of the U.S. rural electrification program’s ambition with the mobile-phone revolution’s distributed leapfrog model. The opportunity is real, but the pace and capital base are slower, and the path is more decentralized. The comparison flatters the narrative but not the numbers.”

Here’s a concise version you could drop straight into your reply:

I like the analogy between India’s current electrification and America’s in the 1930s, but it’s not a perfect fit. The U.S. build-out happened at a faster pace and with far deeper capital markets, whereas India’s expansion is constrained by financing capacity and grid complexity. What’s happening in India is less a replay of 1930s America and more a hybrid—large-scale power build-out mixed with distributed, renewable “leapfrog” technologies. It’s a modernization story for sure, but the speed and scale will be shaped as much by capital availability as by engineering progress.

Saudi Arabia Pivots -- Neom -- Qualcomm -- October 27, 2025

Saudi Arabia is significantly pivoting its national investment strategy by scaling back its massive NEOM megacity project and redirecting billions toward investments in artificial intelligence (AI), gaming, and high-tech manufacturing. This shift reflects a move away from costly, slow-returning construction projects toward faster-growing industries with quicker returns, in line with the kingdom's larger Vision 2030 goal of economic diversification. 
Factors behind the NEOM pivot
1. Financial and cost challenges
  • Declining cash reserves: The kingdom's sovereign wealth fund, the Public Investment Fund (PIF), saw its cash reserves drop significantly by the end of 2023.
  • Strained budget: Low oil prices and production have strained government finances, causing concern at the highest levels over the mounting trillions of dollars in costs for Vision 2030 initiatives.
  • Reduced investment: In 2024, the PIF reportedly withheld approval for NEOM's budget, forcing the project to seek new funding sources, including bond sales and a greater emphasis on attracting foreign investment. 

2. Scaled-back ambitions
  • "The Line" downsizing: The most visible change is the dramatic reduction in the scope of "The Line," the project's signature 170-kilometer-long linear city. In April 2024, it was reported that the government had lowered its 2030 target for The Line to just 2.4 kilometers in length and 300,000 residents, down from the original goal of 1.5 million.
  • Other projects face setbacks: Work has reportedly slowed on The Line's construction. Other NEOM sub-projects, such as the luxury island Sindalah and the mountain ski resort Trojena, have also experienced setbacks.
  • Cost-cutting measures: Some NEOM employees were reportedly laid off or relocated to Riyadh in mid-2025, with a cut to some benefits. 

3. Shift to high-tech sectors
  • Redeploying capital: Billions in capital are being shifted from NEOM into more focused, higher-return sectors like AI, gaming, and high-tech manufacturing.
  • Partnering with tech firms: Recent reports mention new collaborations with major tech companies like Nvidia and Amazon to boost Saudi Arabia's AI capabilities.
  • Accelerating economic returns: This redirection of funds aligns with the kingdom's desire to find more rapid and tangible returns on its investments, instead of pouring immense capital into construction projects that would take decades to become fully realized. 

How the pivot is communicated
Saudi officials have adopted a more transparent approach to communicating changes to the Vision 2030 program. 
  • Rethinking project timing: Instead of canceling projects outright, officials describe decisions as part of an ongoing adjustment for "optimal economic impact." Some initiatives are being slowed down to avoid overheating the economy.
  • Prioritizing certain investments: The message is that projects with the most promising investor interest and economic returns are being prioritized. This strategic positioning is designed to reassure investors that the kingdom is being economically prudent rather than overspending.
  • Focusing on high-tech appeal: At major summits like the Future Investment Initiative (FII), the Saudi sales pitch to global finance leaders now emphasizes investments in AI and tech over grand construction schemes. 

From the main blog:

TPUs: holy mackerel! Long segment on TPUs at the top of the hour, the Eisen Hour. Sara is off today. Reminder: TheMillionDollarWay was the first non-tech blog (no subscription required, no ads) to mention TPUs. Link here

QCOM: why this story is important. Link here


 


 

 

Qualcomm, NPUs, And TPUs -- October 27, 2025

AI prompt:

Did I mis-hear or mis-read something being reported on CNBC about Qualcomm and TPUs and partnering with someone with TPUs. I was not aware that Qualcomm was making TPUs.

 

Reply:

Yes — you heard something real, but not quite in the way you interpreted it. To clarify:

  • Qualcomm hasn’t publicly launched a “TPU” (Tensor Processing Unit) in the sense that Google has (i.e., a large-scale cloud/training ASIC called a “TPU”).

  • What Qualcomm is doing: they are very much advancing their neural processing / AI acceleration hardware (NPUs) inside their Snapdragon/PC platforms and targeting both on-device and hybrid cloud/edge AI. For example, Qualcomm describes their dedicated NPU core in its chips: “the NPU is built from the ground-up for accelerating AI inference at low power”. 

  • They also recently made moves into the data-center / connectivity domain, e.g., acquiring Alphawave (a UK semiconductor firm for data-center / high-speed connectivity) to bolster their “AI technology” efforts. 


So what likely happened is you heard CNBC talking about Qualcomm’s AI chip ambitions, maybe referencing their “Neural Processing Unit (NPU)”, “AI PC”, “Snapdragon Summit” roadmap, or their push toward “cloud/edge” AI hardware — and the term “TPU” got conflated or mis-used (common in media).

Also from AI:

Qualcomm's AI hardware is part of a broader strategy to offer intelligent computing across many devices, from smartphones to data centers. 

Edge and mobile devices
  • Hexagon NPU: This is Qualcomm's dedicated AI processor, which is integrated into its Snapdragon systems-on-chip (SoCs) for mobile phones, PCs, and other edge devices.
  • Qualcomm AI Engine: This refers to the full suite of hardware and software that accelerates AI on Snapdragon platforms. It includes the Hexagon NPU, the Adreno GPU, and the Kryo/Oryon CPU, which work together to maximize performance and power efficiency for AI tasks.
  • On-device AI: Qualcomm's focus is on processing AI directly on the device rather than in the cloud. This approach enhances privacy, security, and response time. 

Data center and cloud
  • Qualcomm Cloud AI 100: This is an AI inference accelerator card designed for data centers and edge servers.
  • Newer offerings: In late 2025, Qualcomm announced new AI inference solutions for data centers, including the AI200 and AI250 accelerator cards, to compete with products from Nvidia and AMD. 

Developer tools
  • Qualcomm AI Stack: This unified software toolkit allows developers to create and deploy AI models across Qualcomm's hardware portfolio.
  • Qualcomm AI Hub: A cloud-based platform for developers to discover and optimize AI models for Qualcomm hardware. 

TPU vs. Qualcomm AI hardware (Hexagon NPU)
  • Primary focus: Google TPUs target large-scale training and inference in the cloud; Qualcomm targets on-device, low-power AI for edge devices like smartphones and IoT.
  • Products: Google Cloud TPUs for data centers vs. Snapdragon Hexagon NPUs, Cloud AI 100, AI200, and AI250.
  • Availability: TPUs are accessible almost exclusively via the Google Cloud Platform; Qualcomm hardware is integrated into consumer products and sold as accelerator cards for servers.
  • Key benefit: TPUs offer high performance and efficiency for massive, specific AI workloads in Google's ecosystem; Qualcomm offers power efficiency, real-time performance, and enhanced privacy for edge and mobile devices.

 

 

Saturday, October 25, 2025

Apple Memory

 

For many years, Apple sourced SSD and HDD components from major manufacturers like Samsung, Toshiba, and SanDisk. However, modern Macs with Apple Silicon processors (M1, M2, and later) use custom, integrated storage designed by Apple itself. 
Modern SSDs (Macs with Apple Silicon)
In current Apple Silicon Macs, the SSD chips are not a standard, removable component. The NAND flash memory chips are soldered directly onto the logic board and integrate with the Apple Silicon chip. 
  • Integrated design: Apple designs its own integrated storage system, which is a key part of the Apple Silicon's architecture.
  • Supplier relationships: While Apple designs its own storage architecture, it still relies on third-party memory suppliers for the actual NAND flash chips. Historically, these suppliers have included Samsung, Kioxia (formerly Toshiba), and Western Digital/SanDisk.
  • Performance and firmware: This integrated approach allows Apple to manage the entire storage stack, optimizing it for speed and efficiency with custom firmware. 
Legacy storage (Macs with Intel processors)
Before transitioning to Apple Silicon, Apple used more traditional storage methods, including SSDs and HDDs.
  • SSDs:
    • Older MacBooks used proprietary SSD modules with non-standard connectors. The flash memory on these modules was often manufactured by Samsung, SanDisk, and Toshiba.
  • HDDs (Hard Disk Drives):
    • Hard drives were phased out of most Mac product lines in the late 2000s and 2010s.
    • In the past, hard drives used by Apple were standard SATA drives but often came with proprietary firmware. These components were sourced from various manufacturers, including Seagate and Hitachi (HGST). 
External storage

Apple's own branded external storage, such as that offered through its online store, is supplied by well-known storage manufacturers. For example, some products are made in partnership with LaCie and SanDisk. 