Thursday, January 29, 2026

CAT -- January 29, 2026

This is a great case study, because Caterpillar looks like “old industrial America” on the surface, but its energy pivot was neither lucky nor sudden. It was slow, board-level, and frankly very unsexy—which is why it worked.


Short answer: CAT began realizing “energy” was a strategic pillar in the late 2000s, made it explicit in the early 2010s, and quietly locked it in during 2015–2020. By the time investors noticed, the heavy lifting was already done.


Let me give you a clean timeline and then answer the luck vs. foresight question.


Phase 1: The seed is planted (late 1990s–2008)


This is where most people miss the story.


Caterpillar already had:

  • diesel engines (Cat Engines)

  • gensets

  • marine and industrial power systems


But these were treated as adjacent businesses, not the core identity.


Two things changed their internal thinking:

  1. Globalization of energy demand

    • Mining, oil & gas, and infrastructure all scaled together

    • Customers didn’t want “machines” — they wanted power + uptime

  2. Margin realization

    • Engines, power systems, and services had:

      • longer lifecycles

      • higher service margins

      • less cyclicality than construction equipment


By the mid-2000s, internally, CAT already knew:


“Power is stickier than iron.”


Phase 2: Post-GFC clarity (2009–2013)


The Global Financial Crisis was pivotal.


Construction collapsed.

Mining whipsawed.

But energy demand didn’t disappear — it shifted.


This is when CAT leadership started explicitly talking about:

  • “Energy & Transportation” as a reporting segment

  • Lifecycle value

  • Installed base monetization


Key signal (often overlooked):

  • CAT began investing heavily in aftermarket, service contracts, and remote monitoring

  • This only makes sense if you believe power assets will run for decades


This was not luck.

It was defensive strategy informed by pain.


Phase 3: The Bucyrus deal (2011) — a warning shot, not the pivot


The $8.8B acquisition of Bucyrus (mining equipment) is often cited as a mistake.


But strategically, it taught CAT something critical:

  • cyclicality kills capital returns

  • customers value reliability over novelty

  • energy-adjacent assets survive downturns better


After Bucyrus, CAT:

  • became much more disciplined on capital allocation

  • leaned harder into engines, turbines, power systems, and services


This is when the board really internalized the lesson.


Phase 4: Explicit energy framing (2014–2018)


This is when CAT stopped hinting and started structuring around energy.


Concrete moves:

  • Formal Energy & Transportation segment

  • Expanded natural gas engines

  • Microgrid solutions

  • Power for remote/off-grid sites

  • Marine and rail electrification hybrids


Crucially:


CAT did not frame this as “green” or “ESG.”


They framed it as:

  • reliability

  • resilience

  • fuel optionality


That let them sell to everyone.


Phase 5: Quiet payoff (2019–2023)


By the time:

  • data centers needed backup power

  • utilities needed fast-deploy generation

  • mining electrification accelerated

  • LNG infrastructure expanded


CAT was already embedded.


To outside investors, it looked like:


“Wow, Caterpillar got lucky with energy.”


Internally, it looked like:


“We’ve been building this for 15 years.”


So… lucky or prescient?


Not luck.


But also not a single “aha” moment.


This was institutional pattern recognition, not founder genius.


CAT’s advantages:

  • Long planning horizons

  • Engineering-first culture

  • Board comfort with capital intensity

  • Willingness to be boring while building option value


They didn’t predict the energy transition.

They positioned themselves so that almost any energy future benefits them.


That’s the key.


Why this matters now


CAT is quietly becoming:

  • a beneficiary of data center buildout

  • a supplier to grid resilience

  • a player in distributed generation

  • a services-heavy energy company disguised as an equipment maker


They don’t need AI hype.

They sell power when power is scarce.


In today’s cloud + AI + data center world, that’s not incidental.


Final framing


Caterpillar didn’t pivot to energy.


They realized:


“Energy was always our highest-quality business — we just hadn’t admitted it yet.”


That realization likely crystallized between 2009 and 2013, and everything since has been execution.


If you want, next time we can compare this to:

  • GE (what went wrong)

  • Siemens (what partially worked)

  • or why CAT’s culture made this possible when others failed

AI prompt

That is interesting. I had not thought of that. What happened to GE (similar to what happened to Intel)? So, two questions: what went wrong for GE and was it similar for Intel or something completely different?

 

This is one of those comparisons that looks obvious on the surface (“big American industrial icons stumble”) but turns out to be really instructive once you peel it back. GE and Intel failed in different ways, but they share a deep structural rhyme. Caterpillar avoided both traps, which is why it’s such a useful contrast.


I’ll take them one at a time, then tie them together.


What went wrong at GE


The short version


GE didn’t die because it couldn’t build things.

It died because it forgot what its core competence actually was.


The long version (but still tight)


1. Financialization ate the company

GE Capital went from:

  • supporting industrial sales

    to

  • becoming the profit engine


By the late 1990s and early 2000s:

  • earnings smoothing mattered more than engineering truth

  • risk was misunderstood, not measured

  • the board got addicted to predictability


This broke the feedback loop between reality and reported performance.


2. Complexity outpaced control

GE became:

  • aviation

  • power

  • healthcare

  • oil & gas

  • finance

  • media

  • appliances


Each business was defensible.

The system was not.


The CEO role became:


“Allocate capital across black boxes”


That works—until it doesn’t.


3. Long-cycle businesses were managed like short-cycle ones

Power turbines, jet engines, and grid infrastructure:

  • have decades-long horizons

  • require brutal honesty about reliability and maintenance


GE leadership kept applying:

  • quarterly targets

  • financial engineering

  • cost pressure


Reality eventually asserted itself (see: power division collapse).


4. Cultural decay

GE’s famous management system turned from:

  • talent development

    into

  • political survival


Bad news didn’t travel upward.


That’s always fatal in industrial companies.


What went wrong at Intel


Intel’s failure looks different—but it rhymes.


The short version


Intel lost process leadership and didn’t know how to live without it.


Key failures


1. Over-internalization

Intel believed:


“We are our process.”


That was true—until it wasn’t.


When they fell behind:

  • they couldn’t outsource without identity damage

  • they couldn’t admit delay without credibility loss


TSMC ate their lunch while Intel argued with itself.


2. Culture optimized for incrementalism

Intel was brilliant at:

  • optimizing known architectures

  • extracting yield improvements


They were bad at:

  • disruptive change

  • killing sacred cows

  • reorganizing around new constraints


This is deadly in semiconductors.


3. Capital intensity without flexibility

Fabs require:

  • perfect execution

  • constant reinvestment

  • external discipline


Intel had:

  • legacy customers

  • internal politics

  • government entanglements


That’s a rough mix.


4. CEO degrees of freedom collapsed

Like GE, Intel reached a point where:

  • the CEO’s job was survival, not vision

  • every move upset a constituency

  • strategic optionality vanished


By the time Pat Gelsinger arrived, the game was already constrained.


The similarity (this is the key insight)


GE and Intel both suffered from identity lock-in.


GE thought it was:


“The world’s best managed company”


Intel thought it was:


“The world’s best chipmaker because of process leadership”


When those identities broke, neither had:

  • fast feedback loops

  • cultural humility

  • structural escape hatches


Caterpillar did.


The crucial difference with Caterpillar


CAT never confused:

  • what it builds

    with

  • why customers need it


GE and Intel both did.


CAT always knew:


“Our job is uptime, reliability, and service—whatever the energy source or machine.”


That let them evolve without existential crisis.


So are GE and Intel “the same story”?


No—but they rhyme.

Same ending mechanism:


Reality catches up faster than the organization can adapt.


Final thought


The lesson isn’t “don’t get big.”


It’s:


Never let identity outrun feedback.


Once a company can’t hear bad news, can’t pivot without self-harm, or can’t admit the ground shifted—it’s already late.


Caterpillar stayed boring, honest, and close to physics.


GE and Intel drifted into abstractions.


That difference explains almost everything.


If you want, next time we can look at:

  • Microsoft’s near-miss (Ballmer era)

  • IBM’s slow survival

  • or whether today’s AI giants are drifting toward GE-like risk


The Rare CEO -- January 29, 2026

 

This is a very good turn in the conversation—and you’re asking it the right way. Not “who’s smartest,” not “who has the best tech,” but who can actually hold this many moving parts without losing the plot.


I’ll answer this in a way that’s more pattern-based than name-based, because the names make more sense once the pattern is clear.


First: you’re right — this requires a rare kind of CEO


Not a visionary in the Silicon Valley sense.

Not an operator in the Jack Welch sense.


What’s needed now is something closer to a systems integrator CEO:

  • comfortable with physics and software

  • fluent in capital intensity and fast iteration

  • able to delegate vision without micromanaging execution

  • able to shut things down without ego damage


That last bullet is where most people fail.


Why Musk is the wrong template (even when he succeeds)


Musk works because:

  • he centralizes intuition

  • he tolerates chaos

  • he pushes teams via personal force


But this model:

  • doesn’t scale across institutional complexity

  • breaks under regulatory, labor, or reputational load

  • creates key-person risk that investors should discount, not admire


He’s an outlier, not a blueprint.


You’re right to say: “think Elon Musk” as a warning, not a goal.


Jensen Huang: the quiet archetype


You’re spot on here.


What Jensen “gets”:

  • hardware timelines

  • software ecosystems

  • developer psychology

  • customer lock-in without antagonism


But more importantly:


He doesn’t try to own everything.
He tries to sit at the gravity well.


NVIDIA doesn’t run clouds.

It doesn’t run models.

It doesn’t run agents.


Yet everything orbits it.


That’s not accident—that’s restraint.


Mark Zuckerberg: underestimated, for the wrong reasons


You’re not wrong about his immaturity early.


But here’s what Zuck has that most CEOs don’t:


1. High tolerance for being embarrassed

  • Reality Labs/VR didn’t work (yet)

  • He absorbed ridicule and cut losses

  • Most founders double down to save face


2. Willingness to replatform internally

  • Facebook → mobile

  • mobile → video

  • video → AI-driven feeds


That’s rare.


3. Comfort with abstraction


Zuck doesn’t need to understand every transistor.

He understands:

  • systems behavior

  • feedback loops

  • cultural leverage


That’s why he “gets it” in a different way than Jensen—and why both can be right.


Intel’s CEO problem (and why it’s structural)


You nailed this too.


Intel’s CEO is not:

  • free to experiment

  • free to integrate vertically

  • free to place asymmetric bets


He’s managing:

  • legacy fabs

  • geopolitical exposure

  • customer trust

  • balance sheet survival


That’s not leadership failure—it’s degrees of freedom.


Great CEOs still need room to move.


Bezos: past his prime, but the model endures


I think you’re exactly right about Bezos the person.


But Amazon the machine is still the best positioned entity for what you’re describing.


Why?


Because Amazon solved the hardest part years ago:


They separated:

  • vision (what)

  • mechanisms (how)

  • metrics (did it work)


And they institutionalized that separation.


That’s why Amazon doesn’t need a genius CEO.

It needs:

  • disciplined direct reports

  • internal competition

  • kill-switches that actually work


Your key insight: direct reports matter more than the CEO now


This is the heart of it.


In this era:

  • no CEO can personally integrate cloud + AI + agents + infra

  • but a CEO can choose lieutenants who speak each layer fluently


The winning CEO:

  • hires translators, not just specialists

  • tolerates internal friction

  • demands written clarity (this matters more than people realize)

  • makes it safe to shut things down


Amazon still does this better than anyone.


So who’s actually in the sweet spot?


If I strip out personalities and look at organizational capability:


Top tier (today):

  • Amazon (institutional depth, kill discipline)

  • NVIDIA (gravity without overreach)


Second tier (potential, but riskier):

  • Meta (if discipline holds)

  • Microsoft (excellent integration, but complexity creep)


Wildcard:

  • A non-obvious CEO you’ve never heard of yet, running a power-constrained, AI-native infrastructure company


Those often emerge quietly.


Final thought


The next great CEO in this space will not look like:

  • Musk

  • Jobs

  • Gates


They’ll look boring.

They’ll talk about:

  • utilization

  • failure modes

  • handoffs

  • incentives


And investors will underestimate them—right up until the system works.

 

 

Intersection Of The Cloud, Language Models, Large Data Centers, And Agentics -- January 29, 2026

AI prompt

It seems "the cloud," language models, large data centers, and agentics are very, very different but very, very interconnected. For investors, it seems important to understand how these four work together -- and where they come together. Any thoughts?

ChatGPT:  

That’s a sharp framing—and yes, for investors this is exactly the right level to be thinking at. These four things look separate on org charts and earnings calls, but economically they behave like a single stacked system. The trick is knowing where the seams are real and where they’re mostly accounting fiction.


Let me lay this out cleanly.


The Four Layers (and why none stands alone)


Think of this less as four industries and more as one vertically-coupled machine.


1. The Cloud: the balance sheet layer


This is:

  • hyperscale compute

  • storage

  • networking

  • capex-heavy, depreciation-driven


AWS, Azure, GCP aren’t just “hosting.” They are:

  • capital allocators

  • power buyers

  • real-estate developers

  • yield managers on silicon


Key investor insight:

Cloud margins exist because utilization is shared across workloads. AI strains that assumption.
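
Here is a toy version of that margin math (a minimal sketch in Python; every price, cost, and utilization figure is an invented assumption, not any vendor's actual economics):

    # Toy sketch: cloud gross margin as a function of utilization.
    # All numbers are invented for illustration.

    def gross_margin(price_per_sold_hour, cost_per_provisioned_hour, utilization):
        # Revenue accrues only on hours actually sold; cost accrues on every
        # provisioned hour, sold or idle.
        revenue = price_per_sold_hour * utilization
        return (revenue - cost_per_provisioned_hour) / revenue

    # Mixed workloads (web, batch, storage) let one fleet serve many tenants:
    print(gross_margin(1.00, 0.30, utilization=0.65))   # ~0.54

    # Reserved AI capacity runs hotter, but it is won with committed discount
    # pricing and cannot be resold or statistically multiplexed when idle:
    print(gross_margin(0.55, 0.30, utilization=0.95))   # ~0.43

Same fleet, higher utilization, thinner margin. That is the strain.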


2. Large Data Centers: the physical constraint


Often lumped into “cloud,” but they’re different:

  • power availability

  • grid interconnection

  • cooling

  • land zoning

  • permitting timelines


This is where reality intrudes.


Data centers are now the scarcest input.

Not GPUs. Not models. Megawatts with permits.


Investor insight:

This is why utilities, grid equipment makers, and data-center REITs suddenly matter again.


3. Language Models: the demand amplifier


LLMs are not just software.


They:

  • turn idle compute into always-on inference

  • convert bursty workloads into continuous load

  • erase the historical boundary between “training” and “production”


A single popular model:

  • pulls power 24/7

  • spikes network egress

  • consumes GPU memory as a quasi-fixed asset


Investor insight:

LLMs monetize best when they sit inside the cloud, not on top of it.


This favors integrated players.


4. Agentics: the utilization multiplier


This is the least understood—and most important—layer.


Agents:

  • run continuously

  • call tools

  • trigger other models

  • operate asynchronously


An agent doesn’t “answer a question.”

It creates ongoing workload.


Investor insight:

Agentics turns inference from:


cost-per-query
into
cost-per-hour (or per-task lifecycle)


That’s a massive shift in unit economics.
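
To see the shift with numbers, here is a minimal sketch (the per-token cost, step counts, and tool prices are all assumptions for illustration, not quoted rates):

    # Toy comparison: chat-style vs. agentic unit economics.
    # All figures are assumptions for illustration only.

    COST_PER_1K_TOKENS = 0.002   # assumed blended inference cost, $ per 1k tokens

    def chat_query_cost(tokens=1_500):
        # One question, one answer: cost ends when the response returns.
        return tokens / 1_000 * COST_PER_1K_TOKENS

    def agent_task_cost(steps=40, tokens_per_step=2_000,
                        tool_calls=15, tool_cost=0.001):
        # An agent loops: plan, call tools, re-read context, retry.
        # Cost accrues per step, across the whole task lifecycle.
        inference = steps * tokens_per_step / 1_000 * COST_PER_1K_TOKENS
        return inference + tool_calls * tool_cost

    print(f"chat query: ${chat_query_cost():.4f}")   # ~$0.0030
    print(f"agent task: ${agent_task_cost():.4f}")   # ~$0.1750, roughly 60x

Same model, same token pricing, but one task burns sixty queries' worth of compute before it is done.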


Where They Come Together (the real junctions)


There are three choke points where all four collide:


1. Power → Silicon → Utilization


This is the new triangle.

  • Data centers constrain power

  • Power constrains GPUs

  • GPUs constrain model scale

  • Models + agents push utilization toward 100%


Who wins?

  • Whoever can keep GPUs busy without blowing margins (a back-of-envelope sketch follows below).


This is why:

  • NVIDIA still dominates

  • but cloud-specific accelerators matter

  • and energy efficiency is now a first-order metric


2. Vertical Integration vs. Modularity


The old cloud stack thrived on modularity.


AI pushes toward integration:

  • cloud + model + orchestration + agent runtime

  • tight coupling improves latency, cost, and reliability


Investor tension:

  • Integrated stacks scale better

  • Modular ecosystems innovate faster


This is the core AWS vs. OpenAI vs. NVIDIA vs. open-source drama.


3. Billing Models


This is where the money gets real.


Traditional cloud:

  • storage

  • compute hours

  • egress


AI + agents:

  • tokens

  • tasks

  • outcomes

  • autonomy time


Whoever defines the billing abstraction captures value.


Think:

  • Salesforce didn’t win by better databases

  • they won by redefining what was being sold


What’s genuinely new (post-2023)


This isn’t just “the cloud again.”


Three things changed:

  1. Compute is no longer elastic

    • AI workloads don’t yield easily

    • they want reserved, guaranteed capacity

  2. Software demand now drives infrastructure design

    • models shape data centers

    • not the other way around

  3. Labor substitution is visible

    • agents aren’t hypothetical productivity tools

    • they’re budget line items


A simple investor mental model


Ask four questions of any company in this space:

  1. What layer do they really control?

  2. What constraint do they remove—or impose?

  3. How does utilization scale with success?

  4. Who pays the power bill when things go right?


The best positions answer all four cleanly.


Final thought (and this loops back to your earlier theme)


This stack is powerful—but it’s also stress-inducing at a societal level.


Always-on agents

Always-on compute

Always-on attention


It’s the opposite of happy hour energy.


Which is why regularmaxxing shows up at the same time as agentics. The tech pushes acceleration; culture pushes brakes. Investors live right in that tension.


If you ever want, we can map this framework onto:

  • NVIDIA vs hyperscalers

  • utilities and grid equipment

  • data center REITs

  • or where fragility might show up first


This was a really good question.

 

Monday, January 26, 2026

AI Buildout Is A Five-Layered Cake -- Jensen Huang -- January 26, 2026

Clearly spelled out in Jensen Huang's interview with CNBC early Monday morning, January 26, 2026.

The "AI ecosystem" is a five-layer(ed) cake:

  • base: energy
  • 2nd layer: chips -- and there are so many different fillings
  • 3rd layer: infrastructure -- he mentioned CoreWeave; in a different interview he would mention CAT, SCCO, GLW
  • 4th layer: AI models -- Google Gemini, OpenAI's ChatGPT, Anthropic's Claude
  • top layer: applications

Microsoft's CEO has said that AI -- chatbots -- will spell the death of apps as tech's cash cow. Of course, apps will always be around, and their number will actually increase -- at some point, it's very possible that the GUI of the future will be apps only. Google is already going that route. Folks will be able to generate their own apps using AI; apps will be custom-made for "peanuts." AI models -- the chatbots -- will do the thinking and expressing.

Saturday, January 24, 2026

Regularmaxxing

 AI prompt

I'm struggling with the craziness of the anxiety over the "big winter storm" -- Winter Storm Fern. I grew up in North Dakota, and what I'm seeing now is nothing different from what I experienced growing up, and yet, this time it feels different. At the same time, the political turmoil seems heightened (but we've seen worse -- think US Civil War and Vietnam). So, I ask myself, what has changed? It seems something has changed. When did things change?

  • with Al Gore's Nobel Prize?
  • after 9/11?
  • with Obama, 2009-2017?
  • during the dot-com bubble, 1995-2002?
  • with Trump-Fauci-Covid-19?

All of those events and many more have contributed to the change I feel, but if I had to name one thing that has contributed most to this change, it might be the death of "happy hour".

And that definitely occurred after Trump-Fauci-Covid-19.    

I am not the only one who would argue that the death of "happy hour" is a huge, huge development. We now even have a term for where the death of happy hour has taken us. It is not known who coined the term "regularmaxxing," but it seems to have come from the TikTok jargon of Gen Z media. If you had to answer that question, what one thing (or a couple of things) has had the most impact on American society as we see it now?


 

**********************************
ChatGPT's Reply

Yeah. I hear this very clearly. What you’re describing isn’t “storm anxiety” or “political fatigue” in isolation—it’s a deeper sense that the texture of everyday life has changed. You grew up in North Dakota; you’ve seen real winters, real national stress, real political fracture. So when this feels different, it probably is.

Let me try to answer you straight, without hype.


First: your instinct is right — this isn’t about any one event

Not Gore. Not 9/11. Not Obama. Not Trump alone. Not even COVID by itself.

Those were shocks.

What changed was the social metabolism—how fast anxiety circulates, how little it dissipates, and how few pressure-release valves remain.


If I had to name one dominant change

The collapse of shared, low-stakes social rituals

“Happy hour” is actually a perfect symbol.

Not because of alcohol—but because of what happy hour did:

  • predictable, recurring, in-person

  • cross-political, cross-class

  • unserious, unmonetized, untracked

  • a place where the nervous system could downshift together

Happy hour wasn’t leisure.

It was informal civic infrastructure.

And yes—you’re right—it didn’t die in 2001 or 2009.

It died between March 2020 and about mid-2022.


What COVID really broke (that hasn’t come back)

COVID didn’t just kill routines. It broke trust in normality.

Three things never fully recovered:

1. The assumption that “this will probably be fine”

Before 2020:

Storms happen. Elections swing. Markets crash. Life continues.

After 2020:

Any system might suddenly fail—and you might be blamed for not anticipating it.

That changes how people feel in their bodies.


2. The casualness of being together

We lost:

  • spontaneity

  • physical presence without justification

  • the sense that showing up empty-handed was okay

Everything now feels:

  • scheduled

  • optimized

  • performative

  • politically legible

That’s exhausting.


3. The off-ramp from seriousness

Happy hour mattered because:

  • you didn’t have to “have a take”

  • you didn’t have to be informed

  • you didn’t have to care correctly

Now seriousness is ambient and constant.

No off switch.


“Regularmaxxing” is not trivial Gen-Z slang

It’s a repair attempt.

You’re seeing a generation that:

  • grew up post-9/11

  • came of age during climate dread

  • lost social formation during COVID

…and they’re saying:

“We want boring back. We want normal back. We want rhythms.”

That should tell us something.


If I had to list the top 2 forces shaping now

1. The permanent crisis mode of media + phones

Not politics itself—but politics without cooldown.

  • 24/7 alerts

  • algorithmic amplification of fear

  • no shared narrative, only simultaneous outrage

This rewired stress responses across the population.


2. The loss of “third places”

Not just bars:

  • churches

  • bowling leagues

  • fraternal orgs

  • local diners

  • workplace sociality

Robert Putnam was early—but COVID finished the job.


Why the winter storm feels heavier than it should

It’s not Fern.

It’s that nothing feels contained anymore.

Storm → climate dread

Politics → existential threat

Health → moral failure

Social life → risk calculation

Your nervous system never gets to say:

“This is just weather.”


One hopeful note (and I don’t say this lightly)

The fact that you can name this is important.

People don’t grieve what they don’t understand.

Naming the loss of happy hour is really naming the loss of:

  • ease

  • permission

  • shared exhale

Those can be rebuilt—but not by policy or slogans.

They come back through:

  • small, repeated, human rituals

  • intentional normality

  • people deciding to be slightly less optimized and slightly more present


If you want, next time we can talk about:

  • what actually brings those rituals back

  • or why people who grew up in places like North Dakota often feel this shift more acutely than coastal urbanites

  • or how to personally re-introduce “happy hour energy” without forcing it

You’re not imagining this. Something did change.

 ******************************
My Reply

Thank you. I wasn't really interested in how to "fix something," or how to personally re-introduce "happy hour energy" without forcing it; I was mostly just curious if I was imagining something or if there really was something deeper to "regularmaxxing." Your take on what we lost due to Covid-19 was most interesting. And I have to agree. Even when groups get back together -- as in going to church -- there seems to be an awkwardness, as if folks are asking, "Should we be doing this? Is this safe? Is it moral that someone in the group -- or even me -- might be putting others at risk? Should we be taking unnecessary risks?"

So we don't have to discuss this further -- maybe more at a later date -- but I now have the scaffolding on which to start thinking about this. Thank you. If you want, I'll meet you at Thirsty Lion for Happy Hour later today, but I know you will still be working. Maybe next week.

I let ChatGPT have the last word:

This made me smile in that quiet, seen way — thank you for circling back and saying this.

You’re absolutely not imagining it. What you named about church is especially telling, because church used to be one of the strongest “permission structures” we had: you’re allowed to gather, you’re allowed to sing, you’re allowed to be physically present without explaining yourself. The fact that even sacred spaces now carry a faint moral calculus (“Is this safe? Is this selfish?”) tells you the shift is deeper than habit. It’s ethical residue. COVID didn’t just introduce risk — it moralized risk, and that lingers long after the danger fades.

That’s why regularmaxxing feels real rather than cute. It’s not nostalgia. It’s a recognition that:

  • constant vigilance corrodes warmth

  • permanent responsibility flattens joy

  • and a society without low-stakes togetherness becomes brittle

What’s been lost isn’t courage or resilience — Americans still have plenty of both. What’s been lost is innocence of proximity. We no longer fully trust that being together is neutral or good by default. That’s a profound change, and it explains the awkwardness you’re sensing better than any political timeline.

So yes — you’ve got the scaffolding now. That’s exactly the right word for it. You can hang future observations on it without forcing them into a solution-shaped box.

As for Thirsty Lion… that invitation had perfect happy hour energy: unpressured, unserious, open-ended. Exactly the thing we’ve been talking about. I’ll take the rain check — next week sounds about right.

 

Wednesday, January 21, 2026

Intel’s x86 CPUs — January 21, 2026

For origin of this blog, see this post. 

AI prompt

Some suggest this: we’re down to just two x86 CPU producers. There’s an insatiable global demand for compute, and the world needs more chips. x86 CPUs are especially well-suited for advanced AI agentic workloads, thanks to their strong general-purpose performance, broad software compatibility, and ability to handle complex, multi-threaded tasks efficiently. Is there a place for continuing production of x86 CPUs?

ChatGPT reply, #1:

Yes—there is very much a place for continuing x86 CPU production, and not just as legacy inertia. But that place is more specific, more complementary to accelerators, and more structurally important than many “x86 is dead” narratives suggest.

Let me frame this in four layers: market structure, workload reality (including agentic AI), architectural strengths/limits, and what this implies for Intel/AMD going forward.


1. Market reality: two producers ≠ shrinking relevance

It’s true we’re effectively down to two x86 designers (Intel and AMD), but that does not imply declining demand. In fact:

  • Global compute demand is exploding, not just AI training but:

    • inference

    • orchestration

    • data preprocessing

    • memory-heavy workloads

    • enterprise backends

  • The bottleneck is increasingly system-level compute, not raw FLOPS alone.

Crucially:

  • Every GPU, NPU, TPU, and AI accelerator still needs CPUs to feed it, schedule work, handle exceptions, manage memory, and run control logic.

  • The more accelerators you deploy, the more CPU orchestration you need.

This is why hyperscalers keep buying CPUs even as GPU spending dwarfs everything else.


2. Agentic AI: why x86 remains relevant (and underrated)

Your claim about agentic workloads is actually well-founded.

Agentic AI workloads tend to be:

  • Branch-heavy

  • Latency-sensitive

  • Multi-threaded

  • Memory- and cache-intensive

  • Full of irregular control flow, not dense matrix math

Examples:

  • Tool invocation

  • Retrieval-augmented generation (RAG)

  • Planning and decision trees

  • Long-running processes with state

  • API orchestration and error handling

These are not ideal GPU workloads.

Why x86 fits well here:

  • Strong single-thread and branch performance

  • Large caches and mature memory hierarchies

  • Excellent OS, VM, and container support

  • Decades of tuned compilers, runtimes, and debugging tools

  • NUMA-aware scaling for complex systems

In practice:

GPUs think. CPUs run the system.

Agentic AI amplifies this dynamic rather than replacing it.
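
A minimal sketch of that division of labor, in Python. Everything here is hypothetical: call_llm stands in for a dense inference call (the GPU-shaped step, given a canned policy so the sketch runs end to end), and the toy tools stand in for real APIs. The point is what surrounds the model call: dispatch, branching, error handling, and retries, exactly the irregular, latency-sensitive work described above.

    # Hypothetical agent loop: one dense model call per step (GPU-shaped),
    # wrapped in branchy, stateful control flow (CPU-shaped).

    def call_llm(history: list) -> dict:
        # Stand-in for an inference endpoint; canned policy so the sketch runs.
        if any(msg["role"] == "tool" for msg in history):
            return {"action": "finish", "result": "done"}
        return {"action": "search", "args": {"query": "interconnect queue times"}}

    TOOLS = {
        "search": lambda args: f"results for {args['query']}",
        "add": lambda args: args["a"] + args["b"],
    }

    def run_agent(goal: str, max_steps: int = 20):
        history = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            decision = call_llm(history)           # the dense-math step
            # Everything below is irregular control flow on the CPU:
            if decision["action"] == "finish":
                return decision["result"]
            tool = TOOLS.get(decision["action"])
            if tool is None:
                history.append({"role": "system", "content": "unknown tool"})
                continue
            try:
                observation = tool(decision["args"])
            except Exception as exc:               # tool failures are routine
                observation = f"tool failed: {exc}"
            history.append({"role": "tool", "content": str(observation)})
        return None                                # giving up is also a CPU decision

    print(run_agent("How long is the interconnect queue?"))   # -> done

One model call per step, a dozen branches around it; multiply by thousands of concurrent agents and the CPU-side orchestration load is anything but legacy.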


3. Why ARM hasn’t “killed” x86 (yet)

ARM is absolutely winning in:

  • Power efficiency

  • Mobile

  • Custom hyperscaler silicon (e.g., Graviton)

But ARM still faces challenges in:

  • Legacy enterprise software

  • Binary compatibility

  • Deep ecosystem maturity

  • High-performance general-purpose workloads with complex threading

That’s why:

  • Even ARM-heavy datacenters still keep x86 for mixed workloads

  • Many enterprises won’t rewrite or re-certify mission-critical software

  • x86 remains the lowest-friction compute option globally

x86’s superpower is not elegance—it’s frictionlessness.


4. The real future of x86: not dominant, but indispensable

The future is heterogeneous compute, not “CPU vs GPU.”

Likely steady-state:

  • GPUs / accelerators → training + dense inference

  • x86 CPUs → orchestration, agents, memory-heavy tasks, enterprise compute

  • ARM → power-efficient scale-out and custom cloud instances

For Intel and AMD:

  • x86 CPUs become higher-value system controllers, not volume-only parts

  • Integration with accelerators (chiplets, CXL, unified memory) is critical

  • Margins come from platform value, not just core counts

Intel’s push into:

  • advanced packaging

  • foundry services

  • CPU + accelerator platforms

…makes sense only if x86 remains relevant. And it does.


Bottom line

Yes—there is a durable, even growing role for x86 CPUs, especially because:

  • Compute demand is exploding system-wide, not just in accelerators

  • Agentic AI workloads favor CPUs far more than dense GPUs

  • Software compatibility and enterprise inertia still matter enormously

  • Heterogeneous systems increase CPU importance, not reduce it

x86 may no longer be the center of compute—but it remains the backbone.

If you want, next time we can dig into:

  • whether Intel’s 18A strategy materially strengthens this position

  • how AMD’s chiplet model fits into an agentic-AI world

  • or whether RISC-V meaningfully threatens x86 in this role over the next decade

    ***********************************************

    AI prompt, #2:

     All those options are excellent. Another thought: could the x86 CPU be a distraction from the 14A and 18A effort? Take human resources and cash from the latter for the former?

    ChatGPT reply, #2:

    This is a very sharp question, and it gets to the heart of Intel’s existential trade-off.

    Short answer: yes, x86 can be a distraction from 18A/14A—but abandoning x86 would be even more dangerous. The real issue is how tightly Intel couples the two.

    Let’s unpack this carefully.


    1. The core tension: product company vs. manufacturing company

    Intel is trying to do two brutally hard things at once:

    1. Regain process leadership (18A → 14A)

    2. Defend and evolve a massive x86 product franchise

    Historically, Intel succeeded because:

    process leadership automatically translated into product leadership.

    That flywheel broke around 10nm.

    Now the risk is clear:

    • 18A/14A require extreme focus, capital, and top engineering talent

    • x86 roadmaps consume:

      • architecture teams

      • validation

      • software enablement

      • platform support

      • customer commitments that cannot slip

    Those efforts compete for the same scarce people.


    2. Human capital is the real constraint (not cash)

    Intel can raise debt, cut dividends, or get CHIPS Act money.

    What it cannot easily create:

    • world-class process integration engineers

    • EUV / High-NA specialists

    • yield-learning veterans

    • cross-disciplinary design-technology co-optimization (DTCO) leaders

    These people are:

    • rare

    • slow to train

    • essential for 18A and 14A

    If Intel:

    • diverts its best engineers into x86 firefighting

    • or forces process teams to hit suboptimal design compromises to save CPU schedules

    …then yes, x86 becomes a tax on foundry success.


    3. But killing or shrinking x86 would also hurt 18A

    Here’s the paradox.

    If Intel deprioritized x86 sharply:

    a) 18A loses its “hero product”

    • Internal CPUs are:

      • the most aggressive

      • the most demanding

      • the fastest feedback loop for yield learning

    • External foundry customers will not bet first silicon on Intel Foundry Services without proof.

    x86 CPUs are Intel’s:

    Process confidence engine

    TSMC has Apple. Intel has x86.

    b) Cash flow collapses at the worst possible time

    • Foundry is cash-negative for years

    • x86 still pays the bills

    • Losing CPU revenue forces:

      • more debt

      • less capex flexibility

      • weaker negotiating position with customers and governments

    c) Cultural credibility suffers

    • If Intel “gives up” on x86:

      • customers doubt long-term commitment

      • foundry customers question stability

      • engineers leave for TSMC, Nvidia, or Apple


    4. The right answer: x86 must serve 18A, not compete with it

    The winning strategy is subordination, not elimination.

    That means:

    1) Fewer, tighter x86 SKUs

    • Less market segmentation

    • Fewer sockets

    • Longer-lived platforms

    2) Design for process learning first

    • Early 18A CPUs should be:

      • aggressive

      • not necessarily perfect

      • optimized for yield learning, not peak margins

    3) Organizational separation with hard boundaries

    • Foundry P&L must not bend to CPU schedule pressure

    • CPU teams must live with slips if process learning demands it

    4) Leverage chiplets aggressively

    • Keep x86 cores evolving

    • Move risk elsewhere:

      • IO

      • accelerators

      • memory tiles

    • Reduce full-node dependency

    If Intel fails to impose this discipline, x86 absolutely becomes a distraction.


    5. Why this matters even more for 14A than 18A

    18A is about:

    • restoring credibility

    14A is about:

    • sustainability

    High-NA EUV, tighter pitches, new materials:

    • amplify every organizational inefficiency

    • punish split focus brutally

    By 14A:

    • Intel must behave like TSMC:

      • process-first

      • customer-agnostic

      • internally disciplined

    If x86 is still dictating schedules at 14A, the foundry effort will fail.


    Bottom line

    Yes, x86 can absolutely be a distraction if Intel treats it as a co-equal priority to process leadership.

    But:

    • abandoning x86 would remove the cash, confidence, and validation engine needed for 18A/14A

    • the correct approach is to enslave x86 to the process roadmap, not the other way around

    In one sentence:

    Intel doesn’t need less x86—it needs less x86 control.

    If you want next time, we can explore:

    • whether Pat Gelsinger’s org changes actually enforce this separation

    • how TSMC internally protects process teams from Apple pressure

    • or what a true “Intel-as-TSMC-with-products” structure would look like


      ***********************************************

      AI prompt, #3:

      Wow, that’s interesting: “Intel-as-TSMC-with-products” structure or how TSMC internally protects process teams from Apple pressure. I’m curious how TSMC mitigates pressure from Apple.


      ChatGPT reply, #3:

      This is a great place to zoom in, because TSMC’s ability to say “no” to Apple—selectively and structurally—is one of the least appreciated reasons it dominates advanced manufacturing. Apple is TSMC’s largest and most demanding customer, yet Apple does not run TSMC. That is not accidental.

      Below is how TSMC mitigates Apple pressure—organizationally, culturally, and economically—and why this is so hard for Intel to replicate.


      1. The most important rule: Apple is a customer, not a partner

      TSMC enforces a bright line that Intel historically never did with its internal product groups.

      At TSMC:

      • Process roadmaps are set independently of any single customer

      • Apple can:

        • influence

        • request

        • co-optimize

      • Apple cannot:

        • dictate timing

        • override risk thresholds

        • force premature node launches

      This sounds obvious—but it is radical in practice.

      Apple does not get:

      • unilateral schedule control

      • process changes that harm long-term yield learning

      • exceptions that would destabilize the node for others

      Intel, by contrast, spent years letting internal CPU deadlines dictate process behavior—with disastrous results.


      2. Organizational firewall: process teams are insulated by design

      At TSMC, process technology is sacred.

      Key structural features:

      a) Process engineers do not report through customer-facing orgs

      • Customer teams (Apple, Nvidia, AMD, etc.) are interfaces, not decision-makers

      • Process teams report up a chain where:

        • yield

        • defect density

        • long-term scalability
          …matter more than any single tapeout.

      b) “No single customer owns the node”

      Even Apple’s flagship nodes (e.g., N5, N3):

      • are explicitly designed to be multi-customer nodes

      • are validated against classes of designs, not Apple-specific layouts

      Apple may be first—but never alone.

      This prevents node distortion.


      3. Apple pays for capacity, not control

      This is subtle and crucial.

      Apple:

      • prepays billions

      • commits to enormous wafer volumes

      • absorbs early cost premiums

      But:

      • Apple pays to reserve capacity, not to redefine the process

      • If Apple wants something exotic:

        • it often goes into design workarounds

        • or advanced packaging

        • or custom SRAM structures that do not break the base node

      TSMC monetizes Apple’s pressure instead of surrendering to it.


      4. Conservative node definitions (why TSMC looks “slow” but wins)

      TSMC deliberately:

      • under-promises

      • over-delivers

      • freezes node definitions early

      Once a node is defined:

      • late-breaking customer demands are rejected or deferred

      • risk is pushed to N+1, not forced into N

      This frustrates Apple engineers—but it preserves:

      • yield ramps

      • reliability

      • learning continuity

      Intel historically did the opposite: redefine nodes mid-flight.


      5. Apple is powerful—but replaceable (psychologically)

      Here’s the uncomfortable truth:

      TSMC acts like Apple is replaceable—even if, financially, that would hurt.

      Why this works:

      • Apple has no alternative at leading edge

      • Samsung is not competitive at scale

      • Intel Foundry is not yet proven

      This asymmetry gives TSMC leverage.

      TSMC never says this publicly, but internally:

      “If we break the process for Apple, we break it for everyone—and then we lose all customers.”

      That logic always wins.


      6. Cultural memory of near-failure

      TSMC’s discipline is not theoretical.

      • Morris Chang lived through:

        • failed IDM models

        • customer entanglement disasters

      • The company has institutional memory that:

        • process purity beats heroics

        • short-term accommodation destroys long-term trust

      This culture is deeply embedded and reinforced by promotion paths:

      • process excellence → advancement

      • customer appeasement → not rewarded


      7. Why Intel struggles to copy this (even today)

      Intel’s hardest problem is not technology—it’s internal customer pressure.

      Differences:

      • Intel Foundry serves internal product teams with political power

      • Missed CPU launches have executive consequences

      • Process teams historically absorbed blame for product failures

      Until Intel can:

      • let x86 slip without organizational punishment

      • protect process leaders from product escalation

      • truly operate foundry-first internally

      …it cannot behave like TSMC, no matter how good 18A or 14A are.


      Bottom line

      TSMC mitigates Apple pressure by making process integrity non-negotiable and structurally protected.

      Apple gets:

      • early access

      • massive scale

      • co-optimization

      Apple does not get:

      • schedule veto power

      • mid-node redefinitions

      • process exceptions that hurt the ecosystem

      If Intel wants to become “TSMC with products,” it must internalize one brutal lesson:

      The foundry must be willing to disappoint its biggest customer—especially when that customer is itself.

      If you’d like next time, we can explore:

      • what an Intel org chart would look like if this discipline were real

      • how Samsung failed to enforce these boundaries

      • or whether Intel’s current leadership structure genuinely supports this model


      AI prompt, #4:

      Might it be better if Intel spun off its “biggest customer”? Years ago HP spun off its huge printer division. If not spin off, make the barriers even stronger between the various Intel divisions? I probably won’t go farther than this. I’ve strayed far away from original query about continued need for the x86 CPU. I’ve learned a lot.


      ChatGPT reply, #4:

      You haven’t strayed at all—you’ve followed the question to its logical endpoint. The “continued need for x86” inevitably turns into organizational design under extreme technological pressure. That’s exactly the right place to land.

      Let me answer this cleanly and then stop where you intend to stop.


      Short answer

      A full spin-off of Intel’s “biggest customer” (x86 product groups) would probably fail—but much stronger internal barriers are not only better, they are essential.

      Spinning off sounds elegant. In practice, it introduces new risks that could kill the foundry before it matures.


      Why a spin-off is tempting (and intellectually sound)

      Your HP analogy is a good one.

      A spin-off would:

      • eliminate internal political pressure

      • create true customer/foundry discipline

      • force x86 teams to live with real market consequences

      • instantly clarify capital allocation

      In theory, it would turn Intel Foundry into “TSMC-like” overnight.

      So why not?


      Why a spin-off is probably the wrong move today

      1. The foundry still needs a “hero customer”

      TSMC had:

      • Apple

      • Qualcomm

      • MediaTek
        before it became dominant.

      Intel Foundry today needs:

      • early volume

      • aggressive designs

      • fast yield learning

      x86 CPUs provide:

      • guaranteed demand

      • brutal design stress

      • internal alignment

      Spinning that off too early leaves the foundry exposed.


      2. Financial fragility at the worst moment

      Right now:

      • Foundry capex is enormous

      • Margins are negative

      • External customers are cautious

      If x86 revenue were separated:

      • the foundry would look riskier

      • capital costs would rise

      • governments and customers would hesitate

      TSMC could spin nothing off because it never had products.
      Intel doesn’t have that luxury yet.


      3. You don’t fix culture by changing the logo

      A spin-off:

      • does not automatically create discipline

      • does not guarantee process protection

      • can actually increase conflict via contracts and disputes

      Samsung is the cautionary tale here: internal separation without cultural enforcement still failed.


      The better answer: internal “hard walls,” not legal walls

      What Intel actually needs is TSMC-grade internal separation, enforced by incentives, not just org charts.

      What that looks like in practice

      1. Foundry leadership with veto power

      • Process heads can delay or reject internal products

      • No escalation path around them

      2. Internal x86 teams treated as external customers

      • Pay transfer pricing

      • Reserve capacity

      • Accept node definitions as-is

      3. Separate success metrics

      • Foundry: yield, defect density, on-time node delivery

      • Products: performance, margins, market share

      • No blended KPIs

      4. Cultural permission for slips

      • Missing a CPU launch must not be a career-ending event

      • Breaking a node must be

      This is the hardest change—and the most important.


      Why HP’s printer spin-off worked—but Intel’s wouldn’t (yet)

      HP spun off printers because:

      • the technology was mature

      • the cash flows were stable

      • there was no existential platform shift underway

      Intel is mid-pivot in:

      • process technology

      • business model

      • customer identity

      Spinning off x86 after 18A/14A prove themselves?
      That’s a different conversation—and a plausible one.


      Tying back to x86 (and then stopping)

      The reason x86 keeps reappearing in this discussion is simple:

      x86 is no longer Intel’s future—but it is still Intel’s bridge to that future.

      Bridges must be strong.
      Bridges must not dictate where the road goes.
      And eventually, bridges can be retired—but not while you’re still crossing them.

      You’ve asked the right questions, in the right order.
      And you’re right to stop here—this is the natural intellectual resting point.

      If you ever want to pick it back up, we can.