Friday, November 7, 2025

The End Of The F-150 Lightning -- November 7, 2025

If this becomes a story, we'll track it here.

Right now, this story is absolutely everywhere. I kind of don't get why it's such a big story. EVs may be a thing, but EV pickups are not.

Link here

I see dozens of pickup trucks every day in my area of north Texas (the DFW area). I almost never see a pickup truck towing anything except landscaping equipment.

Know what people use to haul stuff? U-Haul. The busiest U-Haul centers in the country are in Texas.

Link to WSJ.  

Ford executives are in active discussions about scrapping the electric version of its F-150 pickup, according to people familiar with the matter, which would make the money-losing truck America’s first major EV casualty. 

The Lightning, once described by Ford as a modern Model T for its importance to the company, fell far short of expectations as American truck buyers skipped the electric version of the top-selling truck. Ford has racked up $13 billion in EV losses since 2023.

Overall EV sales, already falling short of expectations, are expected to plummet in the absence of government support. And big, electric pickups and SUVs are the most vulnerable.

“The demand is just not there” for F-150 Lightning and other full-size trucks, said Adam Kraushaar, owner of Lester Glenn Auto Group in New Jersey. He sells Ford, GMC, Chevy and other brands. “We don’t order a lot of them because we don’t sell them.”

No final decision has yet been made, according to people familiar with the discussions, but such a move by Ford could be the beginning of the end for big EV trucks.

 

PDX Routes To Be Affected By Government Shutdown — ChatGPT — November 7, 2025

Safest


PDX–DFW is one of the safest routes in a 10% cancellation scenario.

It wasn't listed in the original list because it falls into the "unlikely to be cut" category, along with routes like:

  • PDX–SEA

  • PDX–SFO

  • PDX–DEN

  • PDX–PHX

  • PDX–LAX

Original List

Most likely to be cancelled first (High vulnerability — low frequency / seasonal / single-carrier)

  1. PDX — Keflavík (KEF), Iceland — seasonal transatlantic or limited-season service; typically few weekly departures. (Seasonal / leisure). Portland Monthly

  2. PDX — London (LHR) — long-haul transatlantic with limited weekly frequencies compared to domestic flows (1–7/wk historically). (Long-haul / low frequency). FlightConnections

  3. PDX — Amsterdam (AMS) — long-haul, often limited weekly schedule. (Long-haul / low frequency). FlightConnections

  4. PDX — Cancún (CUN) — seasonal leisure route (typically concentrated in high-travel months). Portland Monthly

  5. PDX — Guadalajara (GDL) / other Mexico leisure cities — often fewer frequencies than major domestic routes; sometimes seasonal. FlightsFrom

  6. PDX — Bozeman (BZN) — small non-hub mountain/leisure city, limited daily flights (often 1–2/day). FlightConnections

  7. PDX — Missoula (MSO) — small city non-hub, typically low frequency (1–2/day or less). FlightConnections

  8. PDX — Kalispell / Glacier (FCA) — seasonal/leisure, low frequency. FlightConnections

  9. PDX — Provo (PVU) — Allegiant style service (inherently lower-frequency/low-cost leisure). FlightConnections

  10. PDX — Idaho Falls / Jackson-area — small regional leisure markets, low frequency. FlightConnections

Medium vulnerability (one carrier or limited daily frequency but not strictly seasonal)

  1. PDX — Guadalajara (GDL) (if served at reduced frequency) — (Mexico business/leisure mix). FlightsFrom

  2. PDX — New York (EWR / JFK) when served by only one carrier on certain city-pairs (some flights can be 1–2/day depending on season). FlightsFrom

  3. PDX — Minneapolis (MSP) (some frequencies by multiple carriers but spare flights are less essential than core West Coast trunk routes). FlightsFrom

  4. PDX — Nashville (BNA) — usually fewer daily flights than big hubs (single-carrier patterns). FlightConnections

  5. PDX — Some Florida cities (e.g., Fort Lauderdale, Tampa) — often medium vulnerability depending on season. FlightConnections

Lower vulnerability (unlikely to be cut early — high frequency / hub links)

  1. PDX — Seattle (SEA) — very high frequency (the largest route from PDX, with a large number of weekly departures). Airlines will preserve this. FlightsFrom

  2. PDX — San Francisco (SFO) / San Jose (SJC) / Los Angeles (LAX) — high-frequency West Coast trunk routes; lower chance of cancellation in a 10% cut. FlightConnections

  3. PDX — Phoenix (PHX) / Denver (DEN) — hub connectors with multiple daily frequencies; less vulnerable. FlightConnections

  4. PDX — Las Vegas (LAS) — multiple carriers and many daily flights; lower vulnerability. FlightConnections

  5. PDX — Honolulu / Kahului / Kona (HNL/OGG/KOA) — Hawaiian routes are important leisure routes but often multi-daily; vulnerability depends on season & carrier. FlightConnections


Wednesday, November 5, 2025

How Did AWS Become The Leader In Cloud?

 

Updates

November 5, 2025: corporate overhaul. Link here. AWS far ahead in cloud services, but Microsoft and Google now growing faster. OpenAI signs seven-year deal with AWS to scale ChatGPT.

Original Post 

AI prompt

Above, in a visual you show AWS, CoreWeave, Google and Microsoft in the “Cloud Layer” box. On a percentage basis, I saw somewhere that AWS is still in the lead, and by quite a bit. How in the world did AWS see this coming? How in the world did AWS become the leader in cloud? Jeff Bezos must have seen something, or he had some really prescient people on his team. Thoughts?

 AI reply:

Yes — AWS’s rise is one of the most remarkable business foresights in modern tech history, and it happened largely because Jeff Bezos and a small internal team saw a need no one else recognized at scale.


Here’s the short version of how and why it happened:


🧩 1. The accidental discovery


In the early 2000s, Amazon realized it was repeatedly building the same internal infrastructure — databases, storage systems, and compute power — every time a new product launched (for example, Prime, Marketplace, Kindle). An internal task force proposed standardizing these services as modular, reusable tools.


That insight — “if we need this infrastructure, so do others” — was the birth of AWS. It was not initially about making money; it was about solving Amazon’s own scaling pain.


🚀 2. The right timing


By 2006, startups were desperate for computing power but couldn’t afford to buy servers. AWS launched S3 (storage) and EC2 (compute) — cheap, flexible, pay-as-you-go services that fit the startup economy perfectly.

This was years before Google or Microsoft even believed in renting out infrastructure.


Bezos’s genius wasn’t predicting AI or the cloud economy — it was believing infrastructure could be a retail business.


⚙️ 3. The invisible moat


Once developers built their apps on AWS, they stayed. Every new tool — database (RDS), analytics (Redshift), AI/ML (SageMaker) — deepened that moat. The “switching cost” became enormous.

By the time Azure and Google Cloud got serious, AWS already had a massive installed base.


🌍 4. The culture of builders


AWS didn’t just sell servers — it sold autonomy.

Each service team acted like a startup, and customers (mostly engineers) trusted that AWS would keep iterating fast and reliably.

That’s why government agencies, hedge funds, and then Fortune 500s followed the startups — the cloud had become a default utility.


📈 5. The compounding effect


Every dollar AWS earned was reinvested into more data centers and R&D. At one point, AWS accounted for more than 100% of Amazon’s operating profit, subsidizing retail. That made it even harder for competitors to catch up — AWS could run profitably and invest aggressively.


🧠 The prescience


Bezos’s insight wasn’t mystical; it was systemic thinking.

He saw that:


The internet would eventually be built on top of infrastructure — and whoever owned the infrastructure would own the margin.


He once told his executives that “your margin is my opportunity.”

With AWS, he reversed it — your infrastructure is my opportunity.

The timeline for AWS decisions between 2000 and 2020


 

The Cloud Layer And The Subcontractors -- November 5, 2025

 

AI prompt

I understand this, that CoreWeave is the largest AI / LDC operator in the US, and that Cisco is huge in wiring servers, so one would think that the two (CoreWeave and Cisco) are competitors, but my hunch is that CoreWeave is the "Big Kahuna" responsible for the overall LDC from start to finish and Cisco is simply one of the many subcontractors. Thoughts?

AI reply:

Excellent read of the situation — and you’re essentially right.

Let’s unpack it cleanly:


1. What CoreWeave actually is


CoreWeave isn’t a “networking” company like Cisco; it’s a specialized AI cloud infrastructure operator — essentially, a next-generation data-center company optimized for large-scale GPU computing (NVIDIA H100s, GB200s, etc.).

They design, finance, and operate AI-focused data centers that can rent GPU capacity to OpenAI, Microsoft, Anthropic, etc. In short: they are a vertically integrated cloud provider focused on AI workloads.


Think of them as “AI’s version of AWS”, but stripped down to the essentials:

  • Power procurement

  • Data center real estate and cooling

  • Network interconnects (via partners)

  • GPU clusters and virtualization software (their secret sauce)


2. What Cisco actually does


Cisco’s domain is infrastructure plumbing — switches, routers, security systems, interconnects, fiber backbones, and now some AI networking systems (like Ethernet-based AI fabrics). Cisco sells the underlying hardware and network architecture, not the compute or the cloud platform.


So while CoreWeave uses networking gear, it doesn’t build it. Cisco, in turn, sells to CoreWeave (and Equinix, and AWS, and everyone else). The two aren’t direct competitors — they’re complementary within the same value chain.


3. The hierarchy (simplified)


Here’s how the AI data-center stack looks, roughly in order from bottom to top:

 



4. The “Big Kahuna” view


Your hunch is exactly right:

CoreWeave is the “prime contractor” — responsible for designing and running the full AI compute environment. Cisco and others (Arista, NVIDIA, etc.) are the subcontractors providing critical hardware and interconnect solutions.


In military or construction terms:

  • CoreWeave = general contractor (designs, coordinates, delivers the full system).

  • Cisco = electrical subcontractor (delivers wiring, switches, and connectivity).


Would you like me to sketch a visual map of this AI Data Center Value Chain — showing where CoreWeave, Cisco, NVIDIA, and the hyperscalers each sit? It’d mirror your earlier “Operating System Ecosystem Map” and make this hierarchy instantly clear.

A visual map of this AI Data Center Value Chain


 

 This, of course, leads directly to Saudi Arabia's Humain:

Yes — the Humain project in Saudi Arabia is essentially a microcosm of the entire AI infrastructure stack in motion:

  • AWS as the cloud spine (provisioning, orchestration, APIs)

  • NVIDIA / AMD / Qualcomm supplying compute diversity — GPUs for training and inference, plus edge chips for local deployment

  • Cisco as the connective tissue — ensuring high-bandwidth, low-latency interlinks from desert-scale data centers to urban and edge nodes

And with Saudi capital underwriting it, they’re effectively compressing twenty years of Western cloud evolution into one national leap, skipping over legacy systems and going straight to AI-native infrastructure.

It’s fascinating — the first time we’re seeing a state-sponsored attempt to build a complete AI ecosystem from the ground up, using U.S. technology partners as modular layers rather than as dominant owners.

If you’d like, I can build a companion map — “The Humain Stack” — showing how Saudi Arabia’s project mirrors and diverges from the U.S. CoreWeave / AWS model. It would make the geopolitical and technical structure visible at a glance. Would you like me to generate that? 

Visually:

From the earlier conversation, do you want to try the visual again, showing how “The Humain Stack” parallels the U.S. hyperscaler model (AWS, Azure, CoreWeave)?

 

Sunday, November 2, 2025

iPhones / Androids -- Hardware / Operating System -- Another Look

AI prompt:

iPhones / Androids. iPhone is a "brand" or a "make." Android is an operating system. Among Apple products there are multiple operating systems. Are there multiple "Android operating systems" based on the hardware (desktop, laptop, tablet, phone, etc.)? I assume the Google Pixel uses a particular operating system. Do non-Apple / non-Chrome computers use an "Android" operating system? Is there yet a third operating system, the "Chrome operating system"? How do you sort this issue of operating systems across different hardware devices?

Summary

Please generate that "Operating System Ecosystem Map -- 2025."

Saturday, November 1, 2025

Substrate Promises 2nm-Class Chipmaking At A Tenth Of The Cost Of EUV -- Tom's Hardware -- November 1, 2025

Link here.

However, the tool that Substrate is developing does not appear to be compatible with existing equipment and production flows, so the company will have to reinvent the whole supply chain to succeed. Substrate does not plan to sell its tool, though; instead, it intends to build its own fab and provide foundry services.

As integrated circuit features are getting smaller, chipmakers are using increasingly intricate lithography tools that now cost around $235 million for an ASML NXE:3800E Low-NA EUV scanner or around $380 million for an ASML EXE:5200B High-NA EUV scanner. As a result, fabs are becoming increasingly expensive to build, and chips are becoming more expensive to produce.

Substrate models that a leading-edge fab will cost around $50 billion by 2030, leaving semiconductor production to a handful of companies with very deep pockets. Furthermore, such fab expenditures are expected to increase the cost of a 300-mm wafer, which it claims could balloon to $100,000 when using leading-edge fabrication processes. This will make the development and production of advanced chips prohibitively expensive for small companies. Substrate intends to change that and reduce wafer pricing to just $10,000 by 2030. 

"At Substrate, we have a pathway to reduce the cost of leading-edge silicon by an order of magnitude compared to the current cost-scaling path we are on," a statement by the company reads. "By the end of the decade, Substrate will produce wafers closer to $10,000, not $100,000."

Note that Substrate is by no means the only company exploring particle accelerators as light sources for EUV or beyond-EUV lithography. In the U.S. alone, two companies — Inversion Semiconductor and xLight — as well as researchers at Johns Hopkins University, have revealed that they are working on lithography systems harnessing particle accelerators over the past 12 months. Chinese scientists and Japanese researchers are also testing particle accelerators for semiconductor production.

Substrate's X-ray lithography

Substrate is developing a new type of lithography system that uses a particle accelerator to produce short-wavelength X-ray radiation (or light) for chipmaking. The goal is to replace ASML's expensive EUV lithography scanners with compact, low-cost machines capable of printing transistor patterns at a 2nm-class process technology (or even more advanced, the company claims). The firm claims the machine should reduce chip production costs by 10 times by the end of the decade.

At the core of Substrate’s technology is a custom particle accelerator that uses radio-frequency cavities to propel electrons (produced by an unspecified emitter) to very near the speed of light. These relativistic electrons then fly through a series of magnets whose fields flip back and forth, wiggling the electrons from side to side and causing them to shed energy as coherent bursts of intense X-ray light (or radiation).
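
For a rough sense of the accelerator physics, the on-axis wavelength of this kind of undulator radiation is set by the electron energy (through the Lorentz factor), the magnet period, and the deflection parameter K. A minimal sketch in Python, assuming a 2 cm magnet period and K = 1 (Substrate has not disclosed its actual parameters), shows why the electrons must be pushed to GeV-class energies to radiate at nanometer wavelengths.

# On-axis fundamental wavelength of undulator radiation:
#   lambda = (lambda_u / (2 * gamma^2)) * (1 + K^2 / 2)
# The 2 cm period and K = 1 are illustrative assumptions, not Substrate's design.
import math

ELECTRON_REST_ENERGY_MEV = 0.511

def electron_energy_gev(target_wavelength_nm: float,
                        period_cm: float = 2.0, K: float = 1.0) -> float:
    """Electron beam energy needed to radiate at the target wavelength."""
    lam = target_wavelength_nm * 1e-9      # output wavelength, m
    lam_u = period_cm * 1e-2               # undulator period, m
    gamma = math.sqrt(lam_u * (1 + K**2 / 2) / (2 * lam))
    return gamma * ELECTRON_REST_ENERGY_MEV / 1000

for wl in (13.5, 10.0, 1.0):               # EUV, edge of the soft X-ray band, 1 nm
    print(f"{wl:5.1f} nm -> ~{electron_energy_gev(wl):.1f} GeV electrons")
# 13.5 nm -> ~0.5 GeV; 10 nm -> ~0.6 GeV; 1 nm -> ~2.0 GeV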

That light is 'billions of times brighter than the sun,' likely producing pulses intense enough to achieve the desired resolution and dose. The X-ray pulses are then focused by 'a succession of perfectly polished optics' to project a photomask onto a photoresist-coated silicon wafer. Note that Substrate never mentions reticle and resist in its official description, only claiming that 'bright pulses of light' are collimated and transported 'all the way to the silicon wafer,' which implies maskless direct-write lithography, which is good enough for research purposes, but orders of magnitude too slow for the mass production of chips. However, this remains speculation on our part.

In fact, Substrate's description of its technologies is very brief and lacks detail (perhaps for competitive reasons), making it difficult to analyze. However, since the company mentions X-rays, we are dealing with electromagnetic radiation with wavelengths ranging from 0.01nm to 10nm and energies from roughly 100 eV to 100 keV. The shorter the wavelength, the finer the structures that can be printed with improved accuracy, but the harder the light is to manage and work with.

Given Substrate's achievements so far, we are likely dealing with soft X-rays (wavelengths of 1-10nm, lower energy) rather than hard X-rays (wavelengths of 0.1-1nm, higher energy).
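
The wavelength and energy bands above follow directly from the photon-energy relation E = hc/λ, or roughly E[eV] ≈ 1240 / λ[nm]; here is a quick check using generic physics (standard band definitions, not Substrate-specific figures).

# Photon energy from wavelength: E = hc / lambda, i.e. E[eV] ≈ 1239.84 / λ[nm].
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

print(f"EUV (13.5 nm): ~{photon_energy_ev(13.5):.0f} eV")
bands = {                                   # (longest, shortest) wavelength in nm
    "Soft X-ray (1-10 nm)":       (10.0, 1.0),
    "Hard X-ray (0.1-1 nm)":      (1.0, 0.1),
    "X-ray overall (0.01-10 nm)": (10.0, 0.01),
}
for name, (long_wl, short_wl) in bands.items():
    print(f"{name}: ~{photon_energy_ev(long_wl):,.0f} eV to ~{photon_energy_ev(short_wl):,.0f} eV")
# Soft X-ray: ~124 eV to ~1,240 eV; hard X-ray: ~1,240 eV to ~12,398 eV;
# overall: ~124 eV to ~123,984 eV, i.e. roughly 100 eV to 100 keV, as quoted above.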

Since short-wavelength light (including EUV and X-rays) is strongly absorbed by most materials, managing it requires a set of perfectly polished mirrors that reflect light at grazing angles (to avoid absorption), ultra-precise alignment, and vacuum environments. Also, X-ray lithography requires all-new resists that can handle high-energy photons without damage or blur. 

2nm-like CD and T2T spacing

To prove that its XRL method works, Substrate has shown off images of a random logic contact array with 12nm critical dimensions (CD) and 13nm tip-to-tip (T2T) spacing printed with high pattern fidelity, as well as random vias with a 30nm center-to-center pitch, possessing superb pattern quality and critical dimension uniformity. If such metrics could be achieved in mass production today, this would revolutionize the lithography industry, as it would enable scaling along both axes at 2nm-class nodes (and below) without multi-patterning.

Modern EUV scanners with 0.33 NA optics can achieve critical dimensions of 13nm–16nm in high-volume manufacturing, which is sufficient to print a 26nm minimum metal pitch (good enough for 2nm or 3nm-class process technologies) and a 25nm T2T interconnect space with a single exposure.
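
Those single-exposure numbers for 0.33 NA EUV are consistent with the Rayleigh resolution criterion, CD = k1·λ/NA, at practical k1 values of roughly 0.33 to 0.40; the sketch below is a textbook estimate, not an ASML-published specification.

# Rayleigh criterion for projection lithography: CD = k1 * wavelength / NA.
# The k1 values are typical practical figures, not vendor specifications.
def min_cd_nm(wavelength_nm: float, na: float, k1: float) -> float:
    return k1 * wavelength_nm / na

EUV_WAVELENGTH_NM = 13.5
for k1 in (0.33, 0.40):
    cd = min_cd_nm(EUV_WAVELENGTH_NM, na=0.33, k1=k1)
    print(f"k1 = {k1:.2f}: CD ≈ {cd:.1f} nm, minimum pitch ≈ {2 * cd:.0f} nm")
# k1 = 0.33: CD ≈ 13.5 nm, minimum pitch ≈ 27 nm
# k1 = 0.40: CD ≈ 16.4 nm, minimum pitch ≈ 33 nm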

The disparity between CD and T2T performance arises because chipmakers tend to optimize resolution in the Y direction (CD) to get the tightest metal-pitch line-space pattern, at the cost of resolution in the X direction, which means that T2T features print poorly or inconsistently, leading to bridging defects, stochastic defects, yield loss, complicated design rules, and slower scaling. To mitigate this and avoid blurred or inconsistent line ends at tip-to-tip spacings, Intel applies pattern-shaping tools in the X direction with its 18A fabrication technology, but this complicates the overall production flow and does not fundamentally solve the issue.

Substrate's tool (assuming these are real lab results, not a simulation) can already outperform existing Low-NA EUV scanners in terms of achievable CDs with single-exposure patterning, and it leaves them far behind when it comes to T2T spacing printed with high fidelity. This means that Substrate's X-ray lithography tool could conceivably replace the costly EUV multi-patterning used for sophisticated 3nm and 2nm-class process technologies, or the pattern shaping used for Intel 18A.

Our friends at SemiAnalysis have managed to get more performance claims from Substrate, which look even more impressive. The company claims it has achieved overlay accuracy of under 1.6nm, full-wafer critical-dimension uniformity (CDU) of 0.25nm, line edge roughness (LER) of under 1nm, and local critical dimension uniformity (LCDU) below 1.5nm.

If accurate, this performance would match or surpass ASML's Twinscan NXE:3800E in uniformity, though its overlay precision is slightly worse than the 0.9nm machine-matched overlay standard in the latest EUV scanners. Also, the line-width uniformity of contacts on an image provided by Substrate is rather poor.

Assuming the results presented by Substrate are real and were achieved in a lab environment, the company has solved three critical challenges of X-ray lithography: first, building a light source featuring an electron gun and a particle accelerator; second, creating a grazing-incidence mirror system to reflect and focus X-rays at very shallow angles; and third, making the whole thing compact enough to fit into a lab.

However, Substrate still has a lot of work to do to turn its X-ray lithography technology from a lab success into a viable production tool. Substrate must prove that its X-ray lithography system can maintain beam stability, optical precision, resist compatibility, overlay accuracy, and commercial throughput simultaneously, something no X-ray platform has ever achieved.

Existing photoresists are incompatible with X-ray radiation, as they are optimized for EUV radiation with considerably lower photon energy. So, Substrate will have to invent a proper resist and then produce it at volume. The company will also have to develop photomasks that can sustain X-ray radiation. Grazing-incidence mirrors for X-rays are also not in mass production, and it is unknown whether they can be mass-produced cheaply and reliably by existing producers like Zeiss. 

Substrate will also have to ensure that X-rays do not damage the underlying transistors or introduce stochastic defects. Achieving overlay accuracy below 1nm (to match ASML's production-level alignment precision) remains another challenge for the company. This is perhaps because the company still has to address issues such as wafer handling, stage repeatability, and other factors related to high-precision mechanics, which ASML has taken decades to solve.

Beyond that, the tool must reach commercial throughput and consistent yield, something that took years for ASML's EUV tools. In fact, ASML's EUV timeline is instructive: it took the industry 12 years to go from an alpha demo tool (2006) to mass production (2018), and about seven years to go from the first pre-production system (2010) to a mass-production-capable scanner.

Speaking of mass-production-capable X-ray lithography tools, it is important to note that Substrate has no intention of selling them to third parties such as Intel or TSMC. Instead, Substrate plans to build its own fabs in the U.S. (a move that could give the company geopolitical importance in the eyes of the U.S. government), install additional tools, and offer foundry services, thus challenging existing chip contract manufacturers. 

However, this strategy adds complexity and cost. Constructing even a single high-end semiconductor fabrication plant would require tens of billions of dollars in investment and a large ecosystem of suppliers and service infrastructure, which currently does not exist for X-ray lithography production.

Substrate would also need to integrate its XRL litho machines with hundreds of other tools in the fab, or persuade its suppliers (such as Applied Materials, KLA, Lam Research, etc.) to help it do so, which likely involves further investments from the company, making its first fab particularly expensive.

Also, running both a toolmaking operation and a chip foundry would stretch Substrate's technical and financial resources, making it particularly hard to achieve the promised per-wafer price of $10,000 by the end of the decade, as investors will likely demand returns after pouring tens of billions of dollars into the company.

However, if Substrate succeeds in both roles, it could shift the balance of the semiconductor supply chain back to the U.S., as the company will likely outpace ASML's tools in terms of resolution and performance, and TSMC in terms of design cycle time and potentially volume.

Chatbot Timelines -- November 1, 2025

When -- what year -- was the average American able to download a chatbot? Timeline of chatbots.



Computers: "In The Beginning," How Many GUIs Were There?

A review:

So, rather than a single number, the "beginning" involved a competitive landscape with several companies developing and releasing their own distinct graphical interfaces. 

Look at the time frame

  • baby footsteps: 1973
  • ten years later, 1983: the first commercial personal computer with a GUI -- the Apple Lisa
  • another two years, 1985: Microsoft Windows 1.0