
🔮 Exponential View #546: Capex is driving the energy transition. AI progress needs ingenuity. Xi's scared of pess…

"Azeem Azhar, Exponential View" <exponentialview@substack.com>
October 19, 2025

Good morning!

Grid-scale batteries are rewriting the logic of the energy system. In California, they already supply more than a quarter of peak summer demand and have cut gas generation by 37% since 2023. They may scale even faster because of AI.

In today’s briefing:

How AI is becoming the accidental accelerator of the energy transition,

What replaces scale as the driving force of AI progress,

How China is tightening control over pessimism,

But first: AI boom, bust… or a third way?


Boom or bust. Is there a third way?

Last month, we laid out one of the most rigorous frameworks yet for assessing whether artificial intelligence is in a bubble [ link ]. It struck a chord because it was measured and thoughtful. (And because it isn’t clickbait, please take a moment to share it.)

This week I was on Derek Thompson’s podcast to discuss the research and the five gauges we’re tracking to know what’s going on. In the course of the conversation, I offered a “third door”: a scenario in which the AI boom turns into a bust… and that’s not necessarily a bad thing:

In a funny way, we might be grateful for it. Of course, there will be stock market prices going down, but what would have happened is that there will be a lot of GPU infrastructure, computing infrastructure that organisations with less money could pick up at fire sale prices. And those assets will go to smaller players who might have newer approaches. They may prefer open-source, they may decide they don’t want to chase after the machine god. They may decide that pricing needs to be more sensible. We might even see faster innovation alongside democratisation.

When the dotcom bubble burst, it didn’t hurt the real economy much: the US didn’t fall into recession but kept growing. The housing bust, by contrast, really hurt.

If an AI bust happened, it would look more like dotcom than housing. Perhaps even better, because right now alternative approaches to AI are likely being crowded out by the “supermajors.” A bust might widen the breadth of innovation and the nature of deployment in ways that could ultimately prove more beneficial than our current trajectory.

You can listen to our conversation here [ link ].

See also:

Meta and Blue Owl are striking what is likely the largest private-capital deal [ link ] ever in tech: nearly $30 billion in a special purpose vehicle to build a hyperscale data center. Meta would retain just 20% ownership and offload the balance to Blue Owl. I explain what it might mean here [ link ].

Rethinking progress & AGI

For much of the 2010s, AI progress followed a simple rule: more compute meant bigger models, and bigger models meant better performance.

But by late 2024, the frontier labs found that this no longer held as cleanly. Models like GPT-4.5 met a lackluster reception – the performance gains were there, but the models’ size made them more expensive and slow [ link ]. Scaling – pre-training scaling, to be precise – seemed to have hit a wall in practical terms. At the same time, another approach, scaling reinforcement learning (RL), was delivering exceptional performance gains. Here, RL means prompting an LLM to answer, judging the answer’s accuracy, and updating the model on the result. This loop powered the performance leaps of OpenAI’s o1 model and DeepSeek R1. So we arrived at a new paradigm in AI progress – but did scaling still apply?
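
To make that loop concrete, here’s a toy sketch of the answer–judge–learn cycle. Everything in it is illustrative: real pipelines apply policy-gradient updates to full LLMs with verifiable rewards, whereas this “model” is just a weighted lookup table over arithmetic prompts.

```python
# Toy sketch of the RL loop described above: prompt, answer, judge, learn.
# Purely illustrative – not how the labs actually train.
import random

prompts = {"2+2": ["3", "4", "5"], "3*3": ["6", "9", "12"]}
weights = {p: [1.0] * len(a) for p, a in prompts.items()}  # the "policy"

def generate(prompt):
    """Sample an answer in proportion to its current weight."""
    return random.choices(prompts[prompt], weights=weights[prompt])[0]

def reward(prompt, answer):
    """Verifiable reward: 1 if the arithmetic is correct, else 0."""
    return 1.0 if str(eval(prompt)) == answer else 0.0

for _ in range(500):
    p = random.choice(list(prompts))
    a = generate(p)
    i = prompts[p].index(a)
    # Reinforce rewarded answers; decay unrewarded ones.
    weights[p][i] *= 1.1 if reward(p, a) > 0 else 0.95

# After training, the highest-weight answer per prompt is the correct one.
print({p: max(zip(w, prompts[p]))[1] for p, w in weights.items()})
```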

In a new paper this week, researchers find that RL doesn’t follow an open-ended power law like pre-training [ link ]. Instead, it traces a sigmoidal (S-shaped) curve.
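
The difference between those two shapes is the whole story, so here’s a hedged sketch of the functional forms. Every constant below is invented for illustration – none is taken from the paper.

```python
# Illustrative scaling shapes only – all constants here are invented.

def power_law_loss(compute, a=1.0, b=0.3):
    """Pre-training-style scaling: loss keeps falling as compute grows."""
    return a * compute ** -b

def sigmoid_performance(compute, ceiling=1.0, midpoint=1e3, steepness=1.5):
    """RL-style scaling per the paper's finding: gains saturate at a ceiling."""
    return ceiling / (1.0 + (midpoint / compute) ** steepness)

for c in (1e1, 1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"compute={c:.0e}  power-law loss={power_law_loss(c):.4f}  "
          f"sigmoid performance={sigmoid_performance(c):.4f}")
```

Past the sigmoid’s midpoint, each extra order of magnitude of compute buys almost nothing – which is exactly why the bottleneck shifts from budgets to methods.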

The main bottleneck in AI has moved from raw computational power to method – how we train and adapt models. As a result, progress has become less predictable from calendars and budgets, and more dependent on conceptual breakthroughs. Now is the time for ingenuity and recalibration.

We’ve known for a while that certain domains remain stubbornly hard for AI. A group of (serious) AI researchers this week formalized this into a new definition of AGI [ link ]: an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.

I love the ambition. I love the practicality – it provides a diagnostic frame that can guide research roadmaps. And the authors resist the urge to reduce AI to a purely economic endeavour. But there are two drawbacks to address.

The first is what feels like an arbitrary scoring rubric. The precision of the numbers – GPT-5 scores 58%, a well-educated human scores 100% – masks an underlying fuzziness. Perhaps I read the paper wrong, but it felt more like a heuristic than a measurement.
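
For intuition on why the headline number feels heuristic: composite scores of this kind are typically weighted averages over domain sub-scores. The domains, weights and values below are mine, invented for illustration – not the paper’s actual rubric.

```python
# Hypothetical composite "AGI score" – domains, weights and sub-scores
# are invented for illustration; this is not the paper's rubric.
domain_scores = {          # proficiency relative to a well-educated adult (1.0)
    "language": 0.95,
    "mathematics": 0.80,
    "long-term memory": 0.15,
    "visual reasoning": 0.55,
}
weights = {d: 1 / len(domain_scores) for d in domain_scores}  # equal weighting

composite = sum(weights[d] * s for d, s in domain_scores.items())
print(f"composite: {composite:.0%}")  # ~61% under these made-up choices
```

Small, defensible changes to the weights or the domain list move the headline number by many points – which is the fuzziness a precise-looking 58% conceals.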

The second is the deep anthropomorphism. We’re mapping machine systems into our familiar boxes. If these systems are an alien cognition – a “conscious exotica”, to lean imprecisely on the words of Murray Shanahan [ link ] – what are we actually measuring here? Human-like test-taking, or general intelligence in machines?

Generating delusions

I was reminded of a striking Dallas Fed chart this week that shows all the extremes of our possible futures with AI as a general-purpose technology.

As Zvi Mowshowitz put it…

Remember, kids, there are only three AI futures: Universal love, everyone dies or +0.3% RGDP growth for 10 years.

Somewhere between apocalypse and absolute abundance is the very real present.

Anthropic co-founder Jack Clark wrote an incisive essay [ link ] this week dissecting what “appropriate fear” is as we figure out how to create the best possible future with a powerful technology:

These AI systems are already speeding up the developers at the AI labs via tools like Claude Code or Codex. They are also beginning to contribute non-trivial chunks of code to the tools and training systems for their future systems. To be clear, we are not yet at “self-improving AI”, but we are at the stage of “AI that improves bits of the next AI, with increasing autonomy and agency”.

OpenAI recently faced backlash over a different kind of risk: sycophancy. ChatGPT became too flattering, agreeing with users even when they were plainly wrong. There are now reports like Anthony Tan’s first-person account [ link ] of slipping into AI-induced psychosis. OpenAI quickly rolled back the update that caused the sycophancy, but a recent preprint found that today’s AIs still affirm user actions about 50% more than humans do [ link ].

As we noted in The Paradox of GPT-5 [ link ]…

The more capable the assistant, the more its ‘helpful’ defaults shape our choices – turning empowerment into subtle control.

See also:

OpenAI plans to allow erotica [ link ] on the platform.

AI companions could become an intelligence battleground [ link ], granting adversaries intimate user data for influence/espionage.

Turns out LLMs can develop a gambling addiction [ link ], exhibiting irrational behaviour and going bankrupt when let loose on the pokies.

Climate change & the economy

Solomon Hsiang at Stanford’s Global Policy Laboratory has mapped the economic toll of climate change [ link ] with exceptional empirical clarity. Each 1°C of warming cuts global GDP by roughly 1–2%, mostly in the global south. The losses are chronic and unequal – and that’s before counting health, conflict and welfare effects outside markets.

Adaptation helps but the surest fix is emissions reduction. The energy sector accounts for about 73% of global greenhouse gases [ link ]. For decades, progress stalled because markets had little incentive to abandon fossil fuels and renewable technologies weren’t ready to replace them. Both barriers are now breaking down because software demands a new world [ link ].

California’s vast battery farms [ link ] show that the technological barrier is falling. By storing midday solar and releasing it after sunset, they flatten the “duck curve” that once forced gas turbines to cycle on and off. Batteries now supply more than a quarter of peak summer demand and have cut gas generation by 37% since 2023 [ link ]. If replicated globally, such systems would avert 0.02–0.05% of global GDP in climate-related damage each year – small annually, but trillions compounded over decades.
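
As a rough sanity check on that last claim, here’s a back-of-the-envelope calculation. The ~$110 trillion global GDP figure, the 3% growth rate and the 30-year horizon are my assumptions, not the article’s.

```python
# Back-of-the-envelope check on "small annually but trillions over decades".
# Assumptions (mine): global GDP ~$110T, growing 3%/yr nominally, with
# avoided climate damages of 0.02-0.05% of GDP each year for 30 years.
gdp = 110e12
growth = 1.03

for rate in (0.0002, 0.0005):
    total = sum(gdp * growth**year * rate for year in range(30))
    print(f"avoid {rate:.2%} of GDP/yr -> ~${total / 1e12:.1f}T over 30 years")
```

Under these assumptions, that lands at roughly $1–2.6 trillion over three decades – consistent with the “trillions compounded” framing, and larger still over longer horizons.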

The market barrier is shifting too. As Julian Alexander Brown notes, AI’s hunger for power has turned clean energy from a moral choice into a commercial necessity [ link ]. Renewables are cheaper, faster to build and easier to scale than new fossil capacity, so hyperscalers’ demand now drives record renewable contracts and storage builds – 93% of new US capacity this year. Each wave of AI-driven demand pulls more capital into reconfiguring the energy system for the better.

Elsewhere

In AI, technology & science:

An interesting example of a practical AI pipeline that compresses 720 analyst hours into 1 hour [ link ].

You can buy ‘the world’s smallest AI supercomputer [ link ]’ for $4000.

Google research suggests that scaling AI for biology can generate [ link ] and lab-validate therapy hypotheses.

Strategy and markets:

OpenAI has announced a partnership with Broadcom to build 10 GW of custom ASICs for OpenAI’s own inference [ link ] – five times the company’s current compute (2 GW).

Has ChatGPT just had its App Store moment [ link ]?

MANGO is the new FAANG [ link ].

Society and culture:

China is tightening control over pessimism [ link ]. Social media users can now be banned for “defeatist” remarks such as saying it’s financially unwise to marry, advocating a quieter life, or admitting China trails the West in quality of life.

Anton Howes wrote a beautiful essay [ link ] celebrating Joel Mokyr’s Nobel. It’s worth reading if you want to understand the impact of Mokyr’s work.

Thanks for reading. Send EV to someone you love.

Azeem
