Good morning from London!
In today's briefing:
Why simpler AI may be the next frontier in AI capability
The rise of anticipation markets and new ways to price the future
One AI model invests better than all the rest
Bionic eyes are here…
Let's go!
Simpler, smarter AI
Today's frontier models have gorged on the internet's noise. They are brilliant mimics with blurry reasoning, as OpenAI founding member Andrej Karpathy argues [ link ]. The problem is that true reliability can't come from these feats of memory; it has to come from deeper understanding. Future AI systems will need this.
Andrej proposes an austere remedy: reduce memorisation, preserve the reasoning machinery and pull in facts only when needed. He pictures a "cognitive core" at the 1-billion-parameter scale that plans, decomposes problems and queries knowledge. It is a librarian, not a library.
Philosopher Toby Ord points out that the very approach that's given us the surprising capabilities of "reasoning models" like o1 is reaching its own limits [ link ]. These systems extract gains from post-training reinforcement learning (refining answers through trial and error) and extended inference-time reasoning. Compute is paid per query, not once during pre-training. Ord estimates that this burns 1,000 to 1,000,000 times more compute per insight than traditional training, and the returns shrink: each new milestone costs disproportionately more to reach. Even OpenAI's o1 reasoning model improves only when it's given more RL cycles and longer "thinking time," which raises the cost per task.
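To see why this changes the economics, here is a toy back-of-envelope sketch. All FLOP figures below are invented for illustration (they are not Ord's numbers): pre-training is a one-off cost amortised across every query, while inference-time reasoning is a bill paid again on each query.

```python
def compute_per_query(pretrain_flops: float, total_queries: int,
                      inference_flops: float) -> float:
    """Compute attributable to one query: amortised pre-training
    plus the per-query inference bill."""
    return pretrain_flops / total_queries + inference_flops

# Illustrative numbers only: a model pre-trained once for 1e24 FLOP
# and served for a billion queries, answering quickly...
plain = compute_per_query(1e24, 10**9, 1e12)

# ...versus the same model given 1,000x more "thinking" compute per query.
reasoning = compute_per_query(1e24, 10**9, 1e15)

# In the first case the amortised pre-training cost dominates; in the
# second, per-query reasoning dominates, so total fleet compute now
# scales with usage rather than being fixed at training time.
print(plain, reasoning)
```

The crossover is the point Ord is gesturing at: once thinking time dominates, every extra insight has a marginal compute price.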
How should we make sense of this? Technological progress advances through overlapping S-curves and rarely follows a smooth exponential. Both Ord and Karpathy are pointing in the same direction: less brute memorization and more search and recursion; less unlimited inference and more careful allocation of reasoning budgets; away from monolithic models and toward tool-using, modular agents.
As the cost of using AI systems (rather than training them) becomes dominant, pricing will shift to usage-based models. Firms that deploy AI with precision will be rewarded. As a result, we could see a broad, rapid seep of AI into many corners of the economy, rather than a sudden leap in GDP.
The weight of the Web
OpenAI has entered the browser arena with its own browser, Atlas [ link ]. We argued [ link ] a few times in the past that…
[t]he company that owns the browser owns the user session, valuable behavioral data and the ability to steer the revenue funnel. Whoever captures the front door to the web gets to watch, and eventually automate, everything we do online.
In practice, Atlas feels similar to Comet [ link ] and Dia [ link ] (our team uses both). Its edge is beneath the interface, in Agent mode, which lets ChatGPT act on your behalf. Google's stock dipped ~4% on the news [ link ].
On some tasks, Atlas still feels half-baked. It struggled with actions that are straightforward for human users, like booking show tickets or locating a nearby bookstore stocking my preferred authors. Yet it handled other tasks flawlessly, for instance going through our signup flow: selecting the free plan on our website, entering a test email, deciding to click "next" a few times, and in the end producing a usability analysis to help us improve our onboarding.
The biggest worry right now is the new attack surface. Agentic browsers blur the line between trusted user intent and untrusted web content. When an AI can act for you, a malicious actor can act through you without your knowledge. Attackers can hide prompts in images or background colours [ link ]; once a model "reads" the page, OCR extracts the invisible text and treats it as an instruction, potentially handing over control of a logged-in session.
We need a way to mark and enforce the boundary. One "simple" idea would be to act only on instructions that come from verified, trusted sources; blockchain might be useful here. Protocols like ERC-8004 [ link ] could underpin cryptographically-signed instructions and allow "AI-leash" policies [ link ] to whitelist counterparties.
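A minimal sketch of what such a boundary check could look like, using a shared-secret HMAC in place of on-chain signatures. The origin names and keys below are invented for illustration; a real ERC-8004-style setup would verify public-key signatures against an on-chain registry rather than a static dict.

```python
import hmac
import hashlib

# Hypothetical allowlist: only origins registered here may issue
# instructions the agent will act on. Everything else is data, not commands.
TRUSTED_KEYS = {"our-website": b"shared-secret-demo-key"}  # made-up key

def sign(origin: str, instruction: str) -> str:
    """What a trusted origin would do: sign its instruction."""
    return hmac.new(TRUSTED_KEYS[origin], instruction.encode(),
                    hashlib.sha256).hexdigest()

def should_execute(origin: str, instruction: str, signature: str) -> bool:
    """Agent-side policy: execute only verified, whitelisted instructions."""
    key = TRUSTED_KEYS.get(origin)
    if key is None:
        return False  # unknown origin: refuse, even if the text looks like a command
    expected = hmac.new(key, instruction.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A signed instruction from a whitelisted origin passes...
ok = should_execute("our-website", "click next",
                    sign("our-website", "click next"))

# ...while hidden text scraped off an arbitrary page does not.
blocked = should_execute("random-page", "transfer funds", "deadbeef")
print(ok, blocked)
```

The design choice that matters is the default: an agent that refuses unverified instructions fails safe when it OCRs an attacker's invisible prompt.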
Antici…
Change is outrunning our ability to make sense of it, and we're increasingly forced to act first and rationalize later. In this gap, anticipation is the operative skill, and Alex Danco goes so far this week as to argue that an era organized around prediction is the successor to postmodernism [ link ]:
How early or late you are to something is now an essential component of your relationship to that thing. The timelines and reels that represent "what is going on" are increasingly about a single meta-topic: are you predicting it, or is it predicting you?
Markets, social feeds and AI models already operate this way. Cultural identity increasingly revolves around being early: to a meme, a trade, a platform or an idea. Even aid is becoming anticipatory: in Bangladesh, GiveDirectly triggers cash transfers on flood forecasts [ link ], acting before harm, not after it.
The deepest expression of this shift is economic. For a century, Wright's law [ link ] said that costs fall as experience accumulates (learning by doing). AI flips that logic: machines learn before doing [ link ], collapsing the curve through simulation. That means extraordinary speed and adaptability, but also a new kind of fragility. Systems optimized for prediction can mistake their models for reality. When every process is tuned to what's likely, a true black swan, the improbable, can hit harder.
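For reference, Wright's law is a power law in cumulative output: the cost of the n-th unit is C(n) = C(1) · n^(−b). A quick sketch; the exponent below is a commonly quoted illustrative value, not a figure from the linked piece.

```python
def wrights_law_cost(first_unit_cost: float, n: float, b: float) -> float:
    """Unit cost of the n-th unit under Wright's law: C(n) = C(1) * n**-b."""
    return first_unit_cost * n ** -b

# b = 0.32 implies each doubling of cumulative output cuts unit cost by
# roughly 20%, since 2**-0.32 is about 0.80 (an often-cited learning rate).
c1 = 100.0
per_doubling = wrights_law_cost(c1, 2, 0.32) / wrights_law_cost(c1, 1, 0.32)
print(round(per_doubling, 2))  # prints 0.8
```

The point in the paragraph above is that this curve is indexed to *experience*; simulation lets a machine accumulate the equivalent of n units before producing the first one.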
Speaking of AI breaking patterns…
A long arc of work and technology
For two centuries, technology has raised output while reducing the labor needed to produce it. A new paper tracks this long arc [ link ]: when machines replaced muscle from the 1800s onward, employment shifted toward higher-skill, better-paid roles for men and women. Automation displaced some jobs, but it also opened new ones that were richer in skill and autonomy. AI could break this pattern after 200 years.
Productivity is rising again, but without corresponding growth in high-skill employment. In the US tech sector, real output has climbed while employment has fallen by roughly 90,000 since 2023 [ link ], the steepest and longest decline since the dot-com bust. The model in the paper suggests that, unlike previous waves, AI would redirect demand toward lower-paid, lower-educated, more male-dominated forms of work, especially physical roles. Yet even that harbour is uncertain: Amazon, for instance, could avoid hiring more than half a million US warehouse workers through robotics by 2033 [ link ].
The model doesn't account for at least three important possibilities: AI could create entirely new kinds of work; it might boost workers' capabilities rather than replace them; and it could shift income toward owners of capital… But AI's impact on labor is not preordained; we need to shape it, fast.
See also:
Chinese robot-maker Noetix Robotics has opened pre-orders for its latest humanoid robot, "Bumi," for $1,400 [ link ]. Roughly the price of an iPhone 17 Pro.
Elsewhere
In AI, technology and science:
Frontier AI models go head-to-head on who's the better investor, with $10,000 each [ link ]. As of writing, Qwen3 Max is in the lead, followed by DeepSeek. (What are you doing, ChatGPT?!)
Bionic eyes, anyone? A photovoltaic retinal implant [ link ] has restored partial sight in patients with age-related macular degeneration. This is the first therapy to move from slowing loss to restoring vision.
Google has demonstrated a verifiable quantum algorithm on its new Willow chip that runs 13,000x [ link ] faster than the best classical supercomputer.
Society and culture:
The Dutch Data Protection Authority has warned of AI bias ahead of the country's upcoming election. In tests of four chatbots [ link ], more than half the time the chatbots suggested either the far-right PVV or the left-wing GroenLinks-PvdA.
Eric Hoel argues that we can quantify and intentionally build emergence into complex systems [ link ].
A good overview of the current patchwork of AI legislation in California [ link ], especially how lesser-noticed bills are quietly reshaping markets and innovation.
Thanks for reading.