In this newsletter:
Vibe engineering
OpenAI DevDay 2025 live blog
GPT-5 Pro and gpt-image-1-mini
Python 3.14
Plus 4 links and 2 quotations and 2 notes
Vibe engineering [ link ] - 2025-10-07
I feel like vibe coding is pretty well established now [ link ] as covering the fast, loose and irresponsible way of building software with AI - entirely prompt-driven, and with no attention paid to how the code actually works. This leaves us with a terminology gap: what should we call the other end of the spectrum, where seasoned professionals accelerate their work with LLMs while staying proudly and confidently accountable for the software they produce?
I propose we call this vibe engineering, with my tongue only partially in my cheek.
One of the lesser spoken truths of working productively with LLMs as a software engineer on non-toy projects is that it's difficult. There's a lot of depth to understanding how to use the tools, there are plenty of traps to avoid, and the pace at which they can churn out working code raises the bar for what the human participant can and should be contributing.
The rise of coding agents - tools like Claude Code [ link ] (released February 2025), OpenAI's Codex CLI [ link ] (April) and Gemini CLI [ link ] (June) that can iterate on code, actively testing and modifying it until it achieves a specified goal - has dramatically increased the usefulness of LLMs for real-world coding problems.
I'm increasingly hearing from experienced, credible software engineers who are running multiple copies of agents at once, tackling several problems in parallel and expanding the scope of what they can take on. I was skeptical of this at first but I've started running multiple agents myself now [ link ] and it's surprisingly effective, if mentally exhausting!
This feels very different from classic vibe coding, where I outsource a simple, low-stakes task to an LLM and accept the result if it appears to work. Most of my tools.simonwillison.net [ link ] collection (previously [ link ]) was built like that. Iterating with coding agents to produce production-quality code that I'm confident I can maintain in the future feels like a different process entirely.
It's also become clear to me that LLMs actively reward existing top tier software engineering practices:
Automated testing. If your project has a robust, comprehensive and stable test suite, agentic coding tools can fly with it. Without tests? Your agent might claim something works without having actually tested it at all, plus any new change could break an unrelated feature without you realizing it. Test-first development is particularly effective with agents that can iterate in a loop - see the sketch after this list.
Planning in advance. Sitting down to hack something together goes much better if you start with a high level plan. Working with an agent makes this even more important - you can iterate on the plan first, then hand it off to the agent to write the code.
Comprehensive documentation. Just like human programmers, an LLM can only keep a subset of the codebase in its context at once. Being able to feed in relevant documentation lets it use APIs from other areas without reading the code first. Write good documentation first and the model may be able to build the matching implementation from that input alone.
Good version control habits. Being able to undo mistakes and understand when and how something was changed is even more important when a coding agent might have made the changes. LLMs are also fiercely competent at Git - they can navigate the history themselves to track down the origin of bugs, and they're better than most developers at using git bisect [ link ]. Use that to your advantage.
Having effective automation in place. Continuous integration, automated formatting and linting, continuous deployment to a preview environment - all things that agentic coding tools can benefit from too. LLMs make writing quick automation scripts easier as well, which can help them then repeat tasks accurately and consistently next time.
A culture of code review. This one explains itself. If you're fast and productive at code review you're going to have a much better time working with LLMs than if you'd rather write code yourself than review the same thing written by someone (or something) else.
A very weird form of management. Getting good results out of a coding agent feels uncomfortably close to getting good results out of a human collaborator. You need to provide clear instructions, ensure they have the necessary context and provide actionable feedback on what they produce. It's a lot easier than working with actual people because you don't have to worry about offending or discouraging them - but any existing management experience you have will prove surprisingly useful.
Really good manual QA (quality assurance). Beyond automated tests, you need to be really good at manually testing software, including predicting and digging into edge-cases.
Strong research skills. There are dozens of ways to solve any given coding problem. Figuring out the best options and proving an approach has always been important, and remains a blocker on unleashing an agent to write the actual code.
The ability to ship to a preview environment. If an agent builds a feature, having a way to safely preview that feature (without deploying it straight to production) makes reviews much more productive and greatly reduces the risk of shipping something broken.
An instinct for what can be outsourced to AI and what you need to manually handle yourself. This is constantly evolving as the models and tools become more effective. A big part of working effectively with LLMs is maintaining a strong intuition for when they can best be applied.
An updated sense of estimation. Estimating how long a project will take has always been one of the hardest but most important parts of being a senior engineer, especially in organizations where budget and strategy decisions are made based on those estimates. AI-assisted coding makes this even harder - things that used to take a long time are much faster, but estimations now depend on new factors which we're all still trying to figure out.
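To make the test-first point concrete, here's a minimal sketch: the test file is written before the implementation exists, and the agent iterates until pytest passes. The slugify function and module here are entirely hypothetical.
# test_slugify.py - written before the implementation exists.
# A coding agent can run pytest in a loop and keep editing until this passes.
import pytest

from myproject.text import slugify  # hypothetical module the agent will create


def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  many   spaces  ") == "many-spaces"


def test_slugify_rejects_empty():
    with pytest.raises(ValueError):
        slugify("   ")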
If you're going to really exploit the capabilities of these new tools, you need to be operating at the top of your game. You're not just responsible for writing the code - you're researching approaches, deciding on high-level architecture, writing specifications, defining success criteria, designing agentic loops [ link ], planning QA, managing a growing army of weird digital interns who will absolutely cheat if you give them a chance, and spending so much time on code review.
Almost all of these are characteristics of senior software engineers already!
AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.
"Vibe engineering", really?
Is this a stupid name? Yeah, probably. "Vibes" as a concept in AI feels a little tired at this point. "Vibe coding" itself is used by a lot of developers in a dismissive way. I'm ready to reclaim vibes for something more constructive.
I've never really liked the artificial distinction between "coders" and "engineers" - that's always smelled to me a bit like gatekeeping. But in this case a bit of gatekeeping is exactly what we need!
Vibe engineering establishes a clear distinction from vibe coding. It signals that this is a different, harder and more sophisticated way of working with AI tools to build production software.
I like that this is cheeky and likely to be controversial. This whole space is still absurd in all sorts of different ways. We shouldn't take ourselves too seriously while we figure out the most productive ways to apply these new tools.
I've tried in the past to get terms like AI-assisted programming [ link ] to stick, with approximately zero success. May as well try rubbing some vibes on it and see what happens.
I also really like the clear mismatch between "vibes" and "engineering". It makes the combined term self-contradictory in a way that I find mischievous and (hopefully) sticky.
This post was discussed on Hacker News [ link ] and on lobste.rs [ link ].
OpenAI DevDay 2025 live blog [ link ] - 2025-10-06
I spent Monday at OpenAI DevDay [ link ] in Fort Mason, San Francisco. As I did last year [ link ], I live blogged the announcements from the keynote. Unlike last year, this year there was a livestream [ link ].
Disclosure: OpenAI provides me with a free ticket and reserved me a seat in the press/influencer section for the keynote.
You can read the liveblog on my site [ link ]. I joined Alex Volkov for a ten-minute debrief directly after the keynote to discuss highlights; that segment is available on the ThursdAI YouTube channel [ link ].
Note 2025-10-06 [ link ]
Two of my public Datasette instances - for my TILs [ link ] and my blog's backup mirror [ link ] - were getting hammered with misbehaving bot traffic today. Scaling them up to more Fly instances got them running again but I'd rather not pay extra just so bots can crawl me harder.
The log files showed the main problem was facets [ link ]: Datasette provides these by default on the table page, but they can be combined in ways that keep poorly written crawlers busy visiting different variants of the same page over and over again.
So I turned those off. I'm now running those instances with --setting allow_facet off (described here [ link ]), and my logs are full of lines that look like this. The "400 Bad Request" means a bot was blocked from loading the page:
"GET /simonwillisonblog/blog_entry?_facet_date=created&_facet=series_id&_facet_size=max&_facet=extra_head_html&_sort=is_draft&created__date=2012-01-30 HTTP/1.1" 400 Bad Request
quote 2025-10-06
I believed that giving users such a simple way to navigate the internet would unlock creativity and collaboration on a global scale. If you could put anything on it, then after a while, it would have everything on it.
But for the web to have everything on it, everyone had to be able to use it, and want to do so. This was already asking a lot. I couldn't also ask that they pay for each search or upload they made. In order to succeed, therefore, it would have to be free. That's why, in 1993, I convinced my Cern managers to donate the intellectual property of the world wide web, putting it into the public domain. We gave the web away to everyone.
Tim Berners-Lee [ link ], Why I gave the world wide web away for free
Link 2025-10-06 GPT-5 pro [ link ]:
Here's OpenAI's model documentation for their GPT-5 pro model, released to their API today at their DevDay event.
It has similar base characteristics to GPT-5 [ link ]: both share a September 30, 2024 knowledge cutoff and a 400,000 token context limit.
GPT-5 pro has a maximum of 272,000 output tokens, an increase from 128,000 for GPT-5.
As our most advanced reasoning model, GPT-5 pro defaults to (and only supports) reasoning.effort: high
It's only available via OpenAI's Responses API. My LLM [ link ] tool doesn't support that in core yet, but the llm-openai-plugin [ link ] plugin does. I released llm-openai-plugin 0.7 [ link ] adding support for the new model, then ran this:
llm install -U llm-openai-plugin
llm -m openai/gpt-5-pro "Generate an SVG of a pelican riding a bicycle"
It's very, very slow. The model took 6 minutes 8 seconds to respond and charged me for 16 input and 9,205 output tokens. At $15/million input and $120/million output this pelican cost me $1.10 [ link ]!
Here's the full transcript [ link ]. It looks visually pretty similar to the much, much cheaper result I got from GPT-5 [ link ].
Link 2025-10-06 gpt-image-1-mini [ link ]:
OpenAI released a new image model today: gpt-image-1-mini, which they describe as "A smaller image generation model that's 80% less expensive than the large model."
They released it very quietly - I didn't hear about this in the DevDay keynote but I later spotted it on the DevDay 2025 announcements page [ link ].
It wasn't instantly obvious to me how to use this via their API. I ended up vibe coding a Python CLI tool for it so I could try it out.
I dumped the plain text diff version [ link ] of the commit to the OpenAI Python library titled feat(api): dev day 2025 launches [ link ] into ChatGPT GPT-5 Thinking and worked with it to figure out how to use the new image model and build a script for it. Here's the transcript [ link ] and the openai_image.py script [ link ] it wrote.
I had it add inline script dependencies, so you can run it with uv like this:
export OPENAI_API_KEY="$(llm keys get openai)"
uv run link "A pelican riding a bicycle"
It picked this illustration style without me specifying it:
(This is a very different test from my normal "Generate an SVG of a pelican riding a bicycle" since it's using a dedicated image generator, not having a text-based model try to generate SVG code.)
My tool accepts a prompt, and optionally a filename (if you don't provide one it saves to a filename like /tmp/image-621b29.png).
It also accepts options for model and dimensions and output quality - the --help output lists those, you can see that here [ link ].
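For reference, the core of a script like this boils down to a single Images API call. Here's a minimal sketch using the official openai Python library - I'm assuming gpt-image-1-mini accepts the same parameters as gpt-image-1, and this is not the actual script linked above:
# Minimal sketch: generate one image with gpt-image-1-mini and save it.
# Assumes gpt-image-1-mini takes the same parameters as gpt-image-1.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="gpt-image-1-mini",
    prompt="A pelican riding a bicycle",
    size="1024x1024",
    quality="low",
    output_format="jpeg",
)

# These models return base64-encoded image data rather than a URL
with open("/tmp/pelican.jpg", "wb") as fp:
    fp.write(base64.b64decode(response.data[0].b64_json))

print(response.usage)  # token counts, which is what you get billed for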
OpenAI's pricing is a little confusing. The model page [ link ] claims low quality images should cost around half a cent and medium quality around a cent and a half. It also lists an image token price of $8/million tokens. It turns out there's a default "high" quality setting - most of the images I've generated have reported between 4,000 and 6,000 output tokens, which costs between 3.2 [ link ] and 4.8 cents [ link ].
One last demo, this time using --quality low:
uv run link \
"racoon eating cheese wearing a top hat, realistic photo" \
/tmp/racoon-hat-photo.jpg \
--size 1024x1024 \
--output-format jpeg \
--quality low
This saved the following:
And reported this to standard error:
{
  "background": "opaque",
  "created": 1759790912,
  "generation_time_in_s": 20.87331541599997,
  "output_format": "jpeg",
  "quality": "low",
  "size": "1024x1024",
  "usage": {
    "input_tokens": 17,
    "input_tokens_details": {
      "image_tokens": 0,
      "text_tokens": 17
    },
    "output_tokens": 272,
    "total_tokens": 289
  }
}
This took 21s, but I'm on an unreliable conference WiFi connection so I don't trust that measurement very much.
272 output tokens = 0.2 cents [ link ] so this is much closer to the expected pricing from the model page.
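Here's that arithmetic spelled out - 272 tokens is the low quality image above, 4,000 to 6,000 is the range I've been seeing at the default quality:
# Image output tokens are billed at $8 per million
PRICE_PER_TOKEN = 8 / 1_000_000

for tokens in (272, 4_000, 6_000):
    print(f"{tokens:>5} tokens = ${tokens * PRICE_PER_TOKEN:.4f}")

# Prints:
#   272 tokens = $0.0022
#  4000 tokens = $0.0320
#  6000 tokens = $0.0480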
Note 2025-10-06 [ link ]
I've settled on agents as meaning "LLMs calling tools in a loop to achieve a goal" [ link ] but OpenAI continue to muddy the waters with much more vague definitions. Swyx spotted this one [ link ] in the press pack OpenAI sent out for their DevDay announcements today:
How does OpenAI define an "agent"? An AI agent is a system that can do work independently on behalf of the user.
Adding this one to my collection [ link ].
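For the avoidance of doubt, here's a deliberately tiny sketch of the "tools in a loop" pattern I mean - the model call is stubbed out and the tool is made up, so treat it as an illustration rather than working agent code:
def run_shell(command: str) -> str:
    # Stand-in tool: a real agent would actually execute the command
    return f"(pretend output of: {command})"

TOOLS = {"run_shell": run_shell}

def call_model(goal: str, history: list) -> dict:
    # Stub standing in for an LLM API call that can request tool use
    if not history:
        return {"tool": "run_shell", "args": {"command": "pytest"}}
    return {"done": True, "answer": f"Goal achieved: {goal}"}

def run_agent(goal: str) -> str:
    history = []
    while True:  # the "loop" part
        step = call_model(goal, history)
        if step.get("done"):
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # the "calling tools" part
        history.append((step, result))

print(run_agent("make the tests pass"))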
Link 2025-10-06 Deloitte to pay money back to Albanese government after using AI in $440,000 report [ link ]:
Ouch:
Deloitte will provide a partial refund to the federal government over a $440,000 report that contained several errors, after admitting it used generative artificial intelligence to help produce it.
(I was initially confused by the "Albanese government" reference in the headline since this is a story about the Australian federal government. That's because the current Australian Prime Minister is Anthony Albanese.)
Hereâs the page for the report [ link ]. The PDF now includes this note:
This Report was updated on 26 September 2025 and replaces the Report dated 4 July 2025. The Report has been updated to correct those citations and reference list entries which contained errors in the previously issued version, to amend the summary of the Amato proceeding which contained errors, and to make revisions to improve clarity and readability. The updates made in no way impact or affect the substantive content, findings and recommendations in the Report.
quote 2025-10-07
For quite some time I wanted to write a small static image gallery so I can share my pictures with friends and family. Of course there are a gazillion tools like this, but, well, sometimes I just want to roll my own. [...]
I used the old, well tested technique I call brain coding, where you start with an empty vim buffer and type some code (Perl, HTML, CSS) until you're happy with the result. It helps to think a bit (aka use your brain) during this process.
Thomas Klausner [ link ], coining "brain coding"
Link 2025-10-08 Python 3.14 [ link ]:
This year's major Python version, Python 3.14, just made its first stable release!
As usual the what's new in Python 3.14 [ link ] document is the best place to get familiar with the new release:
The biggest changes include template string literals [ link ], deferred evaluation of annotations [ link ], and support for subinterpreters [ link ] in the standard library.
The library changes include significantly improved capabilities for introspection in asyncio [ link ], support for Zstandard [ link ] via a new compression.zstd [ link ] module, syntax highlighting in the REPL, as well as the usual deprecations and removals, and improvements in user-friendliness and correctness.
Subinterpreters look particularly interesting as a way to use multiple CPU cores to run Python code despite the continued existence of the GIL. If you're feeling brave and your dependencies cooperate [ link ] you can also use the free-threaded build of Python 3.14 - now officially supported [ link ] - to skip the GIL entirely.
A new major Python release means an older release hits the end of its support lifecycle [ link ] - in this case that's Python 3.9. If you maintain open source libraries that target every supported Python version (as I do) this means features introduced in Python 3.10 can now be depended on! What's new in Python 3.10 [ link ] lists those - I'm most excited by structured pattern matching [ link ] (the match/case statement) and the union type operator [ link ], allowing int | float | None as a type annotation in place of Optional[Union[int, float]].
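Here's the kind of code that dropping 3.9 makes fair game - a quick sketch combining match/case with the | union syntax in annotations:
def describe(value: int | float | None) -> str:
    # match/case and X | Y annotations both require Python 3.10+
    match value:
        case None:
            return "missing"
        case int() | float() if value < 0:
            return "negative number"
        case int():
            return "integer"
        case float():
            return "float"

print(describe(None))  # missing
print(describe(-3.5))  # negative number
print(describe(42))    # integer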
If you use uv you can grab a copy of 3.14 using:
uv self update
uv python upgrade 3.14
uvx python@3.14
Or for free-threaded Python 3.14:
uvx python@3.14t
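If you want to confirm whether the GIL is actually disabled in whichever build you end up running, this quick check works on 3.13 and later (note that sys._is_gil_enabled() is technically a private API, so treat this as a rough sketch):
import sys
import sysconfig

# Was this interpreter compiled with free-threading support?
print("free-threaded build:", bool(sysconfig.get_config_var("Py_GIL_DISABLED")))

# Is the GIL actually disabled right now? (_is_gil_enabled is private, added in 3.13)
if hasattr(sys, "_is_gil_enabled"):
    print("GIL currently enabled:", sys._is_gil_enabled())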
The uv team wrote about their Python 3.14 highlights [ link ] in their announcement of Python 3.14's availability via uv.