Friday AI pick-and-mix
Friday > Frid-AI? Does that work?
As the old saying goes, “I didn’t have time to write a short letter, so here’s a long one” featuring a pick-and-mix of observations. Feel free to skim.
2026 as the year of putting AI to work
This observation rings true with my recent experience, which has caused this subject to rise above generic marketing activity into something resembling a signal.
OpenAI’s CEO of Applications, Fidji Simo (as distinct from their Research team, who design the models), posted a few weeks ago that a major focus this year is getting organisations to use the models, because they’re already more than capable enough:
“I am incredibly excited about our research roadmap this year, and even more driven to turn those breakthroughs into everyday impact for society. To that end, I wanted to share a version of what I shared with the team internally about our plan to address capability overhang and ensure that everyone can get the full benefit of our models through exceptional products.”
Closing the capability gap between frontier AI and everyday use in 2026
“Capability overhang” just went straight into my list of favourite jargon. What a phrase.
Now, the person at OpenAI responsible for applications… hyping up applications at OpenAI is not in itself news. But! I agree with this direction when I’m thinking about the year ahead.
Each of the major providers is going deeper into specific use cases (Anthropic and Google speaking to different audiences in different ways) and this is going to continue as they seek to recoup the massive investment needed for research by owning the stack.
I’m positive about this work because their products will remain fairly generalist for now. I believe there is a market for individuals and organisations (like ours, obvs) - those with great empathy, relationships and taste - to help make choices about the future. That role arguably becomes more important as the barriers to adoption collapse and you can do much more than before.
Claude Cowork arrives
This might be old news to some, but it’s notable in relation to the above. Claude Code has been a hot topic of conversation in development circles recently, particularly since the launch of Opus 4.5, their newest state-of-the-art reasoning model. It’s very clever and very pricey: you can set it running in the background via Claude Code and it comes back with some good results, but the per-call cost is pushing people onto the $200 Max tier. Some people (solo devs or entrepreneurs) feel that’s still great value.
Seeing this potential, Anthropic added Cowork, which is basically Claude Code wearing a suit and tie, or at least chinos and a polo shirt. They’re pitching the autonomous agent at broader professional use cases, and they tell us it was largely developed using Claude Code. A virtuous cycle indeed! It’s only available to subscribers right now.
Sidebar: Claude isn’t one of my core subscriptions right now, but I still use the Claude Code app alongside Cursor (a developer app) for experimentation via Pydantic AI’s Gateway service. I hit a little bug that the team at Pydantic were super helpful in fixing when I reported it on their Slack channel, so kudos to those guys! Love your products @pydantic
AI’s view of my week
I asked ChatGPT and Gemini to summarise my week based on the various chats we have during my dog walks or in the daytime. Here’s what they collectively came up with:
AI-generated weekly summary
This week I’ve been thinking a lot about what “AI adoption” actually looks like in the real world — not as a shiny capability, but as the messy, practical work of getting tools embedded into workflows in a way that people can trust. I keep coming back to the idea that AI should reduce friction and raise the floor… but it only does that when you’re clear about what humans still uniquely bring to the table: judgement, taste, accountability, and the ability to sense when something is “off” even before you can rationalise why.
A related thread I can’t shake is the difference between analytical direction and felt direction — that bodily, pre-rational sense of “this matters” or “this is wrong” that doesn’t arrive as a spreadsheet. As more analytical labour gets automated, that human layer feels less like a nice-to-have and more like the thing we should actively train and protect. I’ve been trying to translate that into practical principles: how you design systems, how you evaluate them, and how you help teams build confidence without pretending certainty exists.
On the more hands-on side, I’ve also been deep in the weeds on the unglamorous parts: tightening up toolchains, improving how things are measured and monitored, and making small UX tweaks that make daily work feel smoother. It’s not headline-grabbing, but it’s the kind of incremental progress that compounds — and it’s usually where the real “deployment” story lives.
What I’m taking forward: build for real use, keep humans meaningfully in the loop, and treat intuition/judgement as a skill to strengthen — not an inconvenience to eliminate.
Spread the link love: “The blood dimmed tide of agents”
I saw this post shared by James from Redmonk - came for the headline, stayed for the content. It gives you a window into the leading edge of AI agents running in swarms and working at a scale beyond the individual. Really worth a read: The Blood Dimmed Tide of Agents.
Related: an X thread by Boris Cherny, creator of Claude Code, went viral; in it he talks about how he uses the product himself. He runs multiple agents and sees himself as the director of operations. In the future, the skills of strategy and direction will remain important once AI is capable of doing the implementation. Also, it already is. I dislike X, so I’m linking elsewhere to avoid giving them traffic: How the creator of Claude Code uses Claude Code.
Feedback welcome
I’m currently meeting with all our clients about their plans for AI over the next 12-24 months, but I’d be keen to speak to more people beyond that group - so reply and let me know if you fancy a chat about that topic.
See you next time. If you liked this, tell a friend?
Attribution: Gemini / Nano Banana made the image. I originally prompted it ironically, but the glitchy version above appealed to me more visually, so the straight version didn’t make the cut.