AI isn't optional anymore
Introduction
I’m not the first, and I won’t be the last, to point out that AI isn’t optional anymore in the craft of product development.
Don’t get me wrong: it’s absolutely fine if you’re a solo developer, or an open source maintainer, who doesn’t use or want to use AI. That’s your choice and yours alone.
It’s an entirely different problem if you work in an organisation and you don’t use AI. If that’s because the organisation doesn’t allow it, you’re falling behind at this stage; you just don’t know it yet. My hot take is that you should find a better company to work at. If it’s because you’re against AI and you’re fighting your organisation, well, good luck.
Honestly, the debate about whether developers should use AI has become tedious. One camp insists LLMs are stochastic parrots destroying software engineering. Some go even further and shame developers using AI. Another camp claims AI will replace all developers.
All of them miss the point, and none are quite right. The ones doing the shaming are just wrong. A more useful question, in my opinion, is: how do you use AI well, and how should leaders enable their teams to do so?
And that’s what I’d like to spend the rest of the post talking about.
After months of watching my engineering team at incident.io evolve around Claude Code and Cursor, my conclusion is simple: if you haven’t developed the skill of working with these tools, you’re already late and need to start getting up to speed.
AI won’t replace humans, but humans who use AI will replace those who don’t.
How did I get here?
As a VP of Engineering, I’ve watched my team transform with Claude Code and Cursor. The tickets folks create now carry better context, so AI can do a better job of one-shotting the outcome. Folks now ask the AI to fix a bug a customer has just mentioned, straight from Slack. The friction reduction is real and measurable. Basically, in the same number of hours, more gets shipped.
As a software engineer, I use Claude Code for open source work. I’ve built acdc, an AsciiDoc parser in Rust, and lately, I’ve accelerated a bunch of features and bug fixes with AI assistance.
For me, the value add of AI isn’t a hypothetical anymore. The productivity difference is real!
Why am I talking about this?
Well, because thinking about how to use AI for real work can be quite counterintuitive.
For example, AI doesn’t make you a better engineer, at least not at this stage, and not without a lot of supervision if you want to build maintainable software. What a lot of folks fail to realise is that it makes you a better product engineer (in SaaS at least). I don’t think that’s in question anymore.
Let’s see why I think that. AI lets you spend more time on the parts of engineering that matter: design, architecture, understanding the problem, experimenting with which approach to take for a refactor. And it helps you spend less time on the mechanical work of implementing what you’ve already decided. LLMs automate typing, not thinking.
Now, that’s not nothing, and the compound effect is pretty significant from what I can tell. It also applies to open source: I have the same hours for open source, but more gets shipped. And only because I’m doing the thinking while the AI does the typing.
Therefore, engineers who use AI effectively are diverging from those who don’t. Not because AI is magic (although sometimes it definitely fools me), but because it removes friction from the mechanical parts of the work.
Ignoring AI at this stage is expensive. If you’re a leader in a product development organisation and you’re not sure yet, you’re behind.
What I’ve learned about using it well
Months of daily use (mine and my team’s) have given me a bunch of lessons that I believe are quite useful for product development teams to consider.
Only use AI for code you could write yourself
If you don’t understand what you’re building, AI will confidently produce something that looks plausible but isn’t right. I only ask Claude to help with Rust because I know Rust. When the output is wrong, I catch it. When it’s right, I understand why. The people who get burned are those using AI to work in unfamiliar territory.
When you skip the "could write yourself" part, you end up building on foundations you don’t understand. I’ve seen this play out: someone ships a feature using patterns they couldn’t explain if asked, and then when it breaks (or needs to change), they’re stuck. They can’t debug it because they never understood how it worked. They just prompt again and hope for the best.
On the flip side, and I think it’s important: AI can be a genuinely good learning tool if you approach it deliberately. When I’m exploring an unfamiliar corner of Rust’s type system, I’ll ask Claude to explain its reasoning, not just give me the answer. I’ll ask for multiple approaches and compare them, and I’ll intentionally break things to see what happens. My belief is that doing so helps build skill, or at the very least prevent its atrophy.
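To make the "intentionally break things" part concrete, here’s a minimal sketch of the kind of experiment I mean. It’s a deliberately hypothetical example (nothing from acdc): take the model’s explanation of Rust lifetimes, then shorten a borrow on purpose and read what the compiler tells you.

// A tiny borrow-checker experiment. It compiles as written; the commented-out
// block is the deliberate "break it and see what the compiler says" step.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    // Happy path: both inputs live long enough, so this compiles.
    let outer = String::from("a fairly long string");
    let result = longest(outer.as_str(), "short");
    println!("{result}");

    // Break it on purpose: uncomment this block and the borrow checker
    // rejects it, because `broken` would outlive `inner`. The error message
    // is what teaches you why the single lifetime 'a on `longest` matters.
    //
    // let broken;
    // {
    //     let inner = String::from("short-lived");
    //     broken = longest(outer.as_str(), inner.as_str());
    // }
    // println!("{broken}");
}

The snippet itself isn’t the point; the compiler error, not the model’s answer, is what builds the understanding.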
Note: there’s an entire blog post I’d like to write about how to onboard and grow junior engineers, who in my opinion are in a bit of a precarious position if they don’t know how to use these tools effectively. For now: if you’re early in your career, use AI to accelerate your learning, not to skip it.
Keep tasks small enough to review completely
Using Claude Code on acdc, I’m moving through implementation at a pace that would have seemed unreasonable two years ago. Not because AI writes perfect code, which it doesn’t. I do my best to keep each task focused: error handling for this function, test scaffolding for that module, refactoring across a few specific files. Small enough that I can verify every line. The only exception to this was vibe coding an LSP [1] for AsciiDoc, which I did as an experiment. A successful one, mind you, and I use this LSP every day now (even with its many shortcomings).
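To give a hedged sense of the scale I’m talking about, a task like "add error handling to this function" tends to be a change on the order of the sketch below. The function and the error type are invented for illustration; they’re not from acdc.

use std::fs;
use std::num::ParseIntError;

// A hypothetical "small enough to review completely" task: replace panics
// in one function with a proper error type that callers can handle.
#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

impl From<std::io::Error> for ConfigError {
    fn from(e: std::io::Error) -> Self {
        ConfigError::Io(e)
    }
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::Parse(e)
    }
}

// Before: fs::read_to_string(path).unwrap().trim().parse().unwrap()
// After: failures are surfaced to the caller instead of panicking.
fn read_port(path: &str) -> Result<u16, ConfigError> {
    let raw = fs::read_to_string(path)?;
    let port = raw.trim().parse::<u16>()?;
    Ok(port)
}

fn main() {
    match read_port("port.txt") {
        Ok(port) => println!("listening on {port}"),
        Err(err) => eprintln!("could not read port: {err:?}"),
    }
}

A change of that size can be read line by line in a couple of minutes, which is exactly the point.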
Review AI code more strictly than code written by a colleague
This sounds harsh, but it’s practical. A person working in a given app or repository has context about the codebase, understands the constraints, and will defend their choices (sometimes too much). AI doesn’t push back in that human way. It produces plausible code that can miss subtle requirements a person would catch easily. I treat code an LLM has given me as if it came from someone who has never seen my codebase before.
Watch for when it goes off the rails
AI will quietly make changes you didn’t ask for. This is as frustrating as it is entertaining.
It will also attempt the same fix repeatedly when stuck. It will abandon work partway through without telling you. It struggles to admit uncertainty.
Claude Code is getting better but still does a fair amount of this in my experience.
Don’t fear refactors (and rewrites)
Remember when we all used to procrastinate around refactors? When a refactor meant potentially weeks of design work, or trial-and-error exploration? I don’t know a lot of people who like doing them. Some of us don’t mind, but most do. AI has pretty much made them free. Have an idea for a rewrite? Have a half-hour conversation with Claude Code, wait 10 minutes, and you might have a rewrite done that would have taken you days or even weeks.
Use LLMs responsibly
I strongly recommend reading Oxide Computer Company’s RFD 576: "Using LLMs at Oxide". I pretty much agree with all of it. Definitely go read it. There’s one concept in there that is super interesting: the social contract between writers and readers.
The idea is straightforward. When you read someone’s code or document, there’s an implicit agreement: the writer has done more intellectual work than the reader. You trust that the person who wrote it understands it, because they’re the one who produced the body of work. LLMs break this contract. The writer may not have done the intellectual heavy lifting at all, and the reader can no longer assume the author understands their own output. The RFD has a name for this: "LLM-induced cognitive dissonance", and honestly I think it captures it well.
For code review specifically, I think this has real consequences. If I put up a PR with LLM-generated code that I haven’t reviewed and understood myself, I’m basically asking a colleague to do work I didn’t do. That’s not a fair trade at all. And if review comments get addressed by re-generation rather than by fixes that come out of collaboration and discussion, you’re just re-rolling the dice (I’m being slightly pessimistic to make the point).
What makes this framing useful to me is that it shifts the question from "is it OK to use AI?" (obviously yes) to "what do you owe the person reading your output?".
Just because a machine did the typing doesn’t mean you don’t have a responsibility. If anything, the responsibility to do right by the reader increases, because the default trust is lower (at least at this point in time).
A final note on using LLMs with responsibility: AI isn’t accountable, humans are.
If you put up a pull request thinking "if this goes wrong, the AI did it", you’re about to have a wake-up call. This is one that both Pete (CTO at incident.io) and I are very aligned on. AI isn’t accountable. You are. Act like it.
A word to SaaS leadership
If you’re leading a product development organisation and your AI posture is still "we’re exploring", the gap between you and teams that have adopted AI is growing at a pretty fast pace.
Here’s what I think you (SaaS leaders) need to get right:
- Invest in discipline instead of vibe coding: AI amplifies your existing engineering culture. Strong review and testing discipline? AI accelerates good outcomes. Weak discipline? AI accelerates problems. I like how Atharva Raykar put it in his post on AI-assisted coding for teams that can’t get away with vibes: what helps the human helps the AI. Testing infrastructure, CI/CD, documentation, clear task breakdowns. The boring stuff pays double when AI is in the mix.
At incident.io, we’ve always been relentless about fast build times [2][3], short feedback loops, and automating as much as possible. Our defaults, and our stance that tests, builds, and everything else should run as fast as possible, serve us incredibly well with AI. As Rory elegantly put it in this blog post:
When you can generate code ten times faster than before, every part of your toolchain that can’t match that speed becomes unbearable.
— Rory Bain
- Adjust to the shift in the bottlenecks: As code gets cheaper (in terms of developer time), the constraint moves upstream to product thinking, product taste, and even further up towards product strategy. If all you’ve celebrated is "the team ships faster" without improving what you ship and why, you’ve just accelerated more efficiently in the wrong direction. That should worry you more than whether your engineers are using Cursor or not.
- Remove friction, but don’t mandate: Shopify’s approach is the most instructive I’ve seen. They got legal to be supportive early and they put no spending caps on AI tool tokens. Farhan Thawar, their VP of Engineering, explicitly warned against leaders who clamp down on token costs, calling it at odds with the goal of driving adoption. You don’t need mandates. You need to get out of the way. This aligns with the Oxide approach I mentioned earlier, and it works better.
When I think about why AI works at incident.io (while so many other leaders discuss how it doesn’t work for them), I keep coming back to those three things. We have very little friction, developers already have great product intuition and taste, and we were already investing a lot in documentation that sits close to the code.
One more thing. How you communicate about AI adoption matters almost as much as what you actually do. Pete, our CTO, was incredibly explicit: assume money is no object when it comes to using AI to enable and amplify your ability to be the best product engineer you can be.
Conclusion
I think the best distillation of how to think about LLMs comes from Oxide’s RFD 576, and it remains the most thoughtful stance I’ve seen for the most part:
- No mandates. No one should force you to use LLMs. Although if you aren’t using them…
- No shaming. The people using these tools aren’t cheating, they’re doing what it says on the tin: using tools.
- Responsibility stays with the human. The tool doesn’t absolve you of owning your output.
That last point connects directly to a core principle of mine at this stage: AI can’t be held accountable, only people can. The moment you treat AI output as trusted, you’ve made a mistake. The moment you ship code you haven’t reviewed, you own the bugs. Not the AI.
Having said all that, AI is here to stay and is already one of the most impactful tools I’ve witnessed in the industry. The argument isn’t really "AI is good" versus "AI is bad", or "AI isn’t useful" versus "we’re entering the singularity". My argument is simply about whether you’re going to develop the skill of using these tools well.
The craft of software engineering hasn’t changed. You still need to understand the problem, and come up with a solution and an implementation that make sense for the real world you live and work in. But AI shifts where you spend your effort: less typing, more thinking, iterating, and verifying. And if you haven’t tried voice tools like SuperWhisper or WisprFlow, you should. You won’t even need to do the typing.
You don’t have to love AI. You can be critical about its limitations, concerned about its implications, cautious about its use.
But the tools have changed. The editor isn’t king anymore.