Vibe coding is not the same as coding assisted by AI

A couple days ago I posted the following on Bluesky [1]:

Vibe coding is not the same thing as coding assisted by AI.

Both can be useful but not in the same way.

Don’t confuse the two.

— Me, on Bluesky

And the moment I did so, I thought that deserved more than just a Bluesky post/tweet (not sure what to call it). Anyway, here’s roughly the thinking that led to that post.

I’ve been seeing people share projects built with AI assistance, only for them to be immediately tagged as "vibe coding". It’s happened to my own project too.

Yes, some people "vibe code". Some people build carefully, think thoroughly, and guide AI to the outcomes they want. But people seem to confuse or conflate two fundamentally different things, and it’s making things worse. It’s even diluting the points some folks make against AI. More on this last point in the afterword.

Vibe coding

As far as I can tell, this was coined by Andrej Karpathy in February 2025 [2]:

There’s a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. […​]

— Andrej Karpathy

That’s, funnily enough, really specific and a pretty decent starting point. My own definition would be something like: "You describe what you want, AI generates the code, you run it, see if it works, and adjust with more prompts. You don’t review the code. You might not even read the code. You code by feel."

Whichever definition you go by, what’s easy to miss is that vibe coding can be useful.

Here are a few ways:

  • You can experiment cheaply. You have an idea and want to see it materialised. You don’t care if it’s production ready, or well designed. You just want something real to evaluate whether the idea has merit.

  • You can test the ergonomics of an approach. Want to know if an interaction pattern feels right? Want to know how a rewrite will pan out? Just go for it. Do 3 at the same time. Bad implementations and approaches can also teach you something about the problem or your thinking.

  • You can more easily think about the product. This one is so easy to ignore, especially for software engineers. You can iterate on product ideas so quickly. You can build a feature in 20 minutes and test your hypothesis about whether it’s worth building before you invest more.

And here’s arguably the most important property of vibe coding: you can always discard it and not feel bad about it because it was cheap for you to do so. You’re not pretending it’s production software (and if you are, oof). You’re exploring.

Here’s a concrete example, my very own LSP server for AsciiDoc as part of my acdc project. It was more "vibe coded" than "coded with AI assistance", so I added this warning to its README:

Note: This tool was heavily built using Claude Code and has not yet been fully reviewed. Use with appropriate caution and please report any issues you encounter.

I run it every day and I’m pretty happy with it, but I haven’t fully validated the code beyond a cursory look. I don’t think that should invalidate the year of free time I’ve put into acdc.

So, where does that leave us when it comes to "coding assisted by AI"?

Coding assisted by AI

When I write code assisted by AI (which is a lot nowadays), I’m planning, I’m making decisions, I’m steering, I’m reading all of the code produced by the AI. Think of it as AI being a tool in my hands.

I have domain knowledge, I understand the system I’m working on, I know what I want at the end, and so I review every change with at least the same rigour I’d apply if it was written by me, or a colleague, or a contributor putting up a new pull request.

In a previous post, I wrote about treating AI-generated code more strictly than a colleague’s and that hasn’t changed here. What has changed is how much more I can do [3].

So what does coding assisted by AI look like? For me, it’s useful to:

  • Bounce ideas off AI the way I used to bounce them off a whiteboard, or during shower thoughts. "Here’s how I’m thinking about this architecture. What am I missing?" The answers aren’t always good, but the mere fact that it forces me to articulate my thinking already gives me value.

  • Build visual aids. I’m a visual learner, and AI makes these trivial to produce: I can see the shape of something, take in the complexity of an entire system, all in a few seconds rather than spending half an hour in a diagramming tool.

  • Challenge myself by telling AI to ask me questions that probe my current best understanding of a problem. It’s wild to me how often these questions expose a blind spot.

  • Refactor fearlessly. This is easily my favourite one, and I’ve written about it before [4]. I remember when refactors meant weeks of careful but anxious work. Now I can take four or five different approaches in parallel, steer each for a couple of hours, and get to a point where I deeply understand the tradeoffs of each, pick the one I want, then either do the refactor myself or use AI to carry it out much more carefully.

  • Investigate problems that require lots of set up to test. This has been happening quite a lot in my work in slack-go/slack, where someone will open an issue and I’ll just ask AI to write me a Go example demonstrating the problem that I can run against a sandbox environment. I estimate this has saved me weeks of work in the last year, making my maintainer life that much more enjoyable.

The point is: I’m actively learning, directing, reviewing, and making judgement calls at each step. My rule of thumb: if I can’t explain why the code does what it does, then it’s not assisted, it’s vibe coded. That’s roughly the clearest distinction between AI-assisted coding and vibe coding.

So what?

The problem with collapsing the two concepts into one label is ultimately a problem about people.

Engineers who carefully use AI as a tool, in a disciplined way, get lumped in with people who throw prompts at an LLM and ship whatever comes back. Their work gets dismissed not on the merits of its quality, but because of its association with AI. I’d love to get to a place (and we’ll get there!) where the only question that matters is "Is this well engineered?".

It also goes the other way: if everything that has AI is labelled "vibe coded", actual vibe coding doesn’t get the scrutiny it deserves either. I do want people to know when something was "vibe coded" and what that might mean: unreviewed, unvalidated, "code and prayers" kind of engineering.

A few months ago the r/rust moderators published a thread asking for community input on moderating AI-generated content. It’s worth a read in full for sure. Go ahead, I’ll wait. I think the thread captures the struggles showing up as AI content floods in, especially in open source communities. These are thoughtful folks for the most part, trying to protect the r/rust community from low-quality content. The hard bit is distinguishing in practice between "vibe coded" and "coded with AI assistance". Without that distinction you end up either too permissive or too restrictive.

Conclusion

I’m not sure I have a grand conclusion on this one. I vibe code sometimes, and I also use AI to carefully engineer software where I actually do review every line.

Going back to the r/rust thread I mentioned earlier, evaluating which is which on a post matters, and it’s the author’s job to state it explicitly. If it’s vibe coded, say it is vibe coded.

Knowing when something was vibe coded allows folks evaluating your work to ask better questions than "was AI involved?" or even worse, make comments like "this is just AI slop". We all write shit code sometimes, except now it’s always labelled as AI slop. Sometimes it’s just shit code.

In production systems, as a reviewer, and especially in big companies, ask better questions than "was this AI?". Ask questions like "What decision led to this? What tradeoffs did you consider? Which parts do you think are solid vs weak?" That’s a lot more useful, both to you and the author.

If you’re building assisted by AI, be honest about what you’re doing. If it’s vibe coding, call it that. Own it. If you say you stand by your code, be ready to defend and argue your decisions.

Last but not least, there’s nothing wrong with vibe coding or coding assisted by AI, but knowing which one was used to produce a piece of software matters, especially for the reader.

Afterword

I mentioned in the intro that I’ve seen my own project get tagged as "vibe coded", so I’m going to use it to make the point here.

Someone was thoughtful enough to send me an email mentioning they posted one of my projects on lobste.rs. It was really just a heads up. A few hours later I opened the lobste.rs post and the first comment (from a different person) was, verbatim:

This submission functions as an endorsement of genAI, a technology with a very large body of evidence and discourse about the many ways in which it is harmful. I would like to see fewer submissions like this.

— A person worried about the ways in which they think AI is harmful

Putting personal experience aside, there’s also the open-slopware [5] project, a list of software "tainted by LLM developers". And even though the original author took it down after backlash, someone forked and resurrected it.

Don’t get me wrong, nothing is above criticism, including AI. There are legitimate reasons for folks to care about how software is made. Take software used in the military: I don’t work in that space, but I can understand why a code of ethics, rigorous review, and avoiding whole classes of errors must be the norm, since this software can accidentally cause harm or death.

I’ve heard concerns about the environmental impact, both the energy and the water consumption of AI infrastructure [6]. I also think the licensing questions around AI training data are real and deserve serious discussion about how we approach and solve them. And not enough experts are part of that discussion.

But being anti-AI without any real positive effect doesn’t help anyone build better software, especially when you only point out the negatives and pretend there are no positives. I think some of the ongoing discussions have gone well past legitimate concerns and morphed into something else: effectively, a rejection of anything touched by AI, regardless of the human judgement, review, or craft applied. It feels like AI is being used as a purity test in software development, which isn’t useful or productive for anyone.

And it dilutes the real points about the ways AI can be harmful.

I mentioned earlier that one way to stop "vibe coded" and "AI-assisted" projects being conflated is for authors to be honest about which one applies.

Well, if on top of that, authors get labelled as evil if they use AI, then I think the following will happen:

  1. People will try to hide their AI use. If being transparent about your process gets you labelled and dismissed outright, folks will simply stop saying they’re using AI. They’ll likely get caught eventually, but it’s absurd that they’d feel they have to hide it at all.

  2. People will stop experimenting openly. This is the one that worries me the most. So much progress comes from experimentation. Experimentation and exploration depend on being able to share with others, and if you can’t do that without being told you’re destroying the world, those experiments and their lessons won’t be shared.

What can the doomsayers do, then? One option is to have conversations openly but without shaming. Another is to stop the silly lists (like open-slopware [5]). And lastly, acknowledge the positives; don’t be blinded by the current obstacles AI creates.


3. Some of this is being challenged in studies but my own experience is that I am getting the outcomes I want more frequently.
4. I wrote about this in more detail in AI isn’t optional anymore.
5. I’m not linking to it on purpose, but it’s easy enough to find if you search for it.
6. I’m not yet fully convinced by the short-termist framing of the argument, but I acknowledge the concern is genuine and worth engaging with seriously.
~ fin ~