9 habits of the highly ineffective vibe coder

Monday, August 4, 2025, 11:00 AM, from InfoWorld
Is vibe coding really as easy as they say? Consider the butler, the meat-space equivalent of an AI. There are schools that specialize in teaching new butlers skills like how to serve breakfast or make a perfect martini. But did you know these same schools have a parallel track teaching rich people how to get along with their butler? That’s right, rich people learn how to hold the teacup correctly, so the butler can gracefully fill it with tea. They even learn what kinds of requests are appropriate and which are not. It’s the sort of thing that can’t be taught in a two-minute TikTok video.

Being waited on hand and foot isn’t easy. There’s a right way to hold the teacup, and it’s important to know the difference between appropriate requests and ones that are doomed to fail.

The same goes for vibe coding. Oh, sure, it’s amazing what generative AI can do. A good AI coding assistant can piece together working code that does much of what a developer wants, often based on just a few sketchy sentences and some random hand-waving (aka prompt engineering). Some days, you can type a few lines and the AI will do in minutes what would otherwise take hours or days. But those are the good days. The limitations of AI-generated code can be subtle, but they’re always present. And to make matters worse, we’re not exactly sure what they might be. We’re all just learning—humans and machines alike.

Here are nine ways software developers can go wrong with vibe coding.

Trusting the LLM

This morning, I asked an AI to compile a list of URLs. It replied in seconds, with a nice list that fit the format I needed and looked correct. But when I checked, they all generated 404 errors. Every single one broke.

When I told the AI, it responded, “You’re right, my apologies for that! Website links can change frequently, and it looks like my previous information was outdated. I have re-verified each entry and updated the list with the correct, working URLs.” But none of the URLs on the new list worked, either.
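
Verifying is mercifully cheap compared to trusting. Here’s a minimal sketch in Python, using the third-party requests library, that checks each link before anyone relies on it; the URLs shown are placeholders for whatever list the AI hands you.

```python
# Check every URL an AI hands you before accepting the list.
# Requires the third-party "requests" library (pip install requests).
import requests

urls = [
    "https://example.com/docs",       # placeholders: paste the AI's list here
    "https://example.com/changelog",
]

for url in urls:
    try:
        # HEAD is cheap; fall back to GET for servers that reject it.
        resp = requests.head(url, timeout=5, allow_redirects=True)
        if resp.status_code >= 400:
            resp = requests.get(url, timeout=5)
        print(f"{resp.status_code}  {url}")
    except requests.RequestException as exc:
        print(f"FAIL  {url}  ({exc})")
```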

The human who wrote the secret system prompt for many of today’s standard AIs has done us all a disservice. The dominant personality for these large language models seems to be that of a tireless, ingratiating toady. The AIs are prompted to be agreeable and helpful above all else, and so when they can’t figure something out, they just spit out something worthless and insist it works.

The first deadly mistake of vibe coding is ever trusting the LLM in the first place.

Assuming all models are alike

It’s easy to think that one large language model is the same as any other. The interfaces are largely identical, after all. In goes some text and out comes a magic answer, right? LLMs even tend to give similar answers to easy questions. And their names don’t even tell us much, because most LLM creators choose something cute rather than descriptive.

But models have different internal structures, which affect how well they unpack and understand problems that involve complex logic, like writing code. Some models have more elaborate mechanisms for breaking a problem into multiple parts and then working through each part separately. These mechanisms can make a big difference.

The number of LLM parameters is also a rough indication of how much knowledge is packed away inside the model. More parameters are generally better—except when they aren’t, and sometimes you’ll only learn this through experimentation.

LLMs are also trained on different sets of data, and the composition of the training set is often a mystery. Did your LLM learn from JavaScript scraped off the open web, or from well-documented code in a working repository? Did it ingest enough COBOL along the way to be of any use on your old legacy code?

Sometimes, the only way to find out is to trust your coding problem to the machine.
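
That experiment is easy enough to automate. Below is a rough sketch that sends the same prompt to a couple of models behind an OpenAI-compatible endpoint and saves the answers for side-by-side comparison; the model names are examples rather than endorsements, and the official openai Python client plus an OPENAI_API_KEY environment variable are assumed.

```python
# Send one coding prompt to several models and save the answers for review.
# Assumes the official "openai" client and an OPENAI_API_KEY in the env;
# the model names below are examples only.
from openai import OpenAI

client = OpenAI()
prompt = "Write a function that parses ISO 8601 dates without a library."

for model in ["gpt-4o", "gpt-4o-mini"]:   # swap in whatever you're evaluating
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    with open(f"answer-{model}.md", "w") as f:
        f.write(answer)
    print(f"{model}: {len(answer)} characters")
```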

Treating your LLM like a dumpster

Many developers don’t realize how much LLMs are affected by the size of their input. The model must churn through all the tokens in your prompt before it can generate something that might be useful to you. More input tokens require more resources.

Habitually dumping big blocks of code on the LLM can start to add up. Do it too often and you’ll overwhelm the hardware and fill up the context window. Some developers even talk about uploading their entire source folder “just in case.”

Commercial models tend to bill by the input and output token, so long fishing expeditions can be slower and much more expensive. All those bits of code cost something to process.
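
Measuring before pasting is cheap. Here’s a sketch using tiktoken, the tokenizer library OpenAI publishes for its own models; other vendors tokenize differently, so treat the count as a ballpark, and the file path and threshold are placeholders.

```python
# Estimate a prompt's token count before sending it anywhere.
# Uses OpenAI's "tiktoken" library; other models tokenize differently,
# so this is a ballpark, not a bill.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("src/big_module.py") as f:    # placeholder for the code you meant to dump
    blob = f.read()

tokens = len(enc.encode(blob))
print(f"{tokens} tokens")
if tokens > 20_000:                     # arbitrary threshold; tune to taste
    print("That's a dumpster load. Trim it before sending.")
```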

Lots of code can also distract the AI and may even create confusion. The LLM could focus on a section of the code that doesn’t really matter for what you are trying to achieve. While they’re often smart enough to slog through the details, dumping a huge codebase on your AI coding assistant can backfire.

Assuming AIs think like we do

They talk like we do. They hallucinate and misremember things like we do. It’s easy to imagine AIs are just like us. But they’re really just clever mimics, splicing together bits and pieces from their training data into something useful. That doesn’t mean they’re exactly thinking.

AI assistants do best when they’re focusing our attention on some obscure corner of the software documentation. Or maybe they’re finding a tidbit of knowledge about some feature that isn’t where we expected it to be. They’re amazing at searching through a vast training set for just the right insight.

They’re not always so good at synthesizing or offering deep insight, though. Oh, they can be amazing at times, but that’s often because they are parroting a clever human who wrote some document in the training set. It’s best not to expect your AI coding assistant to be a genius every time.

Creating a patchwork quilt of code

Most development shops have a collection of coding standards that impose rules on the code, all with the goal of harmonizing the output. AIs aren’t so disciplined. Indeed, they often inject enough randomness into the process that the coding style in the output changes from call to call. Repeating the same prompt will often generate code that’s entirely different each time. It works, but the variations in style can be jarring.

Vibe coders tend to ignore this and just cut and paste everything together. The code runs, but it looks like a fool’s motley. There’s no real consistency or standard to it. They’re just crossing their fingers and hoping they won’t need to wade through the mess and try to figure out what’s going on.
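
The cheapest defense is to run everything the AI produces through the same formatter the rest of the codebase goes through. Here’s a sketch that pipes generated Python through the black formatter before it gets pasted anywhere; it assumes black is installed, and any formatter or linter your shop already standardizes on will do.

```python
# Normalize AI-generated code with the same formatter the team already uses.
# Assumes the "black" formatter is installed (pip install black).
import subprocess

generated = "def add( a,b ):\n    return a+b\n"   # stand-in for AI output

result = subprocess.run(
    ["black", "-", "--quiet"],   # "-" makes black read stdin and write stdout
    input=generated,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)   # consistently styled, whatever the model felt like today
```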

Ignoring the LLM’s programming biases

We’ve all heard it said that AI is only as good as its training set. Everything that goes into the LLM ends up influencing what comes out. Many developers have war stories about how a small bias ended up exploding in their code. Some talk about “recency bias,” which kicks in when programmers just reuse the same design pattern over and over again. Others talk about the “not invented here” bias, where teams lean toward their own pet creations. There are dozens of biases in the training sets, and they’ll all make their way into the LLM’s output one way or another.

For many vibe coders, the whole idea is not worrying about these details. That might work for basic programming chores, but these inherent programming biases can negatively affect larger codebases and critical projects.

Ignoring the costs

AI tools look cheap from the outside, especially next to some human coder who is going to want health insurance and vacation time. But they charge by the token, and the tokens, just like cloud machines, can start to add up.

Vibe coders tend to make the same requests over and over. They throw large blocks of code into the context window and let the AI sort it out. The tokens pile up. Somewhere, there’s an electricity generator burning ancient dinosaurs to keep the whole thing running. Between the fuel and the overpriced GPUs, the bills can really add up.
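
The arithmetic is worth doing before the invoice arrives rather than after. A back-of-the-envelope sketch follows; the per-token prices are placeholders, not any vendor’s real rates, so substitute your own rate card.

```python
# Back-of-the-envelope cost estimate for a heavy vibe-coding habit.
# The prices below are PLACEHOLDERS, not any vendor's actual rates.
PRICE_PER_M_INPUT = 3.00    # dollars per million input tokens (assumed)
PRICE_PER_M_OUTPUT = 15.00  # dollars per million output tokens (assumed)

requests_per_day = 200         # pasting the codebase again and again
input_tokens_each = 50_000     # a big block of code per request
output_tokens_each = 2_000

daily_input = requests_per_day * input_tokens_each
daily_output = requests_per_day * output_tokens_each

cost = (daily_input / 1e6) * PRICE_PER_M_INPUT \
     + (daily_output / 1e6) * PRICE_PER_M_OUTPUT
print(f"~${cost:.2f} per day")   # about $36/day at these assumed rates
```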

Handing over the reins

Whereas computer programming demands mind-numbing repeatability, LLMs inject randomness into the process; a bit of randomness is essential to their design. That makes it dangerous to trust them with certain roles and responsibilities, because they do things a bit differently every time.

Some vibe coders have found this out the hard way. In one particularly scary story, the AI deleted a production database. Was the data lost forever? Did the LLM even care? We’ll never really know because it just moved on to the next prompt.
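
The unglamorous safeguard is to never hand an agent credentials that can destroy anything; a read-only database role costs nothing. Failing that, put a human between the agent and anything destructive. Here’s a crude sketch of such a gate; the statement list is illustrative, and it’s no substitute for proper permissions and backups.

```python
# Refuse to run destructive SQL from an agent without a human sign-off.
# A crude illustration; a read-only database role is the sturdier fix.
DESTRUCTIVE = ("drop", "delete", "truncate", "alter")

def execute_agent_sql(statement: str, cursor) -> None:
    words = statement.strip().split()
    first_word = words[0].lower() if words else ""
    if first_word in DESTRUCTIVE:
        answer = input(f"Agent wants to run:\n  {statement}\nAllow? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError("Destructive statement blocked by the gate.")
    cursor.execute(statement)
```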

Chasing AI hallucinations

One of my worst programming days in recent memory happened after trusting an AI coding assistant. The machine spit out reams of beautiful code with long and exquisitely formatted comments that looked very reliable in my color-coded editor. Some of it even ran perfectly.

The problem was, the AI had simply hallucinated a perfect library call that would solve my problem. The glue code wrapped around the API call worked fine, but the call itself didn’t exist; the library offered neither that method nor anything similar. I didn’t know this until I had spent a few hours looking through old documentation and source code to make sure the AI hadn’t just misspelled the name.
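
A few seconds of checking up front would have saved those hours. Before building on an AI-suggested call, confirm it actually exists; here’s a quick sketch, with the second example standing in for any hallucinated name.

```python
# Confirm an AI-suggested call exists before wiring code around it.
import importlib

def call_exists(module_name: str, attr_name: str) -> bool:
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)

print(call_exists("json", "loads"))          # True: a real call
print(call_exists("json", "loads_lazily"))   # False: a hallucination
```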

When I pointed out the problem, the AI would just apologize profusely with faux sincerity: “I’m so sorry. You’re right,” it would say. Then it would generate more incorrect and unusable code that was just as wrong in a different way.

Sometimes, it is easier to write the code yourself.
https://www.infoworld.com/article/4029093/9-habits-of-the-highly-ineffective-vibe-coder.html
