October 24, 2025
By Hubert Brychczynski
Artificial Intelligence,
Software Engineering,
Developer Tools,
Large Language Models

Eight lead engineers and a writer walk into a Slack channel and start chatting about AI and coding. Several messages later, the conversation stalls (note to self: developers aren't exactly crazy about chatrooms), but the writer gets referred to a person who's allegedly a power user of AI tools in the company. The two schedule a call and have an hour-long discussion that provides enough material for two blog articles.
I am the writer, and you are now reading the first of the two articles that distill my interview with the gregarious and outspoken Federico Zambelli (commonly known as Zambo), a Senior Data Engineer at Janea Systems, about his insights on generative AI and the craft of coding.
How does genAI affect a developer? Why is "hallucination" a misnomer? What determines code productivity and value, and can generative AI provide them? What are the paradoxes of generative AI in coding and its somewhat counterintuitive benefits? Can we quantify those benefits with relative certainty, or does something stand in the way? These are only some of the things I heard from Zambo, which we'll cover in this article.
No tool is agnostic. For example, habitual use of GPS for navigation has been linked to a decline in spatial memory. In a similar vein, code autocomplete in AI coding assistants initially seemed like a blessing for Zambo, who suffers from carpal tunnel syndrome, but it ultimately had quite insidious side effects. After a while, Zambo got so used to autocomplete that he expected it to work even when it wasn't on or available. "I would sit at the keyboard, type a bit of code, and wait for it to 'write itself' with my hands suspended in the air." That's when he decided to disable autocomplete entirely. "I became concerned about losing my expertise and the ability to see through LLMs' bullsh*t."
What Zambo (and many others) call "bullsh*t," the industry has decided to dub "hallucinations." Hallucinations were once believed to be a transient flaw that would disappear with sufficient advances in the technology. Today, however, even OpenAI admits that, for better or worse, they are here to stay due to the very nature of how large language models operate.
On this matter, Zambo notes his disdain for the term "hallucinations." He views it as a misnomer, or—perhaps more accurately—a deliberate euphemism that adds a magical aura to what is essentially a pervasive system error.
Large language models can often generate excessive amounts of code from a single prompt. Zambo knows this unfortunate tendency all too well from experience. What makes it even more unfortunate is that some might conflate such verbosity with productivity. But volume, according to Zambo, is a misguided metric for productivity in coding. In fact, the goal of every self-respecting programmer should be the opposite—to use as little code as possible for a desired outcome. Succinct code requires fewer mental resources to create, works faster, and is easier to understand and debug. This is something that adds actual value for businesses and users alike, because—as Zambo aptly notes—"code in itself carries no value. The important thing is what it does, and how."
A curious contradiction arises in our conversation about AI and coding. On the one hand, Zambo identifies as a heavy user of the technology; on the other, he struggles to put a number on how much his productivity has gone up as a result. And the further we go down that road, the more puzzled I become.
Zambo admits that LLMs can often provide complete solutions on the first try if they're dealing with a simple problem, a common use case, or a popular framework or language. However, when faced with a rare or more complicated challenge, most AI suggestions initially miss the mark. It takes quite a few iterations and human input to get them right, and the typically high volume of output doesn't help. Zambo likens it to "having to watch an entire film on fast-forward in the space of two minutes."
This raises an obvious question. So far, Zambo has blamed LLMs for cognitive decline (through overreliance on autocomplete), winced at their superfluous verbosity, and lamented the need to iterate over their output time and again.
So why does he use LLMs at all?
"These are phenomenal tools," he declares, as if completely forgetting all the criticism he has just hurled at them. Surprisingly, though, what he says next will make sense.
Yes, LLMs make mistakes, Zambo concedes, especially with broader-scope problems that involve many variables. Even then, however, the output stays relevant to the prompt: one way or another, you get what you ask for. Sometimes, though, it turns out that what you ask for isn't going to cut it. Maybe the code is plain wrong or inefficient; maybe it works but makes you think of a better approach. Whatever it is, the very back and forth pushes you forward, even when the initial results are imperfect.
Other times, it doesn't even matter if AI-generated code is imperfect. If you need to create something fast as a proof of concept and know an ad hoc solution won’t ruin the project, “vibe coding” might be the quickest way to get it done. Suppose a developer wants to add a mock interface to their backend but doesn't know frontend well enough to build something on their own. Back in the day, it would take weeks to fill that knowledge gap. Today, an LLM can whip up a working solution in seconds—good enough for the purpose even if occasionally unrefined.
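The article doesn't show any of Zambo's actual code, but the kind of throwaway scaffolding he describes might look something like the sketch below: a backend developer who doesn't know frontend asks an LLM for a crude HTML view of their data, good enough to eyeball results. All names here (`fake_users`, `render_user_table`) are hypothetical, invented purely for illustration.

```python
# Hypothetical "vibe-coded" proof of concept: a disposable mock UI
# for inspecting backend data, not production frontend code.

# Canned data standing in for a real backend response.
fake_users = [
    {"id": 1, "name": "Ada", "role": "admin"},
    {"id": 2, "name": "Grace", "role": "editor"},
]

def render_user_table(users):
    """Render a minimal HTML table for eyeballing backend data."""
    rows = "".join(
        f"<tr><td>{u['id']}</td><td>{u['name']}</td><td>{u['role']}</td></tr>"
        for u in users
    )
    return f"<table><tr><th>id</th><th>name</th><th>role</th></tr>{rows}</table>"

if __name__ == "__main__":
    # Write the mock page to disk and open it in any browser.
    print(render_user_table(fake_users))
```

Nobody would ship this, and that's the point: it exists to answer a question quickly, after which it can be thrown away.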
LLMs are also lightning-fast repositories and dispensers of domain-specific knowledge. Zambo recalls the pain of combing through Stack Overflow archives to find a solution for a particular problem. Sometimes the solution wasn't even there because no one had thought of bringing it up or even faced a similar issue. In such cases, the only option was to open a new thread—and good luck waiting for an answer if the problem was niche or arcane. Today, LLMs can instantly provide the answer in most cases, because they have likely ingested the entirety of Stack Overflow (and other such forums) in training.
Perhaps less obvious but still significant benefits of employing AI for coding include: a) using it as a sparring partner when the developer's mind gets stuck, b) putting it to work on simple coding tasks, and c) deploying it for tedious “maintenance tasks” such as updating changelogs or documentation. All are use cases that Zambo swears by. “I could easily type some starter code by hand,” he explains, “but at nowhere near the speed of an LLM.” That’s one measurable gain from using AI for coding.
LLMs can benefit coding in many ways, but what's the net result? Unfortunately, those who expect hard numbers and definitive answers will be disappointed. A general uncertainty around the impact of genAI on coding might be driven largely by the lack of consensus on measurement: where one might value time, another values brevity, and yet another volume. Research is also divided, with some studies showing an improvement and others a decline.
Maybe it's too early to tell. From a business perspective, AI is still in its infancy—despite astronomical growth in adoption, expectations, and valuation. When the temperature around generative AI subsides, the market should offer a correction and we might be better able to discern the actual benefits. For now, in the words of Zambo himself, it's still very much “open season”. Everyone scrambles to use the technology, poke holes in it, and extract as much value from it as possible. Only time will tell how much of that is actually real.
In the next part of our conversation, Zambo will touch on his two favorite LLMs and their differences; we'll talk about AI coding tools (as opposed to LLMs themselves), and how they enhance the AI-assisted coding experience; finally, we'll also discuss the differences between agents and agent mode.
While waiting for part two, you might want to revisit (or visit, if it's your first time here) our own research on AI and coding. A few months old, it's admittedly a bit outdated given the pace of AI development. Still, many of the findings echo Zambo's sentiments from this conversation. For example, our evaluation didn't yield improvements in all the domains we tested, while developers across the board cited AI's usefulness for writing simple (i.e., non-complex) code and for serving as an interactive knowledge assistant. Since these tendencies don't seem to have changed much yet, the research may still contain valuable insights despite its publication date.
Ready to discuss your software engineering needs with our team of experts?